\section{Introduction} Argument systems originate from philosophy (Toulmin 1958). More recently they have also been studied in AI (Bondarenko et al. 1997; Cayrol 1995; Dung 1993; Dung 1995; Fox et al. 1992; Geffner 1994; Hunter 1994; Kraus et al. 1995; Lin \& Shoham 1989; Loui 1987; Loui 1998; Pollock 1987, 1992, 1994; Poole 1988; Prakken 1993; Prakken \& Vreeswijk 1999; Simari \& Loui 1992; Vreeswijk 1991, 1997). When such argument systems are used for reasoning with defeasible rules (Fox et al. 1992; Geffner 1994; Hunter 1994; Kraus et al. 1995; Loui 1987; Pollock 1987; Prakken 1993; Prakken \& Vreeswijk 1999; Simari \& Loui 1992; Vreeswijk 1991, 1997), a rule is viewed as a justification for believing the consequent of the rule whenever we have a justification for believing its antecedent (Toulmin 1958). A justification for believing the antecedent can consist of facts about the world, denoted as evidence or premises, and of propositions that are justified by other defeasible rules. So, we can construct a tree of defeasible rules that justifies the belief in some proposition with respect to some evidence. This tree is called an {\em argument} for the proposition. Since the rules used in the construction of arguments are defeasible, it might be possible to construct an argument for a proposition as well as for its negation. Clearly, only one of these arguments can give a valid justification for the proposition it supports. In most argument systems proposed in the literature, one of the arguments supporting the conflicting propositions, i.e.\ a proposition and its negation, is defeated (Loui 1987; Pollock 1987, 1992, 1994; Prakken 1993; Prakken \& Vreeswijk 1999; Simari \& Loui 1992; Vreeswijk 1991, 1997). Generally, if an argument is defeated, there is a defeated sub-argument (not necessarily a proper sub-argument) that has a single last defeasible rule, and no sub-argument of this sub-argument is defeated.
An exception is formed by arguments that are based on defeasible causal relations. Geffner (1994), for example, allows the chain of causal arguments to be broken at any rule of an argument if the proposition supported by the argument conflicts with an observed fact. Defeasible rules representing causal relations have one property not found in non-causal defeasible rules: causal defeasible rules can be used in contraposition. This raises the following question. {\em If rules cannot be used in contraposition, can a conflict be resolved by defeating a last defeasible rule of the argument supporting one of the conflicting propositions}? In legal argumentation, meta rules are used to resolve conflicts (Prakken 1993). These meta rules determine the valid arguments by considering the last defeasible rule, with respect to the chain of argumentation, of each proposition involved in the conflict. Hence, an argument should not only reflect the information used in an argumentation, but also the structure of the argumentation. Some argument systems represent this structure explicitly (Vreeswijk 1991, 1997), while others represent it implicitly (Pollock 1987, 1992). Vreeswijk (1991, 1997) only considers defeasible rules that are definite Horn clauses and a special symbol $\perp$ to denote an inconsistency. In this language, each argument for a conflict, i.e.\ for $\perp$, is unique. If, however, we use full propositional or predicate logic, there can be more than one line of argumentation deriving `the same conflict'. Suppose for example that we have the following three defeasible rules: $\{ a \leadsto \neg d, b \leadsto \neg e, c \leadsto (d \vee e) \}$ and the facts: $\{ a, b, c \}$. Then we can construct arguments for conflicting propositions in at least three different ways. \[ \left. \begin{array}{r} \left. \begin{array}{r} a \\ a \leadsto \neg d \end{array} \right\} \neg d \\ \left.
\begin{array}{r} c \\ c \leadsto (d \vee e) \end{array} \right\} d \vee e \end{array} \right\} e \mbox{ and } \left. \begin{array}{r} b \\ b \leadsto \neg e \end{array} \right\} \neg e \] \[ \left. \begin{array}{r} \left. \begin{array}{r} b \\ b \leadsto \neg e \end{array} \right\} \neg e \\ \left. \begin{array}{r} c \\ c \leadsto (d \vee e) \end{array} \right\} d \vee e \end{array} \right\} d \mbox{ and } \left. \begin{array}{r} a \\ a \leadsto \neg d \end{array} \right\} \neg d \] \[ \left. \begin{array}{r} \left. \begin{array}{r} b \\ b \leadsto \neg e \end{array} \right\} \neg e \\ \left. \begin{array}{r} a \\ a \leadsto \neg d \end{array} \right\} \neg d \end{array} \right\} \neg d \wedge \neg e \} \neg(d \vee e) \mbox{ and } \left. \begin{array}{r} c \\ c \leadsto d \vee e \end{array} \right\} d \vee e \] Each of these three pairs of arguments supporting conflicting propositions uses exactly the same information to derive a conflict. We would therefore expect to have only one argument for the conflict instead of three. The heart of the problem is that the consequents of the three defeasible rules used in the above presented arguments are logically inconsistent. How these three rules are used to derive an inconsistency should not matter. The derivation of the inconsistency takes place through logically sound deductions, which cannot be subject to defeat. The line of reasoning that is followed using these logically sound deductions should not matter. Therefore, in the definition of an argument given in the next section, we abstract away from the actual logically sound deductions. For the same reason, we consider arguments for inconsistencies $\perp$, instead of arguments for the conflicting propositions. As mentioned above, in legal argumentation only the last rules for an inconsistency are considered in order to resolve the inconsistency. Using a preference relation, one of the last rules is identified as the culprit.
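To see concretely that all three derivations rest on the same inconsistent information, one can check by brute force that the consequents of the three rules, $\neg d$, $\neg e$ and $d \vee e$, are jointly unsatisfiable. A minimal sketch (Python; encoding formulas as Boolean functions is our illustration, not part of the formalism):

```python
from itertools import product

# The consequent of each defeasible rule, as a function of a truth
# assignment for the atoms d and e.
consequents = [
    lambda d, e: not d,      # a ~> ¬d
    lambda d, e: not e,      # b ~> ¬e
    lambda d, e: d or e,     # c ~> (d ∨ e)
]

def jointly_satisfiable(formulas):
    """Enumerate all truth assignments for d and e and test whether
    some assignment makes every formula true."""
    return any(all(f(d, e) for f in formulas)
               for d, e in product([True, False], repeat=2))

print(jointly_satisfiable(consequents))  # prints False
```

Any two of the three consequents are satisfiable together; only the full set is not, which is why the three pairs of arguments above all amount to a single argument for $\perp$.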
There are $\frac{1}{2} n (n-1)$ pairs of rules between which there can be a preference, where $n$ is the number of defeasible rules. If we use a preference on the arguments, ignoring the structure of the arguments, there can be at most $\frac{1}{2} 2^n (2^n-1)$ pairs of arguments between which there can be a preference. Clearly, it is easier to evaluate arguments using their last rules than using the whole argument. We will therefore investigate the following question. {\em Is there a need for considering more than only the last rules of an argument for an inconsistency}? The next section formalizes the arguments that can be constructed using defeasible rules. The arguments defined here represent not only the defeasible rules that are used, but also the line of reasoning. Section 3 discusses whether we can resolve an inconsistency just by defeating one of the last defeasible rules of the argument for the inconsistency. Section 4 investigates whether we can select the rule (or argument) to be defeated just by looking at the last rules of the argument for the inconsistency. Based on the results of Sections 3 and 4, Section 5 proposes a new argument system. What is essentially new is that inconsistencies are resolved by constructing an argument for the {\em undercutting} defeat of one of the defeasible rules of the argument for the inconsistency. Section 6 discusses how to compute an extension and presents a linear-time algorithm for doing so. Section 7 discusses closure properties of the argument system and the relation with default logic. Section 8 presents an extension of the presented argument system that enables reasoning by cases. Finally, Section 9 discusses related work and Section 10 concludes the paper. \section{The argument system} \label{argument-sys} We will derive arguments using a defeasible theory $\langle \Sigma, D \rangle$. Here, $\Sigma$ represents a set of premises and $D$ represents a set of defeasible rules.
The set of premises $\Sigma$ is a subset of the propositional logic $L$. $L$ is recursively defined from a set of atomic propositions $At$ and the operators $\neg$, $\wedge$ and $\vee$. For every defeasible rule $\varphi \leadsto \psi \in D$ it holds that $\varphi$ is a proposition in $L$ and that $\psi$ is either a proposition in $L$ or the negation of a defeasible rule in $D$; i.e.\ $\psi = \neg ( \alpha \leadsto \beta)$ and $\alpha \leadsto \beta \in D$. The negation of a defeasible rule $\neg ( \alpha \leadsto \beta)$ will be interpreted as: `$\alpha$ may no longer justify $\beta$'. So the negation of a rule explicitly blocks the conclusive force of the defeasible rule. It will be used to describe the {\em undercutting} defeat of a rule. If we have a valid argument for $\neg ( \alpha \leadsto \beta)$, then no argument containing the rule $\alpha \leadsto \beta$ can be valid. For example, we can undercut the rule: `something that looks red is red' by the rule: `something that stands below a red light need not be red if it looks red'. Notice the correspondence of the defeasible rules $\alpha \leadsto \beta$ and $\varphi \leadsto \neg(\alpha \leadsto \beta)$ with respectively the semi- and non-normal default rules $\frac{\alpha : \beta, \omega_1}{\beta}$ and $\frac{\varphi : \omega_2}{\neg \omega_1}$ where $\omega_1$ and $\omega_2$ summarize the exceptions on the default rules. Also notice the difference with Nute's (1988, 1994) defeater rule $\varphi \mathrel{?\kern-4pt\to} \neg \beta$. If we have a valid argument for $\varphi$, Nute's defeater rule $\varphi \mathrel{?\kern-4pt\to} \neg \beta$ defeats {\em any} argument containing a rule of the form $\alpha \leadsto \beta$. We can, however, use the defeater rule $\varphi \wedge \alpha \mathrel{?\kern-4pt\to} \neg \beta$ to describe $\varphi \leadsto \neg ( \alpha \leadsto \beta)$.
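The well-formedness condition on $D$ (a rule's consequent is either a proposition or the negation of a rule that itself belongs to $D$) can be sketched as a simple check; the tuple encoding and the example rule names are illustrative assumptions, not the paper's notation:

```python
# Sketch: a rule is a pair (antecedent, consequent); a consequent is
# either a propositional formula (here just a string) or a tagged pair
# ("neg-rule", r) undercutting another rule r, which must belong to D.

def well_formed(D):
    """Every rule whose consequent negates a rule must negate
    a rule that actually belongs to D."""
    for _antecedent, consequent in D:
        if isinstance(consequent, tuple) and consequent[0] == "neg-rule":
            if consequent[1] not in D:
                return False
    return True

# The red-light example: the second rule undercuts the first.
looks_red = ("looks_red", "red")                     # looks red ~> red
under_lamp = ("red_light", ("neg-rule", looks_red))  # undercut of looks_red
D = [looks_red, under_lamp]
print(well_formed(D))  # prints True
```

Dropping `looks_red` from `D` while keeping `under_lamp` would make the check fail, since the undercut would then target a rule outside the theory.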
In an argument system, a defeasible rule is viewed as a justification for believing the consequent of the rule whenever we have a justification for believing its antecedent (Toulmin 1958). A justification for believing the antecedent can consist of facts about the world, denoted as evidence or premises, and of propositions that are justified by other defeasible rules. So, we can construct a tree of defeasible rules that justifies the belief in some proposition with respect to some evidence. This tree is called an {\em argument} for the proposition. Logically sound deduction steps need not be represented explicitly in an argument. None of these deduction steps can be subject to defeat. Only the relations described by defeasible rules need not be valid in all circumstances. \begin{definition} \label{argument} Let $\langle \Sigma, D \rangle$ be a defeasible theory where $\Sigma$ is the set of premises and $D$ is the set of rules. Then an argument\footnote{We will sometimes add the index $\psi$ to an argument ($A_\psi$) to denote that it is an argument for $\psi$. Of course there can be more than one argument for $\psi$.} $A$ for a proposition $\psi$ is recursively defined in the following way: \begin{itemize} \item For each $\psi \in \Sigma$: $A = \{ \langle \emptyset, \psi \rangle\}$ is an argument for $\psi$. \item Let $A_1, ..., A_n $ be arguments for respectively $\varphi_1,...,\varphi_n$. If $\varphi_1,...,\varphi_n \vdash \psi$, then $A = A_1 \cup ... \cup A_n$ is an argument for $\psi$. \item For each $\varphi \leadsto \psi \in D$ if $A'$ is an argument for $\varphi$, then $A = \{ \langle A', \varphi \leadsto \psi \rangle \}$ is an argument for $\psi$. \end{itemize} Let $A = \{ \langle A'_1, \alpha_1 \rangle ,..., \langle A'_n, \alpha_n \rangle \}$.
Then: \[ \begin{array}{l} \vec{A} = \{ \alpha_1,...,\alpha_n \} \cap D ; \\ \hat{A} = \{ c(\alpha_1) ,..., c(\alpha_n) \} \mbox{ where $c(\alpha) = \alpha$ if $\alpha \in L$ and $c(\alpha \leadsto \beta) = \beta$} ; \\ \tilde{A} = \{ \alpha_i \mid 1 \leq i \leq n,\alpha_i \in D \} \cup \bigcup^n_{i=1} \tilde{A}'_i ; \\ \bar{A} = \{ \alpha_i \mid 1 \leq i \leq n,\alpha_i \in \Sigma \} \cup \bigcup^n_{i=1} \bar{A}'_i \end{array} \] \end{definition} \begin{example} Let $A = \{ \langle \emptyset, \alpha \rangle, \langle \{ \langle \{ \langle \emptyset, \beta \rangle \}, \beta \leadsto \gamma \rangle \}, \gamma \leadsto \delta \rangle \}$ be an argument for $\varphi$. \[ \left. \begin{array}{r} \alpha \\ \beta \vdash \beta \leadsto \gamma \vdash \gamma \leadsto \delta \end{array} \right | \hspace{-5pt}- \varphi \] Then $\vec{A} = \{ \gamma \leadsto \delta \}$ denotes the last rules used in the argument $A$. Furthermore, $\hat{A} = \{ \alpha, \delta \}$ denotes the propositions that represent the beliefs $Th(\{ \alpha, \delta \})$ supported by the argument $A$. Clearly, $A$ is an argument for every proposition $\varphi \in Th(\{ \alpha, \delta \})$. $\tilde{A} = \{ \gamma \leadsto \delta, \beta \leadsto \gamma \}$ denotes the set of all rules in $A$, and $\bar{A} = \{ \alpha, \beta \}$ denotes the premises used in the argument $A$. \end{example} In the above definition of an argument, we do not apply the contraposition of a defeasible rule in the construction of an argument. In general, the contraposition of a defeasible rule is invalid. A rule describes that its consequent should hold or probably holds in the context described by its antecedent. By no means does this imply that the antecedent does not hold if the consequent does not hold. If the defeasible rule is interpreted as describing a preference, the negation of the consequent does not imply that the negation of the antecedent should hold. A rule describes what should hold in the context described by its antecedent.
The converse need not hold. So, knowing that John may not drive a car, we may not conclude that he does not own a driving license. It may just be the case that we have an exceptional situation, e.g.\ John is drunk, John has collected too many speeding tickets, or John may not drive a car on doctor's orders. Especially if most people own a driving license, an exceptional situation need not be unlikely. Also if the defeasible rule is interpreted as describing a conditional probability, $Pr(\psi \mid \varphi) >t$ does not imply that $Pr(\neg \varphi \mid \neg \psi) >t$. In fact, if $Pr(\psi \mid \varphi) < 1$, $Pr(\neg \varphi \mid \neg \psi)$ can have any value in the interval $[0,1]$. Only if we also know the a priori probabilities $Pr(\varphi)$ and $Pr(\psi)$ can we verify whether $Pr(\neg \varphi \mid \neg \psi) >t$ holds. {\em Causal rules} are a special kind of defeasible rules that do possess a contraposition (Geffner 1994). If `{\em normally}, $\varphi$ {\bf causes} $\psi$', then $\neg \psi$ implies $\neg \varphi$, unless we have an exceptional situation. Such a rule can be described by a conditional probability, as is done in Bayesian Belief Networks. This description is incomplete unless we know or can calculate the a priori probabilities of the antecedent and the consequent. Bayesian Belief Networks guarantee the latter. Here, however, we do not have this information. Therefore, to guarantee that the contraposition is applied correctly, we need a specialized approach. Geffner (1994) discusses the properties of such an approach. In the remainder of this paper, however, we will not consider causal rules. Two arguments can be related to each other. The relation that is of interest for us is whether one argument uses the same inference steps as another argument. If so, the former is called a sub-argument of the latter. Though an argument can be viewed as a tree, a sub-argument is not exactly a sub-tree.
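The operators $\vec{A}$, $\hat{A}$, $\tilde{A}$ and $\bar{A}$ from the definition of an argument can be prototyped directly on the nested-pair representation. The following sketch uses an encoding of our own (strings for propositions, tagged tuples for items) and reproduces the example argument for $\varphi$:

```python
# An argument is a list of pairs (subargument, item); an item is
# ("prem", p) for a premise p, or ("rule", ant, cons) for ant ~> cons.

def last_rules(A):            # \vec{A}: the top-level (last) rules
    return {item for _, item in A if item[0] == "rule"}

def supported(A):             # \hat{A}: premise itself, or rule consequent
    return {item[1] if item[0] == "prem" else item[2] for _, item in A}

def all_rules(A):             # \tilde{A}: every rule used, recursively
    rules = set()
    for sub, item in A:
        if item[0] == "rule":
            rules.add(item)
        rules |= all_rules(sub)
    return rules

def premises(A):              # \bar{A}: every premise used, recursively
    prems = set()
    for sub, item in A:
        if item[0] == "prem":
            prems.add(item[1])
        prems |= premises(sub)
    return prems

# The example argument { <∅,α>, <{<{<∅,β>}, β~>γ>}, γ~>δ> }:
r_bg = ("rule", "beta", "gamma")
r_gd = ("rule", "gamma", "delta")
A = [([], ("prem", "alpha")),
     ([([([], ("prem", "beta"))], r_bg)], r_gd)]

print(last_rules(A))   # {r_gd} only
print(supported(A))    # {"alpha", "delta"}
print(all_rules(A))    # {r_bg, r_gd}
print(premises(A))     # {"alpha", "beta"}
```

The four outputs match the example values $\vec{A} = \{ \gamma \leadsto \delta \}$, $\hat{A} = \{ \alpha, \delta \}$, $\tilde{A} = \{ \gamma \leadsto \delta, \beta \leadsto \gamma \}$ and $\bar{A} = \{ \alpha, \beta \}$.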
\begin{definition} An argument $A$ is a sub-argument of $B$, $A \leq B$, if and only if every $\langle A', \alpha \rangle \in A$ is a sub-structure of the argument $B$.\footnote{Notice that we reach the base of the recursion if $A$ is an empty set. If $A$ is an empty set, it is trivial that every $\langle A', \alpha \rangle \in A$ is a sub-structure of the argument $B$.} $\langle A', \alpha \rangle$ is a sub-structure of an argument $B$ if and only if \begin{itemize} \item either there exists a $\langle B', \alpha \rangle \in B$ such that $A'$ is a sub-argument of $B'$; \item or there exists a $\langle B', \beta \rangle \in B$ such that $\langle A', \alpha \rangle$ is a sub-structure of $B'$. \end{itemize} \end{definition} \begin{example} Let $A = \{ \langle \emptyset, \alpha \rangle, \langle \{ \langle \{ \langle \emptyset, \beta \rangle \}, \beta \leadsto \gamma \rangle \}, \gamma \leadsto \delta \rangle \}$ be an argument for $\varphi$. \[ \left. \begin{array}{r} \alpha \\ \beta \vdash \beta \leadsto \gamma \vdash \gamma \leadsto \delta \end{array} \right | \hspace{-5pt}- \varphi \] Then \[ A_1 = \{ \langle \emptyset, \alpha \rangle, \langle \{ \langle \{ \langle \emptyset, \beta \rangle \}, \beta \leadsto \gamma \rangle \}, \gamma \leadsto \delta \rangle \} \] \[ \left. \begin{array}{r} \alpha \\ \beta \vdash \beta \leadsto \gamma \vdash \gamma \leadsto \delta \end{array} \right | \] \[ A_2 = \{ \langle \emptyset, \alpha \rangle, \langle \{ \langle \emptyset, \beta \rangle \}, \beta \leadsto \gamma \rangle \} \] \[ \left. \begin{array}{r} \alpha \\ \beta \vdash \beta \leadsto \gamma \end{array} \right | \] \[ A_3 = \{ \langle \emptyset, \alpha \rangle, \langle \emptyset, \beta \rangle \} \] \[ \left.
\begin{array}{r} \alpha \\ \beta \end{array} \right | \] \[ A_4 = \{ \langle \emptyset, \alpha \rangle \} \] \[ \alpha \mid \] \[ A_5 = \{ \langle \{ \langle \{ \langle \emptyset, \beta \rangle \}, \beta \leadsto \gamma \rangle \}, \gamma \leadsto \delta \rangle \} \] \[ \beta \vdash \beta \leadsto \gamma \vdash \gamma \leadsto \delta \mid \] \[ A_6 = \{ \langle \{ \langle \emptyset, \beta \rangle \}, \beta \leadsto \gamma \rangle \} \] \[ \beta \vdash \beta \leadsto \gamma \mid \] \[ A_7 = \{ \langle \emptyset, \beta \rangle \} \] \[ \beta \mid \] are sub-arguments of $A$. \end{example} An argument represents a derivation tree of defeasible rules. Since a rule in an argument $A$ gives a justification for its consequent, the argument can be viewed as a global justification for a proposition $\varphi$, $\hat{A} \vdash \varphi$, that is grounded in the premises $\bar{A}$. Whether an argument is valid depends on whether the argument or one of its sub-arguments is defeated. When an argument $A$ for some proposition $\varphi$ is valid we say that $\varphi$ follows from the premises $\bar{A}$ using the rules $\tilde{A}$. \section{Defeating a last rule of an argument} \label{conflicts} A defeasible rule $\varphi \leadsto \psi$ describes either a preferred or a probabilistic relation. Therefore, there may exist situations in which the relation it represents is invalid. In these exceptional situations, either $\neg \psi$ must hold or both $\psi$ and $\neg \psi$ must be unknown. Since an argument is basically a tree constructed using defeasible rules, an argument containing a rule that is not valid in the current context cannot be valid either. There are two reasons for an argument to become invalid. Either the argument contains a rule $\alpha \leadsto \beta$ while we have a valid argument for $\neg(\alpha \leadsto \beta)$, or the argument is a sub-argument of an argument for an inconsistency.
In the latter situation the question is: which sub-argument(s) of the argument for an inconsistency can no longer be valid? In the discussion of this question, we will use the term {\em disagreeing arguments}, which is defined in the following way. \begin{definition} Let $A_\perp = \{ \langle A'_1, \mu_1 \rangle,...,\langle A'_n, \mu_n \rangle \}$ be an argument for an inconsistency ($\hat{A}_\perp \vdash \perp$). Then, the arguments $A_1 = \{ \langle A'_1, \mu_1 \rangle \},..., A_n = \{ \langle A'_n, \mu_n \rangle \}$ are said to {\em disagree}. \end{definition} Clearly, in order to restore consistency, some of the disagreeing arguments can no longer be valid. These arguments are said to be defeated because of the other arguments. It is also clear that it is sufficient to defeat only one of the disagreeing arguments in order to restore consistency if the argument for the inconsistency is a (subset) minimal argument. Without loss of generality, we may assume that the argument for the inconsistency is a minimal argument. Resolving inconsistencies using the minimal arguments will also resolve inconsistencies based on non-minimal arguments. We can therefore reformulate the above raised question. Is it sufficient to defeat a disagreeing argument, but no proper sub-argument of this disagreeing argument, to resolve an inconsistency? We will see that a set of defeasible rules can always be extended such that indeed no proper sub-argument of a disagreeing argument needs to be defeated. Let \[ A_1 = \{ \langle A'_1, \mu_1 \rangle \},..., A_n = \{ \langle A'_n, \mu_n \rangle \} \] be a set of disagreeing arguments; i.e.\ $A =\bigcup^n_{i=1} A_i$ is an argument for $\perp$. Suppose that some proper sub-argument $A_\varphi = \{ \langle A', \alpha \leadsto \varphi \rangle \}$ of the disagreeing argument $A_k = \{ \langle A'_k, \mu_k \rangle \}$ is defeated because of the inconsistency and that no proper sub-argument of $A_\varphi$ is defeated.
Then, $\bigcup_{i=1}^n \bar{A}_i$ represents an exceptional situation in which either $\neg \varphi$ holds or $\varphi$ is unknown. Suppose that $\neg \varphi$ holds. We cannot use the contraposition of the rules to derive $\neg \varphi$. We can, however, introduce rules that enable us to construct an argument $A_{\neg \varphi}$ for $\neg \varphi$ such that $\bar{A}_{\neg \varphi} \subseteq \bigcup_{i=1}^n \bar{A}_i$. In that case $A_\varphi$ is a disagreeing argument in another inconsistency. This inconsistency can be used to defeat $A_\varphi$. Hence, there is no need for defeating $A_\varphi$ because of the argument $A$ for $\perp$. For example, let \[ A_\perp = \{ \langle \{ \langle \{ \langle \emptyset, \alpha \rangle \}, \alpha \leadsto \varphi \rangle \}, \varphi \leadsto \eta \rangle, \langle \{ \langle \{ \langle \emptyset, \beta \rangle \}, \beta \leadsto \psi \rangle \}, \psi \leadsto \neg \eta \rangle \} \] \[ \left. \begin{array}{r} \alpha \vdash \alpha \leadsto \varphi \vdash \varphi \leadsto \eta \\ \beta \vdash \beta \leadsto \psi \vdash \psi \leadsto \neg \eta \end{array} \right | \hspace{-4pt} - \perp \] be an argument for $\perp$. Then we can defeat $\alpha \leadsto \varphi$ by introducing the rule $\alpha \wedge \beta \leadsto \neg \varphi$. Now suppose that $\varphi$ is unknown. We cannot introduce rules that enable us to construct an argument $A_{\neg\varphi}$ since $\varphi$ is unknown. We can, however, introduce rules that enable us to construct an argument $A_{\neg (\alpha \leadsto \varphi)} = \{ \langle A', \xi \leadsto \neg (\alpha \leadsto \varphi) \rangle \}$ for $\neg (\alpha \leadsto \varphi)$ such that $\bar{A}_{\neg (\alpha \leadsto \varphi)} \subseteq \bigcup_{i=1}^n \bar{A}_i$. Since $A_\varphi = \{ \langle A', \alpha \leadsto \varphi \rangle \}$ is defeated if $A_{\neg (\alpha \leadsto \varphi)}$ is valid, there is no need for defeating $A_\varphi$ because of the argument $A$ for $\perp$.
For example, let \[ A_\perp = \{ \langle \{ \langle \{ \langle \emptyset, \alpha \rangle \}, \alpha \leadsto \varphi \rangle \}, \varphi \leadsto \eta \rangle, \langle \{ \langle \{ \langle \emptyset, \beta \rangle \}, \beta \leadsto \psi \rangle \}, \psi \leadsto \neg \eta \rangle \} \] \[ \left. \begin{array}{r} \alpha \vdash \alpha \leadsto \varphi \vdash \varphi \leadsto \eta \\ \beta \vdash \beta \leadsto \psi \vdash \psi \leadsto \neg \eta \end{array} \right | \hspace{-4pt} - \perp \] be an argument for $\perp$. Then we can defeat $\alpha \leadsto \varphi$ by introducing the rule $\alpha \wedge \beta \leadsto \neg (\alpha \leadsto \varphi)$. Hence, we can avoid the need for defeating a proper sub-argument of a disagreeing argument, if necessary by introducing additional rules. Since a disagreeing argument has a unique last rule, defeating a disagreeing argument implies defeating its {\em last rule}. Hence, it suffices to defeat one of the last rules $\vec{A}$ of an argument $A$ for an inconsistency. \section{A preference relation on rules} \label{evaluate} In the previous section we have seen that no proper sub-argument of one of the disagreeing arguments needs to be subject to defeat. This makes it possible to defeat a disagreeing argument by defeating its {\em last rule}. We will now investigate whether we can determine the rule to be defeated by considering only the last rule of each of the disagreeing arguments; i.e.\ the last rules of the argument for the inconsistency. Defeating the last rule of one of the disagreeing arguments in case of an inconsistency offers three advantages. Firstly, we no longer have to consider a defeat relation between arguments as is done in (Pollock 1987, 1994; Simari \& Loui 1992; Vreeswijk 1997). This significantly simplifies the preference relation that we must consider.
If we use a preference relation on rules, then there are $\frac{1}{2} n (n-1)$ pairs of rules between which there can be a preference, where $n$ is the number of defeasible rules. If we use a preference on the arguments, ignoring the structure of the arguments, there can be $\frac{1}{2} 2^n (2^n-1)$ pairs of arguments between which there can be a preference. Secondly, an argument loses its conclusive force (is defeated) if it contains defeated rules. This simplifies the handling of arguments. And, as we will see in Section \ref{algor}, it enables us to determine a set of valid arguments in linear time. Thirdly, the resolution of inconsistencies will be cumulative. It does not matter whether the antecedent of a last rule is an observed fact or derived through reasoning. This is an important property since an observed fact may be based on some hidden reasoning of which we are not aware. To show that there is no need for a preference relation on arguments, we will show that a dependence on sub-arguments can be removed by reformulating the set of rules. Suppose that we have two different arguments for an inconsistency where both arguments have the same set of last rules. \[ A_\perp = \{ \langle \{ \langle \{ \langle \emptyset, \alpha \rangle \}, \alpha \leadsto \varphi \rangle \}, \varphi \leadsto \eta \rangle, \langle \{ \langle \{ \langle \emptyset, \beta \rangle \}, \beta \leadsto \psi \rangle \}, \psi \leadsto \neg \eta \rangle \} \] \[ \left. \begin{array}{r} \alpha \vdash \alpha \leadsto \varphi \vdash \varphi \leadsto \eta \\ \beta \vdash \beta \leadsto \psi \vdash \psi \leadsto \neg \eta \end{array} \right | \hspace{-4pt} - \perp \] and \[ A'_\perp = \{ \langle \{ \langle \emptyset, \varphi \rangle \}, \varphi \leadsto \eta \rangle, \langle \{ \langle \emptyset, \psi \rangle \}, \psi \leadsto \neg \eta \rangle \}. \] \[ \left.
\begin{array}{r} \varphi \vdash \varphi \leadsto \eta \\ \psi \vdash \psi \leadsto \neg \eta \end{array} \right | \hspace{-4pt} - \perp \] Also suppose that $\varphi \leadsto \eta$ must be defeated given $A_\perp$ and $\psi \leadsto \neg \eta$ must be defeated given $A'_\perp$. In the former case, the situation described by $\alpha$ and $\beta$ represents an exception on the rule $\varphi \leadsto \eta$. We can, for example, describe this exception by introducing the rule $\alpha \wedge \beta \leadsto \neg \eta$ with preference $\alpha \wedge \beta \leadsto \neg \eta \succ \varphi \leadsto \eta$, or the rule $\alpha \wedge \beta \leadsto \neg (\varphi \leadsto \eta)$. Since each of these rules defeats $\varphi \leadsto \eta$, $\varphi \leadsto \eta$ can no longer defeat $\psi \leadsto \neg \eta$. Hence, we only have to consider the last rules of an argument for an inconsistency. Another possibly problematic situation arises when a set of arguments supporting a proposition is stronger than each individual argument. This is known as {\em accrual of reasons}. Such a situation suggests that we need to consider preferences between sets of rules. We can, however, handle such situations by using a rule that combines the last rules of each argument for that proposition. To illustrate this, suppose that we have the following defeasible rules: $\alpha \leadsto \psi$, $\beta \leadsto \psi$ and $\gamma \leadsto \neg \psi$. Let the last rule be preferred to the first two rules. Then $\neg \psi$ must hold if $\gamma$ and either $\alpha$ or $\beta$ hold. By introducing a rule $\alpha \wedge \beta \leadsto \psi$ and by preferring it to $\gamma \leadsto \neg \psi$, we can ensure that $\psi$ holds whenever $\alpha$, $\beta$ and $\gamma$ hold. Another problem arises when a set of arguments for a proposition weakens the support for the proposition. The approach presented here offers no solution for such situations.
Fortunately, a set of arguments that weakens the support for a proposition seems to be counter-intuitive. A final motivation for using a preference relation on rules comes from Prakken's (1993) investigation of legal argumentation. He points out that in legal argumentation, meta rules, such as `lex superior' and `lex posterior', are used to resolve the inconsistency. These meta rules define a preference relation on legal norms (the defeasible rules). When arguments disagree, the meta rules are applied to the last rules of the disagreeing arguments in order to determine the argument to be defeated. Prakken illustrates this with legal examples. Notice that meta rules can themselves be subject to defeat in situations where they specify incompatible relations between rules (Brewka 1994). From the above discussion, we can draw the following conclusion. \begin{quote} Let $\langle \Sigma, D \rangle$ be a defeasible theory and let $\succ$ be a partial preference relation on $D$. Furthermore, let \[ A_\perp = \{ \langle A'_1, \eta_1 \leadsto \psi_1 \rangle ,\ldots, \langle A'_k, \eta_k \leadsto \psi_k \rangle, \langle \emptyset, \sigma_{1} \rangle ,\ldots, \langle \emptyset, \sigma_{j} \rangle \} \] be an argument for an inconsistency. So, \[ \begin{array}{l} A_1 = \{ \langle A'_1, \eta_1 \leadsto \psi_1 \rangle \} ,...,A_k = \{ \langle A'_k, \eta_k \leadsto \psi_k \rangle \}, \\ A_{k+1} = \{ \langle \emptyset, \sigma_{1} \rangle \},\ldots, A_{n} = \{ \langle \emptyset, \sigma_{j} \rangle \} \end{array} \] are disagreeing arguments. Then, if $\eta_i \leadsto \psi_i$ is the least preferred last rule in $\vec{A}_\perp$, $\eta_i \leadsto \psi_i$ must be defeated. \end{quote} Since we are using a preference relation on the set of defeasible rules in order to resolve conflicts, we should extend the definition of a defeasible theory $\langle \Sigma, D \rangle$ with the preference relation $\succ$, i.e.\ $\langle \Sigma, D, \succ \rangle$.
Certainly, to describe legal argumentation, this extension is necessary. If we restrict ourselves to one specific preference relation, namely {\em specificity}, there is no need to extend the definition of a defeasible theory. The {\em specificity} preference relation can be derived from the set of defeasible rules of a defeasible theory.\footnote{Since the set of rules $D$ is usually considered as background knowledge, we can determine the specificity preference relation in advance.} Specificity is the principle by which rules applying to a more specific situation override those applying to more general ones. The most specific situation to which a rule can be applied is the situation in which only its antecedent is known to hold. In that situation, its consequent must hold. The following preference relation is based on this observation. \begin{definition} \label{spec} Let $K \subseteq L$ be some general background knowledge, let $D$ be a set of defeasible rules, and let $\varphi \leadsto \psi, \eta \leadsto \mu$ be two rules in $D$. $\varphi \leadsto \psi$ is more specific than $\eta \leadsto \mu$ if and only if, given the premises $\{ \varphi \}$, there is an argument $A_\eta$ for $\eta$ such that $\bar{A}_{\eta} \subseteq \{ \varphi \} \cup K$. $\varphi \leadsto \psi$ is strictly more specific than $\eta \leadsto \mu$, $\varphi \leadsto \psi \succ_{\rm spec} \eta \leadsto \mu$, if and only if $\varphi \leadsto \psi$ is more specific than $\eta \leadsto \mu$ and $\eta \leadsto \mu$ is not more specific than $\varphi \leadsto \psi$. \end{definition} \begin{example} Let $\varphi \leadsto \psi$, $\varphi \leadsto \eta$ and $\eta \leadsto \neg \psi$ be three defeasible rules. Given the premises $\{ \varphi \}$, we can derive the argument $A_\eta = \{ \langle \{ \langle \emptyset, \varphi \rangle \}, \varphi \leadsto \eta \rangle \}$.
Since $\bar{A}_\eta \subseteq \{ \varphi \} \cup K$, $\varphi \leadsto \psi$ is more specific than $\eta \leadsto \neg \psi$. Furthermore, since, given the premises $\{ \eta \}$, there is no argument for $\varphi$, $\varphi \leadsto \psi$ is strictly more specific than $\eta \leadsto \neg \psi$. \end{example} The specificity preference relation defined above corresponds with the definition of specificity implied by the axioms of conditional logics (Geffner \& Pearl 1992). This definition of specificity is relatively weak. Vreeswijk (1991) presents an example showing that a slightly stronger definition can result in counter-intuitive conclusions. \section{The belief set} \label{belief-set} An inconsistency can be resolved by considering the last rules of the argument for the inconsistency. This implies that when the inconsistency is resolved, one of these last rules no longer justifies the belief in its consequent; i.e.\ the rule is defeated. For this rule we can construct an argument supporting the {\em undercutting} defeat of this rule. \begin{definition} \label{defeat-arg} Let $A_\perp$ be an argument for an inconsistency and let $\varphi \leadsto \psi \in min_\succ(\vec{A}_\perp)$ be a least preferred last rule for the inconsistency. If $\langle A_\varphi, \varphi \leadsto \psi \rangle \in A_\perp$, then $A_{\neg (\varphi \leadsto \psi)} = (A_\perp \backslash \{ \langle A_\varphi, \varphi \leadsto \psi \rangle \} ) \cup A_\varphi$ is an argument for the defeat of $\varphi \leadsto \psi$. \end{definition} \begin{example} Let \[ A_\perp= \{ \langle \{ \langle \{ \langle \emptyset, \alpha \rangle \}, \alpha \leadsto \varphi \rangle \}, \varphi \leadsto \psi \rangle, \langle \{ \langle \emptyset, \eta \rangle \}, \eta \leadsto \mu \rangle \} \] \[ \left.
\begin{array}{r} \alpha \vdash \alpha \leadsto \varphi \vdash \varphi \leadsto \psi \\ \eta \vdash \eta \leadsto \mu \end{array} \right | \hspace{-4pt} - \perp \] If $\eta \leadsto \mu$ is preferred to $\varphi \leadsto \psi$, then \[ A_{\neg (\varphi \leadsto \psi)} = \{ \langle \{ \langle \emptyset, \alpha \rangle \}, \alpha \leadsto \varphi \rangle, \langle \{ \langle \emptyset, \eta \rangle \}, \eta \leadsto \mu \rangle \} \] \[ \left. \begin{array}{r} \alpha \vdash \alpha \leadsto \varphi \\ \eta \vdash \eta \leadsto \mu \end{array} \right | \hspace{-4pt} \circ \neg (\varphi \leadsto \psi) \] We use the symbol $\mid \hspace{-5pt} \circ$ to denote that $\neg (\varphi \leadsto \psi)$ does not deductively follow from $\varphi$ and $\mu$. Rather, $\neg (\varphi \leadsto \psi)$ ``follows'' from $\varphi \leadsto \psi$, $A_\varphi$, $A_\mu$ and $\succ$. \end{example} Given these arguments for the defeat of rules, we can define an {\em extension}. Here an extension is a set of propositions for which we have valid arguments. A valid argument is an argument whose rules are not defeated. This also holds for the arguments for the defeat of rules. A rule is defeated if the argument for its defeat is valid; i.e.\ the argument does not contain defeated rules. \begin{definition} \label{defeat} Let $\cal A$ be the set of all derivable arguments, let $\Gamma$ be a set of defeasible rules and let \[ \textsl{Defeat}(\Gamma) = \{ \alpha \leadsto \beta \mid A_{\neg (\alpha \leadsto \beta)} \in {\cal A}, \tilde{A}_{\neg (\alpha \leadsto \beta)} \cap \Gamma = \emptyset \}. \] Then the set of defeated rules $\Omega$ is defined as: \[ \Omega = \textsl{Defeat}(\Omega). \] \end{definition} \begin{proposition} \label{unique} The sets of defeated rules are incomparable. I.e.\ for each $\Lambda \not= \Omega$ such that $\Lambda = \textsl{Defeat}(\Lambda)$, neither $\Lambda \subset \Omega$ nor $\Lambda \supset \Omega$ holds. \end{proposition} Proofs can be found in Appendix A.
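For small theories, the fixed points of \textsl{Defeat} can be found by plain enumeration. The sketch below is an illustration, not the algorithm of Appendix A; the rule names and the encoding of each defeating argument by the set of rules it uses are hypothetical, modeled on the example above in which $\eta \leadsto \mu$ is preferred to $\varphi \leadsto \psi$.

```python
from itertools import combinations

# Hypothetical encoding of the example above: for each rule, the arguments
# for its defeat, each represented by the set of rules that argument uses.
# Since eta ~> mu is preferred to phi ~> psi, only phi ~> psi has a defeater.
defeaters = {
    'phi~>psi':   [{'alpha~>phi', 'eta~>mu'}],   # A_{~(phi ~> psi)}
    'alpha~>phi': [],
    'eta~>mu':    [],
}

def defeat(gamma):
    """Defeat(Gamma): rules with a defeating argument that uses no rule in Gamma."""
    return {r for r, args in defeaters.items()
            if any(not (a & gamma) for a in args)}

# A set of defeated rules is a fixed point: Omega = Defeat(Omega).
rules = sorted(defeaters)
fixed_points = [set(s) for k in range(len(rules) + 1)
                for s in combinations(rules, k)
                if defeat(set(s)) == set(s)]
print(fixed_points)   # [{'phi~>psi'}]: the only set of defeated rules
```

Running the same enumeration on a theory with two mutually defeating rules yields two incomparable fixed points, in line with Proposition \ref{unique}.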
Notice that the set of defeated rules need not be unique. Even if every inconsistency has a unique least preferred last rule, the set of defeated rules need not be unique. Consider for example the facts $\alpha$ and $\beta$ and the rules $\alpha \leadsto \gamma$, $\beta \leadsto \delta$, $\gamma \leadsto \neg \delta$ and $\delta \leadsto \neg \gamma$, where the last two rules are preferred to the first two. Here there are two sets of defeated rules $\Omega$: $\{ \alpha \leadsto \gamma \}$ and $\{ \beta \leadsto \delta \}$. Given the sets of defeated rules, the extensions and the belief set can be defined. An extension consists of all propositions for which we have a valid argument. Following Pollock (1987), these propositions are said to be {\em warranted}.\footnote{Some proposals made in the literature do not consider multiple extensions. Instead, they consider provisionally defeated arguments. These are arguments that are valid in some extensions but not in all extensions. For a discussion see (Vreeswijk 1997; Prakken \& Vreeswijk 1999).} \begin{definition} \label{argextend} Let $\Omega$ be a set of defeated rules and let $\cal A$ be the set of all derivable arguments. Then an extension $\cal E$ is defined as: \[ {\cal E} = \{ \varphi \mid A_\varphi \in {\cal A}, \tilde{A}_\varphi \cap \Omega = \emptyset \}. \] \end{definition} The belief set contains the propositions that hold in every extension. This is the skeptical view in which the belief set consists of every proposition for which we have a valid argument in every extension. \begin{definition} Let $\langle \Sigma, D, \succ \rangle$ be a defeasible theory. Furthermore, let ${\cal E}_1, ..., {\cal E}_n$ be the corresponding extensions. Then the belief set $B$ is defined as: \[ B = \bigcap^n_{i=1} {\cal E}_i . \] \end{definition} It is possible to have a set of arguments for which no extension exists. Such a situation can arise when the set of arguments contains self-defeating arguments.
In its most simple form, self-defeat concerns a single argument whose own last rule occurs in the argument for that rule's defeat: $\alpha \leadsto \beta \in \tilde{A}_{\neg (\alpha \leadsto \beta)}$.\footnote{Prakken \& Vreeswijk (1999) present an instance of the liar's paradox as an example of self-defeat. With a different formulation of the example, however, we can solve the paradox by using defeasible rules without introducing self-defeat.} Since the set of defeasible rules belongs to the background knowledge, self-defeat seems to indicate a defect in our knowledge. Hence, a revision of the set of rules $D$ is needed; i.e.\ some rules must be removed or reformulated. Though this is an important topic, it does not help us much in practical situations. Hence, we need a way to draw useful conclusions even if self-defeat occurs. Pollock (1994) introduces partial status assignments to deal with this problem. Here, we can do something similar. Firstly, we reformulate Definition \ref{defeat} in terms of a status assignment. \begin{quote} A status assignment is an assignment of {\em defeated} and {\em undefeated} to rules in $D$ based on the following condition. A rule $\varphi \leadsto \psi \in D$ is assigned {\em defeated} if and only if there is an argument $A_{\neg (\varphi \leadsto \psi)}$ such that every $\alpha \leadsto \beta \in \tilde{A}_{\neg (\varphi \leadsto \psi)}$ is assigned {\em undefeated}. Otherwise, the rule $\varphi \leadsto \psi \in D$ is assigned {\em undefeated}. \end{quote} $\Omega$ is the set of rules that are assigned the status {\em defeated}. \begin{proposition} A set of rules $\Omega$ is a fixed point of \textsl{Defeat} if and only if there is a status assignment such that $\Omega$ is the set of rules that are assigned the status {\em defeated}. \end{proposition} To deal with self-defeat, following Pollock (1994), we can use a partial status assignment.
\begin{quote} A {\em partial} status assignment is an assignment of {\em defeated} and {\em undefeated} to a largest subset of the rules in $D$ based on the following conditions. \begin{itemize} \item A rule $\varphi \leadsto \psi \in D$ is assigned {\em defeated} if and only if there is an argument $A_{\neg (\varphi \leadsto \psi)}$ such that every $\alpha \leadsto \beta \in \tilde{A}_{\neg (\varphi \leadsto \psi)}$ is assigned {\em undefeated}. \item A rule $\varphi \leadsto \psi \in D$ is assigned {\em undefeated} if and only if for every argument $A_{\neg (\varphi \leadsto \psi)}$ there is a rule $\alpha \leadsto \beta \in \tilde{A}_{\neg (\varphi \leadsto \psi)}$ that is assigned {\em defeated}. \end{itemize} A rule $\varphi \leadsto \psi \in D$ that is assigned neither {\em defeated} nor {\em undefeated} is denoted as being {\em undetermined}. \end{quote} Since we should only consider conclusions based on arguments containing {\em undefeated} rules, $\Omega$ is the set of rules that are {\em not} assigned the status {\em undefeated}. In the remainder of this paper, with the exception of the next section, we will assume that status assignments are complete. \section{Determination of the fixed point of \textsl{Defeat}} \label{algor} The determination of the fixed points of \textsl{Defeat} can be viewed as a labeling problem of a JTMS (Doyle 1979). A JTMS consists of nodes representing propositions, and of justifications. A node is either labeled {\sc in} or {\sc out}, which corresponds with `is believed' and `is not believed' respectively. A justification is a triple consisting of a set of {\em in}-nodes, a set of {\em out}-nodes and a consequent node. The consequent node is labeled {\sc in} if all {\em in}-nodes are labeled {\sc in} and no {\em out}-node is labeled {\sc in}. Such a JTMS must contain a node $N$ for every proposition of the form $\neg (\alpha \leadsto \beta)$ for which we have an argument in $\cal A$.
Furthermore, for each node $N_{\neg (\alpha \leadsto \beta)}$ representing $\neg (\alpha \leadsto \beta)$ and for each argument in $\cal A$ supporting $\neg (\alpha \leadsto \beta)$, the JTMS contains a justification $\langle \mbox{{\em in}-nodes}, \mbox{{\em out}-nodes}, N_{\neg (\alpha \leadsto \beta)} \rangle$. Such a justification consists of an {\em empty set} of {\em in}-nodes and a set of {\em out}-nodes. If $A$ is an argument for $\neg (\alpha \leadsto \beta)$, then \[ \langle \emptyset, \{ N_{\neg(\varphi \leadsto \psi)} \mid \varphi \leadsto \psi \in \tilde{A} \}, N_{\neg (\alpha \leadsto \beta)} \rangle \] is a justification for $N_{\neg (\alpha \leadsto \beta)}$. It is not difficult to verify that each valid labeling of the nodes corresponds one to one with a status assignment to the corresponding rules. A rule is assigned the status {\em defeated} if and only if the corresponding node of the JTMS is labeled {\sc in}. Much research has been done on algorithms for labeling nodes in a JTMS network (Doyle 1979; Goodwin 1987; Reinfrank 1989; Witteveen \& Brewka 1993). Some of these algorithms also deal with situations in which no admissible labeling exists (Witteveen \& Brewka 1993). Partial labelings have been proposed for these situations. When no admissible labeling exists, the set of arguments $\cal A$ contains self-defeating arguments. In its most simple form, self-defeat is related to one argument with $\alpha \leadsto \beta \in \tilde{A}_{\neg (\alpha \leadsto \beta)}$. In general, self-defeat is represented by {\em odd loops} in the corresponding JTMS. Odd loops in the network can be determined in linear time with respect to $n \cdot d$, where $n$ is the number of nodes and $d$ is the maximum number of outgoing justifications of any node. After detecting an odd loop we can mark the nodes involved as {\sc undetermined}, as well as the nodes that necessarily depend on nodes in an odd loop.
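The labeling this describes can be sketched as follows. The sketch is a simplified, well-founded propagation rather than the linear-time algorithm of Appendix B: it assigns only the labels that are forced, and marks every remaining node (those on, or depending on, loops) as {\sc undetermined}. The dictionary encoding, in which each justification is reduced to its set of {\em out}-nodes (the {\em in}-node sets being empty here), and the node names are assumptions of the sketch.

```python
def label(justifications):
    """Label a JTMS whose justifications all have empty in-node sets.
    justifications: node -> list of out-node sets (one set per justification).
    Returns node -> 'IN' | 'OUT' | 'UNDETERMINED'."""
    status = {}
    nodes = set(justifications)
    changed = True
    while changed:
        changed = False
        for n in nodes - set(status):
            outs = justifications[n]
            # IN: some justification has all of its out-nodes labeled OUT.
            if any(all(status.get(m) == 'OUT' for m in j) for j in outs):
                status[n] = 'IN'
                changed = True
            # OUT: every justification has some out-node labeled IN
            # (a node without justifications is therefore OUT).
            elif all(any(status.get(m) == 'IN' for m in j) for j in outs):
                status[n] = 'OUT'
                changed = True
    # Nodes left unlabeled lie on, or depend on, loops: mark them UNDETERMINED.
    for n in nodes - set(status):
        status[n] = 'UNDETERMINED'
    return status

print(label({'a': [], 'b': [{'a'}]}))   # {'a': 'OUT', 'b': 'IN'}
print(label({'n': [{'n'}]}))            # odd (self-)loop: {'n': 'UNDETERMINED'}
```

Note that an even loop, which admits two admissible labelings, also comes out {\sc undetermined} in this sketch, since neither of its labels is forced.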
This labeling of some of the nodes can subsequently be replaced by {\sc in} or {\sc out} if the labeling of the remaining nodes enforces this. Hopefully, after labeling all nodes, no {\sc undetermined} nodes remain. By doing so, we handle odd loops in a pragmatic way. Finding a labeling of a JTMS network is, in general, an NP-hard problem. Fortunately, for the above presented JTMS networks without {\em odd loops}, we can find a labeling in {\em linear time} with respect to $n \cdot d + j$, where $n$ is the number of nodes, $d$ is the maximum number of outgoing justifications of any node and $j$ is the total number of justifications. An algorithm for finding a labeling will be given in Appendix B. Although a labeling can be found in linear time, the number of possible labelings can be exponential in the number of minimal arguments for inconsistencies if no preference relation over the defeasible rules is specified; i.e.\ $\succ = \emptyset$. \section{Properties} \subsubsection{Default logic} \label{default-logic} In Section \ref{conflicts}, we have seen that it suffices to consider only the last rules of an argument for an inconsistency. This property enables us to define a default logic that is equivalent with respect to the belief set. This default logic will be based on Brewka's prioritized default logic (Brewka 1994). Brewka argues that it is sufficient to use only normal default rules in combination with a preference relation on these rules. Semi-normal and non-normal default rules are used to realize undercutting defeat or to define preferences between default rules. Using semi-normal and non-normal default rules for the encoding of preferences is not very elegant. A more important problem is, however, that we cannot specify preferences between default rules that cause an inconsistency because of contingent information. Prioritized default logic does not have this drawback.
The prioritized default logic proposed below is similar to Brewka's prioritized default logic. We will, however, use the preference relation in a different way than Brewka proposes. Since we only consider normal default rules, it suffices to verify whether a rule is applicable --{\em its antecedent holds}--, and whether it is not defeated by other rules --{\em its consequent holds}--. The consequences of a set of applicable rules, together with the premises, may form an inconsistent set of propositions. Since defeasible rules are viewed as normal default rules, one of these rules must be defeated. The partial preference relation on the rules will be used to determine the rule that must be defeated. If an applicable rule is defeated, there must be a set of non-defeated applicable rules that implies, together with the premises, the negation of its consequent. Furthermore, the defeated rule may not be preferred to any of the rules that cause its defeat. \begin{definition} \label{extensional} Let $\langle \Sigma, D, \succ \rangle$ be a defeasible theory. Let $\Gamma (S)$ be a smallest set, with respect to the inclusion relation ($\subseteq$), for which the following conditions hold: \begin{enumerate} \item $\Sigma \subseteq \Gamma (S)$; \item $\Gamma (S) = Th(\Gamma (S))$; \item if there is a $\Delta \subset D$ that defeats $\varphi \leadsto \psi$ with respect to $\succ$, then $\neg ( \varphi \leadsto \psi ) \in \Gamma (S)$; \item if $\varphi \in \Gamma (S)$, $\varphi \leadsto \psi \in D$ and $\neg ( \varphi \leadsto \psi ) \not\in S$, then $\psi \in \Gamma (S)$.
\end{enumerate} $\Delta$ defeats $\varphi \leadsto \psi$ with respect to $\succ$ if and only if \begin{itemize} \item $\varphi \in \Gamma (S)$, \item $\Delta \subseteq \{ \eta \leadsto \mu \in D \mid \{ \eta, \mu \} \subseteq \Gamma (S) \}$, \item $\{ \mu \mid \eta \leadsto \mu \in \Delta \} \cup \Sigma \vdash \neg \psi$ and \item for no $\eta \leadsto \mu \in \Delta$ there holds: $\varphi \leadsto \psi \succ \eta \leadsto \mu$. \end{itemize} ${\cal E}$ is an extension of the default theory if and only if ${\cal E} = \Gamma({\cal E})$. \end{definition} \begin{theorem} \label{equivalence} Let $\langle \Sigma, D, \succ \rangle$ be a defeasible theory. The set of extensions determined by the argument system is equal to the set of extensions determined by the default logic. \end{theorem} \begin{example} Let $\langle \Sigma, D, \succ \rangle$ be a defeasible theory where $\Sigma = \{ \alpha, \beta \}$, $D = \{ \alpha \leadsto \delta, \beta \leadsto \neg \delta \}$ and $\succ = \{ (\beta \leadsto \neg \delta, \alpha \leadsto \delta) \}$. Then we can construct the following arguments. \[ \begin{array}{l} A_{\delta} = \{ \langle \{ \langle \emptyset, \alpha \rangle \}, \alpha \leadsto \delta \rangle \}; \\ A_{\neg \delta} = \{ \langle \{ \langle \emptyset, \beta \rangle \}, \beta \leadsto \neg \delta \rangle \}; \\ A_{\neg (\alpha \leadsto \delta)} = \{ \langle \{ \langle \emptyset, \beta \rangle \}, \beta \leadsto \neg \delta \rangle, \langle \emptyset, \alpha \rangle \} \end{array} \] This set of arguments results in one fixed point $\Omega = \{ \alpha \leadsto \delta \}$ of the function \textsl{Defeat}. So, we have an extension \[ {\cal E} = Th(\{ \alpha, \beta, \neg \delta, \neg (\alpha \leadsto \delta) \}) . \] According to the default logic given in this section, an extension must at least contain the premises $\Sigma = \{ \alpha, \beta \}$. Suppose now that we cannot defeat $\alpha \leadsto \delta$. Then $\delta$ must belong to the extension.
Furthermore, since $\beta \leadsto \neg \delta \succ \alpha \leadsto \delta$, $\beta \leadsto \neg \delta$ will not be defeated either. Therefore, $\neg \delta$ will belong to the extension. But then $\alpha \leadsto \delta$ will be defeated. Contradiction. Hence, $\alpha \leadsto \delta$ must be defeated. Since we cannot defeat $\beta \leadsto \neg \delta$, $\neg \delta$ will belong to the extension. Therefore we can derive $\neg (\alpha \leadsto \delta)$. So, we have one extension \[ {\cal E}' = Th(\{ \alpha, \beta, \neg \delta, \neg (\alpha \leadsto \delta) \}) . \] \end{example} We can establish a relation between this new prioritized default logic and Reiter's default logic. Firstly, we can translate defeasible rules to default rules. Since we must be able to denote that the application of a default rule is no longer valid, $\neg (\alpha \leadsto \beta)$, we will associate a name with each default rule. This name will be used to denote that the rule may no longer be applied. So if $n_{\alpha \leadsto \beta}$ is the name of the translation of $\alpha \leadsto \beta$, then $\neg n_{\alpha \leadsto \beta}$ will be the translation of $\neg (\alpha \leadsto \beta)$. To ensure that a default rule named $n_{\alpha \leadsto \beta}$ will not be applied if $\neg n_{\alpha \leadsto \beta}$ holds, we must use the name of the default rule as one of the justifications of this default rule. Hence, we translate a defeasible rule $\alpha \leadsto \beta $ to the default rule \[ \frac{\alpha : \beta, n_{\alpha \leadsto \beta}} {\beta}. \] The translations of the defeasible rules are all semi-normal default rules. It is not difficult to verify that every extension according to Definition \ref{extensional} is also a Reiter-extension. Since the preference relation on the defeasible rules is not taken into account, some Reiter-extensions need not be extensions according to Definition \ref{extensional}.
To eliminate these extensions, we must encode the preference relation using default rules. To do this properly, we must also change the translation of $\alpha \leadsto \beta$. \begin{quote} For every rule $\alpha \leadsto \beta \in D$, introduce a non-normal default rule: \[ \frac{\alpha : n_{\alpha \leadsto \beta}}{\beta}. \] For every set of rules $\{ \eta_1 \leadsto \mu_1,...,\eta_k \leadsto \mu_k \} \subseteq D$ and for every $\varphi \leadsto \psi \in D$ such that $\Sigma \cup \{ \mu_1,..., \mu_k \} \vdash \neg \psi$ and for no $\eta_i \leadsto \mu_i$ there holds: $\varphi \leadsto \psi \succ \eta_i \leadsto \mu_i$, introduce a default rule: \[ \frac{\eta_1 \wedge ... \wedge \eta_k : n_{\eta_1 \leadsto \mu_1} ,...,n_{\eta_k \leadsto \mu_k}, \neg n_{\varphi \leadsto \psi}} {\neg n_{\varphi \leadsto \psi}}. \] \end{quote} A disadvantage of this translation is that it depends on the premises $\Sigma$. We can also translate default rules to defeasible rules, with the exception of non-normal default rules. Consider a normal or semi-normal default rule of the form: \[ \displaystyle \frac{\alpha : \beta_1,..., \beta_k, \gamma}{\gamma}. \] We can translate this default rule to the following defeasible rules: \[ \alpha \leadsto \gamma, \neg \beta_1 \leadsto \neg(\alpha \leadsto \gamma),..., \neg \beta_k \leadsto \neg(\alpha \leadsto \gamma). \] \subsubsection{Specificity} Poole (1985) gives a semantic definition of specificity based on the comparison of arguments (theories). His definition does not use the last rules of an argument as a starting point. Instead, Poole compares sets of rules. A Poole-argument $\langle D, \alpha \rangle$ for a proposition $\alpha$ describes a set of rules $D$ needed to derive $\alpha$; $F_c \cup D \cup F_n \models \alpha$. Here, the defeasible rules $D$ are represented by implications. Furthermore, $F_c$ and $F_n$ denote the contingent and the necessary facts respectively. 
According to Poole (1985), $\langle D_1, \psi \rangle$ is more specific than $\langle D_2, \mu \rangle$ if for every set of possible facts $F_p$: \begin{quote} if $F_p \cup D_1 \cup F_n \models \psi$ and $F_p \cup D_2 \cup F_n \not\models \psi$, then $F_p \cup D_2 \cup F_n \models \mu$. \end{quote} In this paper, another definition has been given. This definition can be related to Poole's definition of specificity. \begin{theorem} Let $\varphi \leadsto \psi$ and $\eta \leadsto \mu$ be two rules. If $\varphi \leadsto \psi$ is more specific than $\eta \leadsto \mu$ according to Definition \ref{spec}, then there are two Poole-arguments $\langle D_1, \psi \rangle$ and $\langle D_2, \mu \rangle$ with $\varphi \leadsto \psi \in D_1$ and $\eta \leadsto \mu \in D_2$ for which it holds that $\langle D_1, \psi \rangle$ is more specific than $\langle D_2, \mu \rangle$. \end{theorem} The converse of this theorem need not hold. The reason is that a set of rules ($D_1$ or $D_2$) need not uniquely determine a single argument. Furthermore, $D_1 = \{ \varphi \leadsto \psi \}$ is more specific than $D_2 = \{ \eta \leadsto \mu \}$ according to Poole's definition, while it is only more specific according to the definition given in this paper if there exists an argument $A_\eta$ such that $\tilde{A}_\eta \subseteq \{\varphi\}$. \subsubsection{Closure properties} Gabbay (1985) has initiated the study of the closure properties of the non-monotonic derivability relation `$\mid\joinrel\sim$' (Gabbay 1985; Kraus et al. 1990; Makinson 1988). Here, the non-monotonic derivability relation is defined as: \begin{list}{}{} \item $\Sigma \mid\joinrel\sim \varphi$ if and only if $B$ is the belief set of $\langle \Sigma, D, \succ \rangle$ and $\varphi \in B$. \end{list} Gabbay (1985) argues that there are three axioms that must be satisfied by all non-monotonic logics.
\begin{description} \item[{\it Reflexivity}] \ \\ if $\varphi \in \Sigma$, then $\Sigma \mid\joinrel\sim \varphi$; \item[{\it Cut}] \ \\ if $\Sigma \mid\joinrel\sim \varphi$ and $\Sigma \cup \{\varphi\} \mid\joinrel\sim \psi$, then $\Sigma \mid\joinrel\sim \psi$; \item[{\it Cautious Monotonicity}] \ \\ if $\Sigma \mid\joinrel\sim \varphi$ and $\Sigma \mid\joinrel\sim \psi$, then $\Sigma \cup \{\varphi\} \mid\joinrel\sim \psi$; \end{description} These axioms characterize the property called {\em cumulativity}. We wish, of course, that all logical consequences of the set of premises are also derivable. \begin{description} \item[{\it Deduction}] \ \\ if $\Sigma \vdash \varphi$, then $\Sigma \mid\joinrel\sim \varphi$; \end{description} This axiom implies {\em Reflexivity}; together with {\em Cut} it implies the axiom {\em Right Weakening}, and together with {\em Cautious Monotonicity} and {\em Cut} it implies the axiom {\em Left Logical Equivalence}. The latter two axioms have been proposed by Kraus, Lehmann and Magidor (1990). They also proposed an axiom characterizing reasoning by cases. \begin{description} \item[{\it Or}] \ \\ if $\Sigma \cup \{\varphi\} \mid\joinrel\sim \eta$ and $\Sigma \cup \{\psi\} \mid\joinrel\sim \eta$, then $\Sigma \cup \{\varphi \vee \psi\} \mid\joinrel\sim \eta$; \end{description} Non-monotonic logics satisfying {\em Deduction}, {\em Cautious Monotonicity}, {\em Cut} and {\em Or} are said to belong to system {\bf P}. \begin{theorem} \label{closure-prop} The defeasible theory $\langle \Sigma, D, \succ \rangle$ satisfies: \\ {\em Reflexivity, Deduction, Cut} and, in the absence of odd loops, {\em Cautious Monotonicity}. \noindent An \emph{odd loop} is an odd number of arguments $A_1,\ldots,A_n$ where every $A_{i+1}$ defeats a rule in $\tilde{A}_i$, and $A_1$ defeats a rule in $\tilde{A}_n$. \end{theorem} A defeasible theory does not satisfy the closure property {\em Or}, and therefore does not allow for {\em reasoning by cases}.
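The failure of {\em Or} can be seen from a small example. The two rules below are hypothetical, chosen only for illustration: each disjunct separately supports the conclusion, but the disjunction entails neither antecedent, so no argument for the conclusion can be constructed from the disjunction alone.

```latex
% Assumed mini-theory illustrating the failure of Or:
% D = { \varphi \leadsto \eta,\ \psi \leadsto \eta }, \succ = \emptyset.
\[
\{\varphi\} \mid\joinrel\sim \eta
\quad\mbox{and}\quad
\{\psi\} \mid\joinrel\sim \eta ,
\quad\mbox{but}\quad
\{\varphi \vee \psi\} \not\mid\joinrel\sim \eta ,
\]
% since \varphi \vee \psi \vdash \varphi and \varphi \vee \psi \vdash \psi
% both fail, neither rule is applicable given only the disjunction.
```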
\section{Reasoning by cases} \label{RbC} To enable reasoning by cases in an argument system, the usual approach is to use indirect argumentation. Indirect argumentation involves subsidiary arguments that justify a conclusion with respect to the premises and some assumptions. If we have an argument for $\varphi$ under the assumption $\alpha$, an argument for $\varphi$ under the assumption $\beta$ and an argument for $\alpha \vee \beta$, then we can construct an argument for $\varphi$ without the assumptions $\alpha$ and $\beta$ using reasoning by cases. Most argument systems, however, do not allow for subsidiary argumentation and therefore are not able to reason by cases. This also holds for the argument system proposed in the previous sections. We can of course extend the argument system by (\textit{i}) allowing for subsidiary arguments and (\textit{ii}) introducing a rule for combining arguments through reasoning by cases. If arguments were not defeasible, such a simple extension would suffice. Unfortunately, arguments are defeasible, so we must also address the defeasibility of an argument when reasoning by cases. The question we have to answer is whether an argument can be defeated by a subsidiary argument when reasoning by cases. If an argument is defeated by a subsidiary argument in every case described by a disjunction, then the answer is clearly Yes. Roos (1997a, 1998) has argued that the answer must also be Yes when an argument is defeated by only one subsidiary argument corresponding with a case of a disjunction. He illustrates the necessity of this with the following example. Suppose that we have the following rules: \begin{itemize} \item A person who injures another person must be punished. \item A person who injures another person in self-defense should not be punished. \item A person who is dragged into a fight against his/her will is acting in self-defense.
\end{itemize} Now suppose that John has injured Peter and that a reliable witness testifies that either John or Paul has been dragged into the fight against his will. If the argument for not punishing John in case he acted in self-defense did not defeat the argument for punishing John, we would conclude that John must be punished. This would be most unfortunate for John if he was dragged into the fight against his will. When reasoning by cases, we should be able to apply defeasible rules in a case. The above example suggests that we should also resolve conflicts within the context of a case. There is one issue with resolving conflicts within the context of a case, as is illustrated by the following example. \begin{quote} John normally attends a party when he is invited: $ji \leadsto jp$. Bob and John never attend the same party: $\neg (jp \wedge bp)$. John is invited to a party: $ji$. \end{quote} The proposition $\neg (jp \wedge bp)$ implies two cases, one in which John does not attend the party, and one in which Bob does not. Clearly the former case conflicts with the conclusion of the rule $ji \leadsto jp$. Since facts defeat defeasible rules, we would conclude, in the former case, that John will not attend the party. This conclusion is not valid because the conclusion of the rule $ji \leadsto jp$ is consistent with the other case described by the proposition $\neg (jp \wedge bp)$. To address the above described problem, we will use the following principles for reasoning by cases: \begin{itemize} \item Conclusions drawn in a case may not change when other cases are eliminated because of additional information. Of course the overall conclusions may change because they depend on all cases. \item Conflicts must be evaluated using the initial information and the conclusions of applied defeasible rules. The defeasible rules may be applied in a case implied by one or more propositions.
\end{itemize} A possible way to enable reasoning by cases, proposed in (Roos 1997a), is to introduce special defeasible rules, called \emph{hypotheses}, that generate cases. To avoid considering the cases $\alpha$ and $\beta$ that follow from the disjunction $\alpha \vee \beta$ simultaneously, we ensure that the cases are mutually exclusive. Considering the cases $\alpha$ and $\beta$ simultaneously corresponds to the case $\alpha \wedge \beta$, which is only one of the three possibilities implied by the disjunction $\alpha \vee \beta$. The following set of {\em hypotheses} can be used to introduce the mutually exclusive cases. \[ H = \{ \alpha \vee \beta \leadsto \alpha \wedge \neg \beta, \alpha \vee \beta \leadsto \alpha \wedge \beta, \alpha \vee \beta \leadsto \neg \alpha \wedge \beta \mid \alpha \vee \beta \in L \; \} \] For every proposition $\psi$ that is unknown with respect to the partial models, we can derive $\psi \vee \neg \psi$ and $\varphi \vee \psi$ where $\varphi$ is a known proposition. These disjunctions should not be considered for reasoning by cases. If we did, we could defeat the rule `birds fly' using the disjunction $penguins \vee \neg penguins$ and the rule `penguins do not fly'. Clearly, we do not want this. The reason why we should not consider these disjunctions is the difference between {\em unknown} and {\em uncertain}. Uncertainty is expressed by multiple cases, while unknown information is expressed by a single case in which we do not (yet) know the truth-value of some atomic propositions. Some of the rules may fill in the yet unknown information. We may apply a disjunction $\alpha \vee \beta$ for reasoning by cases if $\alpha \vee \beta$ is not a derived tautology and if $\alpha \vee \beta$ has not been derived from $\alpha$ or $\beta$. A characteristic of these requirements is that $\alpha \vee \beta$ may not contain more atomic propositions than the proposition from which it is derived.
This requirement blocks the possibility of introducing irrelevant cases. Furthermore, the only tautologies that are allowed according to this requirement cannot do any harm, as we will see below. Hence, we can formulate the following modified definition of an argument. \begin{definition} \label{argument2} {\em Definition \ref{argument} revised.} Let $\langle \Sigma, D, \succ \rangle$ be a defeasible theory where $\Sigma$ is the set of premises and $D$ is the set of rules. Then an argument $A$ for a proposition $\psi$ is recursively defined in the following way: \begin{itemize} \item For each $\psi \in \Sigma$: $A = \{ \langle \emptyset, \psi \rangle\}$ is an argument for $\psi$. \item Let $A_1, ..., A_n $ be arguments for respectively $\varphi_1,...,\varphi_n$. If $\varphi_1,...,\varphi_n \vdash \psi$, then $A = A_1 \cup ... \cup A_n$ is an argument for $\psi$. \item For each $\varphi \leadsto \psi \in D$: if $A'$ is an argument for $\varphi$, then $A = \{ \langle A', \varphi \leadsto \psi \rangle \}$ is an argument for $\psi$. \item For each $\varphi \leadsto \psi \in H$: if $A'$ is an argument such that $\hat{A}' \vdash \varphi$ and $At(\varphi) \subseteq At(\hat{A}')$, then $A = \{ \langle A', \varphi \leadsto \psi \rangle \}$ is an argument for $\psi$. \end{itemize} The function $At(\cdot)$ denotes the set of atomic propositions used in a proposition or a set of propositions. \end{definition} If we have, for example, an argument for $\alpha \wedge \beta$, we can derive an argument for $\alpha \vee \neg \alpha$ and for $\alpha \vee \neg \beta$. It is, however, invalid to apply $\alpha \vee \neg \alpha$ and $\alpha \vee \neg \beta$ for reasoning by cases. Fortunately, since any argument for the cases $\neg \alpha$ and $\neg \beta$ will be inconsistent with the argument for $\alpha \wedge \beta$, we can resolve the problem using preferences.
By preferring any defeasible rule in $D$ to any hypothesis in $H$, we guarantee that a case described by a disjunction will not be considered if another case described by the disjunction is derivable. \begin{definition} Let $D$ be a set of defeasible rules and let $H$ be a set of hypotheses. For each $\alpha \leadsto \beta \in D$ and for each $\gamma \leadsto \delta \in (H \backslash D)$: $\alpha \leadsto \beta \succ \gamma \leadsto \delta$. \end{definition} The above definitions allow us to construct an argument for a case described by a disjunction. The preference relation ensures that we can apply reasoning by cases if no constituent of a disjunction is derivable. Furthermore, since the cases introduced by the hypotheses are mutually exclusive, each case will be represented by a separate extension. So, disjunctions can be viewed as describing {\em possible extensions}. Viewing a disjunction as describing possible extensions is an important deviation from the `normal' interpretation of a disjunction. In argument systems multiple extensions arise because there is no preference between two or more conflicting arguments; e.g.\ the Nixon diamond. This can be interpreted as a disjunction stating that one of the arguments is valid. For each case described by this disjunction, we create an extension describing that case. For real disjunctions we can do the same: we can introduce an extension for each case described by a disjunction. Above, we have realized this by using hypotheses, which create an extension for each case described by a disjunction. To illustrate reasoning by cases using defeasible rules, we will apply the results presented above to the example of John who might be dragged into a fight.
\[ \begin{array}{l} \it John\_injures\_Peter \\ \it John\_dragged\_into\_fight \vee Paul\_dragged\_into\_fight \\ \it \neg (John\_dragged\_into\_fight \wedge Paul\_dragged\_into\_fight) \\ \it John\_dragged\_into\_fight \leadsto self\_defense\_John \\ \it John\_injures\_Peter \leadsto John\_must\_be\_punished \\ \it self\_defense\_John \leadsto \neg John\_must\_be\_punished \end{array} \] \begin{small} $\it (self\_defense\_John \leadsto \neg John\_must\_be\_punished) \succ (John\_injures\_Peter \leadsto John\_must\_be\_punished) $ \end{small} Using these facts and rules, we can construct arguments. Two of these arguments are: \begin{tabbing} mmm\=mmm\=mmm\=mmm\=mmm\=mmm\=mmm\= \kill \> $\it A_{John\_must\_be\_punished} =$ \\ \> \> $\it John\_injures\_Peter \vdash$ \\ \> \> \> $\it John\_injures\_Peter \leadsto John\_must\_be\_punished \vdash$ \\ \> \> \> \> $\it John\_must\_be\_punished$ \\ \\ \> $\it A_{\neg John\_must\_be\_punished} =$ \\ \> \> $\it John\_dragged\_into\_fight \vee Paul\_dragged\_into\_fight \vdash$ \\ \> \> \> $\it John\_dragged\_into\_fight \vee Paul\_dragged\_into\_fight \leadsto$ \\ \> \> \> \> $\it John\_dragged\_into\_fight \wedge \neg Paul\_dragged\_into\_fight \vdash$ \\ \> \> \> \> \> $\it John\_dragged\_into\_fight \leadsto self\_defense\_John \vdash$ \\ \> \> \> \> \> \> $\it self\_defense\_John \leadsto \neg John\_must\_be\_punished \vdash$ \\ \> \> \> \> \> \> \> $\it \neg John\_must\_be\_punished$ \end{tabbing} Using all derivable arguments, we can determine the following two extensions.
\begin{tabbing} mmm\= \kill \> $\it E_1 = Th(\{ \begin{array}[t]{l} \it John\_injures\_Peter, \\ \it John\_dragged\_into\_fight, \\ \it \neg Paul\_dragged\_into\_fight, \\ \it self\_defense\_John, \\ \it \neg John\_must\_be\_punished, \\ \it \neg (John\_injures\_Peter \leadsto John\_must\_be\_punished) \; \}) \end{array} $ \\ \\ \> $\it E_2 = Th(\{ \begin{array}[t]{l} \it John\_injures\_Peter, \\ \it Paul\_dragged\_into\_fight, \\ \it \neg John\_dragged\_into\_fight, \\ \it John\_must\_be\_punished \; \}) \end{array} $ \end{tabbing} Since John must be punished in only one of the two situations, we do not know whether John must be punished. Additional information should be collected to enable us to make a choice between the two situations that are represented by the two extensions. Reasoning by cases does not guarantee that the closure property {\em Or} holds because the cases are mutually exclusive. We can, however, prove an {\em Exclusive Or} property. \begin{theorem} The defeasible theory $\langle \Sigma, D, \succ \rangle$ satisfies {\em Exclusive Or}: \\ if $\Sigma \cup \{\varphi \wedge \neg\psi\} \mid\joinrel\sim \eta$ and $\Sigma \cup \{\neg\varphi \wedge \psi\} \mid\joinrel\sim \eta$, then $\Sigma \cup \{\varphi \mathrel{\vee\kern-0.73em-} \psi \} \mid\joinrel\sim \eta$. \end{theorem} \section{Related work} \label{RL} In the literature, several argument systems that apply defeasible rules have been proposed (Fox et al. 1992; Geffner 1994; Krause et al. 1995; Pollock 1987; Prakken 1993; Prakken \& Vreeswijk 1999; Simari \& Loui 1992; Vreeswijk 1991; Vreeswijk 1997). These related papers can roughly be divided into three groups: those that discuss the strength of an argument (Fox et al. 1992; Krause et al.
1995), those that discuss the evaluation of arguments supporting conflicting propositions (Geffner 1994; Prakken 1993; Simari \& Loui 1992; Vreeswijk 1991) and those that discuss the validity of arguments (Pollock 1987; Pollock 1994; Prakken \& Vreeswijk 1999; Simari \& Loui 1992; Vreeswijk 1997). Krause, Ambler, Elvang-G{\o}ransson and Fox (Fox et al. 1992; Krause et al. 1995) present argument systems that enable us to determine the strength of an argument for a proposition. They use a simple logic consisting of atoms, including $\perp$, and Horn clauses. For this logic they develop an argument system that enables them to evaluate the strength of arguments for a consistent set of propositions probabilistically\footnote{A rule is not interpreted as representing a conditional probability.}. Furthermore, the argument system enables them to evaluate the strength of arguments for an inconsistent set of propositions symbolically. Krause et al. do not, however, discuss how to defeat one of the disagreeing arguments. Closely related to the strength of an argument is the evaluation of disagreeing arguments that support an inconsistency. Simari and Loui (1992) have proposed to apply Poole's definition of specificity for this purpose. In this definition it is necessary to consider all the rules of the disagreeing arguments. The same approach is taken by Prakken (1993). Prakken argues that in legal argumentation only the last rules of an argument for an inconsistency are considered. In the case of specificity, however, he uses Poole's definition. Vreeswijk (1991) discusses some general principles for evaluating disagreeing arguments. He proposes a scheme for evaluating disagreeing arguments based on the types of these arguments. He derives these types from the structure of the arguments. Furthermore, he argues that besides these weak general principles there are no general guidelines for evaluating arguments.
The definition of specificity given in Section \ref{evaluate} corresponds with the general principles of Vreeswijk. However, applying it as a preference relation does not. Geffner (1994) argues that any rule of an argument for a proposition $\varphi$ can be defeated if $\neg \varphi$ is a known fact. As we have seen in Section \ref{argument-sys}, Geffner uses causal rules, which are a special kind of defeasible rule. Such rules have been excluded from this paper: many defeasible rules are not causal rules, and a discussion of causal rules would also require a study of causality. The theory of warrant is concerned with the validity of arguments. These are the arguments that are not defeated by other arguments. In (Pollock 1987), Pollock introduces the theory of inductive warrant. Simari and Loui (1992) combine the theory of inductive warrant with Poole's definition of specificity and study the mathematical properties of the resulting system. Pollock (1990) observes that his theory of inductive warrant is not without problems. Therefore, he introduces a new theory of warrant based on the idea of multiple extensions. Vreeswijk (1991) has made a similar proposal. In (Vreeswijk 1997), Vreeswijk relates the theory of inductive warrant to a theory of warrant based on extensions. He discusses several ways of defining a theory of warrant and their mutual relations. Dung (1995) discusses the theory of warrant on an abstract level. He presents several notions of acceptable arguments based on a set of arguments and a binary attack relation on the set of arguments. Here, arguments are considered as atomic entities; no relation with an argument system is specified. The different notions of acceptability correspond with different ways of dealing with self-defeat and with multiple extensions.
The extensions based on Definition \ref{defeat} correspond with Dung's {\em stable extensions} and the extensions based on the partial status assignment correspond with Dung's {\em preferred extensions}. Prakken and Vreeswijk (1999) give an overview of argument systems proposed in the literature. In their overview they discuss the strong and weak points of many theories of warrant that have been proposed in the literature. One of the aspects they look at is the handling of self-defeat. Furthermore, they discuss several argument systems in detail. The theories of warrant presented by Pollock (1987, 1994), Dung (1995), Vreeswijk (1997), and Prakken and Vreeswijk (1999) start from a {\em defeat relation} on the set of derived arguments. This relation is the result of resolving conflicts between the propositions supported by the arguments. The theory of warrant as described by these authors is concerned with selecting a set of valid arguments. It seems more natural, however, to express the validity of an argument in terms of the validity of the defeasible steps that are used in the argument. In this respect, the theory of warrant proposed in this paper differs from the above mentioned proposals. Nute (1988, 1994) proposes a {\em defeasible logic} that is closely related to argument systems for reasoning with defeasible rules. Nute's logic, which seems to be inspired by logic programming, does not derive arguments that are subsequently evaluated to determine the valid conclusions. Instead, Nute introduces a proof system that guarantees that only valid conclusions are derived. The proof system consists of four rules for deriving formulas that hold and three rules for formulas that cannot hold (Nute 1994). Since the preference relation `specificity' is an integral part of these rules, the formulation of the rules is rather complex. Furthermore, the approach is less flexible: adding other preference relations requires a reformulation of the rules.
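The flavor of such a proof-theoretic approach can be conveyed by a much-simplified recursive derivability check for literals. This is our own toy reconstruction, not Nute's actual proof rules (which also distinguish strict rules and defeaters and build specificity into the rules): a literal is defeasibly derivable if some rule for it fires and every applicable competing rule is beaten by the chosen rule.

```python
def neg(lit):
    """Complement of a literal (atom_name, truth_value)."""
    return (lit[0], not lit[1])

def defeasibly(lit, facts, rules, beats, seen=frozenset()):
    """Toy recursive derivability check, loosely in the style of a defeasible logic.
    rules: list of (rule_id, [body literals], head literal);
    beats: set of pairs (r1, r2) meaning rule r1 is preferred to rule r2."""
    if lit in facts:
        return True
    if neg(lit) in facts or lit in seen:   # facts win; block cyclic support
        return False
    seen = seen | {lit}
    for rid, body, head in rules:
        if head != lit:
            continue
        if not all(defeasibly(b, facts, rules, beats, seen) for b in body):
            continue
        # the rule fires; every applicable competing rule must be beaten by it
        if all((rid, rid2) in beats
               for rid2, body2, head2 in rules
               if head2 == neg(lit)
               and all(defeasibly(b, facts, rules, beats, seen) for b in body2)):
            return True
    return False
```

On the standard penguin example, with the fact $penguin$, the rules $penguin \leadsto bird$, $bird \leadsto flies$, $penguin \leadsto \neg flies$, and the last rule preferred to $bird \leadsto flies$, this check derives $\neg flies$ and rejects $flies$. Note how the preference relation is wired directly into the derivation procedure, which is exactly the inflexibility discussed above.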
Conclusions that follow from the defeasible logic can be weaker than the conclusions one would expect. Ideally, in case of multiple extensions, conclusions should be based on those arguments that are valid in all extensions. Prakken \& Vreeswijk (1999) point out that in Nute's defeasible logic this is not always the case; Nute's approach sometimes allows fewer conclusions than necessary. An advantage of Nute's approach is its suitability for realizing an implementation. His approach gives us a recursive procedure for determining the validity of a conclusion. This is in contrast with the procedure proposed by Loui (1998), who views the procedure of determining the validity of a conclusion as a dialectical process satisfying some protocol. \section{Conclusion} A defeasible rule describes a preference or a probabilistic relation between propositions. Such defeasible rules can be used to construct arguments for propositions. For both interpretations of a defeasible rule, we conclude that an inconsistency can be resolved by defeating one of the last rules of the argument supporting the inconsistency. Furthermore, we conclude that it suffices to consider only the rules that are candidates for defeat when selecting the rule to be defeated. For this purpose, a preference relation on the set of rules has been proposed. A definition of {\em specificity} that generates such a preference relation on the set of rules has been given. Since one of the last rules of the argument for an inconsistency must be defeated, we can formulate an argument for the defeat of this rule. Such an argument {\em undercuts} the application of the rule. Hence, {\em rebutting} defeat is reformulated as {\em undercutting} defeat after determining the rule to be defeated. Although this approach does not lead to new results, it is more intuitive: an argument gives a valid justification for a conclusion if all steps (the rules) of the justification are valid.
Furthermore, it enables us to determine the extensions of valid beliefs using a Reason Maintenance System. A relation between default logic and the proposed argument system has been established and closure properties have been studied. Finally, an extension of the argument system enabling reasoning by cases has been proposed. \section*{Appendix A} \setcounter{theorem}{0} \setcounter{proposition}{0} \begin{proposition} The set of defeated rules $\Omega$ is incomparable, i.e.\ for each $\Lambda \not= \Omega$ such that $\Lambda = \textsl{Defeat}(\Lambda)$, neither $\Lambda \subset \Omega$ nor $\Lambda \supset \Omega$ holds. \end{proposition} \begin{proof} Suppose $\Lambda \subset \Omega$. Then, by the definition of \textsl{Defeat}: $\textsl{Defeat}(\Lambda) \supseteq \textsl{Defeat}(\Omega)$. Hence, $\Lambda \supseteq \Omega$. Contradiction. Suppose $\Omega \subset \Lambda$. Then, by the definition of \textsl{Defeat}: $\textsl{Defeat}(\Omega) \supseteq \textsl{Defeat}(\Lambda)$. Hence, $\Omega \supseteq \Lambda$. Contradiction. \end{proof} \begin{proposition} A set of rules $\Omega$ is a fixed point of \textsl{Defeat} if and only if there is a status assignment such that $\Omega$ is the set of rules that is assigned the status {\em defeated}. \end{proposition} \begin{proof} Let $\Omega = \{ \varphi \leadsto \psi \mid \varphi \leadsto \psi \mbox{ is assigned the status {\em defeated}} \}$. Suppose that $\Omega$ is not a fixed point. Then, for some $\varphi \leadsto \psi$, $\varphi \leadsto \psi \in \textsl{Defeat}(\Omega)$ and $\varphi \leadsto \psi \not\in \Omega$ or $\varphi \leadsto \psi \not\in \textsl{Defeat}(\Omega)$ and $\varphi \leadsto \psi \in \Omega$. Suppose that $\varphi \leadsto \psi \in \textsl{Defeat}(\Omega)$ and $\varphi \leadsto \psi \not\in \Omega$. Since $\varphi \leadsto \psi \in \textsl{Defeat}(\Omega)$, there is an argument $A_{\neg(\varphi \leadsto \psi)}$ such that $A_{\neg(\varphi \leadsto \psi)} \cap \Omega = \emptyset$.
But then, by the definition of a status assignment, $\varphi \leadsto \psi$ is assigned the status {\em defeated}. Contradiction. Suppose that $\varphi \leadsto \psi \not\in \textsl{Defeat}(\Omega)$ and $\varphi \leadsto \psi \in \Omega$. Since $\varphi \leadsto \psi \not\in \textsl{Defeat}(\Omega)$, there is no argument $A_{\neg(\varphi \leadsto \psi)}$ such that $A_{\neg(\varphi \leadsto \psi)} \cap \Omega = \emptyset$. But then, by the definition of a status assignment, $\varphi \leadsto \psi$ is assigned the status {\em undefeated}. Contradiction. Hence, $\Omega$ is a fixed point. Now let $\Omega$ be a fixed point of \textsl{Defeat}. We assign the status {\em defeated} to all rules in $\Omega$ and {\em undefeated} to all rules not in $\Omega$. Suppose that this is not a valid status assignment. Then there is a rule $\varphi \leadsto \psi$ that is assigned the status {\em defeated} while there is no argument $A_{\neg(\varphi \leadsto \psi)}$ such that every $\alpha \leadsto \beta \in \tilde{A}_{\neg(\varphi \leadsto \psi)}$ is assigned the status {\em undefeated}, or a rule $\varphi \leadsto \psi$ that is assigned the status {\em undefeated} while there is an argument $A_{\neg(\varphi \leadsto \psi)}$ such that every $\alpha \leadsto \beta \in \tilde{A}_{\neg(\varphi \leadsto \psi)}$ is assigned the status {\em undefeated}. In the former case, for every argument $A_{\neg(\varphi \leadsto \psi)}$ there is a rule $\alpha \leadsto \beta \in \tilde{A}_{\neg(\varphi \leadsto \psi)}$ that is assigned the status {\em defeated}. Therefore, for every argument $A_{\neg(\varphi \leadsto \psi)}$, $\tilde{A}_{\neg(\varphi \leadsto \psi)} \cap \Omega \not= \emptyset$. Hence $\varphi \leadsto \psi \not\in \Omega$ and therefore $\varphi \leadsto \psi$ is assigned the status {\em undefeated}. Contradiction.
In the latter case, there is an argument $A_{\neg(\varphi \leadsto \psi)}$ such that every $\alpha \leadsto \beta \in \tilde{A}_{\neg(\varphi \leadsto \psi)}$ is assigned the status {\em undefeated}. Therefore, $\tilde{A}_{\neg(\varphi \leadsto \psi)} \cap \Omega = \emptyset$. Hence $\varphi \leadsto \psi \in \Omega$ and therefore $\varphi \leadsto \psi$ is assigned the status {\em defeated}. Contradiction. \end{proof} To prove Theorem \ref{equivalence}, the following lemmas will be used. \begin{lemma} \label{l3} Let $\Gamma$ be a set of propositions and let $\cal E$ be the deductive closure of $\Gamma$. Furthermore, let there be an argument for each proposition in $\Gamma$. Then for each proposition in $\cal E$ we can construct an argument $A$. \end{lemma} \begin{proof} For each $\varphi \in {\cal E} \backslash \Gamma$ there holds that $\Gamma \vdash \varphi$. Hence, $\bigcup \{ A_\psi \mid \psi \in \Gamma \}$ is an argument for $\varphi$. \end{proof} \begin{lemma} \label{l4} Let $\cal E$ be an extension according to Definition \ref{extensional} and let $\Omega = \{ \alpha \leadsto \beta \mid \neg (\alpha \leadsto \beta) \in {\cal E} \}$. Furthermore, let there be an argument $A$ for each proposition in $\cal E$ and let $\tilde{A} \cap \Omega = \emptyset$. Then $\Omega$ satisfies Definition \ref{defeat}, $\Omega = \textsl{Defeat}(\Omega)$. \end{lemma} \begin{proof} Suppose that $\alpha \leadsto \beta \in \Omega$ and $\alpha \leadsto \beta \not\in \textsl{Defeat}(\Omega)$. 
Since $\alpha \leadsto \beta \in \Omega$, either there exists a $\gamma \leadsto \neg ( \alpha \leadsto \beta)$ and $\gamma \in {\cal E}$, or there exists a $\Delta$ that defeats $\alpha \leadsto \beta$, $\alpha \in {\cal E}$ and $\Delta \subseteq \{ \eta \leadsto \mu \in D \mid \{ \eta, \mu \} \subseteq {\cal E} \}$ such that $\{ \mu \mid \eta \leadsto \mu \in \Delta \} \cup \Sigma \vdash \neg \beta$ and for no $\eta \leadsto \mu \in \Delta$ there holds: $\alpha \leadsto \beta \succ \eta \leadsto \mu$. In the former case there exists an argument $A_{\neg(\alpha \leadsto \beta)} = \{ \langle A_\gamma, \gamma \leadsto \neg ( \alpha \leadsto \beta) \rangle \}$ and $\tilde{A}_{\neg(\alpha \leadsto \beta)} \cap \Omega = \emptyset$. Hence, $\alpha \leadsto \beta \in \textsl{Defeat}(\Omega)$. Contradiction. In the latter case there exists an argument $A_{\neg(\alpha \leadsto \beta)} = \{ \langle A_\eta, \eta \leadsto \mu \rangle \mid \eta \leadsto \mu \in \Delta \} \cup \{ \langle \emptyset, \varphi \rangle \mid \varphi \in \Sigma \} \cup A_\alpha$. Furthermore, $\tilde{A}_{\neg(\alpha \leadsto \beta)} \cap \Omega = \emptyset$. Hence, $\alpha \leadsto \beta \in \textsl{Defeat}(\Omega)$. Contradiction. Hence, $\Omega \subseteq \textsl{Defeat}(\Omega)$. Suppose that $\alpha \leadsto \beta \not\in \Omega$ and $\alpha \leadsto \beta \in \textsl{Defeat}(\Omega)$. Then there exists an argument $A_{\neg (\alpha \leadsto \beta)}$ such that $A_{\neg (\alpha \leadsto \beta)} \cap \Omega = \emptyset$. This implies that either there exists an argument $A_{\alpha}$ for $\alpha$ such that $A_{\alpha} \cap \Omega = \emptyset$ and an argument $A_{\neg \beta}$ for $\neg \beta$ such that $A_{\neg \beta} \cap \Omega = \emptyset$, or that $\gamma \leadsto \neg (\alpha \leadsto \beta) \in D$ and there exists an argument $A_\gamma$ for $\gamma$ such that $A_\gamma \cap \Omega = \emptyset$. In the former case, $\alpha \in {\cal E}$ and $\neg \beta \in {\cal E}$. 
But then $\neg (\alpha \leadsto \beta) \in \mathcal{E}$. Contradiction. In the latter case, $\gamma \in {\cal E}$. But then $\neg (\alpha \leadsto \beta) \in \mathcal{E}$. Contradiction. Hence, $\Omega = \textsl{Defeat}(\Omega)$. \end{proof} \begin{theorem} Let $\langle \Sigma, D, \succ \rangle$ be a defeasible theory. The set of extensions determined by the argument system is equal to the set of extensions determined by the default logic. \end{theorem} \begin{proof} Let $\cal E$ be an extension according to Definition \ref{argextend}. We will prove that ${\cal E}$ is also an extension according to Definition \ref{extensional} by showing that it is a fixed point satisfying the four requirements of Definition \ref{extensional}; i.e., we assume that ${\cal E} = \Gamma({\cal E})$. \begin{enumerate} \item Clearly for each $\alpha \in \Sigma$ we have an argument $\{ \langle \emptyset, \alpha \rangle \}$. Since it contains no rules, it cannot be defeated. Therefore, $\Sigma \subseteq {\cal E}$. \item According to the definition of an argument, $\cal E$ is deductively closed. \item Let $\Delta = \{ \eta_1 \leadsto \mu_1,...,\eta_n \leadsto \mu_n \}$ defeat $\alpha \leadsto \beta$ given $\cal E$. Then $\{ \mu_1,..., \mu_n, \alpha \} \subseteq \Gamma({\cal E}) = {\cal E}$ and for no $\eta_i \leadsto \mu_i \in \Delta$: $\alpha \leadsto \beta \succ \eta_i \leadsto \mu_i$. Since $\{ \mu_1,..., \mu_n, \alpha \} \subseteq {\cal E}$, we have valid arguments $A_{\mu_1},...,A_{\mu_n}, A_\alpha$. Hence, we have a valid argument $A_{\neg (\alpha \leadsto \beta)}$ for $\neg (\alpha \leadsto \beta)$. Therefore, $\neg (\alpha \leadsto \beta) \in {\cal E}$. \item Let $\alpha \in \Gamma({\cal E}) = {\cal E}$ and $\neg (\alpha \leadsto \beta) \not\in {\cal E}$. Then there exists a valid argument $A_\beta = \{ \langle A_\alpha, \alpha \leadsto \beta \rangle \}$ for $\beta$. Hence, $\beta \in {\cal E} = \Gamma({\cal E})$. \end{enumerate} Hence, $\Gamma ({\cal E}) \subseteq {\cal E}$.
Suppose that ${\cal E}$ is not a minimal set satisfying the requirements of $\Gamma({\cal E})$. Then there is a $\varphi \in {\cal E} \backslash \Gamma({\cal E})$ and a corresponding valid argument $A_\varphi$. Let $A_\psi$ be the smallest sub-argument such that $\psi \not\in \Gamma({\cal E})$. Suppose that $A_\psi = \{ \langle \emptyset, \psi \rangle \}$. Since $\psi \in \Sigma$, $\psi \in \Gamma({\cal E})$. Contradiction. Suppose that $\varphi_1,...,\varphi_n \vdash \psi$ and $\{ A_{\varphi_1},...,A_{\varphi_n} \} \subseteq {\cal A}$. Then, since $A_\psi$ is the smallest sub-argument, $\{ \varphi_1,...,\varphi_n \} \subseteq \Gamma({\cal E})$. Therefore $\psi \in \Gamma({\cal E})$. Contradiction. Suppose that $A_\psi$ with $\psi = \neg(\eta_i \leadsto \mu_i)$ is the result of an argument $A_\perp = \{ \langle A'_1, \eta_1 \leadsto \mu_1 \rangle ,\ldots, \langle A'_k, \eta_k \leadsto \mu_k \rangle, \langle \emptyset, \sigma_{1} \rangle ,\ldots, \langle \emptyset, \sigma_{\ell} \rangle \}$ for an inconsistency. Clearly for no $j \not = i$: $\eta_j \leadsto \mu_j \succ \eta_i \leadsto \mu_i$. Since $A_\psi$ is the smallest sub-argument, $\{ \mu_1,...,\mu_{i-1},\mu_{i+1},...,\mu_k, \sigma_{1},...,\sigma_\ell \} \subseteq \Gamma({\cal E})$. Therefore $\psi = \neg(\eta_i \leadsto \mu_i) \in \Gamma({\cal E})$. Contradiction. Suppose that $A_\psi = \{ \langle A_\mu, \eta \leadsto \psi \rangle \}$. Since $\cal E$ is an extension according to Definition \ref{argextend}, there is no valid argument for $\neg (\eta \leadsto \psi)$. Therefore, $\neg (\eta \leadsto \psi) \not\in {\cal E}$. Hence, $\psi \in \Gamma({\cal E})$. Contradiction. Hence, ${\cal E}$ is a fixed point of $\Gamma$. \vspace{2mm} Let $\cal E$ be an extension according to Definition \ref{extensional} and let $\Omega = \{ \alpha \leadsto \beta \mid \neg (\alpha \leadsto \beta) \in {\cal E} \}$. So, ${\cal E} = \Gamma({\cal E})$.
We will prove that ${\cal E}$ is an extension according to Definition \ref{argextend} by showing that for each proposition in ${\cal E}$ there is a valid argument and for each proposition not in ${\cal E}$ there is no such argument. We will show that there is a valid argument for each $\varphi \in {\cal E}$ by showing that we can construct an argument $A$ for each $\varphi \in {\cal E}$ such that $\tilde{A} \cap \Omega = \emptyset$. If we have an argument for each $\varphi \in {\cal E}$, then, by Lemma \ref{l4}, $\Omega$ satisfies Definition \ref{defeat}, i.e.\ $\Omega = \textsl{Defeat}(\Omega)$. Since for each $\varphi \in {\cal E}$, we have an argument $A$ such that $\tilde{A} \cap \Omega = \emptyset$, $A$ must be a valid argument for $\varphi$. Let $\Gamma_0 = \Sigma$ and let ${\cal E}_0 \subseteq {\cal E}$ be a smallest deductively closed subset such that $\Gamma_0 \subseteq {\cal E}_0$. For each $\varphi \in \Gamma_0$ we can construct an argument $A_\varphi = \{ \langle \emptyset, \varphi \rangle \}$. Furthermore, by Lemma \ref{l3}, we can construct an argument for each $\varphi \in {\cal E}_0$. Clearly, $\tilde{A}_\varphi \cap \Omega = \emptyset$. Proceeding inductively, let ${\cal E}_i \subseteq {\cal E}$ be a smallest deductively closed subset such that $\Gamma_i \subseteq {\cal E}_i$. Suppose that ${\cal E}_i \subset {\cal E}$. Then there is a $\varphi \in ({\cal E} \backslash {\cal E}_i)$ such that either $\varphi = \neg (\alpha \leadsto \beta)$ and $\alpha \leadsto \beta$ is defeated given ${\cal E}_i$, or $\alpha \in {\cal E}_i$, $\alpha \leadsto \varphi \in D$ and $\neg (\alpha \leadsto \varphi) \not\in {\cal E}$, or neither of these two possibilities. In the third case, $\Gamma({\cal E})$ is not a minimal set. Hence, this case is impossible. In the first case there is a $\Delta \subseteq \{ \eta \leadsto \mu \in D \mid \{ \eta,\mu \} \subseteq {\cal E}_i \}$ that defeats $\alpha \leadsto \beta$.
Hence we can construct an argument $A_{\neg(\alpha \leadsto \beta)}$ for $\varphi$ such that $\tilde{A}_{\neg(\alpha \leadsto \beta)} \cap \Omega = \emptyset$. In the second case $A_\varphi = \{ \langle A_\alpha, \alpha \leadsto \varphi \rangle \}$ is an argument for $\varphi$ such that $\tilde{A}_\varphi \cap \Omega = \emptyset$. Let ${\cal E}_{i+1}$ be the deductive closure of $\Gamma_{i+1} = \Gamma_i \cup \{ \varphi \}$. According to Lemma \ref{l3}, for every proposition in ${\cal E}_{i+1}$ we can construct an argument $A$ such that $\tilde{A} \cap \Omega = \emptyset$. Hence, for every proposition in ${\cal E}$ we can construct an argument $A$ such that $\tilde{A} \cap \Omega = \emptyset$. Given these arguments, there holds according to Lemma \ref{l4} that $\Omega = \{ \alpha \leadsto \beta \mid \neg (\alpha \leadsto \beta) \in {\cal E} \}$ satisfies Definition \ref{defeat}, i.e. $\Omega = \textsl{Defeat}(\Omega)$. Hence, the arguments for the propositions in ${\cal E}$ are valid arguments. \vspace{2mm} Now suppose that we can construct a valid argument $A_\varphi$ for a proposition $\varphi \not\in {\cal E}$, i.e., $\tilde{A}_\varphi \cap \Omega = \emptyset$. Since $\bar{A}_\varphi \subseteq \Sigma$, for some rule $\alpha \leadsto \beta \in \tilde{A}_\varphi$ there holds: $\alpha \in {\cal E}$ and $\beta \not\in {\cal E}$. So, $\alpha \leadsto \beta \in \Omega$. Contradiction. Hence, ${\cal E}$ is an extension according to Definition \ref{argextend}. \end{proof} \begin{theorem} Let $\varphi \leadsto \psi$ and $\eta \leadsto \mu$ be two rules. If $\varphi \leadsto \psi$ is more specific than $\eta \leadsto \mu$ according to Definition \ref{spec}, then there are two Poole-arguments $\langle D_1, \psi \rangle$ and $\langle D_2, \mu \rangle$ with $\varphi \leadsto \psi \in D_1$ and $\eta \leadsto \mu \in D_2$ for which there holds that $\langle D_1, \psi \rangle$ is more specific than $\langle D_2, \mu \rangle$.
\end{theorem} \begin{proof} We must prove that for every set of possible facts $F_p$ there must hold: if $F_p \cup D_1 \cup F_n \models \psi$ and $F_p \cup D_2 \cup F_n \not\models \psi$, then $F_p \cup D_2 \cup F_n \models \mu$. Let $D_2 = \tilde{A}_\eta \cup \{ \eta \leadsto \mu \}$ and $D_1 = \{ \varphi \leadsto \psi \}$. Since $\varphi \leadsto \psi$ is more specific than $\eta \leadsto \mu$, given the premise $\{ \varphi \}$ there must exist an argument $A_\eta$ for $\eta$. Suppose that $\bar{A}_\eta = \emptyset$. Then $\langle D_1, \psi \rangle$ is more specific than $\langle D_2, \mu \rangle$. Suppose that $\bar{A}_\eta = \{ \varphi \}$. Then, any set of possible facts $F_p$ for which the antecedent of Poole's definition holds must imply $\varphi$. Hence, $\langle D_1, \psi \rangle$ is more specific than $\langle D_2, \mu \rangle$. \end{proof} \begin{theorem} The defeasible theory $\langle \Sigma, D, \succ \rangle$ satisfies: \\ {\em Reflexivity, Deduction, Cut} and, in the absence of odd loops, {\em Cautious Monotony}. \noindent An \emph{odd loop} is an odd number of arguments $A_1,\ldots,A_n$ where every $A_{i+1}$ defeats a rule in $\tilde{A}_i$, and $A_1$ defeats a rule in $\tilde{A}_n$. \end{theorem} \begin{proof} {\em Reflexivity}. For each $\varphi \in \Sigma$, $A= \{ \langle \emptyset, \varphi \rangle \}$ is an argument for $\varphi$. Since $A$ contains no rule, it cannot be defeated. Therefore, $\varphi \in B$. {\em Deduction}. For each $\varphi$ such that $\Sigma \vdash \varphi$, $A= \{ \langle \emptyset, \psi \rangle \mid \psi \in \Sigma \}$ is an argument for $\varphi$. Since $A$ contains no rules, it cannot be defeated. Therefore, $\varphi \in B$. {\em Cut}. Let $\cal E$ be an extension of the defeasible theory $\langle \Sigma, D, \succ \rangle$, let $B$ be the belief set of $\langle \Sigma, D, \succ \rangle$, and let $\varphi \in B$. Suppose that $\cal E$ is no longer an extension after adding $\varphi$ to $\Sigma$.
Let $\Omega$ be the set of defeated rules that correspond with the extension $\cal E$. Then after adding $\varphi$ there must be a new argument $A_{\neg (\alpha \leadsto \beta)}$ such that $\tilde{A}_{\neg (\alpha \leadsto \beta)} \cap \Omega = \emptyset$ and $\alpha \leadsto \beta \not\in \Omega$. Since $\tilde{A}_{\neg (\alpha \leadsto \beta)} \cap \Omega = \emptyset$ and $\alpha \leadsto \beta \not\in \Omega$, $\{ \langle \emptyset, \varphi \rangle \}$ must be a sub-argument of $A_{\neg (\alpha \leadsto \beta)}$. Now three situations are possible. \begin{itemize} \item $A_{\neg (\alpha \leadsto \beta)} = \{ \langle A_\xi, \xi \leadsto \neg (\alpha \leadsto \beta) \rangle \}$. Then there is an $A^*_{\neg (\alpha \leadsto \beta)}$ in which $\{ \langle \emptyset, \varphi \rangle \}$ is replaced by $A_\varphi$. Since $A_\varphi$ is valid; i.e.\ $\tilde{A}_\varphi \cap \Omega = \emptyset$, there holds that $\alpha \leadsto \beta \in \Omega$. Contradiction. \item $A_{\neg (\alpha \leadsto \beta)}$ is derived from $A_\perp$ and $\{ \langle \emptyset, \varphi \rangle \}$ is not a disagreeing argument. Since $\{ \langle \emptyset, \varphi \rangle \}$ is a sub-argument of $A_\perp$, there is an $A^*_\perp$ in which $\{ \langle \emptyset, \varphi \rangle \}$ is replaced by $A_\varphi$. Clearly, $\vec{A}_\perp = \vec{A}^*_\perp$. Hence, since $A_\varphi$ is valid, $\alpha \leadsto \beta \in \Omega$. Contradiction. \item $A_{\neg (\alpha \leadsto \beta)}$ is derived from $A_\perp$ and $\{ \langle \emptyset, \varphi \rangle \}$ is a disagreeing argument. Then there is an $A^*_\perp$ in which $\{ \langle \emptyset, \varphi \rangle \}$ is replaced by $A_\varphi$. Hence, $A_\varphi \subseteq A^*_\perp$. Since $A_\varphi$ is valid, for no $\eta \leadsto \mu \in \vec{A}^*_\perp$: $\alpha \leadsto \beta \succ \eta \leadsto \mu$.
Therefore, there is an $A^*_{\neg (\alpha \leadsto \beta)} = (A^*_\perp \backslash \{ \langle A_\alpha, \alpha \leadsto \beta \rangle \}) \cup A_\alpha$ and $A^*_{\neg (\alpha \leadsto \beta)} \cap \Omega = \emptyset$. Hence, $\alpha \leadsto \beta \in \Omega$. Contradiction. \end{itemize} {\em Cautious Monotony}. Let $\langle \Sigma, D, \succ \rangle$ be a defeasible theory, and let $B$ be the belief set of $\langle \Sigma, D, \succ \rangle$. Suppose that $\cal E$ is an extension of the defeasible theory $\langle \Sigma \cup \{\varphi\}, D, \succ \rangle$ for some $\varphi \in B$, but not of $\langle \Sigma, D, \succ \rangle$. Let $\Omega$ be the set of defeated rules determining the extension $\mathcal{E}$. Every extension $\mathcal{E}'$ of $\langle \Sigma, D, \succ \rangle$ determined by the defeated rules $\Lambda$ is also an extension of the defeasible theory $\langle \Sigma \cup \{\varphi\}, D, \succ \rangle$ according to the property \emph{Cut}. Therefore, $\Lambda \not\subseteq \Omega$ and $\Omega \not\subseteq \Lambda$ according to Proposition \ref{unique}. Consider an extension $\mathcal{E}'$ of the defeasible theory $\langle \Sigma, D, \succ \rangle$ determined by the defeated rules $\Lambda$. Since $\varphi \in B$, there is an argument $A_\varphi$ generated by $\langle \Sigma, D, \succ \rangle$ such that $\tilde{A}_\varphi \cap \Lambda = \emptyset$. Every argument determined by the defeasible theory $\langle \Sigma, D, \succ \rangle$ is also an argument of the defeasible theory $\langle \Sigma \cup \{\varphi\}, D, \succ \rangle$. Moreover, every argument $A_\psi$ determined by the defeasible theory $\langle \Sigma \cup \{\varphi\}, D, \succ \rangle$ is either an argument of the defeasible theory $\langle \Sigma, D, \succ \rangle$, or contains $\{ \langle \emptyset, \varphi \rangle \}$ as a sub-argument.
If we replace every sub-argument $\{ \langle \emptyset, \varphi \rangle \}$ in $A_\psi$ by $A_\varphi$, the result, denoted by $A^*_\psi$, is an argument of the defeasible theory $\langle \Sigma, D, \succ \rangle$. Consider the above mentioned extension $\mathcal{E}$ of $\langle \Sigma \cup \{\varphi\}, D, \succ \rangle$ determined by the defeated rules $\Omega$. Clearly, $\tilde{A}_\varphi \cap \Omega \not= \emptyset$; otherwise $\mathcal{E}$ would also be an extension of $\langle \Sigma, D, \succ \rangle$. Therefore, there is a defeasible rule $\alpha \leadsto \beta \in (\tilde{A}_\varphi \cap \Omega)$ and a corresponding argument $A_{\neg(\alpha \leadsto \beta)}$ of the defeasible theory $\langle \Sigma \cup \{\varphi\}, D, \succ \rangle$. In no extension $\mathcal{E}'$ is $A^*_{\neg(\alpha \leadsto \beta)}$ a valid argument. This is only possible if the validity of $A_{\neg(\alpha \leadsto \beta)}$ given $\Omega$ depends, directly or indirectly through arguments defeating other arguments, on $\varphi$. So, $A_{\neg(\alpha \leadsto \beta)}$ depends, directly or indirectly, on an argument that has $\{ \langle \emptyset, \varphi \rangle \}$ as a sub-argument. Hence, if we replace all arguments $A$ on which $A_{\neg(\alpha \leadsto \beta)}$ depends by $A^*$, then $A^*_{\neg(\alpha \leadsto \beta)}$ is part of an odd loop. This contradicts the condition of the theorem. \end{proof} \begin{theorem} The defeasible theory $\langle \Sigma, D, \succ \rangle$ satisfies {\em Exclusive Or}: \\ if $\Sigma \cup \{\varphi \wedge \neg\psi\} \mid\joinrel\sim \eta$ and $\Sigma \cup \{\neg\varphi \wedge \psi\} \mid\joinrel\sim \eta$, then $\Sigma \cup \{\varphi \mathrel{\vee\kern-0.73em-} \psi \} \mid\joinrel\sim \eta$. \end{theorem} \begin{proof} Let $r_1 = \varphi \vee \psi \leadsto \varphi \wedge \neg \psi $, $r_2 = \varphi \vee \psi \leadsto \varphi \wedge \psi$ and $r_3 = \varphi \vee \psi \leadsto \neg \varphi \wedge \psi$.
To prove the theorem, we must prove that every extension $\cal E$ of the defeasible theory $\langle \Sigma \cup \{ \varphi \vee \psi \}, D, \succ \rangle$ is either an extension of $\langle \Sigma \cup \{\varphi \wedge \neg\psi\}, D, \succ \rangle$ or an extension of $\langle \Sigma \cup \{\neg\varphi \wedge \psi\}, D, \succ \rangle$. Since for every extension ${\cal E}'$ of $\langle \Sigma \cup \{\varphi \wedge \neg\psi\}, D, \succ \rangle$ and of $\langle \Sigma \cup \{\neg\varphi \wedge \psi\}, D, \succ \rangle$, $\eta \in {\cal E}'$ holds, and since $ B = \bigcap_{i} {\cal E}_i$, it then follows that $\Sigma \cup \{\varphi \vee \psi\} \mid\joinrel\sim \eta$. Let ${\cal E}$ be an extension of $\langle \Sigma \cup \{\varphi \vee \psi\}, D, \succ \rangle$. Then, because of the set of hypotheses $H$, $\varphi \wedge \neg \psi \in {\cal E}$ or $\neg \varphi \wedge \psi \in {\cal E}$. Notice that for no $\cal E$, $\varphi \wedge \psi \in {\cal E}$ unless $\Sigma$ is inconsistent. Suppose that $\varphi \wedge \neg \psi \in {\cal E}$. Then $\Omega = \{ \alpha \leadsto \beta \mid \neg(\alpha \leadsto \beta) \in {\cal E} \}$. To prove that ${\cal E}$ is an extension of $\langle \Sigma \cup \{\varphi \wedge \neg\psi\}, D, \succ \rangle$, we have to prove that $\alpha \in {\cal E}$ if and only if there is an argument $A^*_\alpha$ such that $\tilde{A}^*_\alpha \cap \Omega = \emptyset$ given $\langle \Sigma \cup \{\varphi \wedge \neg\psi\}, D, \succ \rangle$. Let $\alpha \in {\cal E}$. Then there is an $A_\alpha$ given $\langle \Sigma \cup \{ \varphi \vee \psi \}, D, \succ \rangle$ with $\tilde{A}_\alpha \cap \Omega = \emptyset$.
Therefore, we can construct an $A^*_\alpha$ given $\langle \Sigma \cup \{\varphi \}, D, \succ \rangle$ such that $\tilde{A}^*_\alpha \cap \Omega = \emptyset$ by first replacing each sub-argument $\{ \langle \{ \langle \emptyset, \varphi \vee \psi \rangle \}, r_1 \rangle \}$ in $A_\alpha$ by $\{ \langle \emptyset, \varphi \rangle \}$, and subsequently replacing each remaining sub-argument $\{ \langle \emptyset, \varphi \vee \psi \rangle \}$ by $\{ \langle \emptyset, \varphi \rangle \}$ as well. Hence, there is an argument $A^*_\alpha$ given $\langle \Sigma \cup \{\varphi \wedge \neg\psi\}, D, \succ \rangle$ with $\tilde{A}^*_\alpha \cap \Omega = \emptyset$. Let $A^*_\alpha$ be an argument given $\langle \Sigma \cup \{\varphi \wedge \neg\psi\}, D, \succ \rangle$ with $\tilde{A}^*_\alpha \cap \Omega = \emptyset$. Then we can construct an $A_\alpha$ given $\langle \Sigma \cup \{\varphi \vee \psi\}, D, \succ \rangle$ by replacing each sub-argument $\{ \langle \emptyset, \varphi \rangle \}$ in $A^*_\alpha$ by $\{ \langle \{ \langle \emptyset, \varphi \vee \psi \rangle \}, r_1 \rangle \}$. To make sure that $\tilde{A}_\alpha \cap \Omega = \emptyset$, we must make sure that $r_1$ is not defeated. If $r_1$ is defeated, there must be a valid argument for $\neg (\varphi \wedge \neg \psi)$. Since $\varphi \wedge \neg \psi \in {\cal E}$, there is no such argument. \vspace{2mm} In case $\neg \varphi \wedge \psi \in {\cal E}$, the proof is similar to the one given above. \end{proof} \section*{Appendix B} Associate with each node and with each justification of the JTMS a counter. Initially, set the counter of a node equal to the number of incoming justifications, and the counter of each justification equal to the number of out-nodes of the justification. Determine all the nodes that have a justification with an empty set of out-nodes. Label these nodes {\sc in}, and place them on the in-list. Next execute {\em propagate}.
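As an executable sketch of this counter scheme (the encoding of a justification as a pair of an out-node set and a consequent node is my own choice; the procedure {\em propagate} is spelled out in pseudocode below):

```python
from collections import defaultdict

def label_jtms(justifications):
    """Counter-based (partial) labeling of a JTMS.

    justifications: list of (out_nodes, consequent) pairs.
    Returns a dict mapping each labeled node to 'IN' or 'OUT';
    nodes that remain unlabeled are simply absent from the result.
    """
    node_ctr = defaultdict(int)   # number of incoming justifications per node
    just_ctr = {}                 # number of out-nodes per justification
    by_out = defaultdict(list)    # node -> justifications having it as out-node
    alive = []
    for i, (outs, cons) in enumerate(justifications):
        node_ctr[cons] += 1
        just_ctr[i] = len(outs)
        alive.append(True)
        for o in outs:
            by_out[o].append(i)
    label = {}
    # initially, label IN every node with a justification without out-nodes
    in_list = []
    for outs, cons in justifications:
        if not outs and cons not in label:
            label[cons] = 'IN'
            in_list.append(cons)
    while in_list:                # propagate
        out_list = []
        for n in in_list:         # an IN node invalidates justifications
            for i in by_out[n]:   # in which it occurs as an out-node
                if alive[i]:
                    alive[i] = False
                    cons = justifications[i][1]
                    node_ctr[cons] -= 1
                    if node_ctr[cons] == 0 and cons not in label:
                        label[cons] = 'OUT'
                        out_list.append(cons)
        in_list = []
        for n in out_list:        # an OUT node satisfies out-conditions
            for i in by_out[n]:
                if alive[i]:
                    just_ctr[i] -= 1
                    if just_ctr[i] == 0:
                        cons = justifications[i][1]
                        if cons not in label:
                            label[cons] = 'IN'
                            in_list.append(cons)
    return label
```

On the odd loop $\{(\{p\},q),(\{q\},p)\}$ the function returns an empty labeling, illustrating that the procedure need not label every node.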
\begin{tabbing} mmm\=mmm\=mmm\=mmm\=mmm\=mmm\=mmm\=mmm\= \kill {\em propagate}: \\ \> {\bf for} each node on the in-list \\ \> and for each out-going justification {\bf do} \\ \>\> decrement the counter of its consequent node; \\ \>\> remove the justification; \\ \>\> {\bf if} the counter of the node is equal to 0 {\bf then} \\ \>\>\> label the node {\sc out}; \\ \>\>\> place the node on the out-list; \\ \>\> {\bf end} \\ \> {\bf end} \\ \> delete the in-list; \\ \> {\bf for} each node on the out-list \\ \> and for each out-going justification {\bf do} \\ \>\> decrement the counter of the justification; \\ \>\> {\bf if} the counter of the justification is equal to 0 {\bf then} \\ \>\>\> label its consequent node {\sc in}; \\ \>\>\> place its consequent node on the in-list; \\ \>\> {\bf end} \\ \> {\bf end}; \\ \> delete the out-list; \\ \> {\bf if} the in-list is not empty {\bf then} \\ \>\> {\bf repeat} {\em propagate}; \\ \> {\bf end} \\ {\bf end} \end{tabbing} The procedure described above need not result in a complete labeling of the JTMS. When this is the case, more than one labeling exists. To create a complete labeling, we must choose one of the unlabeled nodes; i.e.\ a node that is not labeled {\sc in}, {\sc out} or {\sc undetermined}, and label it {\sc in} or {\sc out}. If we label the node {\sc in}, we place it on the in-list; if we label it {\sc out}, we place it on the out-list. Subsequently, we must execute the procedure {\em propagate}. We repeat the selection of a node, giving it a label and propagating the consequences, until all nodes are labeled. By backtracking on the choices that are made, we determine every labeling of the JTMS. \section*{\rm Acknowledgment} I thank the reviewers and Cees Witteveen for their comments, which helped me improve the paper.
\section{Introduction} The AdS/CFT correspondence \cite{Maldacena:1997re} in principle gives a precise and non-perturbative formulation of quantum gravity in terms of large $N$ gauge theories. In practice, our understanding of quantum gravity using AdS/CFT has been largely limited by difficulties in solving strongly coupled large $N$ gauge theories. Thus, exactly solvable models of strongly coupled gauge theories with a semi-classical gravity dual are highly desirable. In two dimensions, there are many exactly solvable conformal field theories. Most of them do not have a large $N$ limit that allows for a weakly coupled gravity dual. In \cite{Gaberdiel:2010pz}, Gaberdiel and Gopakumar proposed that the $W_N$ minimal model, which may be constructed as the coset model $$ {SU(N)_k\times SU(N)_1\over SU(N)_{k+1}} $$ has a 't Hooft-like large $N$ limit, where $N,k$ are taken to infinity, while the 't Hooft coupling $$ \lambda = {N\over k+N} $$ is held fixed (between 0 and 1). The $W_N$ minimal model was conjectured to be dual to a higher spin gauge theory in $AdS_3$, coupled to massive scalar fields. It was observed that, in particular, a class of $W_N$ primary operators in the CFT has conformal weights of order 1 in the large $N$ limit. The central charge of the CFT is \begin{equation}\begin{aligned} c&=(N-1)\left(1-{N(N+1)\over (N+k)(N+k+1)}\right) =N(1-\lambda^2)+{\mathcal O}(N^0). \end{aligned}\end{equation} The linear dependence on $N$ is characteristic of a vector model. It was not at all obvious, however, that the primary operators can be classified as single-trace or multi-trace operators in the large $N$ limit, which are supposed to be dual to single elementary particle states or their bound states in the bulk. It was also not obvious that the correlation functions of the primaries obey large $N$ factorization, according to the appropriate classification of single-trace and multi-trace operators.
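As a quick symbolic check of this vector-like scaling (a sketch using sympy; $k$ is eliminated in favor of $\lambda$ via $\lambda=N/(k+N)$):

```python
import sympy as sp

N, lam = sp.symbols('N lambda', positive=True)
k = N/lam - N  # from lambda = N/(k+N)

# central charge of the W_N minimal model at level k
c = (N - 1)*(1 - N*(N + 1)/((N + k)*(N + k + 1)))

# leading behavior: c = N(1 - lambda^2) + O(N^0)
leading = sp.limit(c/N, N, sp.oo)
assert sp.simplify(leading - (1 - lam**2)) == 0
```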
There is a further complication: the existence of a large set of ``light primaries'', whose conformal dimensions go to zero in the infinite $N$ limit, but which nonetheless do not decouple in the correlation functions \cite{Papadodimas:2011pf}. The identification of a number of single-trace versus multi-trace operators, and the large $N$ factorization of their correlation functions, are demonstrated in \cite{Chang:2011vka} by analyzing exact results for three-point functions. There, a complete picture of the single-trace/one-particle spectrum and the interpretation of the single-trace light primaries was still missing. In this paper, we will present a conjecture on the complete spectrum of single-trace operators of finite conformal weight in the infinite $N$ limit. Importantly, we will argue that the $W_N$ minimal model has a large set of hidden symmetries that emerge in the infinite $N$ limit and are broken by $1/N$ effects, in a manner similar to higher spin symmetries in three-dimensional Chern-Simons vector models. The currents that generate these hidden symmetries come from $W_N$ descendants of light primaries at finite $N$, but effectively become primary fields in the infinite $N$ limit. They are dual to gauge fields in $AdS_3$ of various spins, under which the massive scalars are charged. The corresponding gauge transformations are incompatible with the boundary conditions on the massive scalars, which leads to the breaking of symmetry. Our conjecture on the large $N$ spectrum, combined with the identification of the gauge generators acting on the matter scalars, leads to a dramatically new picture of the holographic dual of the $W_N$ minimal model. We propose that the dual higher spin gauge theory is a ``semi-local''\footnote{The terminology comes from analogy with the holographic theory of semi-local quantum liquids \cite{Iqbal:2011in}.} theory living on $AdS_3\times S^1$. This is not an ordinary four-dimensional field theory, however.
At each point of the $S^1$, there is a tower of higher spin gauge fields in $AdS_3$, coupled to a single complex massive scalar field, of the type described by Vasiliev's system in three dimensions. The different Vasiliev theories at different points on the $S^1$ appear to be decoupled at the level of the bulk equations of motion. Rather, they interact only through the boundary condition, which mixes scalar fields living at different points on the circle $S^1$. Essentially, while all the scalars classically have the same mass in $AdS_3$, the boundary condition assigns the scaling dimension $\Delta_+$ to right-moving modes of the scalar on the circle, and the complementary scaling dimension $\Delta_-=2-\Delta_+$ to left-moving modes of the scalar on the circle. While our proposal for the holographic dual is rather unconventional due to the large degeneracy in the bulk fields, it seems to be unavoidable due to peculiarities in the structure of large $N$ factorization in the $W_N$ minimal model. We believe that it is characteristic of gauged vector models on non-simply connected spaces \cite{Banerjee:2012gh,Banerjee:2012aj}. Presumably, what we see here is the field theory of the tensionless limit of a more conventional string theory in $AdS_3$, dual to quiver-like generalizations of the $W_N$ minimal model, and the $S^1$ should come from a topological sector of the string theory in this limit. In the next section, we briefly review the construction of the $W_N$ minimal model CFT, and what was previously known about the structure of single-trace and multi-trace operators in this CFT in the infinite $N$ limit. In sections 3 and 4, we present some new examples of single-trace operators and operator relations involving light primaries at large $N$.
In section 5, we argue that the operator relations that seemed to be in conflict with large $N$ factorization should in fact be interpreted as current non-conservation relations for currents that generate approximate ``hidden'' symmetries in the large $N$ limit. Further data on higher spin currents of this sort are presented in section 6. In section 7, we state our conjecture on the complete spectrum of single-trace operators in the CFT at infinite $N$, or single-particle states in the bulk. These include the infinite family of massive scalars $\phi_n, \tilde\phi_n$, light scalars $\omega_n$, and the hidden higher spin currents $j_n^{(s)}$, all of which are complex. Various checks based on partition functions and characters are given in section 8. In section 9, we determine the gauge generators associated with the hidden symmetry currents, and reveal the picture of a semi-local higher spin theory on $AdS_3\times S^1$. We discuss the implications of our results in section 10. \section{Summary of previous results} The $W_N$ minimal model has a holomorphic higher spin current $W^{(s)}$ and an anti-holomorphic current $\overline W^{(s)}$ for each spin $s=2,3,4,\cdots,N$. The Fourier modes of $W^{(s)}$ generate the $W_N$ algebra, which is a higher spin generalization of the Virasoro algebra. In the large $N$ limit, the $W_N$ algebra turns into the $W_\infty[\lambda]$ algebra, which contains generators of arbitrary spin. In the $W_N$ minimal model, the $W_N$ primary operators, i.e.\ the primaries with respect to the $W_N$ algebra, can be labeled by two representations $(\Lambda_+,\Lambda_-)$, where $\Lambda_\pm$ are the highest weight representations of $SU(N)_k$ and $SU(N)_{k+1}$, respectively.\footnote{A priori, the primary should also depend on the highest weight representation $\Lambda_0$ of $SU(N)_1$.
However, $\Lambda_0$ can be determined by requiring that $\Lambda_++\Lambda_0-\Lambda_-$ lie inside the root lattice of $SU(N)$.} For fixed representations $\Lambda_+, \Lambda_-$ at sufficiently large $N$,\footnote{Namely representations that are found in the tensor product of finitely many fundamental or anti-fundamental representations of $SU(N)$, at large $N$.} the fusion coefficients for the primary operators in the $W_N$ minimal model are simply given by the product of the fusion coefficients in the $SU(N)_k$ and $SU(N)_{k+1}$ WZW models, i.e. \begin{equation}\begin{aligned}\label{FRC} {\mathcal N}^{W_N}_{(\Lambda_+^1,\Lambda_-^1)(\Lambda_+^2,\Lambda_-^2)}{}^{(\Lambda_+^3,\Lambda_-^3)}={\mathcal N}^{(k)}_{\Lambda_+^1\Lambda^2_+}{}^{\Lambda_+^3}{\mathcal N}^{(k+1)}_{\Lambda_-^1\Lambda^2_-}{}^{\Lambda_-^3}, \end{aligned}\end{equation} where ${\mathcal N}^{(k)}_{\Lambda^1\Lambda^2}{}^{\Lambda^3}$ is the fusion coefficient of the $SU(N)_k$ WZW model. The gravity dual of the $W_N$ minimal model at large $N$ must be a higher spin gauge theory, containing a tower of gauge fields of spins $s=2,3,4,\cdots, \infty$ that are dual to the higher spin currents $W^{(s)}$. The pure higher spin gauge theory on $AdS_3$ can be described by the Chern-Simons action with $hs(\lambda)\times hs(\lambda)$ gauge algebra. The higher spin algebra $hs(\lambda)$ is an infinite dimensional Lie algebra, and by a Brown-Henneaux type computation, it was shown, in \cite{Henneaux:2010xg,Campoleoni:2010zq,Gaberdiel:2011wb}, that the asymptotic symmetry algebra of $hs(\lambda)$ is $W_\infty[\lambda]$. It also follows from this computation that the bulk coupling constant is proportional to the inverse square root of the central charge, i.e. \begin{equation}\begin{aligned} g_{bulk}\sim{1\over \sqrt{c}}\sim{1\over \sqrt{N}}. \end{aligned}\end{equation} The primary operators in the $W_N$ minimal model, constructed from the diagonal modular invariant, do not carry spin.
They should be dual to scalar elementary particles and their bound states with zero angular momentum, which become unbound in the infinite $N$ (zero bulk coupling) limit. In particular, the primary operator $\phi_1=(\yng(1),0)$ is dual to a scalar field with left and right conformal weight equal to \begin{equation}\begin{aligned} h_{(\yng(1),0)}={1\over 2}(1+\lambda) \end{aligned}\end{equation} in the large $N$ limit. The primary $\bar\phi_1=(\bar{\yng(1)},0)$ has the same dimension in the large $N$ limit, and is dual to the anti-particle of $(\yng(1),0)$. The primary operators $(\yng(1,1),0)$ and $(\yng(2),0)$ have conformal weights \begin{equation}\begin{aligned} h_{(\yng(1,1),0)}=1+\lambda,~~~h_{(\yng(2),0)}=2+\lambda \end{aligned}\end{equation} in the large $N$ limit. Note that $h_{(\yng(1,1),0)}$ and $h_{(\yng(2),0)}$ are twice the dimension of $(\yng(1),0)$ plus a non-negative integer. This allows for the identification of $(\yng(1,1),0)$ and $(\yng(2),0)$ as two-particle states of $\phi_1$'s. In general, the primary operators of the form $(\Lambda,0)$ are dual to the multi-particle states of $B(\Lambda)$ $\phi_1$'s, where $B(\Lambda)$ is the number of boxes of the Young tableau of the representation $\Lambda$ (here we assume that $B(\Lambda)$ does not scale with $N$). The $W_N$ minimal model in the large $N$ limit has a symmetry that exchanges $\Lambda_+$ with $\Lambda_-$, while flipping the sign of $\lambda$. Hence, the primary $\tilde\phi_1=(0,\yng(1))$ is also dual to a scalar elementary particle, with dimension \begin{equation}\begin{aligned} h_{(0,\yng(1))}={1\over 2}(1-\lambda), \end{aligned}\end{equation} and the primaries $(0,\Lambda)$ are dual to the multi-particle states of $\tilde\phi_1$. The fusion coefficients \eqref{FRC} imply that the primaries of the form $(\Lambda,0)$ (or $(0,\Lambda)$) are closed under the OPE, as long as $\Lambda$ is small compared to $N$. They form a closed subsector of the $W_N$ minimal model in the large $N$ limit.
Either one of these two subsectors has a consistent set of $n$-point functions on the sphere, in the sense that they factorize only through operators within the same subsector. In \cite{Chang:2011mz}, we proposed a bulk dual for each of the subsectors. The classical bulk theory is described by Vasiliev's system in three dimensions \cite{Prokushkin:1998bq,Prokushkin:1998vn,Chang:2011mz}, which is a higher spin gauge theory of gauge fields of spin $s = 2, 3,\cdots,\infty$ based on the higher spin algebra $hs(\lambda)$, coupled to a complex massive scalar field of mass squared $m^2=-(1-\lambda^2)$. This conjecture has also been checked by computations of the three-point function $\vev{\phi_1\bar\phi_1W^{(s)}}$ on both sides of the correspondence \cite{Chang:2011mz,Ammon:2011ua}. The bulk dual of other primary operators was first studied in \cite{Papadodimas:2011pf}, and later extended in \cite{Chang:2011vka}. In \cite{Chang:2011vka}, we computed the three-point functions of the $W_N$ primaries $(\yng(1),0)$, $(\yng(1),\yng(1))$, and/or their charge conjugates, with the primary $(\Lambda_+,\Lambda_-)$ where $\Lambda_\pm$ are $\yng(2)$ or $\yng(1,1)$. This result allowed us to identify the primary operators $(\Lambda_+,\Lambda_-)$, for $\Lambda_\pm$ being one- or two-box representations, with single-particle or multi-particle states in the bulk in the large $N$ limit.
The result in \cite{Chang:2011vka} can be summarized in the following table: \noindent\begin{tabular}{|x{1.2cm}|x{0.4cm}|x{2.4cm}|x{4.6cm}|x{4.6cm}|}\hline \diag{0cm}{1.2cm}{~~~$\Lambda_+$}{$\Lambda_-$} & 0 & $\yng(1)$ & $\yng(2)$ & $\yng(1,1)$ \\ \hline 0 & 0 & $\tilde\phi_1$ & $L_{\tilde\phi_1}$ & $\tilde\phi_1^2$ \\ \hline $\yng(1)$ & $\phi_1$ & $\omega_1$ & ${1\over \sqrt{2}}(\tilde\phi_1\omega_1+\tilde\phi_2)$ & ${1\over \sqrt{2}}(\tilde\phi_1\omega_1-\tilde\phi_2)$ \\ \hline $\yng(2)$ & $L_{\phi_1}$ & ${1\over \sqrt{2}}(\phi_1\omega_1+\phi_2)$ & ${1\over 2}(\omega^2_1+\sqrt{2}\omega_2)$ & ${1\over \sqrt{2}}(L_{\omega_1}-{1\over \sqrt{2}}(\phi_1\tilde\phi_2-\phi_2\tilde\phi_1))$ \\ \hline $\yng(1,1)$ & $\phi_1^2$ & ${1\over \sqrt{2}}(\phi_1\omega_1-\phi_2)$ & ${1\over \sqrt{2}}(L_{\omega_1}+{1\over \sqrt{2}}(\phi_1\tilde\phi_2-\phi_2\tilde\phi_1))$ & ${1\over 2}(\omega^2_1-\sqrt{2}\omega_2)$ \\ \hline \end{tabular} \noindent where $\phi_1,\tilde\phi_1,\omega_1,\phi_2,\tilde\phi_2,\omega_2$ are operators that are dual to the elementary particles in the bulk: \begin{equation}\begin{aligned} &\phi_1=(\yng(1),0),~~~\tilde\phi_1=(0,\yng(1)),~~~\omega_1=(\yng(1),\yng(1)), \\ &\phi_2={1\over \sqrt{2}}\left[(\yng(2),\yng(1))-(\yng(1,1),\yng(1))\right],~~~\tilde\phi_2={1\over \sqrt{2}}\left[(\yng(1),\yng(2))-(\yng(1),\yng(1,1))\right], \\ &\omega_2={1\over \sqrt{2}}\left[(\yng(2),\yng(2))-(\yng(1,1),\yng(1,1))\right]. \end{aligned}\end{equation} Two comments about this identification are in order. First, note that the expressions only make sense in the large $N$ limit, since each term in the linear combination has a different dimension at subleading order in $1/N$. In the large $N$ limit, we conjecture that each term in the above linear combination has the same dimensions and higher spin charges. This conjecture has been checked up to spin 5; see appendix A.
Second, in the table, the products of the operators are well-defined because one can check that their OPEs have no singularity in the large $N$ limit. The operator $L_{{\mathcal O}}$ is defined as \begin{equation}\begin{aligned} &L_{{\mathcal O}}={1\over 2 \sqrt{2}h_{{\mathcal O}}}\left({\mathcal O}\partial\bar\partial{\mathcal O}-\partial {\mathcal O} \bar\partial{\mathcal O}\right). \end{aligned}\end{equation} Again, the products are well-defined since there is no singularity in the OPE. This table is further subject to a relation \cite{Papadodimas:2011pf}: \begin{equation}\begin{aligned}\label{OMR1} &{1\over 2h_{\omega_1}}\partial\bar\partial\omega_1= \phi_1\tilde\phi_1. \end{aligned}\end{equation} The bulk physical meaning of this relation will be explained in detail in section 5. \section{New single-trace operators/elementary particles} Let us extend this table to the representations with three boxes. Before diving into the computation of three-point functions, there are some principles that can help us determine whether a primary operator ${\mathcal O}_A$ can be dual to the two-particle state of two elementary particles that are dual to ${\mathcal O}_B$ and ${\mathcal O}_C$. First, the primary ${\mathcal O}_A$ must appear in the OPE of the primaries ${\mathcal O}_B$ and ${\mathcal O}_C$. Second, the dimension of the primary ${\mathcal O}_A$ must be equal to the sum of the dimensions of ${\mathcal O}_B$ and ${\mathcal O}_C$ up to higher order corrections in $1/N$. The following table summarizes the dimensions of the primary operators up to representations with three boxes.
\noindent\begin{tabular}{|x{1.2cm}|x{1.9cm}|x{1.7cm}|x{1.7cm}|x{1.2cm}|x{1.9cm}|x{1.9cm}|x{1.2cm}|}\hline \diag{0cm}{1.2cm}{~~~$\Lambda_+$}{$\Lambda_-$} & 0 & $\yng(1)$ & $\yng(2)$ & $\yng(1,1)$ & $\yng(3)$ & $\yng(2,1)$ & $\yng(1,1,1)$ \\ \hline 0 & 0 & ${1-\lambda\over 2}$ & $(1-\lambda)+1$ & $1-\lambda$ & $3\left({1-\lambda\over 2}\right)+3$ & $3\left({1-\lambda\over 2}\right)+1$ & $3\left({1-\lambda\over 2}\right)$ \\ \hline $\yng(1)$ & ${1+\lambda\over 2}$ & ${\lambda^2\over 2N}$ & ${1-\lambda\over 2}$ & ${1-\lambda\over 2}$ & $(1-\lambda)+1$ & $1-\lambda$ & $1-\lambda$ \\ \hline $\yng(2)$ & $(1+\lambda)+1$ & ${1+\lambda\over 2}$ & ${\lambda^2\over N}$ & 1 & ${1-\lambda\over 2}$ & ${1-\lambda\over 2}$ & ${1-\lambda\over 2}+1$ \\ \hline $\yng(1,1)$ & $1+\lambda$ & ${1+\lambda\over 2}$ & 1 & ${\lambda^2\over N}$ & ${1-\lambda\over 2}+2$ & ${1-\lambda\over 2}$ & ${1-\lambda\over 2}$ \\ \hline $\yng(3)$ & $3\left({1+\lambda\over 2}\right)+3$ & $(1+\lambda)+1$ & ${1+\lambda\over 2}$ & ${1+\lambda\over 2}+2$ & ${3\lambda^2\over 2N}$ & 1 & 3 \\ \hline $\yng(2,1)$ & $3\left({1+\lambda\over 2}\right)+1$ & $1+\lambda$ & ${1+\lambda\over 2}$ & ${1+\lambda\over 2}$ & 1 & ${3\lambda^2\over 2N}$ & 1 \\ \hline $\yng(1,1,1)$ & $3\left({1+\lambda\over 2}\right)$ & $1+\lambda$ & ${1+\lambda\over 2}+1$ & ${1+\lambda\over 2}$ & 3 & 1 & ${3\lambda^2\over 2N}$ \\ \hline \end{tabular} Let us first focus on the light states: $(\yng(3),\yng(3)),(\yng(2,1),\yng(2,1)),(\yng(1,1,1),\yng(1,1,1))$. By the fusion rule and the additivity of the dimension, two linear combinations of these three operators can be identified with the multi-particle states $\omega_1^3$ and $\omega_1\omega_2$. Let us see this explicitly in terms of structure constants. A formula for a large class of structure constants is given in \cite{Chang:2011vka}.
By explicitly evaluating the formula, we find that, in the large $N$ limit, the OPE of $(\yng(1),\yng(1))$ and $(\yng(2),\yng(2))$ has no singularity, hence the product $(\yng(1),\yng(1))(\yng(2),\yng(2))$ is well-defined, which in the large $N$ limit is \begin{equation}\begin{aligned} &(\yng(1),\yng(1))(\yng(2),\yng(2))=(\yng(3),\yng(3))+(\yng(2,1),\yng(2,1)). \end{aligned}\end{equation} Similarly, in the large $N$ limit, we have \begin{equation}\begin{aligned} &(\yng(1),\yng(1))(\yng(1,1),\yng(1,1))=(\yng(2,1),\yng(2,1))+(\yng(1,1,1),\yng(1,1,1)). \end{aligned}\end{equation} Rewriting these equations in terms of $\omega_1,\omega_2$, we have \begin{equation}\begin{aligned} \omega_1\omega_2&=(\yng(3),\yng(3))-(\yng(1,1,1),\yng(1,1,1)), \\ \omega_1^3&=(\yng(3),\yng(3))+2(\yng(2,1),\yng(2,1))+(\yng(1,1,1),\yng(1,1,1)). \end{aligned}\end{equation} There is one linear combination of $(\yng(3),\yng(3)),(\yng(2,1),\yng(2,1)),(\yng(1,1,1),\yng(1,1,1))$, which cannot be expressed in terms of $\omega_1\omega_2$ and $\omega_1^3$. This operator should be dual to a new light elementary particle. Hence, we define \begin{equation}\begin{aligned} \omega_3&={1\over \sqrt{3}}\left[(\yng(3),\yng(3))-(\yng(2,1),\yng(2,1))+(\yng(1,1,1),\yng(1,1,1))\right], \end{aligned}\end{equation} which is orthonormal to $\omega_1\omega_2$ and $\omega_1^3$. Next, let us look at the primaries with dimension ${1-\lambda\over 2}$ in three-box representations. They are $(\yng(2),\yng(3)),(\yng(2),\yng(2,1)),(\yng(1,1),\yng(2,1)),(\yng(1,1),\yng(1,1,1)).$ From the additivity of the dimension, three linear combinations of these four operators can be identified with the multi-particle states $\tilde\phi_1\omega_2,\tilde\phi_1\omega_1^2,\tilde\phi_2\omega_1$. Again, we can see this explicitly from the structure constants.
From the structure constant computation, we have the following products at large $N$: \begin{equation}\begin{aligned} &(0,\yng(1))(\yng(2),\yng(2))={1\over \sqrt{3}}(\yng(2),\yng(3))+\sqrt{2\over 3}(\yng(2),\yng(2,1)), \\ &(0,\yng(1)) (\yng(1,1),\yng(1,1))=\sqrt{2\over 3}(\yng(1,1),\yng(2,1))+{1\over \sqrt{3}}(\yng(1,1),\yng(1,1,1)), \\ &(\yng(1),\yng(1))(\yng(1),\yng(2))=\sqrt{2\over 3}(\yng(2),\yng(3))+{1\over 2\sqrt{3}}(\yng(2),\yng(2,1))+{\sqrt{3}\over 2}(\yng(1,1),\yng(2,1)), \\ &(\yng(1),\yng(1))(\yng(1),\yng(1,1))={\sqrt{3}\over 2}(\yng(2),\yng(2,1))+{1\over 2\sqrt{3}}(\yng(1,1),\yng(2,1))+\sqrt{2\over 3}(\yng(1,1),\yng(1,1,1)). \end{aligned}\end{equation} Expressing them in terms of $\tilde\phi_1,\tilde\phi_2,\omega_1,\omega_2$, we obtain \begin{equation}\begin{aligned} \tilde\phi_1\omega_2&={1\over\sqrt{6}}\left[(\yng(2),\yng(3))+\sqrt{2}(\yng(2),\yng(2,1))-\sqrt{2}(\yng(1,1),\yng(2,1))-(\yng(1,1),\yng(1,1,1))\right], \\ {1\over \sqrt{2}}\tilde\phi_1\omega^2_1&=\tilde\phi_1{(\yng(2),\yng(2))+(\yng(1,1),\yng(1,1))\over \sqrt{2}}={1\over\sqrt{6}}\left[(\yng(2),\yng(3))+\sqrt{2}(\yng(2),\yng(2,1))+\sqrt{2}(\yng(1,1),\yng(2,1))+(\yng(1,1),\yng(1,1,1))\right] \\ &={1\over \sqrt{2}}\omega_1{\left[(\yng(1),\yng(2))+(\yng(1),\yng(1,1))\right]\over \sqrt{2}}={1\over\sqrt{6}}\left[(\yng(2),\yng(3))+\sqrt{2}(\yng(2),\yng(2,1))+\sqrt{2}(\yng(1,1),\yng(2,1))+(\yng(1,1),\yng(1,1,1))\right], \\ \tilde\phi_2\omega_1&={1\over\sqrt{6}}\left[\sqrt{2}(\yng(2),\yng(3))-(\yng(2),\yng(2,1))+(\yng(1,1),\yng(2,1))-\sqrt{2}(\yng(1,1),\yng(1,1,1))\right]. \end{aligned}\end{equation} There is one linear combination of $(\yng(2),\yng(3)),(\yng(2),\yng(2,1)),(\yng(1,1),\yng(2,1)),(\yng(1,1),\yng(1,1,1))$, which is linearly independent of $\tilde\phi_1\omega_2,\tilde\phi_1\omega_1^2,\tilde\phi_2\omega_1$, and should be dual to a new elementary particle in the bulk.
Hence, we can define \begin{equation}\begin{aligned} \tilde\phi_3={1\over \sqrt{6}}\left[\sqrt{2}(\yng(2),\yng(3))-(\yng(2),\yng(2,1))-(\yng(1,1),\yng(2,1))+\sqrt{2}(\yng(1,1),\yng(1,1,1))\right], \end{aligned}\end{equation} which is orthonormal to $\tilde\phi_1\omega_2,{1\over \sqrt{2}}\tilde\phi_1\omega^2_1,\tilde\phi_2\omega_1$. Similarly, by exchanging the left and right representations, we have \begin{equation}\begin{aligned} \phi_1\omega_2&={1\over\sqrt{6}}\left[(\yng(3),\yng(2))+\sqrt{2}(\yng(2,1),\yng(2))-\sqrt{2}(\yng(2,1),\yng(1,1))-(\yng(1,1,1),\yng(1,1))\right], \\ {1\over \sqrt{2}}\phi_1\omega^2_1&={1\over\sqrt{6}}\left[(\yng(3),\yng(2))+\sqrt{2}(\yng(2,1),\yng(2))+\sqrt{2}(\yng(2,1),\yng(1,1))+(\yng(1,1,1),\yng(1,1))\right], \\ \phi_2\omega_1&={1\over\sqrt{6}}\left[\sqrt{2}(\yng(3),\yng(2))-(\yng(2,1),\yng(2))+(\yng(2,1),\yng(1,1))-\sqrt{2}(\yng(1,1,1),\yng(1,1))\right], \end{aligned}\end{equation} and we define \begin{equation}\begin{aligned} \phi_3={1\over \sqrt{6}}\left[\sqrt{2}(\yng(3),\yng(2))-(\yng(2,1),\yng(2))-(\yng(2,1),\yng(1,1))+\sqrt{2}(\yng(1,1,1),\yng(1,1))\right]. \end{aligned}\end{equation} Next, let us focus on the primaries $(\yng(1),\yng(2,1)),(\yng(1),\yng(1,1,1))$. By the fusion rule and the additivity of the dimension, it is not hard to see that they must be identified with the two linear combinations of $\tilde\phi_1\tilde\phi_2$ and $\omega_1\tilde\phi_1^2$, which are dual to two- and three-particle states. Similarly, the primaries $(\yng(2,1),\yng(1)),(\yng(1,1,1),\yng(1))$ are identified with the two linear combinations of $\phi_1\phi_2$ and $\omega_1\phi_1^2$. All the other primaries: $(\yng(1),\yng(3))$, $(\yng(2),\yng(1,1,1))$, $(\yng(1,1),\yng(3))$, $(\yng(3),\yng(2,1))$, $(\yng(3),\yng(1,1,1))$, $(\yng(2,1), \yng(1,1,1))$, and the primaries with left and right representations exchanged, are also dual to multi-particle states. We will show this in section 8. 
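The linear algebra behind these identifications can be checked mechanically. The following sketch (using numpy; the coefficient vectors are transcribed from the linear combinations above, in the ordered bases $\big((\yng(3),\yng(3)),(\yng(2,1),\yng(2,1)),(\yng(1,1,1),\yng(1,1,1))\big)$ and $\big((\yng(2),\yng(3)),(\yng(2),\yng(2,1)),(\yng(1,1),\yng(2,1)),(\yng(1,1),\yng(1,1,1))\big)$) verifies that $\omega_3$ and $\tilde\phi_3$ are orthonormal to the corresponding multi-particle combinations:

```python
import numpy as np

r2, r3, r6 = np.sqrt(2.0), np.sqrt(3.0), np.sqrt(6.0)

# coefficient vectors in the basis ((3,3),(21,21),(111,111))
w1w2  = np.array([1.0, 0.0, -1.0])        # omega_1 omega_2
w1cub = np.array([1.0, 2.0,  1.0])        # omega_1^3
w3    = np.array([1.0, -1.0, 1.0]) / r3   # omega_3

assert abs(w3 @ w1w2) < 1e-12 and abs(w3 @ w1cub) < 1e-12  # orthogonality
assert abs(w3 @ w3 - 1.0) < 1e-12                          # unit norm

# coefficient vectors in the basis ((2,3),(2,21),(11,21),(11,111))
t1w2  = np.array([1.0,  r2, -r2, -1.0]) / r6   # tilde-phi_1 omega_2
t1w11 = np.array([1.0,  r2,  r2,  1.0]) / r6   # tilde-phi_1 omega_1^2 / sqrt(2)
t2w1  = np.array([ r2, -1.0, 1.0, -r2]) / r6   # tilde-phi_2 omega_1
t3    = np.array([ r2, -1.0, -1.0, r2]) / r6   # tilde-phi_3

for v in (t1w2, t1w11, t2w1):
    assert abs(t3 @ v) < 1e-12  # tilde-phi_3 is orthogonal to each
assert abs(t3 @ t3 - 1.0) < 1e-12
print("orthonormality checks passed")
```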
\section{Large $N$ operator relations involving $\omega_2$ and $\omega_3$} There is a new relation involving the descendant of $\omega_2$, similar to the relation \eqref{OMR1}. From the following two structure constants: \begin{equation}\begin{aligned} &C_{nor}\left((0,\yng(1)),(\yng(1,1),\yng(1)),(\overline{\yng(1,1)},\overline{\yng(1,1)})\right)={\sqrt{2}\over N}+{\mathcal O}({1\over N^2}), \\ &C_{nor}\Big((0,\yng(1)),(\yng(2),\yng(1)),(\overline{\yng(2)},\overline{\yng(2)})\Big)={\sqrt{2}\over N}+{\mathcal O}({1\over N^2}), \end{aligned}\end{equation} we obtain the three-point functions: \begin{equation}\begin{aligned}\label{OPhPh2} \vev{\bar\omega_2(z)\phi_1(w)\tilde\phi_2(0)}=\vev{\bar\omega_2(z)\phi_2(w)\tilde\phi_1(0)}={1\over \sqrt{2}N}{1\over |z-w|^{2\lambda}|w|^2|z|^{-2\lambda}}, \end{aligned}\end{equation} in the large $N$ limit. Taking $\partial\bar\partial$ on $\bar\omega_2$, we obtain: \begin{equation}\begin{aligned}\label{OOPPLL} \vev{\partial\bar\partial\bar\omega_2(z)\phi_1(w)\tilde\phi_2(0)}=\vev{\partial\bar\partial\bar\omega_2(z)\phi_2(w)\tilde\phi_1(0)}={\lambda^2\over \sqrt{2}N}\left({1\over |z-w|^{2(1+\lambda)}}\right)\left({1\over |z|^{2(1-\lambda)}}\right). \end{aligned}\end{equation} The two factors on the right hand side of \eqref{OOPPLL} are precisely given by the two-point functions $\vev{\phi_2\bar\phi_2}$ and $\vev{\tilde\phi_1\bar{\tilde\phi}_1}$, or $\vev{\phi_1\bar\phi_1}$ and $\vev{\tilde\phi_2\bar{\tilde\phi}_2}$. Hence, this suggests the following relation in the large $N$ limit: \begin{equation}\begin{aligned}\label{OMR2} {1\over 2h_{\omega_2}}\partial\bar\partial\omega_2={1\over \sqrt{2}}(\phi_1\tilde\phi_2+\tilde\phi_1\phi_2).
\end{aligned}\end{equation} To make sure that there is no extra term on the left hand side, one can compute the two-point function for the right hand side of \eqref{OMR2} with its charge conjugate, and the two-point function for the left hand side of \eqref{OMR2} with its charge conjugate, and find that they agree. From the previous analysis of $\omega_1,\omega_2$, we expect that there is also a relation involving the descendant of $\omega_3$. We postulate that this relation is \begin{equation}\begin{aligned}\label{OMR3} {1\over 2h_{\omega_3}}\partial\bar\partial \omega_3= {1\over \sqrt{3}}(\phi_1\tilde\phi_3+\phi_2\tilde\phi_2+\phi_3\tilde\phi_1). \end{aligned}\end{equation} We give an argument for this relation. In the large $N$ limit, we have the following structure constants \begin{equation}\begin{aligned} &C_{nor}\Big((0,\yng(1)),(\yng(3),\yng(2)),(\overline{\yng(3)},\overline{\yng(3)})\Big)={\sqrt{3}\over N},~~~C_{nor}\Big((0,\yng(1)),(\yng(1,1,1),\yng(1,1)),(\overline{\yng(1,1,1)},\overline{\yng(1,1,1)})\Big)={\sqrt{3}\over N} \\ &C_{nor}\Big((0,\yng(1)),(\yng(2,1),\yng(2)),(\overline{\yng(2,1)},\overline{\yng(2,1)})\Big)=\sqrt{3\over 2}{1\over N},~~~C_{nor}\Big((0,\yng(1)),(\yng(2,1),\yng(1,1)),(\overline{\yng(2,1)},\overline{\yng(2,1)})\Big)=\sqrt{3\over 2}{1\over N}. \end{aligned}\end{equation} These structure constants give the three-point functions: \begin{equation}\begin{aligned}\label{OPhPh3} \vev{\bar\omega_3(z)\phi_1(w)\tilde\phi_3(0)}=\vev{\bar\omega_3(z)\phi_3(w)\tilde\phi_1(0)}={1\over \sqrt{3}N}{1\over |z-w|^{2\lambda}|w|^2|z|^{-2\lambda}}, \end{aligned}\end{equation} in the large $N$ limit.
Taking $\partial\bar\partial$ on $\bar\omega_3$, the three-point function again factorizes as a product of two two-point functions: \begin{equation}\begin{aligned}\label{TPFFF} \vev{\partial\bar\partial\bar\omega_3(z)\phi_1(w)\tilde\phi_3(0)}=\vev{\partial\bar\partial\bar\omega_3(z)\phi_3(w)\tilde\phi_1(0)}={\lambda^2\over \sqrt{3}N}{1\over |z-w|^{2(1+\lambda)}|z|^{2(1-\lambda)}}. \end{aligned}\end{equation} The three-point functions \eqref{TPFFF} imply the relation \begin{equation}\begin{aligned} {1\over 2h_{\omega_3}}\partial\bar\partial \omega_3= {1\over \sqrt{3}}(\phi_1\tilde\phi_3+\phi_3\tilde\phi_1+\cdots). \end{aligned}\end{equation} By comparing the two-point functions of the left and right hand sides with their charge conjugates, we know that the ``$\cdots$'' must be a single term $\phi_n\tilde\phi_m$ with $n,m\neq 1,3$, and the only candidate is $\phi_2\tilde\phi_2$. \section{Hidden symmetries} In this section, we give a physical interpretation of the relations \eqref{OMR1}, \eqref{OMR2}, \eqref{OMR3}, and provide a bulk mechanism that produces such relations. The key observation is that the dimension of $\omega_n$ goes to zero in the large $N$ limit. Therefore, it should effectively behave like a free boson, whose derivative is a conserved current. Hence, we define the holomorphic current $(j^{(1)}_n)_z=\partial\omega_n/\sqrt{2h_{\omega_n}}$ and also the antiholomorphic current $(j^{(1)}_n)_{\bar z}=\bar\partial\omega_n/\sqrt{2h_{\omega_n}}$, for $n=1,2,3$, which have normalized two-point functions. For simplicity, we will sometimes suppress the index by simply denoting $(j^{(1)}_n)_{ z}$ as $j^{(1)}_n$ in the following. However, since the dimensions of $\omega_n$ are not exactly equal to zero, the currents $j_n^{(1)}$ are not exactly conserved.
The relations \eqref{OMR1}, \eqref{OMR2}, \eqref{OMR3} are then naturally interpreted as current non-conservation equations\footnote{Current non-conservation equations for theories in one higher dimension have been studied in \cite{Giombi:2011kc,Maldacena:2012sf,Chang:2012kt}.}: \begin{equation}\begin{aligned}\label{CNEn} \bar\partial j_n^{(1)}={\lambda\over\sqrt{N}}(\phi_1\tilde \phi_n+\phi_2\tilde \phi_{n-1}+\cdots+\phi_n\tilde\phi_1). \end{aligned}\end{equation} The bulk interpretation of these current non-conservation equations is simple. Let us illustrate this by considering the case of $j_1^{(1)}$. In this case the current non-conservation equation is simply \begin{equation}\begin{aligned}\label{CNE1} \bar\partial j_1^{(1)}={\lambda\over\sqrt{N}}\phi_1\tilde \phi_1. \end{aligned}\end{equation} Following the $AdS/CFT$ dictionary, the bulk dual of the current $j_1^{(1)}$ is a $U(1)$ Chern-Simons gauge field $A_\mu$, and the bulk duals of the operators $\phi_1,\tilde\phi_1$ are two scalars $\Phi,\widetilde\Phi$. These two scalars have different but complementary dimensions, hence they have the same mass but different boundary conditions. They can be minimally coupled to the gauge field $A_\mu$. The action of this system up to cubic order is \begin{equation}\begin{aligned} S&={k_{CS}\over 4\pi}\int AdA+2i\int d^2xdz\sqrt{g} A^\mu\left[\widetilde\Phi \partial_\mu \Phi-\Phi \partial_\mu \widetilde\Phi\right]. \end{aligned}\end{equation} Using this action, we can compute the three-point function of $\bar\partial j_1^{(1)}$ with $\phi_1,\tilde\phi_1$. The boundary to bulk propagator of the Chern-Simons gauge field takes a pure gauge form $A_\mu=\partial_\mu\Lambda$. The cubic action can hence be written as \begin{equation}\begin{aligned} \lim_{z\to 0} {2\over z}\int d^2x \Lambda\left[\Phi \partial_z \widetilde\Phi-\widetilde\Phi \partial_z \Phi\right].
\end{aligned}\end{equation} The three-point function is then given by \begin{equation}\begin{aligned} &\vev{ j_1^{(1)}(\vec x_3)\phi_1(x_1)\tilde\phi_1(x_2)} \\ &=\lim_{z\to 0} {2\over z}\int d^2x\Lambda(x-x_3)\big[K_{1+\lambda}(x-x_1) \partial_z K_{1-\lambda}(x-x_2)-K_{1-\lambda}(x-x_2) \partial_z K_{1+\lambda}(x-x_1)\big] \\ &= -16\pi \lambda\int d^2x {1\over (x^+-x_3^+)} {1\over |\vec x-\vec x_2|^{2(1-\lambda)}} {1\over |\vec x-\vec x_1|^{2(1+\lambda)}}, \end{aligned}\end{equation} where $K_\Delta$ and $\Lambda$ are the boundary to bulk propagators for the scalar and gauge function: \begin{equation}\begin{aligned} K_\Delta=\left(z\over z^2+|\vec x|^2\right)^\Delta,~~~\Lambda={4\pi\over x^+}. \end{aligned}\end{equation} Taking the derivative ${\partial\over \partial x^+_3}$ on the above expression, we obtain \begin{equation}\begin{aligned}\label{cstpf} \vev{\bar\partial j_1^{(1)}(\vec x_3)\phi_1(x_1)\tilde\phi_1(x_2)}&=-16\pi^2 \lambda\int d^2x \delta^2(x-x_3) {1\over |\vec x-\vec x_2|^{2(1-\lambda)}} {1\over |\vec x-\vec x_1|^{2(1+\lambda)}} \\ &=-16\pi^2 \lambda{1\over |\vec x_3-\vec x_2|^{2(1-\lambda)} |\vec x_3-\vec x_1|^{2(1+\lambda)}}, \end{aligned}\end{equation} which factorizes into a product of two two-point functions of scalars with dimension $\Delta=1+\lambda$ and $1-\lambda$. This matches exactly with what we expected from \eqref{CNE1}, provided we identify the Chern-Simons level as $k_{CS}=N$. In section 8, we will show that every $(j_n^{(1)})_z$ gives a $U(1)$ Chern-Simons gauge field, and combined with the gauge field dual to $(j_n^{(1)})_{\bar z}$, they form a $U(1)^\infty\times U(1)^\infty$ Chern-Simons gauge theory in the bulk. \section{Approximately conserved higher spin currents} The approximately conserved spin-1 current $(j_n^{(1)})_z$ generates a tower of approximately conserved higher spin currents, by the action of $W_N$ generators on $(j_n^{(1)})_z$.
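The boundary to bulk propagator $K_\Delta$ used in this computation can be checked to satisfy the free wave equation in Euclidean $AdS_3$. A small symbolic sketch, assuming the standard metric $ds^2=(dz^2+d\vec x^2)/z^2$, for which the scalar Laplacian is $\square=z^2(\partial_z^2+\partial_1^2+\partial_2^2)-z\,\partial_z$ and the mass is $m^2=\Delta(\Delta-2)$:

```python
import sympy as sp

z, x, y, Delta = sp.symbols('z x y Delta', positive=True)

# Boundary-to-bulk propagator for a scalar source at the boundary origin.
K = (z / (z**2 + x**2 + y**2))**Delta

# Scalar Laplacian for ds^2 = (dz^2 + dx^2 + dy^2)/z^2 acting on scalars.
box_K = z**2 * (sp.diff(K, z, 2) + sp.diff(K, x, 2) + sp.diff(K, y, 2)) - z * sp.diff(K, z)

# (box - m^2) K = 0 with m^2 = Delta*(Delta - 2); evaluate at a random point.
residual = box_K - Delta * (Delta - 2) * K
point = {Delta: 1.4, z: 0.5, x: 0.3, y: 0.7}
print(abs(float(residual.subs(point))) < 1e-12)  # True
```

The same check holds for both $\Delta_\pm=1\pm\lambda$, consistent with $\Phi$ and $\widetilde\Phi$ having equal mass but different boundary conditions.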
For example, $(j_1^{(1)})_z$ has a level-one $W$-descendant \begin{equation}\begin{aligned} (j_1^{(2)})_z&={1\over \sqrt{2(1-\lambda^2)}}\left(W^{(3)}_{-1}-{3\over 2}i\lambda L_{-1}\right)(j_1^{(1)})_z \\ &=\sqrt{N\over 2\lambda^2(1-\lambda^2)}(W^{(3)}_{-2}-i\lambda\partial^2)\omega_1, \end{aligned}\end{equation} which is also a Virasoro primary\footnote{In appendix B, we fix the normalization of $(j^{(1)}_1)_z$ and check that it is a Virasoro primary.}. This is an approximately conserved stress tensor. The current non-conservation equation of $(j^{(1)}_1)_z$ then descends to the current non-conservation equation of $(j^{(2)}_1)_z$: \begin{equation}\begin{aligned}\label{NCET} \bar\partial (j_1^{(2)})_z&={1\over \sqrt{2(1-\lambda^2)}}\left(W^{(3)}_{-1}-{3\over 2}i\lambda L_{-1}\right)\bar\partial j_1^{(1)} \\ &={i\lambda\over \sqrt{2N(1-\lambda^2)}}\left[(1-\lambda)\partial\phi_1\tilde\phi_1-(1+\lambda)\phi_1\partial\tilde\phi_1\right], \end{aligned}\end{equation} where we have used the null-state equations in appendix C. In general, the approximately conserved spin-1 current $(j_1^{(1)})_z$ has exactly one $W$-descendant Virasoro primary $(j^{(s)}_1)_z$ at each level $s$, which takes the form \begin{equation}\begin{aligned}\label{ACHSC} (j^{(s)}_1)_z=\sqrt{N} (a_1W^{(s+1)}_{-s}+a_2\partial W^{(s)}_{-s+1}+\cdots+a_{s}\partial^{s})\omega_1, \end{aligned}\end{equation} where the $a_i$ are constants depending on $\lambda$, which can be fixed by requiring $(j^{(s)}_1)_z$ to be a Virasoro primary. The $(j^{(s)}_1)_z$'s are approximately conserved higher spin currents.
They satisfy current non-conservation equations of the form \begin{equation}\begin{aligned}\label{HSCNE} \bar\partial (j^{(s)}_1)_z= {1\over \sqrt{N}}(b_1\partial^{s-1}\phi_1\tilde\phi_1+b_2\partial^{s-2}\phi_1\partial\tilde\phi_1+\cdots+b_s\phi_1\partial^{s-1}\tilde\phi_1), \end{aligned}\end{equation} where the $b_i$ are constants depending on $\lambda$, which can be fixed by requiring the left hand side of \eqref{HSCNE} to be a Virasoro primary. By the same argument, there are also antiholomorphic higher spin currents $(j^{(s)}_1)_{\bar z}$. We expect that there are also approximately conserved holomorphic and antiholomorphic higher spin currents $(j^{(s)}_2)_z$, $(j^{(s)}_3)_z$, and $(j^{(s)}_2)_{\bar z}$, $(j^{(s)}_3)_{\bar z}$ that take a form similar to \eqref{ACHSC}. \section{The single particle spectrum} Now we state a conjecture on the complete spectrum of the single particle states in the bulk. Throughout this paper, by a single-trace operator we mean an operator that obeys the same large $N$ factorization property as single-trace operators in large $N$ gauge theories; such an operator is dual to the state of one elementary particle in the bulk. The products of single-trace operators are dual to multi-particle states. As we have seen in the previous section, the primary operators that involve up to one box in the Young tableaux of $\Lambda_+$ and $\Lambda_-$ are all single-trace operators: they are $\phi_1$, $\tilde\phi_1$, and $\omega_1$. The primaries that involve up to two boxes in the Young tableaux of $\Lambda_+$ and $\Lambda_-$ are suitable linear combinations of the single-trace operators $\phi_2$, $\tilde\phi_2$, $\omega_2$, or products of two single-trace operators. We have also seen some evidence that the primaries with up to three boxes in their representations are linear combinations of the single-trace operators $\phi_3$, $\tilde\phi_3$, $\omega_3$, or products of single-trace operators.
We conjecture that the primaries with up to $n$-box representations are linear combinations of the single-trace operators $\phi_n$, $\tilde\phi_n$, $\omega_n$, or products of such single-trace operators $\phi_m, \tilde\phi_m, \omega_m$ for $m<n$. Here $\phi_n$ is a linear combination of primaries of the form $(\Lambda_+,\Lambda_-)$ that involve $(n,n-1)$ boxes, $\tilde\phi_n$ is a linear combination of primaries that involve $(n-1,n)$ boxes, and $\omega_n$ is a linear combination of light primaries of the form $(\Lambda,\Lambda)$ where $\Lambda$ involves $n$ boxes. A part of this conjecture is easy to prove: the statement that there is only one light single-trace operator $\omega_n$ for each $n$ labeling the number of boxes in its corresponding $SU(N)$ representations follows easily from the fusion rule. First we note that, generally, the light states of the form $(\Lambda,\Lambda)$ have dimension $B(\Lambda)\lambda^2/N+{\cal O}(N^{-2})$, where $B(\Lambda)$ is the number of boxes of the Young tableaux of the representation $\Lambda$, in the large $N$ limit at fixed finite $B(\Lambda)$. We may write a partition function of the light states \begin{equation}\begin{aligned} Z(x) = \sum_{(\Lambda,\Lambda)} x^{B(\Lambda)} = \prod_{n=1}^\infty {1\over 1-x^n}. \end{aligned}\end{equation} Each single-trace operator of dimension $n\lambda^2/N$ is a linear combination of $(\Lambda,\Lambda)$ with $B(\Lambda)=n$. The dimension of a product of single-trace operators is additive at order $1/N$. The multi-particle products of the single-trace operator $\omega_n$ alone are counted by the factor $1/(1-x^n)$. By comparing this with $Z(x)$, we see that there is precisely one single-trace operator $\omega_n$ for each $n$. The $\phi_n$, $\tilde\phi_n$, $\omega_n$ are all the single-trace operators that are dual to scalar fields in the bulk. These are not all, however. There are other single-trace operators that are dual to spin-1, spin-2, and higher spin gauge fields.
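The counting argument can be made explicit in a few lines of code. A minimal sketch (truncating at 12 boxes; the multiplicity $g_n$ of new single-trace operators at level $n$ is extracted by successively stripping off the Fock spaces of the lower $\omega_m$):

```python
from math import comb

N = 12  # truncate all series at x^N

def mul(a, b):
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j <= N:
                    c[i + j] += ai * bj
    return c

# Z(x) = prod_{n>=1} 1/(1-x^n): generating function of the light states (Lambda,Lambda),
# graded by the number of boxes B(Lambda); the coefficients are the partition numbers.
Z = [1] + [0] * N
for n in range(1, N + 1):
    inv = [1 if k % n == 0 else 0 for k in range(N + 1)]  # 1/(1-x^n)
    Z = mul(Z, inv)

# Strip off the Fock space of one candidate generator per level: the coefficient left
# at level n is the multiplicity g_n of new single-trace operators omega_n needed there.
g, rem = [], Z[:]
for n in range(1, N + 1):
    gn = rem[n]
    g.append(gn)
    factor = [0] * (N + 1)  # (1 - x^n)^{gn} removes the multi-particle states of omega_n
    for k in range(0, N // n + 1):
        factor[n * k] = (-1) ** k * comb(gn, k)
    rem = mul(rem, factor)

print(g)  # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

Exactly one new single-trace operator appears at every level, as claimed.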
As explained in the previous section, while $\partial\omega_n$ is a level-one descendant of $\omega_n$, the norm of $\partial\omega_n$ goes to zero in the large $N$ limit. Consequently, the normalized operator $(j_n^{(1)})_z \sim \sqrt{N} \partial \omega_n$ behaves like a primary operator. Such operators will be referred to as {\it large $N$ primary operators}, and we include them in our list of single-trace operators because they should be dual to elementary fields in the bulk as well. We conjecture that the $j_n^{(1)}$'s are single-trace operators dual to spin-1 Chern-Simons gauge fields in the bulk. This statement has passed some tests involving the three-point function of $j_n^{(1)}$ with two scalars. This is not the end of the story. As shown in the previous section, there are large $N$ primaries of higher spin $s$, denoted by $j^{(s)}_n$. These are single-trace operators dual to additional elementary higher spin gauge fields in the bulk. Unlike the original $W_N$ currents, however, the would-be higher spin symmetries generated by $j_n^{(s)}$ are broken by the boundary conditions on the charged scalars, leading to the current non-conservation relations. These hidden symmetries are recovered in the infinite $N$ limit. Let us summarize our conjecture on the single-particle spectrum. There are two families of complex single-trace operators $\phi_n,\tilde\phi_n$, which are dual to massive complex scalar fields (of the same mass classically), one family of complex single-trace operators $\omega_n$, that are dual to massless scalars in the bulk, and a family of approximately conserved higher spin single-trace operators $j^{(s)}_n$ for each positive integer spin $s=1,2,3,\cdots$, that are dual to Chern-Simons spin-1 and higher spin gauge fields.
\section{Large $N$ partition functions} In this section, we check our proposed single particle spectrum against the partition function of the $W_N$ minimal model and the $W_N$ characters of various primaries in the large $N$ limit. Let ${\mathcal O}$ be a single-trace operator with left and right dimensions $h_{\mathcal O}$ and $\bar h_{\mathcal O}$. From the bulk perspective, the (free) single particle contribution to the partition function is \begin{equation}\begin{aligned}\label{CDOC} & Z_{{\mathcal O}}=\chi^\infty_{\mathcal O}(h_{\mathcal O},q)\overline{\chi^\infty_{\mathcal O}}(\bar h_{\mathcal O},\bar q), \\ & \chi^\infty_{\mathcal O}(h_{\mathcal O},q)={q^{h_{\mathcal O}}\over 1-q} =q^{h_{\mathcal O}}+q^{h_{\mathcal O}+1}+q^{h_{\mathcal O}+2}+\cdots. \end{aligned}\end{equation} The coefficients of the $q$-expansion are 1 in this case; they count the operators ${\mathcal O},\partial{\mathcal O},\partial^2{\mathcal O},\cdots$. There is an exception to the formula \eqref{CDOC}. If the single-trace operator ${\mathcal O}$ has zero conformal weight, then the character is the identity, i.e.\ $\chi^\infty_{\mathcal O}(h_{\mathcal O}=0)=1$. $\chi_{\cal O}^\infty$ is not the same as the $W_N$ character for the primary ${\cal O}$ in the large $N$ limit, because it misses the contribution from boundary excitations of higher spin fields. The contribution from the boundary higher spin gauge fields to the partition function is \begin{equation}\begin{aligned} Z_{hs}=|\chi_{hs}^\infty|^2,~~~\chi_{hs}^\infty=\prod^\infty_{s=2}\prod^\infty_{n=s}{1\over (1-q^n)}.
\end{aligned}\end{equation} According to our conjecture, the single particle partition functions are: \begin{equation}\begin{aligned}\label{MSC} &Z_{\phi_n}={q^{{1+\lambda\over 2}}\bar q^{{1+\lambda\over 2}}\over (1-q)(1-\bar q)},~~~Z_{\tilde\phi_n}={q^{{1-\lambda\over 2}}\bar q^{{1-\lambda\over 2}}\over (1-q)(1-\bar q)}, \end{aligned}\end{equation} and \begin{equation}\begin{aligned} &Z_{\omega_n}=1,~~~Z_{j^{(s)}_n}=\chi^\infty_{j^{(s)}_n}={q^s\over 1-q}. \end{aligned}\end{equation} For simplicity, let us combine all the characters of the higher spin gauge fields into a single character $\chi_{j_n}^\infty$: \begin{equation}\begin{aligned}\label{S1CC} \chi_{j_n}^\infty=\sum^\infty_{s=1}\chi^\infty_{j_n^{(s)}}=\sum_{s=1}^\infty{q^s\over 1-q}={q\over (1-q)^2}. \end{aligned}\end{equation} Next, let us consider the partition function of the $W_N$ minimal model in the large $N$ limit. By diagonal modular invariance, the partition function in the large $N$ limit is given by the sum of the absolute squares of the characters: \begin{equation}\begin{aligned} Z_{W_N}=\sum_{(\Lambda_+,\Lambda_-)}|\chi_{(\Lambda_+,\Lambda_-)}|^2. \end{aligned}\end{equation} The characters $\chi_{(\Lambda_+,\Lambda_-)}$, for $\Lambda_\pm$ being representations with one to three boxes in the Young tableaux, in the large $N$ limit are computed in appendix D up to cubic order. The following formulas in this section have all been checked up to this order. Let us start by looking at the contribution of the identity operator to the partition function, which in the large $N$ limit gives the partition function of the boundary higher spin gauge fields: \begin{equation}\begin{aligned} \lim_{N\to \infty}|\chi_{(0,0)}|^2=Z_{hs}. \end{aligned}\end{equation} The primary operators $(\yng(1),0)=\phi_1$ and $(0,\yng(1))=\tilde\phi_1$ are dual to massive scalars.
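The higher spin vacuum character $\chi_{hs}^\infty=\prod_{s=2}^\infty\prod_{n=s}^\infty(1-q^n)^{-1}$ can be expanded order by order. A short sketch (truncating both products at the expansion order, which suffices since the $(s,n)$ factor first contributes at order $q^n$):

```python
N = 6  # expansion order in q

def mul(a, b):
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j <= N:
                    c[i + j] += ai * bj
    return c

# chi_hs = prod_{s>=2} prod_{n>=s} 1/(1-q^n): one bosonic oscillator for each mode
# of a spin-s current at level n >= s.
chi = [1] + [0] * N
for s in range(2, N + 1):
    for n in range(s, N + 1):
        inv = [1 if k % n == 0 else 0 for k in range(N + 1)]  # 1/(1-q^n)
        chi = mul(chi, inv)

print(chi)  # [1, 0, 1, 2, 4, 6, 12]
```

The coefficient at level 2 counts $L_{-2}$, at level 3 it counts $L_{-3}$ and $W^{(3)}_{-3}$, and so on.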
Their contributions to the partition function indeed give the partition function of a single massive scalar: \begin{equation}\begin{aligned}\label{DCPPH} \lim_{N\to \infty}|\chi_{(\yng(1),0)}|^2= Z_{hs}Z_{\phi_1}, \\ \lim_{N\to \infty}|\chi_{(0,\yng(1))}|^2=Z_{hs}Z_{\tilde\phi_1}. \end{aligned}\end{equation} The primary operator $(\yng(1),\yng(1))=\omega_1$ is dual to a massless scalar, and its descendants $j^{(s)}_1$ are dual to spin-1, spin-2 and higher spin gauge fields. Also, by the current non-conservation equation \eqref{HSCNE}, some of the descendants of $(\yng(1),\yng(1))$ are dual to two-particle states. Therefore, the contribution of $(\yng(1),\yng(1))$ to the partition function decomposes as \begin{equation}\begin{aligned} \lim_{N\to \infty}|\chi_{(\yng(1),\yng(1))}|^2=Z_{hs}(Z_{\omega_1}+\chi^\infty_{j_1}+\overline{\chi^\infty_{j_1}}+Z_{\phi_1}Z_{\tilde\phi_1}), \end{aligned}\end{equation} where the last term is the contribution of the two-particle states of $\phi_1$ and $\tilde\phi_1$. The identification of the other primary operators inevitably involves multi-particle states. By Bose statistics, we can write a multi-particle partition function in terms of the single-particle partition function \eqref{CDOC} as \begin{equation}\begin{aligned} Z^{multi}_{\mathcal O}(t)&=\exp\left[\sum^\infty_{m=1}{Z_{\mathcal O}(q^m,\bar q^m)\over m} t^m\right]. \end{aligned}\end{equation} Suppose ${\mathcal O}=\phi_n$; then the partition function $Z^{multi}_{\phi_n}(t)$ can be expanded as \begin{equation}\begin{aligned} Z^{multi}_{\phi_n}(t)=\sum_{\ell=0}^\infty t^\ell Z_{\phi_n^\ell}, \end{aligned}\end{equation} where $Z_{\phi_n^\ell}$ is the $\ell$-particle partition function.
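The low-order coefficients are the standard bosonic symmetrizations, $Z_{\phi_n^2}={1\over2}\big[Z_{\phi_n}(q,\bar q)^2+Z_{\phi_n}(q^2,\bar q^2)\big]$ and $Z_{\phi_n^3}={1\over6}\big[Z_{\phi_n}(q,\bar q)^3+3Z_{\phi_n}(q,\bar q)Z_{\phi_n}(q^2,\bar q^2)+2Z_{\phi_n}(q^3,\bar q^3)\big]$, and they can be checked numerically against the closed forms quoted in the text. A minimal sketch (the values of $q,\bar q,\lambda$ are arbitrary test points):

```python
def Zphi(q, qb, lam):
    # single-particle partition function of phi_n: (q qb)^{(1+lam)/2} / ((1-q)(1-qb))
    return (q * qb) ** ((1 + lam) / 2) / ((1 - q) * (1 - qb))

q, qb, lam = 0.17, 0.23, 0.37  # arbitrary test values inside the unit disk

# Bosonic symmetrization: the t^2 and t^3 coefficients of the plethystic exponential.
Z1, Z2, Z3 = Zphi(q, qb, lam), Zphi(q**2, qb**2, lam), Zphi(q**3, qb**3, lam)
Zphi2 = (Z1**2 + Z2) / 2
Zphi3 = (Z1**3 + 3 * Z1 * Z2 + 2 * Z3) / 6

# Closed forms of the two- and three-particle partition functions.
closed2 = (q**(1 + lam) * qb**(1 + lam) * (1 + q * qb)
           / ((1 - q)**2 * (1 + q) * (1 - qb)**2 * (1 + qb)))
closed3 = (q**(1.5 * (1 + lam)) * qb**(1.5 * (1 + lam))
           * (1 + q*qb + q**2*qb + qb**2*q + q**2*qb**2 + q**3*qb**3)
           / ((1 - q)**3 * (1 + q) * (1 + q + q**2)
              * (1 - qb)**3 * (1 + qb) * (1 + qb + qb**2)))

print(abs(Zphi2 - closed2) < 1e-14, abs(Zphi3 - closed3) < 1e-14)  # True True
```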
For instance, $Z_{\phi_n^2}$ and $Z_{\phi_n^3}$ are given by \begin{equation}\begin{aligned} Z_{\phi_n^2} ={q^{1+\lambda}\bar q^{1+\lambda}(1+q\bar q)\over (1- q)^2(1+ q)(1-\bar q)^2(1+\bar q)}, \\ Z_{\phi_n^3} ={q^{{3\over 2}(1+\lambda)}\bar q^{{3\over 2}(1+\lambda)}(1+q\bar q+q^2\bar q+\bar q^2 q+q^2\bar q^2+q^3\bar q^3)\over (1- q)^3(1+ q)(1+ q+ q^2)(1-\bar q)^3(1+\bar q)(1+\bar q+\bar q^2)}. \end{aligned}\end{equation} For ${\mathcal O}=\omega_n$, all the $m$-particle partition functions are equal to one: \begin{equation}\begin{aligned} Z_{\omega^m_n} =1. \end{aligned}\end{equation} For ${\mathcal O}=j^{(s)}_n$, the multi-particle partition function for $j^{(s)}_n$, $s=1,2,\cdots$, can be computed from \begin{equation}\begin{aligned} Z^{multi}_{j_n}(t)&=\prod^\infty_{s=1}Z^{multi}_{j^{(s)}_n}(t)=\exp\left[\sum^\infty_{m=1}\sum^\infty_{s=1}{\chi^\infty_{j^{(s)}_n}(q^m)\over m} t^m\right]=\exp\left[\sum^\infty_{m=1}{\chi^\infty_{j_n}(q^m)\over m} t^m\right]. \end{aligned}\end{equation} Expanding $Z^{multi}_{j_n}(t)$ in powers of $t$, we can write $Z^{multi}_{j_n}(t)$ as \begin{equation}\begin{aligned} Z^{multi}_{j_n}(t)= 1+\chi^\infty_{j_n}t+\chi^\infty_{j_n^2}t^2+\chi^\infty_{j_n^3}t^3+\cdots, \end{aligned}\end{equation} where $\chi^\infty_{j_n^m}$ has the interpretation of the $m$-particle character (partition function). For instance, \begin{equation}\begin{aligned} \chi^\infty_{j_n^2}&={q^2(1+q^2)\over (1-q)^4(1+q)^2}, \\ \chi^\infty_{j_n^3}&={q^3(1+q^2+2q^3+q^4+q^6)\over (1-q)^6(1+q)^2(1+q+q^2)^2}. \end{aligned}\end{equation} Let us continue with the matching of boundary and bulk partition functions. Consider the primary operators $(\yng(1,1),0)$ and $(\yng(2),0)$. They are dual to two-particle states. Their contribution to the partition function matches the two-particle partition function: \begin{equation}\begin{aligned} \lim_{N\to \infty}\left(|\chi_{(\yng(2),0)}|^2+|\chi_{(\yng(1,1),0)}|^2\right)=Z_{hs}Z_{\phi_1^2}.
\end{aligned}\end{equation} Now, consider the primary operators $(\yng(3),0)$, $(\yng(2,1),0)$, and $(\yng(1,1,1),0)$. They are dual to three-particle states. Their contribution to the partition function matches the three-particle partition function: \begin{equation}\begin{aligned} \lim_{N\to \infty}\left(|\chi_{(\yng(3),0)}|^2+|\chi_{(\yng(2,1),0)}|^2+|\chi_{(\yng(1,1,1),0)}|^2\right)=Z_{hs}Z_{\phi_1^3}. \end{aligned}\end{equation} Next, consider the primary operators $(\yng(2),\yng(1))$ and $(\yng(1,1),\yng(1))$. Their contribution to the partition function also decomposes into multi-particle partition functions: \begin{equation}\begin{aligned} \lim_{N\to \infty}\left(|\chi_{(\yng(2),\yng(1))}|^2+|\chi_{(\yng(1,1),\yng(1))}|^2\right) =Z_{hs}\Big[Z_{\phi_1}\left(Z_{\omega_1}+\chi^\infty_{j_1}+\overline{\chi^\infty_{j_1}}\right)+Z_{\tilde\phi_1}Z_{\phi_1^2}+Z_{\phi_2}\Big]. \end{aligned}\end{equation} For the primary operators $(\yng(2),\yng(2))$, $(\yng(1,1),\yng(1,1))$, $(\yng(2),\yng(1,1))$, and $(\yng(1,1),\yng(2))$, their contribution to the partition function decomposes as \begin{equation}\begin{aligned} &\lim_{N\to \infty}\left(|\chi_{(\yng(2),\yng(2))}|^2+|\chi_{(\yng(1,1),\yng(1,1))}|^2+|\chi_{(\yng(2),\yng(1,1))}|^2+|\chi_{(\yng(1,1),\yng(2))}|^2\right) \\ &=Z_{hs}\Big[Z_{\omega_1^2}+Z_{\omega_1}(\chi^\infty_{j_1}+\overline{\chi^\infty_{j_1}})+(\chi^\infty_{j_1^2}+\overline{\chi^\infty_{j_1^2}})+|\chi^\infty_{j_1}|^2+Z_{\omega_1}Z_{\phi_1}Z_{\tilde\phi_1} \\ &~~~+Z_{\phi_1}Z_{\tilde\phi_1}\left(\chi^\infty_{j_1}+\overline{\chi^\infty_{j_1}}\right)+Z_{\phi_1^2}Z_{\tilde\phi_1^2}+Z_{\omega_2}+(\chi^\infty_{j_2}+\overline{\chi^\infty_{j_2}})+Z_{\phi_2}Z_{\tilde\phi_1}+Z_{\phi_1}Z_{\tilde\phi_2}\Big]. \end{aligned}\end{equation} Now, let us go on to the representations with three boxes in the Young tableaux.
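The same symmetrization applies to the multi-particle characters of the gauge-field towers: with $\chi(q)=q/(1-q)^2$, one has $\chi^\infty_{j_n^2}={1\over2}[\chi(q)^2+\chi(q^2)]$ and $\chi^\infty_{j_n^3}={1\over6}[\chi(q)^3+3\chi(q)\chi(q^2)+2\chi(q^3)]$. A numerical sketch of the corresponding closed forms (the cubic numerator used here is $1+q^2+2q^3+q^4+q^6$):

```python
def chi(q):
    # single-particle character of the tower j_n^(s): sum_{s>=1} q^s/(1-q) = q/(1-q)^2
    return q / (1 - q) ** 2

q = 0.19  # arbitrary test value inside the unit disk

# Bosonic symmetrization of the plethystic exponential at orders t^2 and t^3.
chi2 = (chi(q) ** 2 + chi(q ** 2)) / 2
chi3 = (chi(q) ** 3 + 3 * chi(q) * chi(q ** 2) + 2 * chi(q ** 3)) / 6

closed2 = q**2 * (1 + q**2) / ((1 - q)**4 * (1 + q)**2)
closed3 = (q**3 * (1 + q**2 + 2 * q**3 + q**4 + q**6)
           / ((1 - q)**6 * (1 + q)**2 * (1 + q + q**2)**2))

print(abs(chi2 - closed2) < 1e-14, abs(chi3 - closed3) < 1e-14)  # True True
```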
For the primary operators $(\yng(3),\yng(1))$, $(\yng(2,1),\yng(1))$, and $(\yng(1,1,1),\yng(1))$, their contribution to the partition function decomposes as \begin{equation}\begin{aligned} &\lim_{N\to \infty}\Big(|\chi_{(\yng(3),\yng(1))}|^2+|\chi_{(\yng(2,1),\yng(1))}|^2+|\chi_{(\yng(1,1,1),\yng(1))}|^2\Big) \\ &=Z_{hs}\Big[Z_{\phi_1}Z_{\phi_2}+\left(Z_{\omega_1}+\chi^\infty_{j_1}+\overline{\chi^\infty_{j_1}}\right)Z_{\phi_1^2}+Z_{\tilde\phi_1}Z_{\phi_1^3}\Big]. \end{aligned}\end{equation} For the primary operators $(\yng(3),\yng(2))$, $(\yng(2,1),\yng(2))$, $(\yng(1,1,1),\yng(2))$, $(\yng(3),\yng(1,1))$, $(\yng(2,1),\yng(1,1))$, and $(\yng(1,1,1),\yng(1,1))$, their contribution to the partition function decomposes as \begin{equation}\begin{aligned} &\lim_{N\to \infty}\Big(|\chi_{(\yng(3),\yng(2))}|^2+|\chi_{(\yng(2,1),\yng(2))}|^2+|\chi_{(\yng(1,1,1),\yng(2))}|^2+|\chi_{(\yng(3),\yng(1,1))}|^2+|\chi_{(\yng(2,1),\yng(1,1))}|^2+|\chi_{(\yng(1,1,1),\yng(1,1))}|^2\Big) \\ &=Z_{hs}\Big[\left(Z_{\omega_2}+\chi^\infty_{j_2}+\overline{\chi^\infty_{j_2}}\right)Z_{\phi_1}+\left(Z_{\omega_1}+\chi^\infty_{j_1}+\overline{\chi^\infty_{j_1}}\right)Z_{\phi_2}+Z_{\phi_1^2}Z_{\tilde\phi_2}+Z_{\phi_1}Z_{\tilde\phi_1}Z_{\phi_2} \\ &~~~~+\left(Z_{\omega_1^2}+Z_{\omega_1}\chi^\infty_{j_1}+Z_{\omega_1}\overline{\chi^\infty_{j_1}}+\chi^\infty_{j_1^2}+\overline{\chi^\infty_{j_1^2}}+\chi^\infty_{j_1}\overline{\chi^\infty_{j_1}}\right)Z_{\phi_1} \\ &~~~~+\left(Z_{\omega_1}+\chi^\infty_{j_1}+\overline{\chi^\infty_{j_1}}\right)Z_{\phi_1^2}Z_{\tilde\phi_1}+Z_{\phi_1^3}Z_{\tilde\phi_1^2}+Z_{\phi_3}\Big]. 
\end{aligned}\end{equation} The contribution from the primary operators $(\yng(3),\yng(3))$, $(\yng(2,1),\yng(3))$, $(\yng(1,1,1),\yng(3))$, $(\yng(3),\yng(2,1))$, $(\yng(2,1),\yng(2,1))$, $(\yng(1,1,1),\yng(2,1))$, $(\yng(3),\yng(1,1,1))$, $(\yng(2,1),\yng(1,1,1))$, and $(\yng(1,1,1),\yng(1,1,1))$, to the partition function decomposes as \begin{equation}\begin{aligned} &\lim_{N\to \infty}\Big(|\chi_{(\yng(3),\yng(3))}|^2+|\chi_{(\yng(2,1),\yng(3))}|^2+|\chi_{(\yng(1,1,1),\yng(3))}|^2+|\chi_{(\yng(3),\yng(2,1))}|^2 \\ &~~~+|\chi_{(\yng(2,1),\yng(2,1))}|^2+|\chi_{(\yng(1,1,1),\yng(2,1))}|^2+|\chi_{(\yng(3),\yng(1,1,1))}|^2+|\chi_{(\yng(2,1),\yng(1,1,1))}|^2+|\chi_{(\yng(1,1,1),\yng(1,1,1))}|^2\Big) \\ &=Z_{hs}\Big[Z_{\omega^3_1}+Z_{\omega_1^2}\left(\chi^\infty_{j_1}+\overline{\chi^\infty_{j_1}}\right)+Z_{\omega_1}\left(\chi^\infty_{j_1^2}+\overline{\chi^\infty_{j_1^2}}+\chi^\infty_{j_1}\overline{\chi^\infty_{j_1}}\right)+\left(\chi^\infty_{j_1^3}+\overline{\chi^\infty_{j_1^3}}+\chi^\infty_{j_1^2}\overline{\chi^\infty_{j_1}}+\chi^\infty_{j_1}\overline{\chi^\infty_{j_1^2}}\right) \\ &~~~~+\left(Z_{\omega^2_1}+Z_{\omega_1}\left(\chi^\infty_{j_1}+\overline{\chi^\infty_{j_1}}\right)+\left(\chi^\infty_{j_1^2}+\overline{\chi^\infty_{j_1^2}}+\chi^\infty_{j_1}\overline{\chi^\infty_{j_1}}\right)\right)Z_{\phi_1}Z_{\tilde\phi_1}+\left(Z_{\omega_1}+\left(\chi^\infty_{j_1}+\overline{\chi^\infty_{j_1}}\right)\right)Z_{\phi^2_1}Z_{\tilde\phi^2_1}+Z_{\phi^3_1}Z_{\tilde\phi^3_1} \\ &~~~~+Z_{\omega_1}Z_{\omega_2}+Z_{\omega_1}\left(\chi^\infty_{j_2}+\overline{\chi^\infty_{j_2}}\right)+Z_{\omega_2}\left(\chi^\infty_{j_1}+\overline{\chi^\infty_{j_1}}\right)+\left(\chi^\infty_{j_1}+\overline{\chi^\infty_{j_1}}\right)\left(\chi^\infty_{j_2}+\overline{\chi^\infty_{j_2}}\right) \\ 
&~~~~+\left(Z_{\omega_1}+\chi^\infty_{j_1}+\overline{\chi^\infty_{j_1}}\right)\left(Z_{\phi_1}Z_{\tilde\phi_2}+Z_{\phi_2}Z_{\tilde\phi_1}\right)+\left(Z_{\omega_2}+\chi^\infty_{j_2}+\overline{\chi^\infty_{j_2}}\right)Z_{\phi_1}Z_{\tilde\phi_1}+Z_{\phi^2_1}Z_{\tilde\phi_1}Z_{\tilde\phi_2}+Z_{\phi_1}Z_{\tilde\phi^2_1}Z_{\phi_2} \\ &~~~~+Z_{\omega_3}+\chi^\infty_{j_3}+\overline{\chi^\infty_{j_3}}+Z_{\phi_1}Z_{\tilde\phi_3}+Z_{\phi_2}Z_{\tilde\phi_2}+Z_{\phi_3}Z_{\tilde\phi_1}\Big]. \end{aligned}\end{equation} \section{Interactions and a semi-local bulk theory} The three-point functions\footnote{Some three-point functions are computed, and a general form of such three-point functions is postulated in appendix E.} involving the hidden symmetry currents amount to the following assignment of gauge generators $T_n$ associated to the currents $j^{(s)}_n(z)$, which act on the states $|\phi_m\rangle$ and $|\tilde \phi_m\rangle$. We use the ket notation here, rather than the primary fields themselves, because while $\phi_m$ and $\tilde\phi_m$ have different scaling dimensions at infinite $N$, they are dual to scalar fields of the same mass that transform into one another under the hidden gauge symmetries. \begin{equation}\begin{aligned} &T_n|\phi_{m}\rangle = |\phi_{n+m}\rangle,~~~T_n|\bar\phi_m\rangle = -|\bar\phi_{m-n}\rangle~~(n<m)~~\text{or}~-|\tilde\phi_{n-m+1}\rangle~~(n\geq m), \\ &T_n|\tilde\phi_{m}\rangle = - |\tilde\phi_{n+m}\rangle,~~~T_n|\bar{\tilde\phi}_m\rangle = |\bar{\tilde\phi}_{m-n}\rangle~~(n<m)~~\text{or}~|\phi_{n-m+1}\rangle~~(n\geq m). \end{aligned}\end{equation} Let us define the fields $\varphi_r$ and $\tilde\varphi_r$ for $r\in\mathbb{Z}+{1\over 2}$ by \begin{equation}\begin{aligned} &\varphi_r=\phi_{r+{1\over 2}},~~~\varphi_{-r}=\bar{\tilde\phi}_{r+{1\over 2}}, \\ &\tilde\varphi_r=\tilde\phi_{r+{1\over 2}},~~~\tilde\varphi_{-r}=\bar{\phi}_{r+{1\over 2}}.
\end{aligned}\end{equation} They are related by complex conjugation: \begin{equation}\begin{aligned} \bar\varphi_r=\tilde\varphi_{-r},~~~\bar{\tilde\varphi}_r=\varphi_{-r}. \end{aligned}\end{equation} In terms of $\varphi_r$ and $\tilde\varphi_r$, the gauge generators act as \begin{equation}\begin{aligned}\label{crOPE} &T_n|\varphi_r\rangle = |\varphi_{r+n}\rangle,~~~T_n|\tilde\varphi_r\rangle = -|\tilde\varphi_{r+n}\rangle. \end{aligned}\end{equation} We also have \begin{equation}\begin{aligned} &\overline T_n|\varphi_r\rangle = -|\varphi_{r-n}\rangle,~~~\overline T_n|\tilde\varphi_r\rangle=|\tilde\varphi_{r-n}\rangle, \end{aligned}\end{equation} which suggests the definition $T_{-n}=-\overline{T}_n$, or $j^{(s)}_{-n}=-\bar j^{(s)}_n$. Now \eqref{crOPE} is extended to all $n\in\mathbb{Z}$. The action of $T_n$ can be diagonalized by the Fourier transform: \begin{equation}\begin{aligned} |\varphi(x)\rangle=\sum_{r\in\mathbb{Z}+1/2} e^{irx}|\varphi_r\rangle,~~~|\tilde\varphi(x)\rangle=\sum_{r\in\mathbb{Z}+1/2} e^{irx}|\tilde\varphi_r\rangle,~~~T(x)= \sum_{n\in\mathbb{Z}}e^{inx}T_n, \end{aligned}\end{equation} where $x$ is an auxiliary generating parameter. Here we have also included the generator $T_0$, which assigns charge $+1$ to $\varphi$ and charge $-1$ to $\bar\varphi$. With this definition, $|\bar\varphi(x)\rangle=|\tilde\varphi(x)\rangle,\overline T(x)=-T(x)$. We have \begin{equation}\begin{aligned} T(x)|\varphi(y)\rangle =\delta(x-y)|\varphi(y)\rangle. \end{aligned}\end{equation} Here $x,y$ are understood to be periodically valued with periodicity $2\pi$. What is the interpretation of this result? We see that there is a circle worth of gauge generators $T(x)$, each of which corresponds to a tower of gauge fields in $AdS_3$, of spin $s=1,2,3,\cdots,\infty$. Furthermore, these gauge generators commute, indicating a Vasiliev theory with $U(1)^\infty$ ``Chan-Paton factor''.
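Incidentally, the diagonalization $T(x)|\varphi(y)\rangle=\delta(x-y)|\varphi(y)\rangle$ follows from a one-line computation (the customary $2\pi$ of the periodic delta function is absorbed into its normalization): \begin{equation}\begin{aligned} T(x)|\varphi(y)\rangle=\sum_{n\in\mathbb{Z}}\sum_{r\in\mathbb{Z}+1/2}e^{inx+iry}|\varphi_{r+n}\rangle=\sum_{r'\in\mathbb{Z}+1/2}e^{ir'y}\Big(\sum_{n\in\mathbb{Z}}e^{in(x-y)}\Big)|\varphi_{r'}\rangle=\delta(x-y)|\varphi(y)\rangle, \end{aligned}\end{equation} where we have shifted the summation variable to $r'=r+n$ and used $\sum_{n\in\mathbb{Z}}e^{in(x-y)}\propto\delta(x-y)$ on the circle.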
At the level of the bulk equations of motion, we expect the infinite family of Vasiliev theories to decouple. They only interact through the $AdS_3$ boundary conditions that mix the matter scalar fields. The boundary condition is such that the ``right moving'' modes of $\varphi(x)$ on the circle, namely $\varphi_r$ with $r>0$ ($r={1\over 2}, {3\over 2},\cdots$), are dual to operators of dimension $\Delta_+=1+\lambda$, whereas $\varphi_r$ with $r<0$ are dual to operators of dimension $\Delta_- = 1-\lambda$. As a consequence of this boundary condition, the corresponding generating operator $\varphi(x;z,\bar z)$ in the CFT has the two-point function \begin{equation}\begin{aligned} \vev{\varphi(x;z,\bar z)\bar\varphi(y;0)}&=\sum_{r,s\in\mathbb{Z}+1/2}e^{irx+isy}\vev{\varphi_r(z)\tilde\varphi_s(0)} =\left({1\over |z|^{2+2\lambda}}-{1\over |z|^{2-2\lambda}}\right) {i\over 2 \sin{x-y\over 2}} \end{aligned}\end{equation} in the large $N$ limit. Note that the spin-1 gauge field is included here. It is also natural to include the massless scalar $\omega_n$, of spin $s=0$. $|\varphi(x)\rangle$ labels a complex massive scalar in $AdS_3$, for each $x$. This spectrum precisely fits into Vasiliev's system in three dimensions. In earlier works, we did not consider the spin-1 gauge field in Vasiliev theory, because it is governed by a $U(1)\times U(1)$ Chern-Simons action and would decouple from the higher spin gravity if it weren't for the matter scalar field. It is possible to choose the boundary condition on the spin-1 Chern-Simons gauge field in $AdS_3$ so that there is no dual spin-1 current in the boundary CFT. This is presumably why the spin-1 current $j^{(1)}_0(z)$ is missing from the spectrum of the $W_N$ minimal model. But the spin-1 currents $j^{(1)}_n(z)$ do exist in the infinite $N$ limit. Usually, in three-dimensional Vasiliev theory, there is no propagating massless scalar field either.
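The angular dependence of this two-point function comes from the elementary half-odd-integer geometric sum $\sum_{r\in\mathbb{Z}+1/2,\,r>0}e^{ir\theta}=e^{i\theta/2}/(1-e^{i\theta})=i/(2\sin{\theta\over 2})$, understood as the boundary value of the convergent sum over $w=\rho e^{i\theta}$ with $\rho<1$. A minimal numerical sketch:

```python
from cmath import exp
from math import sin

theta = 1.1  # arbitrary angle, not a multiple of 2*pi

# Closed form of the sum over r = 1/2, 3/2, ... of e^{i r theta}:
geometric = exp(0.5j * theta) / (1 - exp(1j * theta))
claimed = 1j / (2 * sin(theta / 2))
print(abs(geometric - claimed) < 1e-14)  # True

# The (conditionally convergent) sum itself, with a damping regulator rho -> 1:
rho = 0.999
damped = sum(rho ** (k + 0.5) * exp(1j * (k + 0.5) * theta) for k in range(20000))
print(abs(damped - claimed) < 1e-2)  # True
```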
There is, however, an auxiliary scalar field $C_{aux}$ \cite{Chang:2011mz}, whose equation of motion at the linearized level takes the form $\nabla_\mu C_{aux}=0$. Classically, we could trade this equation for the massless Klein-Gordon equation $\Box C_{aux}=0$, together with the $\Delta=0$ boundary condition which eliminates normalizable finite energy states of this field in $AdS_3$. If this scalar field acquires a small mass, of order $1/N$, due to quantum corrections, then the boundary condition would allow for a normalizable state in $AdS_3$ of very small energy/conformal weight. We believe that this is the origin of the elementary light scalars $\omega_n$ themselves, in the infinite family of Vasiliev systems parameterized by the circle. The identification of the single-trace operators, dual to elementary particles in the bulk, makes sense a priori only in the infinite $N$ limit. Non-perturbatively, or at finite $N,k$, the infinite family $\phi_n, \tilde\phi_n, \omega_n, j_n^{(s)}$ should be cut off to a finite family. Due to the restrictions on the unitary representations of $SU(N)$ current algebra at level $k$ or $k+1$, we expect the subscript $n$, which counts the number of boxes in the Young tableau in the construction of the single-trace primaries, to be cut off at $n\sim k$. This means that the circle that parameterizes a continuous family of Vasiliev theories in $AdS_3$ should be rendered discrete, with spacing $\sim 2\pi/k$. \section{Discussion} We have proposed that the holographic dual of the $W_N$ minimal model in the 't Hooft limit, $k,N\to \infty$, $0<\lambda<1$, is a circle's worth of Vasiliev theories in $AdS_3$ that couple with one another only through the boundary conditions on the matter scalars, which break all but a single tower of higher spin symmetries.
A natural question to ask is what CFT is dual to the bulk theory with symmetry-preserving boundary conditions, which assign, say, the same scaling dimension $\Delta_+$ to all matter scalars. If we are to flip the boundary condition on $\tilde\phi_n$, on the CFT side this corresponds to turning on the double trace deformation by $\tilde\phi_n \bar{\tilde\phi}_n$ and flowing to the critical point (IR in this case). This deformation decreases the central charge $c\approx N(1-\lambda^2)$ by an order $N^0$ amount. It is unclear what fixed point one ends up with by turning on double trace deformations $\tilde\phi_n \bar{\tilde\phi}_n$ for all $n$ (which should be cut off at $\sim k$), if such a nontrivial critical point exists at all. There has been an alternative proposal for the holographic dual of the $W_N$ minimal model \cite{Castro:2011iw,Perlmutter:2012ds,Gaberdiel:2012ku}, as a Vasiliev theory based on the $hs[N]\simeq sl(N)$ higher spin algebra, with families of conical deficit solutions included to account for the primaries missing from the perturbative spectrum of Vasiliev theory. On the face of it, this proposal involves an entirely different limit, where $N$ is held fixed, and an analytic continuation is performed in $k$ so that the central charge $c$ is large. The resulting CFT is not unitary. Furthermore, it is unclear to us whether the analog of large $N$ (or rather, large $c$) factorization holds in this limit, which would be necessary for the holographic dual to be weakly coupled. There is also an intriguing parallel between the 't Hooft limit of the $W_N$ minimal model in two dimensions and the Chern-Simons vector model in three dimensions.
While the gauge invariant local operators and their correlation functions on $\mathbb{R}^3$ or $S^3$ in the three dimensional Chern-Simons vector model are expected to be computed by the parity violating Vasiliev theory in $AdS_4$ to all orders in $1/N$, the duality in its naive form is not expected to hold for the CFT on three-manifolds of nontrivial topology (e.g. when the spatial manifold is a torus or a higher genus surface). This is because the topological degrees of freedom of the Chern-Simons gauge fields cannot be captured by a semi-classical theory in the bulk with Newton's constant that scales like $1/N$ rather than $1/N^2$. In a similar manner, the $W_N$ minimal model CFT on $\mathbb{R}^2$ or $S^2$ in the large $N$ limit admits a closed subsector, generated by the OPEs of the primary $\phi_1$ along with higher spin currents, which is conjectured to be perturbatively dual to Vasiliev theory in $AdS_3$. This duality makes sense only perturbatively in $1/N$. The light primaries, which in a sense arise from twisted sectors, must be included to ensure that the CFT is modular invariant. Here we see that the bulk theory should be extended as well, to an infinite family of Vasiliev theories. It would be interesting to understand the analogous statement in the $AdS_4/CFT_3$ example, where the connection to ordinary string theory is better understood \cite{Chang:2012kt}. \bigskip \section*{Acknowledgments} We would like to thank Matthias Gaberdiel, Rajesh Gopakumar, Tom Hartman, Shiraz Minwalla, Soo-Jong Rey and Steve Shenker for useful discussions. We would especially like to thank the 6th Asian Winter School on Strings, Particles, and Cosmology in Kusatsu, Japan, the 2012 Indian Strings Meeting, Puri, India, and the Tata Institute for Fundamental Research, Mumbai, India, for their hospitality during the course of this work.
XY would like to thank the organizers of the CQUeST Spring Workshop on Higher Spins and String Geometry, Seoul, the Ginzburg Conference at the Lebedev Institute, Moscow, Strings 2012, Munich, and Komaba 2013, Tokyo, where partial results of this work were presented. This work is supported in part by the Fundamental Laws Initiative Fund at Harvard University, and by NSF Award PHY-0847457.
\section{\label{sec:level1}INTRODUCTION} Long-ranged hydrodynamic interactions in dilute bacterial suspensions drive growing orientation fluctuations, in turn leading to collective motion on length scales much larger than a single bacterium \cite{ganesh11, ganesh09, saintillan08, saintillan08b, simha02}. While large-scale coherent motion in unsheared bacterial suspensions, observed in simulations \cite{saintillan07,graham08,ganesh15} and in many experiments \cite{dunkel2013,wu2000,clement2014}, is regarded as well understood theoretically, much less is known about the dynamics of sheared bacterial suspensions \cite{clement16}. Several recent experiments have observed counter-intuitive behavior of bacterial suspensions under an external shear, including regimes of apparent superfluidity \cite{samanta18,clement16,lopez15,sokolov09,koch14b,stocker16}. In this letter, we demonstrate a novel concentration-shear coupled mechanism for the growth of fluctuations in bacterial suspensions, eventually leading to banded steady states. The proposed mechanism is shown to lead to shear bands, with concentration inhomogeneities, in the dilute regime itself, in sharp contrast to both passive complex fluids \cite{fardin16, cates06, olmsted08, fardin12, dhont08} and active fluids \cite{ramaswamy13,fielding08,fielding11,liverpool10,liverpool18}, where shear banding is observed or predicted only in the semi-dilute and concentrated regimes. Fig.~\ref{FIG11} illustrates the physical mechanism underlying the novel concentration-shear banding instability in a homogeneously sheared bacterial suspension. The bacteria are modeled as slender particles that swim along their axes, while being rotated and aligned by the background shear \cite{stocker14,ganesh17}. The latter leads to a spatially homogeneous suspension with an anisotropic orientation distribution (Fig.~\ref{FIG11} (a)).
In the dilute regime, the contribution of the anisotropically oriented bacteria to the suspension viscosity is proportional to the local concentration. However, in sharp contrast to passive microstructural elements, the flow perturbation created by the tail-actuated swimming mechanism of bacteria (termed `pushers') aids the imposed shear, thereby lowering the suspension viscosity below that of the solvent \cite{ganesh17, ramaswamy04, saintillan18,clement13,clement16,sokolov09,lopez15,haines09}. An initial gradient-aligned concentration perturbation thus leads to a lower (higher) effective suspension viscosity in regions of higher (lower) concentration. The invariance of the shear stress in the inertialess limit then implies that the higher (lower) concentration layers are subject to a higher (lower) shear rate (Fig.~\ref{FIG11} (c)). In the higher shear rate region, the bacteria are more aligned with the flow. In turn, this implies a net concentration drift of bacteria into the higher shear rate (higher concentration) region, with a diffusivity driving an opposing stabilizing flux. The drift overcoming the diffusivity thus provides a mechanism for exponential growth of gradient-aligned (layering) concentration-shear fluctuations from the homogeneous state (Fig.~\ref{FIG11} (d)). Front-actuated swimmers (`pullers') such as algae, and passive rigid rods, increase the suspension viscosity in the dilute regime, leading to a stabilizing drift, and thence to decaying fluctuations. \begin{figure} \center \includegraphics[scale=.65]{fig1.eps} \caption {Schematic illustrating the physical mechanism for growing concentration fluctuations in a sheared bacterial suspension.
In this figure $\mu$, $V$ and $D$ represent the suspension viscosity, the destabilizing drift and the stabilizing diffusivity, respectively.} \label{FIG11} \end{figure} Migration of bacteria towards higher-shear rate regions, in inhomogeneous shear flows, leading to so-called shear trapping, has been examined before \cite{clement16, stocker14, stocker15, saintillan15, bearon15, sokolov2016}. However, all of these studies have focused on the kinematic point of view where changes in the bacterial concentration and orientation distribution, and the resulting changes in the bacterial stress, do not couple back to the flow. The mechanism outlined above illustrates, for the first time, how concentration and shear-rate fluctuations can be dynamically self-sustaining in bacterial suspensions. The exponentially growing layering perturbations eventually lead to a banded steady state, with the high shear band containing a (marginally) higher concentration of bacteria. In the rest of the letter, the aforementioned mechanism is first demonstrated through a linear stability analysis, followed by the results of non-linear simulations. Gradient banding in sheared active fluids has been studied using phenomenological continuum equations with a bulk nematic or polar order \cite{fielding08,liverpool10,liverpool18,fielding11,ramaswamy13}. Since only the simplest terms allowed by symmetry are retained in these phenomenological equations, they do not describe the shear-induced migration observed in dilute bacterial suspensions \cite{clement16,stocker14,saintillan15,bearon15}. Consequently, the concentration banding instability reported here is also not described by the active fluid equations. Indeed, \cite{fielding08, fielding11, liverpool10, liverpool18} only report the shear-modified orientation instability already seen in unsheared active fluids \cite{simha02, ramaswamy13}. 
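The destabilizing loop of Fig.~\ref{FIG11} can be summarized in a single linearized estimate. Writing the local shear stress as $\Sigma=\mu(n)\dot{\gamma}$, with $\Sigma$ uniform across the gradient direction in the inertialess limit, a concentration perturbation $\delta n$ induces the shear-rate perturbation
\begin{equation}
\frac{\delta\dot{\gamma}}{\dot{\gamma}}=-\frac{\delta\mu}{\mu}=-\frac{1}{\mu}\frac{d\mu}{dn}\,\delta n,
\end{equation}
with $d\mu/dn<0$ for pushers in the dilute regime, so regions of excess concentration experience a higher shear rate; the shear-induced alignment then drives a drift of bacteria into these very regions, closing the loop.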
In the specific context of bacterial suspensions, an earlier effort only examined vorticity-aligned perturbations, and therefore did not find the novel concentration-shear instability analyzed here \cite{saintillan11}. To the best of our knowledge, therefore, this letter is the first demonstration of a shear-induced mechanism for gradient banding in an active fluid. At the microscale, a bacterium swims with a speed $U_b$, and the swimming direction ($\mathbf{p}$) decorrelates via both rotary diffusion (with diffusivity $D_r$) and tumbling (at a mean rate $\tau^{-1}$). Using $\tau, H, U_{\infty}$, where $U_{\infty}/H$ is the imposed shear-rate, as the time, length and velocity scales, respectively, the kinetic equation for the bacterium phase-space probability density $\Omega(\mathbf{x},\mathbf{p},t)$ in the dilute limit is given by \cite{ganesh09} \begin{eqnarray} \frac{\partial \Omega}{\partial t} & + & \epsilon \mathbf{p} \cdot \nabla_{\mathbf{x}}\Omega-D_r\tau \nabla^{2}_{p}\Omega+Pe \nabla_{p} \cdot (\dot{\mathbf{p}}\Omega) \nonumber \\ &+&[\Omega- \frac{1}{4\pi} \int d \mathbf{p}^{\prime} \Omega(\mathbf{p}^{\prime})]=0, \label{EQ:NDGoveq} \end{eqnarray} where $\epsilon = U_b\tau /H$ is the ratio of the bacterium run length to the imposed length scale and $Pe = U_{\infty}\tau/H$ denotes the relative importance of the shear-induced and intrinsic reorientation time scales. Approximating the bacteria as slender force-dipoles, the rotation due to the flow is given by Jeffery's relation, $\dot{\mathbf{p}}=\mathbf{E} \cdot \mathbf{p}+\boldsymbol{\omega} \cdot \mathbf{p}-\mathbf{p}(\mathbf{E}: \mathbf{p} \mathbf{p})$, where $\mathbf{E}$ and $\boldsymbol{\omega}$ are the strain rate and vorticity tensors, respectively, associated with the local linear flow \cite{Jeffery1922}.
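Jeffery's relation takes a transparent form for the simple shear $\mathbf{u}=z\,\mathbf{1}_x$ considered below, for which the only nonzero velocity gradient component is $\partial u_x/\partial z=1$ in these units (a direct evaluation):
\begin{equation}
\dot{\mathbf{p}}=p_z\,\mathbf{1}_x-\left(p_x p_z\right)\mathbf{p},
\qquad\text{i.e.}\qquad
\dot{p}_x=p_z\left(1-p_x^2\right),~~\dot{p}_z=-p_x p_z^2.
\end{equation}
Slender particles thus rotate towards, and linger near, the flow direction, which is the origin of the anisotropy of the base-state orientation distribution.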
Eq.~\eqref{EQ:NDGoveq} is coupled to the inertialess momentum and continuity equations \begin{eqnarray} Pe \nabla^{2} \textbf{u}&=&-\nabla \cdot \Sigma^{B},\nonumber \\ \nabla \cdot \mathbf{u}&=&0, \label{EQ:NDEqOfMotion} \end{eqnarray} where we use the stress scale $\mu \tau^{-1}$. We now approximate $\Sigma^{B}$ by its active contribution alone which, in a continuum framework, is given in terms of the bacterium force-dipole density as $-\mathcal{A} \int d\mathbf{p} \Omega(\mathbf{p})(\mathbf{p}\mathbf{p}-I/3)$. The non-dimensional parameter $\mathcal{A}=\mathcal{C} n_0 L^{2}U_{b}\tau$, termed the activity number here \cite{baskaran09}, is a measure of the bacterial force-dipole density, where $L$ is the bacterium length, $n_0$ the number density and $\mathcal{C}$ the bacterial force-dipole strength, with $\mathcal{C}>0$ for `pushers' \cite{ganesh09,simha02,saintillan08,saintillan08b}. As will be seen below, $\mathcal{A}$ and $Pe$ delineate the unstable regions. \begin{figure} \center \includegraphics[scale=0.17]{fig2a.eps} \includegraphics[scale=0.17]{fig2b.eps} \caption {Variation in $(a)$ the homogeneous base-state stress and $(b)$ the growth rate predicted by the multiple scales analysis against $Pe$. $(c)$ Unstable region in the $\mathcal{A}-Pe$ parameter plane.} \label{FIG1} \end{figure} The homogeneous base-state is given by $u_{0}=z \mathbf{1_x}$ and an anisotropic orientation distribution $\Omega_{0}(\mathbf{p})$, which needs to be solved for numerically \cite{supp}. Knowing $\Omega_0(\mathbf{p})$ allows the calculation of the stress-shear-rate curves for the homogeneous state; see Fig.~\ref{FIG1}(a) \cite{saintillan2010dilute,ganesh17,saintillan18,clement16}. For $\mathcal{A} <\mathcal{A}^{*}$, the base state stress ($\Sigma_{0}$) is a monotonically increasing function of the shear rate, although the effective viscosity is lower than the solvent viscosity.
$\mathcal{A}^{*} \approx 35$ marks the threshold for the instability in an unsheared suspension owing to the viscosity vanishing at $Pe=0$ \cite{ganesh09, ganesh11, saintillan08, saintillan08b, simha02}. For $\mathcal{A} >\mathcal{A}^{*}$, $\Sigma_{0}$ is a non-monotonic function of $Pe$ and the suspension has a zero viscosity at $Pe \equiv Pe_{cr}(\mathcal{A})$, with $Pe_{cr}$ being an increasing function of $\mathcal{A}$. We examine the stability of the above homogeneous state to infinitesimal layering perturbations ($u_{1}$ and $\Omega_1$) in the gradient direction \cite{note1}. Confinement is known to lead to concentration inhomogeneities via wall accumulation of swimming bacteria through both kinematic and hydrodynamic mechanisms \cite{saintillan15,baskaran17,lauga08,naji17,gompper16}. However, in order to focus on concentration inhomogeneities arising from banding in the bulk, we neglect wall effects in the analysis, and impose periodic boundary conditions in the non-linear simulations. \textit{Concentration fluctuation dynamics} - In the limit $U_b\tau/H \rightarrow 0$, concentration fluctuations ($n_1=\int\Omega_1 d\mathbf{p}$) evolve on a slower, diffusive, time scale ($H^2/(\tau U^2_{b})$) compared to orientation fluctuations ($\tau$). A multiple scales analysis can thus be used to derive a generalized drift-diffusion equation for concentration fluctuations, with the orientation fluctuations evolving in a quasi-static manner \cite{ganesh04, hinch97, koch12, koch14}. When linearized about the homogeneous base-state, we obtain \cite{supp} \begin{equation} \frac{\partial n_1}{\partial t_{2}}= \frac{\partial}{\partial z} \left(-V_1+D_0 \frac{\partial n_1}{\partial z} \right), \label{EQ:DriftDiffEq} \end{equation} with $V_1 = 2 \sqrt{\frac{\pi}{3}} e_{1,0} \frac{\partial \dot{\gamma}_1}{\partial z}$.
The perturbation shear-rate ($\dot{\gamma}_1$) is obtained from the momentum equation as \begin{equation} \frac{\partial \dot{\gamma}_1}{\partial z} =\mathcal{A} \frac{\partial n_1}{\partial z} \frac{\sqrt{\frac{2\pi}{15}} \left(d_{2,-1}-d_{2,1} \right)}{\mu_0}. \label{EQ:PertbMom} \end{equation} The constants involved in Eqs. (\ref{EQ:DriftDiffEq}) and (\ref{EQ:PertbMom}) are functions of $Pe$, and are obtained by numerically solving the linearized equations governing the quasi-static evolution of the orientation degrees of freedom \cite{supp}. Assuming normal modes of the form $[n_1,\dot{\gamma}_{1}]=[\tilde{n}_1, \tilde{\dot{\gamma}}_{1}] \cos(zk_{z})\exp(\sigma t_{2})$, we obtain the following semi-analytical expression for the eigenvalue governing the evolution of concentration perturbations \begin{equation} \sigma=k_{z}^2 \left(\mathcal{A} V_1 \frac{\sqrt{\frac{2\pi}{15}} \left(d_{2,-1}-d_{2,1} \right)}{\mu_0}-D_{0} \right). \label{EQ:GrowthRate} \end{equation} The second term ($D_{0}$) in Eq.~\eqref{EQ:GrowthRate} represents the $Pe$-dependent stabilizing diffusivity. The first term represents the drift that drives a destabilizing flux from regions of low to high shear rate (Fig.~\ref{FIG11}), in proportion to the shear-rate gradient. When the drift exceeds the diffusivity, the homogeneous state becomes unstable (Fig~\ref{FIG1} (b)). For $Pe \rightarrow Pe_{cr}$, the suspension viscosity ($\mu_0$) vanishes and thus the destabilizing drift diverges in Eq. (\ref{EQ:GrowthRate}) making the suspension infinitely susceptible to concentration fluctuations (Fig~\ref{FIG1} (b)). The lower ($Pe_{cr}$) and upper ($Pe_{max}$) Peclet thresholds for the concentration-shear instability as a function of $\mathcal{A}$ are shown in Fig.~\ref{FIG1}~(c). The shear-rate range $(Pe_{cr}, Pe_{max})$ in which the system is susceptible to the concentration-shear instability increases with increasing $\mathcal{A}$. 
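The structure of \eqref{EQ:GrowthRate} follows by eliminating the shear-rate perturbation: substituting \eqref{EQ:PertbMom} into the drift term of \eqref{EQ:DriftDiffEq} closes the equation for $n_1$ (a sketch of the algebra; here $2\sqrt{\pi/3}\,e_{1,0}$ is the drift coefficient multiplying the shear-rate gradient in $V_1$):
\begin{equation}
\frac{\partial n_1}{\partial t_2}=\frac{\partial}{\partial z}\left[\left(D_0-2\sqrt{\frac{\pi}{3}}\,e_{1,0}\,\mathcal{A}\,\frac{\sqrt{\frac{2\pi}{15}}\left(d_{2,-1}-d_{2,1}\right)}{\mu_0}\right)\frac{\partial n_1}{\partial z}\right],
\end{equation}
i.e.\ an effective diffusion equation whose diffusivity turns negative, and the homogeneous state unstable, precisely when the destabilizing drift exceeds $D_0$.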
\textit{Coupled concentration and orientation fluctuation dynamics} - The divergence of the growth rate for $Pe \rightarrow Pe_{cr}$ is an artifact of the multiple scales analysis. For (dimensional) $\sigma \sim \tau^{-1}$ or $k_z \sim (U_b \tau)^{-1}$, the assumption of a separation of time scales between the concentration and orientation fluctuations is no longer valid. We therefore carry out a linear stability analysis, numerically, without the assumption of a time scale separation; Fig.~\ref{FIG2} (inset) shows good agreement between the two approaches. The full analysis continues to predict a finite growth rate of $\mathcal{O}(1/\tau)$ near $Pe_{cr}$. The multiple-scales analysis does not predict a finite length scale for the fastest growing mode since $\sigma \propto k_z^2$ (see Eq.~\eqref{EQ:GrowthRate}). The full analysis, with orientation dynamics included, predicts the fastest growing wavenumber to be $\mathcal{O}(1/(U_b\tau))$ such that the relaxation times of the concentration and orientation (and thence, stress) fluctuations become comparable, both being $\mathcal{O}(\tau)$ (Fig.~\ref{FIG3} (a)). For $k_z>\mathcal{O}(1/(U_b\tau))$, the diffusive time scale for accumulation of bacteria ($k_z^{-2}/(\tau U^{2}_b)$) would fall below the stress relaxation time ($\tau$), and hence such perturbations decay. For $Pe > Pe_{cr}$, there are strong, long wavelength concentration fluctuations ($\tilde{n}_{1}$) as predicted by the mechanism outlined earlier. This is seen in Fig.~\ref{FIG3}(b), where $\tilde{n}_{1}$ approaches a finite value even as $k_z \rightarrow 0$ (with $\tilde{n}_{1} =0$ for $k_z =0$ being a singular limit). Along with the multiple-scales analysis results, this reinforces the concentration-shear coupling mechanism that leads to a layering instability for $Pe> Pe_{cr}$.
\begin{figure}[!tbp] \centering \hfill \includegraphics[scale=0.16]{fig3.eps} \caption{Comparison of growth rates versus $Pe$ obtained from the concentration dynamics ($CD$) and coupled concentration-orientation dynamics ($CCOD$) analyses. The magnified inset emphasizes the agreement between the two approaches for $Pe > Pe_{cr}$.} \label{FIG2} \end{figure} The full analysis also predicts the orientation-shear instability, which has earlier been interpreted as a negative-viscosity instability responsible for the onset of collective motion in a quiescent bacterial suspension \cite{ganesh09, ganesh11, koch14}. Indeed, Fig.~\ref{FIG2} shows that orientation fluctuations drive an instability on the negative (effective) viscosity portion of the stress-shear-rate curve for $\mathcal{A} > \mathcal{A}^{*}$ and $Pe<Pe_{cr}$, where the multiple scales analysis predicts decaying concentration fluctuations (contrast with Fig.~\ref{FIG1}~(b) for $Pe < Pe_{cr}$). One therefore needs to distinguish between the orientation-shear and concentration-shear instability mechanisms, which operate in distinct parameter regimes. The onset of instability coincides with the stress becoming a non-monotonic function of the shear rate (Fig.~\ref{FIG1}(a)). While the orientation-shear instability, analyzed by earlier authors \cite{ganesh09, saintillan08, saintillan08b}, is the usual mechanical shear-banding instability \cite{fardin16, cates06, olmsted08, fardin12, dhont08} operating in the range $Pe < Pe_{cr}$, the novel concentration-shear instability identified here exists only on the positive viscosity branch of the stress-shear curve. The physical mechanisms for the two instabilities can most easily be differentiated by focusing on the spatially homogeneous ($k_z=0$) mode. For $Pe < Pe_{cr}$, $k_z=0$ (implying no concentration fluctuations) is the fastest growing wavenumber, with the growth driven by the orientation-shear coupling \cite{ganesh09, saintillan08b}.
The growth rate of the $k_z=0$ mode monotonically decreases as $Pe$ increases. For $Pe > Pe_{cr}$, the dynamics is driven by concentration fluctuations and hence the $k_z=0$ mode is stable. In an unsheared suspension, the unstable eigenfunction does not have number density perturbations for any $k_z$ (Fig.~\ref{FIG3}(b)), in agreement with earlier predictions \cite{ganesh09, saintillan08b, saintillan11, shelley10}. Weak shear leads to weak long wavelength concentration fluctuations ($\tilde{n}_{1} \rightarrow 0$ as $k_z \rightarrow 0$) even for $Pe < Pe_{cr}$. However, as noted earlier, enhanced long wavelength fluctuations exist only for the concentration-shear instability ($Pe>Pe_{cr}$). For $Pe \sim Pe_{cr}$, there is no sharp distinction between the two mechanisms. \begin{figure} \center \includegraphics[scale=.16]{fig4.eps} \caption{Variation in (a) the growth rate and (b) the concentration fluctuations ($\tilde{n}_{1}$) against the wavenumber for different shear rates with $\tau D_r=1$, $\mathcal{A}=48.5$ ($Pe_{cr} \approx 2.29$ and $Pe_{max} \approx 3.1$).} \label{FIG3} \end{figure} \textit{Non-linear simulations} - To examine the steady state resulting from the linear instability discussed above, we numerically integrate \eqref{EQ:NDGoveq} and \eqref{EQ:NDEqOfMotion} in time. The non-linear simulations are carried out in two dimensions, so the orientation vector is restricted to the flow-gradient plane \cite{supp}. An imposed non-dimensional shear rate ($Pe$) is the control parameter. Rather remarkably, the selected stress and shear-rate at steady state (see Fig.~\ref{nonlinear}) can be explained using a Maxwell construction based on the homogeneous stress-shear-rate profile. Fig.~\ref{FIG1}(a) (with its symmetric extension for $Pe<0$) suggests a banded state with equal and opposite shear-rates ($\dot{\gamma}^{\star}$) with a zero bulk stress and a homogeneous concentration \cite{note3}.
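The relative extents of the bands then follow from a lever rule. If the bands carry shear rates $\pm\dot{\gamma}^{\star}$ and occupy fractions $f$ and $1-f$ of the gap, matching the imposed mean non-dimensional shear rate gives (a sketch that neglects the thin interfacial region and the weak concentration inhomogeneity):
\begin{equation}
f\,\dot{\gamma}^{\star}-(1-f)\,\dot{\gamma}^{\star}=Pe
\quad\Longrightarrow\quad
f=\frac{1}{2}\left(1+\frac{Pe}{\dot{\gamma}^{\star}}\right),
\end{equation}
so varying the imposed shear rate only redistributes the widths of the two bands, while the selected stress stays pinned near zero.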
In our numerical results, the selected stress (shear-rate) always differs from 0 ($\dot{\gamma}^{\star}$) by a finite amount, albeit of very small magnitude. With variation in the imposed shear rate, only the relative extents of the two bands change. The selected stress is, thus, (nearly) zero irrespective of $Pe$ and $\mathcal{A}$. Further, unexpectedly, the steady-state banded profiles do not show any major difference across $Pe_{cr}$ (Fig.~\ref{nonlinear}), even though concentration fluctuations are crucial for the instability, and thus for the start-up kinetics, for $Pe>Pe_{cr}$. This insensitivity of the selected stress to concentration-coupling is in sharp contrast to shear-banding in passive complex fluids, where it leads to an increase in the selected stress with the shear rate \cite{cates06, olmsted03, olmsted08}. \begin{figure} \includegraphics[scale=0.85]{fig5.eps} \caption {The shear-rate ($\dot{\gamma}$) and concentration ($n$) (inset) profiles in the non-linear banded state for a box size 10 times the run length $U_b \tau$ for $\tau D_r=0.0025$ and $\mathcal{A}=62.83$, for which $Pe_{cr} \sim 0.67$. The shear rate ($\dot{\gamma}^{\star}$) is marked.} \label{nonlinear} \end{figure} The equal and oppositely sheared zones in the banded state imply that the shear rate goes through zero at the interface, driving a local depletion of bacteria \cite{stocker14, bearon15, saintillan15} as seen in Fig.~\ref{nonlinear}. Consequently, the bands have a marginally higher concentration of bacteria than the original homogeneous state, in turn implying that, in a finite domain, the shear rate selected slightly differs from $\dot{\gamma}^{\star}$ and that the stress selected is finite, but (very) small in magnitude. The width of the interface between the shear bands is of the order of the bacterium run length $(U_b\tau)$, which can be seen from \eqref{EQ:NDGoveq} to be the length scale governing the spatial decay of stress \cite{olmsted08}.
With increasing box size, the extent of interface depletion reduces, and the shear rate selected approaches $\dot{\gamma}^{\star}$. An analogous result, for the selected stress, was obtained earlier for extensile active nematics exhibiting nematic-nematic banding with no concentration variation \cite{fielding08,fielding11}. The active-nematic formalism, however, has phenomenological constants that do not have a direct microscopic interpretation, especially for dilute bacterial suspensions that are far from an isotropic-nematic transition. Thus, \cite{liverpool18, liverpool10} report similar stress-shear rate curves and yet very different velocity profiles from those in \cite{fielding08, fielding11}. In contrast, our approach solves the underlying kinetic equation directly and rigorously demonstrates the selection of a banded state even in the dilute regime. Crucially, our results demonstrate that long-range hydrodynamic interactions are sufficient to explain experimental observations of a banded state in dilute bacterial suspensions \cite{samanta18}. Postulating an orientationally ordered state, as is done in \cite{fielding08,fielding11, liverpool18, liverpool10}, is thus not necessary. \textit{Concluding Remarks} - In this letter, we have demonstrated a novel concentration-shear instability mechanism in dilute bacterial suspensions. The proposed instability is, in fact, reminiscent of the Helfand-Fredrickson mechanism that explains shear-enhanced concentration fluctuations in concentrated polymer solutions near an equilibrium critical point \cite{fredrickson89, onuki, milner93, larson92, leal13, leal14, olmsted03, pine91, pine92}. However, dilute bacterial suspensions are far from any critical point and the enhanced dynamics of the concentration fluctuations is crucially reliant on the novel rheological response arising from bacterial activity.
We hope that the theoretical results reported in this letter will motivate light scattering experiments examining the dynamics of concentration fluctuations in bacterial suspensions. Similar experiments in polymer solutions have shed considerable light on the nature of the shear-enhanced concentration fluctuations \cite{pine91,pine92}. The concentration-shear instability mechanism need not be restricted to a rheological scenario. Observations of collective motion driven by concentration fluctuations near the contact line of an evaporating drop were reported in \cite{koch14b}, and in pipe flow driven by a pressure gradient in \cite{stocker16}. The generalization of our results to an inhomogeneous shear flow would lead to additional insight into these observations. \emph{Acknowledgements.} L.N. Rao would like to thank the Science and Engineering Research Board, India (Grant No. PDF/2017/002050) and the Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore, for financial support.
\section{Introduction} In \cite{ChS}, Chern and Simons defined classes $ \hat{c}_n((E,\nabla))\in H^{2n-1}(X, {\mathbb R}/{\mathbb Z}(n))$ for $n\ge 1$ and a flat bundle $(E,\nabla)$ on a ${\mathcal C}^\infty$ manifold $X$, where ${\mathbb Z}(n):={\mathbb Z}\cdot (2\pi \sqrt{-1})^n$. Cheeger and Simons defined in \cite{CS} the group of real ${\mathcal C}^\infty$ differential characters $ \hat{H}^{2n-1}(X, {\mathbb R}/{\mathbb Z})$, which is an extension of global ${\mathbb R}$-valued closed $2n$-forms with ${\mathbb Z}(n)$-periods by $ H^{2n-1}(X, {\mathbb R}/{\mathbb Z}(n))$. They show that the Chern-Simons classes extend to classes $\hat{c}_n((E,\nabla)) \in \hat{H}^{2n-1}(X, {\mathbb R}/{\mathbb Z}) $, if $\nabla$ is a (not necessarily flat) connection, such that the associated differential form is the Chern form computing the $n$-th Chern class associated to the curvature of $\nabla$. If now $X$ is a complex manifold, and $(E,\nabla)$ is a bundle with an algebraic connection, the Chern-Simons and Cheeger-Simons invariants give classes $\hat{c}_n((E,\nabla))\in \hat{H}^{2n-1}(X_{{\rm an}}, {\mathbb C}/{\mathbb Z})$ with a similar definition of complex ${\mathcal C}^\infty$ differential characters. Those classes have been studied by various authors, and most remarkably, it was shown by A. Reznikov that if $X$ is projective and $(E,\nabla)$ is flat, then the classes ${\hat c}_n((E,\nabla))$ are torsion, for $n\ge 2$. This positively answered a conjecture of S. Bloch \cite{B}, which echoed a similar conjecture by Cheeger-Simons in the ${\mathcal C}^\infty$ category \cite{Ch}, \cite{CS}. On the other hand, for $X$ a smooth complex algebraic variety, we defined in \cite{ADC} the group $AD^n(X)$ of algebraic differential characters.
It is easily written as the hypercohomology group $\H^n(X, {\mathcal K}_n\xrightarrow{d\log} \Omega^n_X\xrightarrow{d} \Omega^{n+1}_X\to \ldots \xrightarrow{d} \Omega^{2n-1}_X)$, where ${\mathcal K}_n$ is the Zariski sheaf of Milnor $K$-theory, which is unramified in codimension 1. It has the property that it maps to the Chow group $CH^n(X)$, to algebraic closed $2n$-forms which have ${\mathbb Z}(n)$-periods, and to the complex ${\mathcal C}^\infty$ differential characters $\hat{H}^{2n-1}(X_{{\rm an}}, {\mathbb C}/{\mathbb Z})$. If $(E,\nabla)$ is a bundle with an algebraic connection, it has classes $c_n((E,\nabla))\in AD^n(X)$ which lift both the Chern classes of $E$ in $CH^n(X)$ and $\hat{c}_n((E,\nabla))$. All those constructions are contravariant in $(X, (E,\nabla))$, the differential characters have an algebra structure, and the classes fulfill the Whitney product formula. They admit a logarithmic version: if $j: U\to X$ is a (partial) smooth compactification of $U$ such that $D:= X\setminus U$ is a strict normal crossings divisor, one defines the group $AD^n(X, D)=\H^n( X, {\mathcal K}_n \xrightarrow{d\log} \Omega^n_{ X}(\log D) \xrightarrow{d} \Omega^{n+1}_{ X}(\log D)\to \ldots \xrightarrow{d} \Omega^{2n-1}_{ X}(\log D))$. Obviously one has maps $AD^n( X)\to AD^n( X, D)\to AD^n(U)$. The point is that if $(E,\nabla)$ extends a pole-free connection $(E,\nabla)|_U$ to a connection on $X$ with logarithmic poles along $D$, then $c_n((E,\nabla)|_U)\in AD^n(U) $ lifts to well defined classes $ c_n( (E, \nabla))\in AD^n( X, D)$ with the same functoriality and additivity properties. If $X$ is a smooth algebraic variety defined over a characteristic 0 field, and $X\supset U$ is a smooth (partial) compactification of $U$, it is computed in \cite[Appendix~B]{EV} that one can express the Atiyah class (\cite{A}) of a bundle extension $E$ of $E|_U$ in terms of the residues of the extension $\nabla$ of $\nabla|_U$ along $D= X\setminus U$.
In particular, if $ X$ is projective, $ \nabla$ has logarithmic poles along $D$ and has nilpotent residues, one obtains that the de Rham Chern classes of $ E$ are zero. If $k={\mathbb C}$, this implies that the (analytic) Chern classes of $ E$ in Deligne-Beilinson cohomology $H^{2n}_{{\mathcal D}}( X,{\mathbb Z}(n))$ lie in the continuous part $ H^{2n-1}( X_{{\rm an}}, {\mathbb C}/{\mathbb Z}(n))/F^n \subset H^{2n}_{{\mathcal D}}( X, {\mathbb Z}(n))$. The purpose of this note is to show that this lifting property is in fact stronger: \begin{thm} \label{thm1} Let $ X\supset U$ be a smooth (partial) compactification of a complex variety $U$, such that $D=\sum_j D_j= X\setminus U$ is a strict normal crossings divisor. Let $(E, \nabla)$ be a flat connection with logarithmic poles along $D$ such that its residues $\Gamma_j$ along $D_j$ are all nilpotent. Then the classes $c_n(( E, \nabla)) \in AD^n( X, D)$ lift to well defined classes $c_n(( E, \nabla, \Gamma)) \in AD^n( X)$, which satisfy the Whitney product formula. More precisely, the classes $c_n(( E, \nabla, \Gamma))$ lie in the subgroup $AD^n_{\infty}( X)=\H^n( X, {\mathcal K}_n\xrightarrow{d\log} \Omega^n_{X}\xrightarrow{d} \Omega^{n+1}_{ X}\to \ldots \xrightarrow{d} \Omega^{{\rm dim}(X)}_{ X})\subset AD^n( X) $ of classes mapping to 0 in $H^{0}( X, \Omega^{ 2n}_X)$. \end{thm} They also fulfill a functoriality property, and one can express precisely what their restrictions to the various strata of $D$ are. Let us denote by $\hat{c}_n(( E, \nabla, \Gamma))$ the image of $c_n(( E, \nabla, \Gamma))$ via the regulator map $AD^n( X)\to \hat{H}^{2n-1}( X_{{\rm an}}, {\mathbb C}/{\mathbb Z})$ defined in \cite{E2} and \cite{ADC}, which restricts to a regulator map $AD^n_{\infty}( X) \to H^{2n-1}( X_{{\rm an}}, {\mathbb C}/{\mathbb Z}(n))$. As an immediate consequence, one obtains \begin{cor} Let $(X, ( E, \nabla, \Gamma))$ be as in the theorem.
Then the Cheeger-Chern-Simons classes $\hat{c}_n((E,\nabla)|_U)\in H^{2n-1}(U_{{\rm an}}, {\mathbb C}/{\mathbb Z}(n))\subset \hat{H}^{2n-1}(U_{{\rm an}}, {\mathbb C}/{\mathbb Z})$ lift to well defined classes $\hat{c}_n(( E, \nabla, \Gamma)) \in H^{2n-1}( X_{{\rm an}}, {\mathbb C}/{\mathbb Z}(n)) \subset \hat{H}^{2n-1}( X_{{\rm an}}, {\mathbb C}/{\mathbb Z})$, with the same properties. \end{cor} A direct ${\mathcal C}^\infty$ construction of $\hat{c}_n(( E, \nabla, \Gamma))\in H^{2n-1}(X_{{\rm an}}, {\mathbb C}/{\mathbb Z}(n))$ in the spirit of Cheeger-Chern-Simons has been performed by P. Deligne and is written in a letter of P. Deligne to the authors of \cite{IS}. It consists in modifying the given connection $ \nabla$ by a ${\mathcal C}^\infty$ one-form with values in ${\mathcal E} nd( E)$, so as to obtain a (possibly non-flat) connection without residues along $D$. This modified connection admits classes in $H^{2n-1}(X_{{\rm an}}, {\mathbb C}/{\mathbb Z}(n))\subset \hat{H}^{2n-1}( X_{{\rm an}}, {\mathbb C}/{\mathbb Z})$. That they do not depend on the choice of the one-form relies essentially on the argument showing that if $\nabla$ is flat with logarithmic poles along $D$ (and without further conditions on the residues), for $n\ge 2$, the image of $c_n((E,\nabla))$ in $H^0(U, {\mathcal H}^{2n-1}_{DR})$, where ${\mathcal H}^j_{DR}$ is the Zariski sheaf of $j$-th de Rham cohomology, in fact lies in the unramified cohomology $H^0(X,{\mathcal H}^{2n-1}_{DR})\subset H^0(U, {\mathcal H}^{2n-1}_{DR})$. For this, see \cite[Theorem~6.1.1]{BE}. In the case when $D$ is smooth, J. Iyer and C. Simpson constructed the ${\mathcal C}^\infty$ classes $\hat{c}_n(( E, \nabla, \Gamma))\in H^{2n-1}(X_{{\rm an}}, {\mathbb C}/{\mathbb Z}(n))$ using the existence of a ${\mathcal C}^\infty$ trivialization of the canonical extension after an \'etale cover, a fact written by Deligne in a letter, together with Deligne's suggestion of considering patched connections.
They then show that Reznikov's argument and theorem \cite{R} adapt to those classes. Our note is motivated by the question raised in \cite{IS} on the construction in the general case. Our algebraic construction in theorem \ref{thm1} relies on the modified splitting principle developed in \cite{E}, \cite{E2} and \cite{ADC} in order to define the classes in $AD^n(X, D)$. Let $q: Q\to X$ be the complete flag bundle of $ E$. A flat connection on $ E$ with logarithmic poles along $D$ defines a map of differential graded algebras $\tau:\Omega^\bullet_{Q}(\log q^{-1}(D)) \to {\mathcal K}^\bullet$ where ${\mathcal K}^i=q^*\Omega^i_{ X}(\log D)$ and $Rq_*{\mathcal K}^\bullet=\Omega^\bullet_{X}(\log D)$. This defines a partial flat connection $\tau \circ q^*\nabla: q^* E\to q^*\Omega^1_{ X}(\log D) \otimes_{{\mathcal O}_{Q}} q^* E$ which has the property that it stabilizes all the rank one subquotients of $q^* E$. On the other hand, the nilpotency of $\Gamma$ allows one to filter the restriction $E|_{\Sigma}$ to the different strata $\Sigma$ of $D$, in such a way that the restriction $\nabla|_\Sigma: E|_\Sigma \to \Omega^1_X(\log D)|_\Sigma\otimes E|_{\Sigma}$ of the connection stabilizes the filtration $F^\bullet_{\Sigma}$, and has the following important extra property: the induced flat connection $\nabla|_{\Sigma}$ on $gr(F^\bullet_\Sigma)$ has values in $\Omega^1_{\Sigma}(\log {\rm rest})$, where ${\rm rest}$ is the intersection with $\Sigma$ of the part of $D$ which is transversal to $\Sigma$. This fact translates into a sort of stratification of the flag bundle $Q$, where $\tau$ is refined on this stratification and has values in the pull back of $\Omega^1_\Sigma(\log {\rm rest})$.
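In the local normal form used in the proofs of section 2 below, this extra property reduces to an elementary linear algebra computation. As a hypothetical local illustration, take $\Sigma=D_1$, $\nabla=d+\sum_j \Gamma^0_j\,\frac{dx_j}{x_j}$ with constant, commuting, nilpotent matrices $\Gamma^0_j$, and $F^s_1={\rm Ker}((\Gamma^0_1)^s)$. Then, for a section $v$ of $F^s_1$,
\[
\nabla v = dv + \Gamma^0_1 v\,\frac{dx_1}{x_1} + \sum_{j\ge 2}\Gamma^0_j v\,\frac{dx_j}{x_j}
\equiv dv + \sum_{j\ge 2}\Gamma^0_j v\,\frac{dx_j}{x_j} \quad {\rm mod}\ F^{s-1}_1,
\]
since $\Gamma^0_1(F^s_1)\subset F^{s-1}_1$, while each $\Gamma^0_j$ preserves $F^s_1$ by commutativity. Thus on $gr(F^\bullet_1)$ the logarithmic pole along $x_1=0$ disappears, and only the poles along the components transversal to $\Sigma$ survive.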
Modulo some geometry in $Q$, the next observation consists in expressing sections $\alpha \in \Omega^i_{ X}$ of the sheaf of forms without poles as pairs $\alpha=(\beta \oplus \gamma) \in \Omega^i_{ X}(\log D)\oplus \Omega^i_D$ such that $\beta|_D=\gamma$, where $\Omega^i_D=\Omega^i_{ X}/\Omega^i_{ X}(\log D)(-D)\subset \Omega^i_X(\log D)|_D$. This yields a complex receiving $\Omega^{\ge i}_X$ quasi-isomorphically, which is convenient for defining the desired classes. \\ \ \\ {\it Acknowledgement:} Our algebraic construction was performed independently of P. Deligne's ${\mathcal C}^\infty$ construction sketched above. We thank C. Simpson for sending us Deligne's letter afterwards. We also thank him for pointing out a mistake in an earlier version of this note. We thank E. Viehweg for his encouragement and for discussions on the subject, which reminded us of the discussions we had when we wrote \cite[Appendix~C]{EV}. \section{Filtrations} Let $X$ be a smooth variety defined over a characteristic 0 field $k$. Let $D\subset X$ be a strict normal crossings divisor (i.e. the irreducible components are smooth over $k$), and let $(E,\nabla)$ be a connection $\nabla: E\to \Omega_X^1(\log D)\otimes E$ with residue $\Gamma$ defined by the composition \ga{2.1}{\xymatrix{\ar[drr]_{\Gamma} E \ar[rr]^<<<<<<<<<{\nabla} & & \Omega^1_X(\log D) \otimes_{{\mathcal O}_X} E \ar[d]^{1\otimes {\rm res}} \\ & & \nu_* {\mathcal O}_{D^{(1)}} \otimes_{{\mathcal O}_X} E}} where $D^{(1)}=\sqcup_j D_j$. The composition of $\Gamma$ with the projection $\nu_*{\mathcal O}_{D^{(1)}}\to {\mathcal O}_{D_j}$ defines $\Gamma_j: E\to {\mathcal O}_{D_j}\otimes E$ which factors through $\Gamma_j \in {\rm End}( {\mathcal O}_{D_j}\otimes E)$.
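As a minimal hypothetical illustration of the residue: on $X={\mathbb A}^1$ with coordinate $x$, $D=\{x=0\}$ and $E={\mathcal O}_X^{\oplus 2}$, take
\[
\nabla = d + \begin{pmatrix} 0&1\\0&0\end{pmatrix}\frac{dx}{x}.
\]
Then $\Gamma_1=\begin{pmatrix} 0&1\\0&0\end{pmatrix}\in {\rm End}({\mathcal O}_{D}\otimes E)$ is nilpotent, and its kernel ${\mathcal O}_D\cdot e_1$ is the first step of the filtration constructed below.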
We write \ga{2.2}{\Gamma \in {\rm Hom}_{{\mathcal O}_X}({\mathcal O}_D \otimes_{{\mathcal O}_X} E , \nu_* {\mathcal O}_{D^{(1)}} \otimes_{{\mathcal O}_X} E).} Recall that if $\nabla$ is integrable, then \ga{2.3}{[\Gamma_i|_{D_{ij}}, \Gamma_j|_{D_{ij}}]=0.} We use the notation $D_I=D_{i_1}\cap \ldots \cap D_{i_r}$ if $I=\{i_1,\ldots, i_r\}$, $D=D^I+\sum_{s\in I} D_s $ with $D^I=\sum_{\ell \notin I} D_\ell$. The connection $\nabla: E\to \Omega^1_X(\log D)\otimes E$ stabilizes $E(-D_j)$, but also $E\otimes {\mathcal I}_{D_{I}}$, as the K\"ahler differential on ${\mathcal O}_X$ restricts to a flat $\Omega^1_X(\log(\sum_{s\in I} D_s))$-connection on $ {\mathcal I}_{D_I}$. Thus $\nabla$ induces a flat connection \ga{2.4}{\nabla_I: E|_{D_{I}} \to \Omega^1_X(\log D)|_{D_{I}} \otimes E|_{D_{I}}.} One has the diagram \ga{2.5}{\xymatrix{ & \ar[d] \ar[ddr]_>>>>>>>>>{{\rm 1\otimes \Gamma_j}} \Omega^1_{D_j}(\log (D^j\cap D_j))\otimes E & \ar[d] \Omega^2_{D_j}(\log (D^j\cap D_j)) \otimes E\\ \ar[dr]_{\Gamma_j} E|_{D_j} \ar[r]^<<<<{\nabla_j} & \ar[d]_{{\rm res}} \Omega^1_X(\log D)|_{D_j} \otimes E \ar[r]^{\nabla_j} & \ar[d]_{{\rm res}} \Omega^2_X(\log D)|_{D_j}\otimes E\\ & E|_{D_j} & \Omega^1_{D_j}(\log (D^j\cap D_j))\otimes E } } We define $F_j^1={\rm Ker}(\Gamma_j)\subset E|_{D_j}$. It is a coherent subsheaf. $\nabla_j$ sends $F_j^1$ to $\Omega^1_{D_j}(\log D^j\cap D_j)\otimes E$, and, because of integrability, the diagram \eqref{2.5} shows that $\nabla_j$ induces a flat connection $F_j^1\to \Omega^1_{D_j}(\log D^j\cap D_j)\otimes F_j^1$. \begin{claim} \label{claim2:1} $F_j^1\subset E|_{D_j}$ is a subbundle. \end{claim} \begin{proof} We use Deligne's Riemann-Hilbert correspondence \cite{DPSR}: the data are defined over a field of finite type $k_0$ over ${\mathbb Q}$, so embeddable in ${\mathbb C}$, and the question is compatible with the base changes $\otimes_{k_0}k$ and $\otimes_k {\mathbb C}$.
So it is enough to consider the question for the underlying analytic connection on a polydisk $(\Delta^*)^r\times \Delta^s$ with coordinates $x_j$, where $D_j$ is defined by $x_j=0$ for $1\le j\le r$. By the Riemann-Hilbert correspondence, the argument given in \cite[p.86]{DPSR} shows that the analytic connection is isomorphic to $(V\otimes {\mathcal O}, \sum_{j=1}^r \Gamma_j^0 \frac{dx_j}{x_j})$, where the matrices $\Gamma_j^0$ are constant nilpotent. Thus $F_j^1$ is isomorphic to $F_j^1(V) \otimes {\mathcal O}_{D_j}$ on the polydisk, with $F^1_j(V):={\rm Ker}(\Gamma_j^0)$, thus is a subbundle. \end{proof} We can replace $E|_{D_j}$ by $E|_{D_j}/F_j^1$ in \eqref{2.4} and redo the construction. This defines, by pulling back ${\rm Ker}(\Gamma_j: E|_{D_j}/F^1_j \to E|_{D_j}/F^1_j)$ under the projection $E|_{D_j}\twoheadrightarrow E|_{D_j}/F^1_j$, a subsheaf $F^2_j\subset E|_{D_j}$ with $F^2_j\supset F_j^1$, etc. \begin{claim} \label{claim2:2} $F^\bullet_j: F^0_j=0\subset F^1_j\subset \ldots \subset F_j^i \subset \ldots \subset F^{r_j}_j= E|_{D_j}$ is a filtration by subbundles with a flat $\Omega^1_{X}(\log D)|_{D_j}$-valued connection, such that the induced connection $\nabla_j$ on $gr (F_j^\bullet)$ is flat and $\Omega^1_{D_j}(\log D^j\cap D_j)$-valued. (One can also tautologically say that $F^\bullet_j$ refines the (trivial) filtration on $E|_{D_j}$). \end{claim} \begin{proof} By construction, the flat $\Omega^1_X(\log D)|_{D_j}$-valued connection $\nabla_j$ on $E|_{D_j}$ respects the filtration and induces a flat $\Omega^1_{D_j}(\log D^j\cap D_j)$-connection on $gr (F_j^\bullet)$. We use the transcendental argument to show that this is a filtration by subbundles. With the notations as in the proof of the claim \ref{claim2:1}, $F^s_j$ is analytically isomorphic to $F_j^s(V)\otimes {\mathcal O}_{D_j}$, where $F^1_j(V)\subset F^2_j(V)\subset \ldots \subset V$ is the filtration on $V$ defined by the successive kernels of $\Gamma_j^0$, so $F^2_j(V)$ is the inverse image of ${\rm Ker}(\Gamma_j^0)$ on $V/F^1_j(V)$ etc.
\end{proof} The argument which allows us to construct $F_j^\bullet$ can in turn be used to define successive refinements on all $E|_{D_I}$. We consider now the case $|I|=r\ge 2$. We refine the filtrations $F_J^\bullet|_{D_I}$, which have been constructed inductively, where $J\subset I$, $|J|<r$. In fact, we do the construction directly on $E|_{D_I}$. We have $r$ linear maps induced by $\Gamma_j$ \ga{2.6}{\Gamma_j|_{D_{I}}: E|_{D_{I}} \xrightarrow{\nabla_{I}} \Omega^1_X(\log( D))|_{D_{I}}\otimes E|_{D_{I}} \to {\mathcal O}_{D_j}\otimes E|_{D_{I}}=E|_{D_{I}}.} We define \ga{2.7}{F_{I}^1=\cap_{j\in I} {\rm Ker}(\Gamma_j|_{D_{I}}) =\cap_{j\in I}F^1_j|_{D_{I}}.} \begin{claim} \label{claim2:3} $F_{I}^1 \subset E|_{D_{I}}$ is a subbundle, stabilized by the connection $\nabla_{I}$, and more precisely one has $\nabla_{I}: F^1_{I}\to \Omega^1_{D_{I}}(\log (D^{I}\cap D_{I}))\otimes F^1_{I}.$ \end{claim} \begin{proof} We argue analytically as in the proof of claim \ref{claim2:1}. With notations as there, the analytic $F^1_{I}$ is isomorphic to $F^1_{I}(V)\otimes {\mathcal O}_{D_{I}}$. \end{proof} Thus $\nabla_{I}$ induces a flat $\Omega^1_X(\log D)|_{D_{I}}$-valued connection on the quotient $E|_{D_{I}}/F^1_{I}$. We define $ F^2_{I}\supset F^1_{I}$ in $E|_{D_{I}}$ to be the inverse image via the projection $E|_{D_{I}}\to E|_{D_{I}}/F^1_{I}$ of $\cap_{j\in I}{\rm Ker}(\Gamma_j|_{D_{I}}) $, etc. \begin{claim} \label{claim2:4} The filtration $F^\bullet_{I}: F^0_{I}=0\subset F^1_{I}\subset F^2_{I}\subset \ldots \subset F^{r_{I}}_{I}=E|_{D_{I}}$ is a filtration by subbundles, stabilized by $\nabla_{I}$, such that $\nabla_{I}$ on $gr(F^\bullet_{I})$ is a flat $\Omega^1_{D_{I}}(\log (D^{I}\cap D_{I}))$-valued connection.
Furthermore, $F^\bullet_{I}$ refines all $F^\bullet_J|_{D_{I}}$ for all $J\subset I$, $|J|<r$, and one has compatibility of the refinements in the sense that if $K\subset J\subset I$, then the refinement $F_I^\bullet$ of $F_K^\bullet|_{D_I}$ is the composition of the refinements $F^\bullet_I$ of $F_J^\bullet|_{D_J}$ and $F_J^\bullet$ of $F_K^\bullet|_{D_J}$. \end{claim} \begin{proof} We argue again analytically. Then $F^s_{I}$ is isomorphic to $F^s_{I}(V)\otimes {\mathcal O}_{D_I}$ with the same definition. The filtration terminates as finitely many mutually commuting nilpotent endomorphisms on a finite dimensional vector space always have a common eigenvector. \end{proof} \begin{defn} \label{defn2:5} We call $F_I^\bullet$ the canonical filtration of $E|_{D_I}$ associated to $\nabla$, which defines $(gr(F_I^\bullet), \nabla_I, \Gamma_I)$ where $\nabla_I$ is the flat $\Omega^1_{D_I}(\log (D^I\cap D_I))$-valued connection on $gr(F_I^\bullet)$, and $\Gamma_I$ is its nilpotent residue along the normalization of $D^I\cap D_I$. \end{defn} \section{$\tau$-Splittings} We first define flag bundles. We set $q_I: Q_I\to D_I$ to be the total flag bundle associated to $E|_{D_I}$. So the pull back of $E|_{D_I}$ to $Q_I$ has a filtration by subbundles such that the associated graded bundle is a sum of rank one bundles $\xi_I^s$ for $s=1,\ldots, N={\rm rank}(E)$. (It is here understood that $D_{\emptyset}=X$, and to simplify, we set $q=q_{\emptyset}: Q\to X, Q_{\emptyset}=Q$). For $J\subset I$, the inclusion $D_I\to D_J$ defines inclusions $i(J\subset I): Q_I\to Q_J$. The canonical filtrations associated to $\nabla$ allow one to define partial sections of the $q_I$. As an illustration, let us assume that $I=\{1\}$, thus $D$ is smooth, and that $F^\bullet_{1}$ is a total flag, i.e.\ $gr(F_1^\bullet)$ is a sum of rank one bundles. Then $F_1^\bullet$ defines a section $D\xrightarrow{\lambda^F_1} Q$. More generally, let us define $G_I^s=F^s_I/F_I^{s-1}$.
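In the local normal form of claim \ref{claim2:1}, with a single component $D_j$, the canonical filtration is computed purely linear-algebraically: $F^s_j(V)$ is the preimage of ${\rm Ker}(\Gamma^0_j)$ in the successive quotients, which equals ${\rm Ker}((\Gamma^0_j)^s)$. The following small computational sketch (illustrative code, not part of the construction; all names are ours) computes the ranks of this filtration with exact rational arithmetic:

```python
from fractions import Fraction

def kernel_dim(M):
    """dim Ker(M) over the rationals, by Gauss-Jordan elimination."""
    A = [[Fraction(x) for x in row] for row in M]
    cols, rank = len(A[0]), 0
    for c in range(cols):
        piv = next((i for i in range(rank, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[rank], A[piv] = A[piv], A[rank]
        A[rank] = [x / A[rank][c] for x in A[rank]]
        for i in range(len(A)):
            if i != rank and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[rank])]
        rank += 1
    return cols - rank

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def filtration_dims(G):
    """Ranks of F^s = Ker(G^s) for a nilpotent matrix G.

    F^2 is the preimage of Ker(G on V/F^1), i.e. {v : G v in F^1}
    = Ker(G^2), and so on; nilpotency guarantees termination.
    """
    n = len(G)
    dims, P = [], [row[:] for row in G]
    for _ in range(n):
        d = kernel_dim(P)
        dims.append(d)
        if d == n:
            break
        P = matmul(P, G)
    return dims

# A single 3x3 Jordan block: the filtration is a full flag.
J3 = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
print(filtration_dims(J3))  # -> [1, 2, 3]
```

For a single Jordan block the ranks are $1,2,\ldots,N$, i.e.\ the filtration is a total flag, the situation in which the section $\lambda^F_1$ above is everywhere defined.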
We define \ga{3.1}{\xymatrix{ \ar[d]_{q_I} Q_I & \ar[l]_{\lambda_I^F} Q_I^F \ar[dl]^{q_I^F}\\ D_I}} using the filtration: recall that $Q_I\to D_I$ is the composition of $\P(E|_{D_I})\to D_I$ with $\P(E')\to \P(E|_{D_I})$ etc., where $E'\subset {\mathcal O}_{\P(E|_{D_I})}\otimes E|_{D_I}$ is the rank $(N-1)$ subbundle defined as the kernel of the map to the canonical rank one bundle $\xi_I^N(\P(E|_{D_I}))$, the pull back of which to $Q_I$ defines the last graded rank one quotient. Then the quotient $E|_{D_I}\to G^{r_I}_I$ defines a map $\P(G_I^{r_I})\to \P(E|_{D_I})$ such that the pull back of $\xi_I^N(\P(E|_{D_I}))$ is $\xi$, where $\xi$ is the canonical rank one bundle. Writing $G'\to G^{r_I}_I$ for the kernel, we redo the same construction for $E', G'$ replacing $E|_{D_I}, G^{r_I}_I$ etc. In this way we find that the flag bundle of $G^{r_I}_I$ maps to the intermediate step between $D_I$ and $Q_I$ which splits the first $M$ rank one bundles, where $M$ is the rank of $G^{r_I}_I$. Then we continue with the pull back of $G^{r_I-1}_I$ to the flag bundle of $G^{r_I}_I$, replacing $G^{r_I}_I$, and $E''$ replacing $E$, where $E''$ on this intermediate step is the rank $N-M$ bundle which is not yet split. All this is very classical. We have extra closed embeddings $\lambda^F(I\subset J)$ which come from the refinements of the canonical filtrations, which are described in the same way: for $J\subset I$, one has commutative squares \ga{3.2}{\xymatrix{\ar[d]_{q_J^F} Q_J^F & & \ar[ll]_{\lambda^F(I\subset J)} Q_I^F \ar[d]^{q_I^F}\\ D_J & & \ar[ll]^{i(I\subset J)} D_I } \xymatrix{\ar[d]_{q} Q & & \ar[ll]_{\mu_I} Q_I^F \ar[d]^{q_I^F}\\ X & & \ar[ll]^{i_I} D_I } } where $i_I=i(\emptyset \subset I), \ \mu_I=\lambda^F(\emptyset\subset I)$.
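In the simplest illustrative case of \eqref{3.1}, take $N=2$, $I=\{1\}$, $D$ smooth and $\Gamma_1\neq 0$ nilpotent. Then $Q_1=\P(E|_{D_1})$, and the canonical filtration $0\subset F^1_1={\rm Ker}(\Gamma_1)\subset E|_{D_1}$ is a total flag by claim \ref{claim2:1}, so $Q_1^F\cong D_1$ and the partial section is
\[
\lambda^F_1: D_1\to Q_1=\P(E|_{D_1}),\qquad x\mapsto [F^1_{1,x}\subset E_x].
\]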
Recall from \cite{E}, \cite{E2}, \cite{ADC} that $\nabla$ yields a splitting $\tau: \Omega^1_Q(\log q^{-1}(D))\to q^*\Omega^1_X(\log D)$, and that flatness of $\nabla$ implies flatness of $\tau$ in the sense that it induces a map of differential graded algebras $(\Omega^\bullet_Q(\log q^{-1}(D)),d)\to (q^*\Omega^\bullet_X(\log D), d_\tau)$, so in particular $(Rq_*q^*\Omega^{\ge n}_X(\log D),d_\tau)=(\Omega^{\ge n}_X(\log D), d)$. Furthermore, the filtration on $q^*(E)$ which defines the rank one subquotient $\xi^s$ has the property that it is stabilized by $\tau\circ q^*\nabla$, and this defines a $\tau$-flat connection $\xi^s\to q^*\Omega^1_X(\log D)\otimes \xi^s$. The $\tau$-splitting is constructed first on $\P(E)$, with $p: \P(E)\to X$. Then $\tau\circ \nabla$ stabilizes the beginning of the flag $E'\subset$ pull-back of $E$, etc. Concretely, the composition $\Omega^1_{\P(E)/X}(1)\xrightarrow{\nabla} \Omega^1_{\P(E)}\otimes E \xrightarrow{{\rm projection}} \Omega^1_{\P(E)}\otimes {\mathcal O}_{\P(E)}(1)$ defines the splitting. On the other hand, the flat $\Omega^1_X(\log D)|_{D_I}$-valued connection on $G^{r_I}_I$ has values in $\Omega^1_{D_I}(\log (D_I\cap D^I))$. When we restrict to $\P(G^{r_I}_I)$, one has a factorization \ga{3.3}{\xymatrix{\ar[dr]_{\tau} \Omega^1_{\P(E)}(\log p^{-1}(D)) \otimes {\mathcal O}_{\P(G^{r_I}_I)} \ar[r]^{\tau(G^{r_I}_I)} & \Omega^1_{D_I}(\log (D_I\cap D^I)) \otimes {\mathcal O}_{\P(G^{r_I}_I)} \ar[d]^{{\rm inj}} \\ & \Omega^1_X(\log D) \otimes {\mathcal O}_{\P(G^{r_I}_I)} } } which defines a differential graded algebra $ (\Omega^\bullet_{D_I}(\log (D_I\cap D^I)) \otimes {\mathcal O}_{\P(G^{r_I}_I)}, d_\tau)$ with total direct image on $D_I$ being $(\Omega^\bullet_{D_I}(\log (D_I\cap D^I)),d) $ and with the property that $\xi$ has a flat connection with values in $\Omega^1_{D_I}(\log (D_I\cap D^I))$, which is compatible with the flat $p^*\Omega^1_X(\log D)$-connection on $\xi^N$.
We can repeat the construction with $D_I\to X$ replaced by $\P(G^{r_I}_I)\to \P(E|_{D_I})$, with $E|_{D_I} \to G^{r_I}_I$ replaced by $E'\to G'$ where $E'={\rm Ker}( E|_{D_I}\otimes {\mathcal O}_{\P(E|_{D_I})}\to {\mathcal O}(1))$ and $G'={\rm Ker}(G^{r_I}_I \to {\mathcal O}(1))$. This splits the next rank one piece, one still has the splitting as in \eqref{3.3}, and we go on till we reach the total flag bundle of $G^{r_I}_I$. Then we continue with the flag bundle of $G^{r_I-1}_I$, etc. We conclude \begin{claim} \label{claim3:1} One has a factorization \ga{3.4}{\xymatrix{\ar[dr]_{\tau} \mu_I^*\Omega^1_{Q}(\log q^{-1}(D)) \ar[r]^{\tau_I} & (q_I^F)^*\Omega^1_{D_I}(\log (D_I\cap D^I)) \ar[d]^{{\rm inj}} \\ & (q_I^F)^*\Omega^1_X(\log D)|_{D_I} }} $\tau_I$ defines a differential graded algebra $((q_I^F)^*\Omega^\bullet_{D_I}(\log (D_I\cap D^I)), d_\tau)$ which is a quotient of $\mu_I^*(\Omega^\bullet_{Q}(\log q^{-1}(D)), d)$. The flat $q^*\Omega^1_X(\log D)$-valued $\tau$-connection on $\xi^s, s=1,\ldots, N$, restricts, via the splitting $\tau_I$, to a flat $(q_I^F)^*\Omega^1_{D_I}(\log (D^I\cap D_I))$-valued $\tau$-connection on $(\xi^F_I)^s= \mu_I^*\xi^s$. \end{claim} \begin{defn} \label{defn3:2} On $Q$ we define the complex of sheaves $$A(n)=A^n\to A^{n+1}\to \ldots$$ with \ml{}{A^i= B^i\oplus C^i\\ B^i=\oplus_I (\mu_I)_* (q_I^F)^*\Omega^i_{D_I}(\log (D^I\cap D_I)), \ C^i=\oplus_{I\neq \emptyset} (\mu_I)_*(q_I^F)^*\Omega^{i-1}_X(\log D)|_{D_I},\notag } where $C^i=0$ for $i=n$.
The differentials $D_\tau$ are defined as follows: $(\oplus_I\beta_I, \oplus_I \gamma_I)$, where $\beta_I \in (\mu_I)_* (q_I^F)^*\Omega^i_{D_I}(\log (D^I\cap D_I)), \gamma_I\in (\mu_I)_*(q_I^F)^*\Omega^{i-1}_X(\log D)|_{D_I}$, is sent to \ml{}{\oplus_I d_\tau \beta_I \in (\mu_I)_* (q_I^F)^*\Omega^{i+1}_{D_I}(\log (D^I\cap D_I)),\\ \oplus_I d_\tau \gamma_I +(-1)^i (\mu_I^*\beta-\beta_I) \in (\mu_I)_*(q_I^F)^*\Omega^{i}_X(\log D)|_{D_I}.\notag} \end{defn} Let ${\mathcal K}_n$ be the image of the Zariski sheaf of Milnor $K$-theory in the Milnor $K$-theory $K_n(k(X))$ of the function field (which is the same as ${\rm Ker}(K_n(k(X))\to \oplus_x K_{n-1}(\kappa(x)))$, the sum being over all codimension 1 points $x\in X$). The $\tau$-differential defines $ d_\tau \log: {\mathcal K}_n\to A^n=B^n$ ($C^n=0$). The image in $A^n$ is $D_\tau$-flat. Thus this defines $ d_\tau\log: {\mathcal K}_n\to A(n)[-1]$. \begin{defn} \label{defn3:3} We define ${\mathcal K}_n\Omega^\infty_Q$ to be the complex $ {\mathcal K}_n\xrightarrow{ d_\tau\log} A(n)[-1]$ and $ \ {\mathcal K}_n\Omega^\infty_Q \supset ({\mathcal K}_n\Omega^\infty_Q)_0$ to be the subcomplex $ {\mathcal K}_n \xrightarrow{d_\tau\log} A^n_{D_\tau}$, where $A^n_{D_\tau}$ means the subsheaf of $D_\tau$-closed sections. \end{defn} \begin{lem} \label{lem3:4} The $\tau$-connections on $(\xi_I^F)^s$ define a class $\xi^s(\nabla)\in \H^1(Q, ({\mathcal K}_1\Omega^\infty_Q)_0)$ with the property that the image of $\xi^s(\nabla)$ in $H^1(Q, {\mathcal K}_1)$ is $c_1(\xi^s)$. \end{lem} \begin{proof} The cocycle of the class $\xi^s(\nabla)$ results from claim \ref{claim3:1}. Write $g_{\alpha \beta}^s$ for a ${\mathcal K}_1$-valued 1-cocycle for $\xi^s$. Then the flat $\tau$-connection on $\xi^s$ is defined by local sections $\omega^s_\alpha$ in $q^*\Omega^1_X(\log D)$ which are $d_\tau$-flat for $d_\tau: q^*\Omega^1_X(\log D)\to q^*\Omega^2_X(\log D)$.
So the cocycle condition reads $d_\tau \log g_{\alpha \beta}^s=\delta (\omega^s)_{\alpha \beta}$ where $\delta$ is the \v{C}ech differential. Claim \ref{claim3:1} implies then that $\mu^*_I(\omega^s_{\alpha}) \in (q_I^F)^*\Omega^1_{D_I}(\log (D_I\cap D^I))$ is $\tau$-flat, and one has $d_\tau \log \mu_I^*(g_{\alpha \beta}^s)=\delta \mu^*_I(\omega^s)_{\alpha \beta}$. So the class $\xi^s(\nabla)$ is defined by the \v{C}ech cocycle $(g^s_{\alpha \beta},\mu_I^* \omega^s\oplus 0)$, with $\mu_I^* \omega^s \in B^1, 0\in C^1$. \end{proof} We define a product \ga{3.5}{({\mathcal K}_m\Omega_{Q}^\infty)_0\times ({\mathcal K}_n\Omega_{Q}^\infty)_0 \xrightarrow{\cup} ({\mathcal K}_{m+n}\Omega_{Q}^\infty)_0} by using the formulae defined in \cite[Definition~2.1.1]{ADC}, that is \ga{3.6}{x\cup y=\begin{cases} \{x,y\} & x\in {\mathcal K}_m, y\in {\mathcal K}_n\\ d_\tau \log x \wedge \beta \oplus d_\tau \log x\wedge \gamma & x\in {\mathcal K}_m,\ y=\beta\oplus\gamma \in (B^n\oplus C^n)_{D_\tau}\\ 0 & {\rm else}. \end{cases} } The product is well defined. \begin{defn} \label{defn3:5} We define $c_n(q^*(E,\nabla,\Gamma))\in \H^n(Q, {\mathcal K}_n\Omega_{Q}^\infty)$ to be the image via the map $\H^n(Q, ({\mathcal K}_n\Omega_{Q}^\infty)_0)\to \H^n(Q, {\mathcal K}_n\Omega_{Q}^\infty)$ of $$\sum_{s_1<s_2<\ldots <s_n} \xi^{s_1}(\nabla)\cup \cdots \cup \xi^{s_n} (\nabla).$$ \end{defn} \begin{defn} \label{defn3:6} On $X$ we define the complex of sheaves $$A_X(n)=A_X^n\to A_X^{n+1}\to \ldots$$ with \ml{}{A_X^i= B_X^i\oplus C_X^i\\ B_X^i=\oplus_I (i_I)_* \Omega^i_{D_I}(\log (D^I\cap D_I)), \ C^i_X=\oplus_{I\neq \emptyset} (i_I)_*\Omega^{i-1}_X(\log D)|_{D_I},\notag } where $C_X^i=0$ for $i=n$.
The differentials $D_X$ are defined as follows: $(\oplus_I\beta_I, \oplus_I \gamma_I)$, where $\beta_I \in (i_I)_* \Omega^i_{D_I}(\log (D^I\cap D_I)), \gamma_I\in (i_I)_*\Omega^{i-1}_X(\log D)|_{D_I}$, is sent to \ml{}{\oplus_I d \beta_I \in (i_I)_* \Omega^{i+1}_{D_I}(\log (D^I\cap D_I)),\\ \oplus_I d \gamma_I +(-1)^i (i_I^*\beta-\beta_I) \in (i_I)_*\Omega^{i}_X(\log D)|_{D_I},\notag} where the differentials $d$ are the usual exterior differentials in the various de Rham complexes $\Omega^\bullet_{D_I}(\log (D^I\cap D_I))$. \end{defn} One has an injective morphism of complexes \ga{3.7}{\iota:\Omega^{\ge n}_X\to A_X^{\ge n}} sending $\alpha\in \Omega^i_X$ to $(\oplus_I\, i_I^*\alpha) \oplus 0$. \begin{prop} \label{prop3:7} The morphism $\iota$ is a quasi-isomorphism. Furthermore, one has $Rq_*A(n)=A_X(n)$. \end{prop} \begin{proof} We start with the second assertion: since $\mu_I$ is a closed embedding, one has $R(\mu_I)_*=(\mu_I)_*$ on coherent sheaves. Thus, by the commutativity of the diagram \eqref{3.2} and the fact that ${\mathcal O}$ on the flag varieties is relatively acyclic, one has $Rq_*(R\mu_I)_*(q_I^F)^* {\mathcal E}= (i_I)_* {\mathcal E}$ for a locally free sheaf ${\mathcal E}$ on $D_I$. This shows the second statement. We show the first assertion. We first show that the 0-th cohomology sheaf of $A_X(n)$ is $(\Omega^n_X)_d$. The condition $D_X(\beta, \beta_I)=0$ means $d\beta=d\beta_I=0$ and $i_I^*\beta=\beta_I$. Thus $\beta\in \Omega^n_X$ and $d\beta=0$. Assume now $i\ge n+1$. Then, modulo $D_XA_X^{i-1}(n)$, $((\beta, \beta_I), \gamma_I)$ is equivalent to $((\beta, \beta_I + (-1)^{i-1}d\gamma_I), 0)$. So we are back to the computation as in the case $i=n$, and ${\rm Ker}(D_X)$ on $B_X^i\oplus 0$ is ${\rm Ker}(d)$ on $\Omega^i_X$. On the other hand, for the same reason, $D_X(B_X^{i-1}\oplus C_X^{i-1})=D_X(B_X^{i-1}\oplus 0)$, and $D_X(B_X^{i-1}\oplus 0)\cap (B_X^i\oplus 0)=d(\Omega^i_X)$. This finishes the proof.
\end{proof} \begin{prop} \label{prop3:8} The map $q^*: AD^n_\infty(X)=\H^n(X, {\mathcal K}_n\xrightarrow{d\log} \Omega^n_X\xrightarrow{d} \ldots \xrightarrow{d} \Omega^{{\rm dim}(X)}_X)\to \H^n(Q, {\mathcal K}_n\Omega^\infty_Q)$ is injective. The classes $c_n(q^*(E,\nabla,\Gamma))\in \H^n(Q, {\mathcal K}_n\Omega^\infty_Q)$ in definition \ref{defn3:5} are of the form $q^*c_n((E,\nabla, \Gamma))$ for uniquely defined classes $c_n((E,\nabla, \Gamma))\in \H^n(X, {\mathcal K}_n\xrightarrow{d\log} \Omega^n_X\xrightarrow{d} \ldots \xrightarrow{d} \Omega^{{\rm dim}(X)}_X)$. \end{prop} \begin{proof} One has a commutative diagram of long exact sequences \ga{3.8}{\xymatrix{ H^{n-1}(Q,{\mathcal K}_n) \ar[r] & \H^{n-1}(A(n)) \ar[r] & \H^n({\mathcal K}_n\Omega^\infty_{Q}) \ar[r] & H^n(Q, {\mathcal K}_n)\\ \ar[u]^{{\rm inj}} H^{n-1}(X,{\mathcal K}_n) \ar[r] & \ar[u]^{=} \H^{n-1}(A(n)_X) \ar[r] & \ar[u] \H^n({\mathcal K}_n\Omega^\infty_{X}) \ar[r] & \ar[u]^{{\rm inj}} H^n(X, {\mathcal K}_n) }} where ${\mathcal K}_n\Omega^\infty_{X}={\mathcal K}_n\xrightarrow{d\log} \Omega^n_X\xrightarrow{d} \ldots \xrightarrow{d}\Omega_X^{{\rm dim}(X)}$. We write $H^{i}(Q, {\mathcal K}_j)=H^i(X, {\mathcal K}_j) \oplus {\rm rest}$, where the rest is divisible by the classes of powers of the $[\xi^s] \in H^1(Q, {\mathcal K}_1)$, with coefficients in some $H^a(X, {\mathcal K}_b)$. But $[\xi^s]$ comes by lemma \ref{lem3:4} from a class $\xi^s(\nabla) \in \H^1(Q, ({\mathcal K}_1\Omega^\infty_Q)_0)$. Consequently, the image of ${\rm rest}$ in $\H^{i}(A(n))$ vanishes. We conclude that one has an exact sequence $0\to \H^n({\mathcal K}_n\Omega^\infty_{X}) \to \H^n({\mathcal K}_n\Omega^\infty_{Q}) \to \H^n(X, R^\bullet q_*{\mathcal K}_n/q_*{\mathcal K}_n)$.
By the standard splitting principle for Chow groups, one has $H^n(Q, {\mathcal K}_n)/H^n(X, {\mathcal K}_n)= \H^n(X, R^\bullet q_*{\mathcal K}_n/q_*{\mathcal K}_n)$, and $$\sum_{s_1<s_2<\ldots <s_n} c_1(\xi^{s_1})\cup \cdots \cup c_1(\xi^{s_n}) \in {\rm Im}( CH^n(X)\subset CH^n(Q)).$$ By lemma \ref{lem3:4}, $\xi^s(\nabla) \in \H^1(Q, ({\mathcal K}_1\Omega^\infty_Q)_0)$ maps to $c_1(\xi^s)\in H^1(Q, {\mathcal K}_1)$. Thus we conclude that $c_n(q^*(E,\nabla, \Gamma))\in {\rm Im}( \H^n({\mathcal K}_n\Omega^\infty_{X})\subset \H^n({\mathcal K}_n\Omega^\infty_{Q}))$. This finishes the proof. \end{proof} \begin{thm} \label{thm3:9} Let $ X\supset U$ be a smooth (partial) compactification of a variety $U$ defined over a characteristic 0 field, such that $D=\sum_j D_j= X\setminus U$ is a strict normal crossings divisor. Let $(E, \nabla)$ be a flat connection with logarithmic poles along $D$ such that its residues $\Gamma_j$ along $D_j$ are all nilpotent. Then the classes $c_n(( E, \nabla)) \in AD^n( X, D)$ lift to well defined classes $c_n(( E, \nabla, \Gamma)) \in AD^n( X)$. They are functorial: if $f: Y\to X$ with $Y$ smooth, such that $f^{-1}(D)$ is a normal crossings divisor, \'etale over its image in $D$, then $f^*c_n((E,\nabla, \Gamma))=c_n(f^*(E,\nabla,\Gamma))$ in $AD^n(Y)$. If $D'\supset D$ is a normal crossings divisor and $\nabla'$ is the connection $\nabla$, but considered with logarithmic poles along $D'$, thus with trivial residues along the components of $D'\setminus D$, then $c_n((E,\nabla,\Gamma))=c_n((E,\nabla',\Gamma'))$. The classes $c_n((E,\nabla,\Gamma))$ satisfy the Whitney product formula. In addition, $c_n(( E, \nabla, \Gamma))$ lies in the subgroup $AD^n_{\infty}( X)=\H^n( X, {\mathcal K}_n\xrightarrow{d\log} \Omega^n_{X}\xrightarrow{d} \Omega^{n+1}_{ X}\to \ldots \xrightarrow{d} \Omega^{{\rm dim}(X)}_{ X})\subset AD^n( X) $ of classes mapping to 0 in $H^{0}( X, \Omega^{ 2n}_X)$.
The restriction of $c_n((E,\nabla, \Gamma))$ to $D_I$, taken in $AD^n_\infty(D_I)$, is $c_n ((gr(F_I^\bullet),\nabla_I, \Gamma_I))$, where $(gr(F_I^\bullet),\nabla_I, \Gamma_I)$ is the data associated to the canonical filtration (see claim \ref{claim2:4} and definition \ref{defn2:5}). \end{thm} \begin{proof} The construction is given in proposition \ref{prop3:8}. We discuss functoriality. If $f$ is as in the theorem, then the filtrations $F_I^\bullet$ for $(E,\nabla)$ restrict to the filtrations for $f^*(E,\nabla)$. The Whitney product formula is proven exactly as in \cite[2.17,2.18]{E} and \cite[Theorem~1.7]{E2}, even if this is more cumbersome, as we have in addition to follow the whole tower of $F_I^\bullet$. Finally, the last property follows immediately from the definition of $\xi^s(\nabla)$ in lemma \ref{lem3:4}. \end{proof} \begin{thm} \label{thm3:10} Assume that $k\subset {\mathbb C}$ and that $\Gamma$ is nilpotent. Then the classes $\hat{c}_n((E,\nabla))\in H^{2n-1}((X\setminus D)_{{\rm an}}, {\mathbb C}/{\mathbb Z}(n))$ defined in \cite{E} come from well defined classes $\hat{c}_n ((E,\nabla, \Gamma))\in H^{2n-1}(X_{{\rm an}}, {\mathbb C}/{\mathbb Z}(n))$. Furthermore, $\hat{c}_n((E,\nabla,\Gamma))$ fulfill the same functoriality, additivity, restriction and enlargement-of-$\nabla$ properties as $c_n((E,\nabla, \Gamma)) \in AD^n_\infty(X)$. \end{thm} \begin{proof} We just have to use the regulator map $AD^n(X)\to H^{2n-1}(X_{{\rm an}}, {\mathbb C}/{\mathbb Z}(n))$, which is an algebra homomorphism, and which is defined in \cite[Theorem~1.7]{E2}. Of course we can also follow the same construction directly in the analytic category. \end{proof} \bibliographystyle{plain} \renewcommand\refname{References}
\section{Introduction} To use a tool for its designed purpose, it often has to be held with a grasp different from the one used to pick it up. To turn a nut using a wrench, for example, one would first pick up the wrench using the fingertips and then pull it closer to the palm while transitioning to a power grasp so that a large force can be applied (\figref{fig:in-hand-example}). It is therefore necessary to change the grasp, along with the object pose relative to the hand, between picking up and using the tool. While similar to the in-hand manipulation problem, which focuses on object {\em reposing}, the tool-use problem poses a constraint on the grasps that can be used. While for object reposing any grasp that can realize the goal pose can be used, the final grasp in the tool-use problem must enable applying a specific wrench (force/torque) to the object. In this paper, we present a planning and control framework for in-hand manipulation that can also enable the more complex dexterous manipulation problem of tool use. \begin{figure} \centering \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{figures/wrench_init_grasp.png} \end{subfigure} \vspace{0.1cm} \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{figures/wrench_final_grasp.png} \end{subfigure} \caption{Example of tool manipulation. Left: initial grasp for picking up; right: final grasp for using the tool.} \label{fig:in-hand-example} \end{figure} \section{Related Work} In order for the finger joints to stay within their limits while reposing an object in-hand, finger gaiting, sliding, or rolling a link along the object surface may be needed. While some past works utilized external forces such as the gravitational force and contact forces from the environment, e.g.
supporting the object with the palm or reposing the object through non-prehensile primitives~\cite{dafle2014extrinsic,chavan2015prehensile,karayiannidis2015hand,van2015learning,nonprehensile_liu}, we focus on in-hand manipulation using only internal forces. We also consider generic fully-actuated robotic hands, without assuming any underactuation or mechanical compliance that could help improve stability and robustness~\cite{liarokapis2017deriving,abondance2020dexterous,van2015learning}. Earlier works employed model-based approaches such as constrained optimization~\cite{liu2009dextrous,mordatch_CIO,nonprehensile_liu} to determine the hand and/or object trajectories as well as the finger-object contact interactions for simple object reorientation tasks. While these methods can generate plausible in-hand manipulation motions, including sliding and rolling, for a pre-determined task, they are too computationally expensive to run online and do not incorporate real-time feedback control, and are therefore not suitable for robotics applications. Since the dynamics of the hand-object system may be difficult to model or have large uncertainties, a data-driven learning-based approach is a promising alternative. This approach can further be divided into learning from demonstration and reinforcement learning (RL). An example of the former is Ueda et al.~\cite{ueda2010multifingered}, where cylinder reposing was achieved through direct teaching from human demonstrations. However, in addition to the dependency on costly demonstration data, grasp changes would be more challenging to teach because of the differences between human and robotic hands. Model-free RL methods can learn a policy without extensive modeling effort or human demonstrations.
Taking advantage of the extra stability afforded by supporting the object with the palm, multiple end-to-end deep RL (DRL) methods have been used for in-hand reorientation of simple objects such as a cylinder or cube~\cite{kumar2016optimalControlRL,andrychowicz2020OpenAI_dexterousManipulation}. Attempts have been made to improve the sample efficiency of these methods by augmenting demonstrations~\cite{rajeswaran2017DRL-demonstration} or using model-based RL~\cite{nagabandi2020model-based-RL}. A hybrid learning- and model-based method is a possible approach for realizing both sample efficiency and robustness. Li et al.~\cite{Hierarchical_Control} proposed a hierarchical control structure for in-hand manipulation in which a learned policy determines the motion primitives (sliding, flipping, reposing) and a model-based low-level controller executes the selected primitive. They evaluated the method on a simulated 3-fingered hand with 2 degrees of freedom (DoF) per finger, reposing a pole and a cube in a 2D vertical plane. This paper presents a hierarchical, hybrid learning- and model-based dexterous manipulation planning and control framework for applications that require not only object reposing but also a grasp suitable for tool-use, a requirement rarely considered in prior work. An RL policy, infused with domain knowledge of the physics of the problem and of the low-level controller, outputs in real time a sequence of contact addition, removal, and sliding actions that are robustly realized by the model-based controller. We speculate that the hybrid structure enables more data-efficient learning~\cite{Hierarchical_Control} than end-to-end RL approaches~\cite{andrychowicz2020OpenAI_dexterousManipulation,kumar2016optimalControlRL}, although we have not conducted a formal comparison of the amount of data required for training. We also show that, unlike a purely model-based approach, this hybrid approach can adapt to variations in the task.
We demonstrate our method in a simulated physics environment on realistic hands with 16~DoF (the Honda dexterous hand~\cite{hasegawa2022MFH} and the Allegro hand~\cite{Allegro}) and realistic tool models. Hardware evaluation is underway.
\section{Problem Definition} \label{sec:overview}
\subsection{Assumptions}
This paper concerns the problem of in-hand manipulation of a single rigid object using a dexterous robotic hand with at least four fingers in order to change the grasp while maintaining 3D force closure. We assume that the inertial and kinematic properties of the object and hand, as well as the friction coefficient between the hand links and the object, are known. We also assume that every joint is torque-controlled in both directions, and that each link is equipped with a tactile or force-torque sensor that gives the total 3-dimensional force and the center of pressure of the distributed force applied to the link surface, which determines the contact point. Currently, we also assume that object manipulation happens through the fingers only (i.e.\ no palm contact). Finally, we assume that the object's pose (position and orientation) is given by a technique such as vision-based object tracking.
\subsection{Planning Phase} \label{planning overview}
To define the planning problem, we first define a tuple called contact information, $C = \{J, \vc{c}_J, \vc{c}_O\}$, where $J$ denotes the joint (link) in contact with the object, and $\vc{c}_J$ and $\vc{c}_O$ are the contact point locations represented in the joint and object frames, respectively. We assume a point contact on each link. A grasp $G = \{C_1, C_2, \ldots\}$ is defined as a set of zero or more contact information tuples. If $G = \varnothing$, there is no contact and therefore the object is not held. To facilitate planning, we provide a set of possible grasp candidates $\set{G}_{cand}$.
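The contact information tuple and grasp defined above can be captured in a short data-model sketch (hypothetical Python; the names \texttt{ContactInfo} and \texttt{Grasp} are illustrative, not taken from an actual implementation):

```python
from dataclasses import dataclass, field
from typing import Tuple

@dataclass(frozen=True)
class ContactInfo:
    """The tuple C = {J, c_J, c_O}: a point contact on joint (link) J."""
    joint: str                             # J: joint (link) in contact
    c_joint: Tuple[float, float, float]    # c_J: contact point, joint frame
    c_object: Tuple[float, float, float]   # c_O: contact point, object frame

@dataclass
class Grasp:
    """A grasp G is a set of zero or more contact information tuples."""
    contacts: frozenset = field(default_factory=frozenset)

    def is_holding(self) -> bool:
        # G = {} means there is no contact, i.e. the object is not held.
        return len(self.contacts) > 0

# A two-contact grasp on the thumb and index distal links (illustrative values).
g = Grasp(frozenset({
    ContactInfo("thumb_distal", (0.0, 0.0, 0.01), (0.05, 0.0, 0.0)),
    ContactInfo("index_distal", (0.0, 0.0, 0.01), (-0.05, 0.0, 0.0)),
}))
print(g.is_holding())  # True
```

A grasp-candidate set $\set{G}_{cand}$ is then simply a collection of such \texttt{Grasp} objects.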
Given these definitions, the in-hand manipulation planning problem is defined as follows: \theoremstyle{definition} \newtheorem*{definition*}{Definition}
\begin{definition*}[In-hand manipulation planning] Given
\begin{itemize}
\item a set of grasp candidates $\set{G}_{cand}$,
\item initial grasp $G_s\in \set{G}_{cand}$,
\item initial and final object positions $\vc{p}_s$ and $\vc{p}_g$,
\item initial and final object orientations $\vc{R}_s$ and $\vc{R}_g$,
\item external wrench $\vc{w}_{ext}$ applied to the object at $(\vc{p}_g, \vc{R}_g)$,
\end{itemize}
find
\begin{itemize}
\item execution time $T$,
\item object reference position $\hat{\vc{p}}_O(t)$ and orientation $\hat{\vc{R}}_O(t)$ as a function of time $t\in [0, T]$, and
\item sequence of grasps $G_m\;(m=1,2,\ldots,M)$, where $M>1$ is a user-defined integer representing the number of uniformly distributed sampling times $t_m = T(m-1)/(M-1)$,
\end{itemize}
such that
\begin{itemize}
\item the object does not collide with the stationary part of the hand (i.e.\ the palm) or the environment,
\item the contact points in $G_m$ are reachable, and
\item the contacts in $G_m$ are able to provide the wrench required to generate the acceleration at $t=t_m$ along the object reference trajectory $\hat{\vc{p}}_O(t),\hat{\vc{R}}_O(t)$.
\end{itemize}
\end{definition*}
The object trajectory is computed in two steps:
\begin{enumerate}
\item Path planning: obtain a collision-free path of the object such that every waypoint has at least one grasp in $\set{G}_{cand}$ in which all contact points can be reached.
\item Trajectory generation: obtain $\hat{\vc{p}}_O(t), \hat{\vc{R}}_O(t)$ by optimizing the timestamp of each waypoint obtained in step~1). The timestamp of the last waypoint becomes the completion time $T$. Also compute $\hat{\vc{p}}_{Om} = \hat{\vc{p}}_O(t_m)$ and $\hat{\vc{R}}_{Om} = \hat{\vc{R}}_O(t_m)\;(m=1,2,\ldots,M)$.
\end{enumerate}
Fig.~\ref{fig:in-hand-manipulation-framework} shows the structure of the planning phase. Details of the planners are provided in Section~\ref{sec:planning}.
\begin{figure} \begin{center} \includegraphics[scale=0.35]{figures/planner_controller_framework.pdf} \end{center} \caption{In-hand manipulation framework} \label{fig:in-hand-manipulation-framework} \end{figure}
\subsection{Control Phase}
The structure of the controller, which executes the in-hand manipulation plan generated in the planning phase, is also shown in Fig.~\ref{fig:in-hand-manipulation-framework}. The {\em grasp sequence manager} block runs at a constant time interval and determines whether a grasp transition should take place. For instance, with a learning-based method, the policy is called to determine whether a contact should be added, removed, or slid, or whether the current grasp should be maintained. The remainder of the controller consists of the following three low-level tracking controllers (see Section~\ref{sec:controller} for details):
\begin{enumerate}
\item Object tracking: compute the contact forces applied to the object such that it tracks the planned trajectory,
\item Contact force tracking: compute the joint reference positions to realize the contact forces from 1), and
\item Contact state tracking: realize the contact states dictated by the planned contact sequence.
\end{enumerate}
\section{Preliminaries} \label{sec:preliminaries}
\subsection{Inverse Kinematics} \label{sec:ik}
To compute the joint positions $\vc{q}$ for a given grasp with $K$ contacts, $C_k = \{J_k,\vc{c}_{Jk},\vc{c}_{Ok}\}\; (k=1,2,\ldots,K)$, and object pose $(\vc{p}_O, \vc{R}_O)$, we apply an iterative algorithm: \begin{gather} \Delta \vc{q} = \arg\min \sum_{k=1}^K ||\vc{J}_{Ck}\Delta \vc{q} - \Delta \vc{p}_k ||^2\label{eq:qp-ik} \end{gather} subject to the element-wise bounds \begin{equation} \frac{1}{k_{IK}}(\vc{q}_{min} - \vc{q}) \leq \Delta\vc{q} \leq \frac{1}{k_{IK}}(\vc{q}_{max} - \vc{q}) \end{equation} where \begin{eqnarray} \Delta \vc{p}_k &=& (\vc{p}_O + \vc{R}_O \vc{c}_{Ok}) - \left(\vc{p}_{Jk}(\vc{q}) + \vc{R}_{Jk}(\vc{q}) \vc{c}_{Jk}\right) \label{eq:ik-position-error} \end{eqnarray} and $k_{IK}$ is a positive gain, $\vc{J}_{Ck}$ is the Jacobian matrix of contact point $k$ with respect to $\vc{q}$, $\vc{p}_{Jk}(\vc{q})$ and $\vc{R}_{Jk}(\vc{q})$ denote the pose of joint $J_k$ at $\vc{q}$, and $\vc{q}_{max}$ and $\vc{q}_{min}$ are the vectors of maximum and minimum joint positions, respectively. Using the total IK error defined as $d(\vc{q}, \vc{p}_{O}, \vc{R}_O, G) = \sum_{k=1}^K ||\Delta\vc{p}_k ||^2$, the iteration terminates with $\vc{q}^*$ and $d^*$ if:
\begin{itemize}
\item $d(*)$ is below a predefined threshold, or
\item $d(*)$ increases from the previous iteration, or
\item any of the links makes contact with the environment.
\end{itemize}
The maximum IK error for grasp $G$ at sample $m$ is: \begin{equation} \Delta p^*(m, G) = \max_{k=1,2,\ldots,K} ||\Delta \vc{p}_k ||^2.
\end{equation}
\subsection{Contact Force Optimization} \label{sec:contact-force-optimization}
Using the Newton-Euler equations of 3D rigid-body dynamics, we can compute the total force and torque to be applied to the object moving with angular velocity $\vc{\omega}_O$ to generate linear and angular accelerations $(\dot{\vc{v}}_O,\; \dot{\vc{\omega}}_O)$ by \begin{eqnarray} \hat{\vc{f}}_{total} &=& {\mathcal M} \dot{\vc{v}}_{O} - \vc{f}_E\\ \hat{\vc{\tau}}_{total} &=& {\mathcal I}\dot{\vc{\omega}}_{O} + \vc{\omega}_{O} \times {\mathcal I} \vc{\omega}_{O} - \vc{\tau}_E \end{eqnarray} where ${\mathcal M}$ and ${\mathcal I}$ are the object mass and inertia matrix, and $\vc{f}_E$ and $\vc{\tau}_E$ are the applied external force and torque. Given a grasp $G$, we optimize the contact forces $\vc{f}_k$: \begin{equation} \vc{f}^*_1,\vc{f}^*_2,\ldots,\vc{f}^*_K = \arg\min Z_f \label{eq:qp-force} \end{equation} subject to the constraints \begin{eqnarray} \boldsymbol{c}^T_{kl}\boldsymbol{f}_k \leq 0 \; (k=1,2,\ldots,K;~ l=1,2,\ldots,L) \label{eq:friction_inequality_const} \end{eqnarray} where \begin{equation} \resizebox{0.85\columnwidth}{!}{% $Z_f = ||\hat{\vc{f}}_{total} - \sum_{k=1}^K \vc{f}_k ||^2 + w_t ||\hat{\vc{\tau}}_{total} - \sum_{k=1}^K \vc{p}_{Ok} \times \vc{f}_k||^2$} \end{equation} $\vc{p}_{Ok} = \vc{R}_{Om} \vc{c}_{Ok}$, $L$ is the number of sides of the pyramid approximating the friction cone, $w_t>0$ is a user-defined weight, and $\vc{c}_{kl}$ is the normal vector of the $l$-th side of the pyramid at contact $k$, which can be computed as $\vc{c}_{kl} = \left( \vc{t}_{1k}\; \vc{t}_{2k}\; \vc{n}_k \right) \left( \cos \theta_l\; \sin \theta_l\; -\mu \right)^T$ where $\mu$ is the friction coefficient, $\vc{t}_{1k}$ is a tangent vector at contact $k$, $\vc{t}_{2k} = \vc{t}_{1k}\times \vc{n}_k$, and $\theta_l = 2\pi l / L$. The inequality constraint (\ref{eq:friction_inequality_const}) applies to sticking contacts.
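As a concrete illustration of the pyramid normals $\vc{c}_{kl}$ and the two terms of $Z_f$, consider the following minimal numerical sketch (hypothetical Python with NumPy; the actual optimization is solved as a QP with ALGLIB, so this only evaluates the quantities involved):

```python
import numpy as np

def pyramid_normals(t1, t2, n, mu=1.0, L=12):
    """Side normals c_kl of the L-sided friction pyramid at one contact,
    following c_kl = (t1 t2 n) (cos th_l, sin th_l, -mu)^T, th_l = 2*pi*l/L."""
    B = np.column_stack([t1, t2, n])          # contact frame basis
    thetas = 2.0 * np.pi * np.arange(1, L + 1) / L
    return np.array([B @ np.array([np.cos(th), np.sin(th), -mu]) for th in thetas])

def inside_cone(f, normals):
    """A sticking contact force must satisfy c_kl^T f <= 0 for every side."""
    return bool(np.all(normals @ f <= 1e-9))

def wrench_residual(f_list, p_list, f_total, tau_total, w_t=1.0):
    """The cost Z_f: squared force and (weighted) torque residuals of a grasp."""
    f_sum = sum(f_list)
    tau_sum = sum(np.cross(p, f) for p, f in zip(p_list, f_list))
    return (np.linalg.norm(f_total - f_sum) ** 2
            + w_t * np.linalg.norm(tau_total - tau_sum) ** 2)

# Contact with normal +z and mu = 1: a mostly-normal push satisfies every
# side constraint, while a strongly tangential force violates at least one.
N = pyramid_normals(np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0]))
print(inside_cone(np.array([0.1, 0.0, 1.0]), N))   # True
print(inside_cone(np.array([2.0, 0.0, 1.0]), N))   # False
```

With $\mu=1$, a force whose tangential component exceeds its normal component lies outside the friction cone, which is what the second query illustrates.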
For a sliding contact, we instead constrain $\vc{f}_k$ to lie on the edge of the friction cone, with the tangent component in the direction of desired sliding. We also define the following variables related to contact forces: \begin{equation} \hat{f}_{total}(m) = || \hat{\vc{f}}_{total}||^2 + w_t|| \hat{\vc{\tau}}_{total}||^2 \end{equation} \begin{equation} \resizebox{0.95\columnwidth}{!}{% $e^*(m, G) = ||\hat{\vc{f}}_{total} - \sum_{k=1}^K \vc{f}^*_k ||^2+ w_t ||\hat{\vc{\tau}}_{total} - \sum_{k=1}^K \vc{p}_{Ok} \times \vc{f}^*_k||^2$} \end{equation} \begin{equation} f^*(m, G) = \sum_{k=1}^K ||\vc{f}^*_k ||^2 \end{equation} where the argument $m$ indicates that the object pose, velocity, and acceleration at sample $m$ are used. Essentially, $e^*(m,G)$ represents the residual wrench that grasp $G$ cannot generate. We use the ALGLIB library~\cite{bib-alglib} to solve the quadratic programs of \eqref{eq:qp-ik} and \eqref{eq:qp-force}.
\section{In-Hand Manipulation Planning} \label{sec:planning}
The initial grasp can be found by a grasp planner~\cite{bohg2013data} or from prior knowledge such as demonstrations. In our implementation, we generate $\set{G}_{cand}$ by choosing 1--3 possible contact points for each middle and distal joint, and then enumerating all combinations of contacts that do not cause collisions.
\subsection{Object Path Planning}
We chose PRM*~\cite{karaman2011prmstar}, a sampling-based motion planning algorithm based on the Probabilistic Roadmap (PRM)~\cite{kavraki1996prm}, since it worked best for our problem. PRM builds a roadmap of valid samples in the configuration space by random sampling; a valid path can be found by searching the roadmap for a path that connects the start and goal configurations. PRM* extends PRM to find an optimal path. Our implementation uses the Open Motion Planning Library~\cite{sucan2012the-open-motion-planning-library}. The configuration space is a 6D space representing the 3D position $\vc{p}$ and orientation $\vc{R}$ of the object.
A sampled configuration is valid if the object pose satisfies the following:
\begin{enumerate}
\item The object does not collide with the environment (floor) or the fixed part (palm) of the robot hand.
\item There exists at least one grasp $G$ for which $d^*(\vc{p}, \vc{R}, G)$ is smaller than a threshold.
\end{enumerate}
In this work, we use the path length as the cost function to be minimized.
\subsection{Object Trajectory Generation} \label{sec:trajectory-optimization}
Let $N$ denote the number of waypoints of the path obtained by path planning. The goal is to determine the timestamp $t_i$ of waypoint $i\;(i=1,2,\ldots,N)$ with constraints $t_1=0$, $t_{i-1} + \Delta t_{min} \leq t_i \;(2 \leq i \leq N)$ and $t_N \leq T_{max}$, where $\Delta t_{min} > 0$ is the minimum interval between waypoints and $T_{max}$ is the maximum duration. Given a set of timestamps, we can interpolate the waypoints with piecewise cubic B-splines such that the trajectory passes through the initial and final poses with zero velocity. As a result, we obtain 7 sets of cubic B-splines for the 3 position components and 4 quaternion components. We then use these B-splines to sample the whole trajectory at $M$ sample points with a uniform time interval $t_N / (M-1)$. We formulate the problem of determining the timestamps as a numerical optimization problem with cost function \begin{equation} Z_1 = \sum_{m=1}^M c^*(m) \end{equation} where $c^*(m)$ is the cost at sample $m\;(1\leq m \leq M)$, obtained as $c^*(m) = \min_{G\in \set{G}_{cand}} c(m,G)$, where $c(m,G)$ is the cost of using grasp $G$ at sample $m$. The interpolated trajectory gives the position $\vc{p}_{Om} = \vc{p}_O(t_m)$, orientation $\vc{R}_{Om}=\vc{R}_O(t_m)$, and linear and angular velocities and accelerations $\vc{v}_{Om}$, $\vc{\omega}_{Om}$, $\dot{\vc{v}}_{Om}$ and $\dot{\vc{\omega}}_{Om}$ at sample $m$.
Using these quantities, $c(m,G)$ is computed by \begin{equation} \resizebox{0.88\columnwidth}{!}{% $c(m, G) = d^*(\vc{p}_{Om},\vc{R}_{Om},G) + w_e e^*(m,G) + w_f f^*(m,G) \label{eq:sample-set-cost}$} \end{equation} where $w_e$ and $w_f$ are user-defined weights. In our implementation, we use the Constrained Optimization by Linear Approximations (COBYLA) algorithm implemented in the NLopt library~\cite{bib-nlopt}.
\subsection{Online Grasp Sequence Planning} \label{sec:learning}
We use DRL to find a policy that determines a grasp that enables realizing the desired object trajectory while counteracting the external wrench $\vc{w}_{ext}$. In the current implementation, the grasp generated by the planner differs from the previous grasp by only one joint at a time. Also, while $\vc{w}_{ext}$ could in general be time-dependent to represent external disturbances experienced by the tool, in this work we limit $\vc{w}_{ext}$ to the final expected wrench to be counteracted by the tool. We formulate the grasp sequence planning problem as a Markov Decision Process (MDP), dependent on the output of the trajectory optimization problem, as follows:
\begin{itemize}[leftmargin=*]
\item {\bf State $\vc{s}_m$: } Robot joint positions $\vc{q}$, reference tool pose $(\hat{\vc{p}}_O(m), \hat{\vc{R}}_O(m))$, goal pose $(\hat{\vc{p}}_g,\hat{\vc{R}}_g)$, current sample $m$, current grasp $G_m$, expected external wrench $\vc{w}_{ext}$.
\item {\bf Action $\vc{a}_m$: } grasp change command selected from the following discrete action space: \begin{align} \scriptsize &\mathcal{A} = \{command(C) \mid C \in \mathcal{G}\ \textrm{and} \\ &command \in \{add,\ remove,\ slide\ to,\ no\ change\}\} \nonumber \end{align} \normalsize which consists of $\sum_{k=1}^{N_J} (2n_{C(k)}+1)+1$ actions, where $n_{C(k)}$ is the number of contact candidates on link $k$.
\item {\bf Reward: } A large negative reward $R\ll 0$ if the tool or robot comes into collision with the environment (\ref{reward_weights}).
Otherwise, the reward is computed from metrics quantifying the kinematic realizability of the commanded grasp $\hat{G}_m$, its ability to realize the desired object trajectory $(\hat{\vc{p}}_O(m), \hat{\vc{R}}_O(m))$, and its ability to counteract the external wrench $\vc{w}_{ext}$: \begin{multline} \label{eq:reward} R(\vc{s}_m, \vc{a}_m) = -w_1 \Delta p^*(m,\hat{G}_m)- w_2 e^*(m,\hat{G}_m)\\ + \frac{w_3 \hat{f}_{total}(m)}{f^*(m,\hat{G}_m)} - t(m,G_m)-s(m,G_m,\hat{G}_{m}) \end{multline} where $w_i > 0$ $(i=1,2,3)$ are user-defined weights. The first two terms encourage the agent to avoid actions that result in large IK or wrench errors, and the third term rewards grasps that require smaller contact forces. The domain knowledge about the physics of the problem and the low-level controller is infused into the learning through the reward terms defined above. $t(m,G_{m})$ is a penalty to discourage invalid transitions: \begin{equation} \resizebox{0.9\columnwidth}{!}{% $ t(m,G_{m}) = \left\{ \begin{array}{ll} 2 & \mbox{if } (a_m=add(C) \, \vee \, a_m=slide(C)) \; \wedge \; C \in G_{m}\\ 2 & \mbox{if } a_m=remove(C) \; \wedge \; C \notin G_{m}\\ 10 & \mbox{if } a_m=slide(C) \; \wedge \; C(J) \notin G_{m}\\ 0 & \mbox{otherwise.} \end{array} \right. $} \end{equation} The last term is a sliding-specific penalty to encourage gaiting over sliding when the sliding distance is large or when the contacts are on different surfaces of the object: \begin{equation} s(m,G_m,\hat{G}_{m}) = w_4 \Delta s+w_5 \theta_n \end{equation} where $\Delta s$ is the sliding distance and $\theta_n$ is the angle between the object normals at the two contact points.
\item {\bf Transition: } The system proceeds to $s_{m+1}$ according to the reference trajectory and updates the grasp according to action $a_m$. The episode ends if fewer than two links are in contact with the object, or if the maximum timestep is reached.
\end{itemize}
\section{Controller} \label{sec:controller}
\subsection{Object Tracking} \label{Object_tracking}
The desired object acceleration given the current pose ($\vc{p}_{O}$, $\vc{R}_{O}$) and reference object pose ($\hat{\vc{p}}_O(m)$, $\hat{\vc{R}}_O(m)$) is: \begin{eqnarray} \hat{\dot{\vc{v}}}_{O} &=& k_{P1} (\hat{\vc{p}}_O(m) - \vc{p}_{O}) - k_{D1} \hat{\vc{v}}_{O}\\ \hat{\dot{\vc{\omega}}}_{O} &=& k_{P2}\Delta \vc{r}_{O} - k_{D2} \hat{\vc{\omega}}_{O} \end{eqnarray} where $k_{P1}$, $k_{D1}$, $k_{P2}$, $k_{D2} > 0$ are feedback gains, $\Delta \vc{r}_{O}$ is a vector given by $\vc{a}\sin \theta$ where $\vc{a}$ and $\theta$ are the rotation axis and angle to transform $\vc{R}_{O}$ to $\hat{\vc{R}}_O(m)$, and $\hat{\vc{v}}_{O}$ and $\hat{\vc{\omega}}_{O}$ are obtained by integrating $\hat{\dot{\vc{v}}}_{O}$ and $\hat{\dot{\vc{\omega}}}_{O}$, respectively. We do not use the measured object velocity because it is likely to be noisy; instead, we use the integral of the desired object acceleration for the damping term. Using $\hat{\dot{\vc{v}}}_O$, $\hat{\dot{\vc{\omega}}}_O$ and $\hat{\vc{\omega}}_O$, we optimize the contact forces for the given grasp using (\ref{eq:qp-force}).
\subsection{Contact Force Tracking} \label{sec:contact-force-tracking}
The optimized contact forces $\vc{f}^*_k$ are tracked by a controller similar to admittance control, as follows. Let us assume that contact point $k$ is on the $j$-th finger and there is no other contact point on that finger, and let $\vc{J}_{Ckj}$ denote the Jacobian matrix of the position of contact point $k$ with respect to the joint positions of finger $j$. The contact force error at contact point $k$ can be compensated by the joint torque: \begin{equation} \Delta \vc{\tau}_j = \vc{J}^T_{Ckj} (\vc{f}^*_k - \vc{f}_k) \end{equation} which can be produced by the reference joint position offset: \begin{equation} \Delta \vc{q}_j = \vc{K}^{-1}_{Pj} \Delta \vc{\tau}_j.
\label{eq:joint-position-displacement} \end{equation} where $\vc{K}_{Pj}$ denotes a diagonal matrix whose elements are the proportional gains of the joints of finger $j$. Directly adding $\Delta \vc{q}_j$ to the current joint reference positions will cause an issue if we want the object to track a given trajectory, because the finger also has to follow the object motion. To solve this issue, we add the contact point displacement due to the object motion as well: \begin{equation} \Delta \vc{p}_k = \vc{J}_{Ckj} \Delta \vc{q}_j + \Delta t (\hat{\vc{v}}_{O} + \hat{\vc{\omega}}_{O} \times \vc{p}_{Ok}) \end{equation} which is used in place of \eqref{eq:ik-position-error} in a single iteration of the IK algorithm in Section~\ref{sec:ik} to obtain the new joint reference positions, which are tracked by a proportional-derivative controller with gravity compensation.
\subsection{Contact State Change}
A grasp change involves adding a new contact, moving a contact point while maintaining the contact (e.g.\ sliding), or removing an existing contact. To add a new contact, we use the IK algorithm in Section~\ref{sec:ik} to move the desired contact point on the finger toward the contact point on the object. Once a contact force is detected, the finger is controlled to maintain a small constant contact force (0.1~N in our experiments) using the force tracking controller described in Section~\ref{sec:contact-force-tracking} until the contact force stays above a threshold (0.05~N) for a given duration (0.1~s). The reference object pose is fixed until the new contact is established. For sliding towards a new contact point, the IK algorithm in (\ref{eq:ik-position-error}) is modified by projecting $\Delta \vc{p}_k$ onto the object surface: \begin{equation} \Delta \vc{p}_k := \Delta \vc{p}_k - (\vc{n}_k^T \Delta \vc{p}_k) \vc{n}_k \end{equation} where $\vc{n}_k$ is the normal vector at $C_k$.
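This projection simply removes the component of the IK step along the contact normal, which can be sketched in a few lines (hypothetical Python with NumPy, assuming a unit-length normal):

```python
import numpy as np

def project_to_surface(dp, n):
    """Project the IK step dp onto the tangent plane of the object surface:
    dp := dp - (n^T dp) n, with n the unit contact normal."""
    dp = np.asarray(dp, dtype=float)
    n = np.asarray(n, dtype=float)
    return dp - np.dot(n, dp) * n

# A step with a normal (+z) component keeps only its tangential part.
dp = project_to_surface([0.01, 0.0, 0.02], [0.0, 0.0, 1.0])
print(dp)  # the z component is removed; [0.01, 0, 0] remains
```

The projected step is, by construction, orthogonal to the normal, so a single IK iteration with this step keeps the contact point on the surface to first order.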
A hybrid force and position controller realizes the sliding by performing force control in the normal direction to maintain contact, while performing position control in the sliding direction. When an existing contact needs to be removed, the finger is controlled by the IK algorithm to move the contact point away from the object in the normal vector direction. The new grasp is declared established when no force is detected at the contact being removed for a given period (1.0~s).
\section{Experiments} \label{sec:experiments}
\subsection{Experimental Setup}
\begin{figure} \centering \begin{subfigure}[b]{0.11\textwidth} \centering \includegraphics[width=\textwidth]{figures/wrench_grasp_seq_1.png} \end{subfigure} \vspace{0.1cm} \begin{subfigure}[b]{0.11\textwidth} \centering \includegraphics[width=\textwidth]{figures/wrench_grasp_seq_2.png} \end{subfigure} \begin{subfigure}[b]{0.11\textwidth} \centering \includegraphics[width=\textwidth]{figures/wrench_grasp_seq_3.png} \end{subfigure} \begin{subfigure}[b]{0.11\textwidth} \centering \includegraphics[width=\textwidth]{figures/wrench_grasp_seq_5.png} \end{subfigure} \begin{subfigure}[b]{0.11\textwidth} \centering \includegraphics[width=\textwidth]{figures/screwdriver_grasp_seq_1.png} \end{subfigure} \begin{subfigure}[b]{0.11\textwidth} \centering \includegraphics[width=\textwidth]{figures/screwdriver_grasp_seq_3.png} \end{subfigure} \begin{subfigure}[b]{0.11\textwidth} \centering \includegraphics[width=\textwidth]{figures/screwdriver_grasp_seq_4.png} \end{subfigure} \begin{subfigure}[b]{0.11\textwidth} \centering \includegraphics[width=\textwidth]{figures/screwdriver_grasp_seq_5.png} \end{subfigure} \caption{Sample sequence of grasp changes generated by the framework, from left (initial grasp) to right (final grasp)} \label{fig:grasp_sequence} \end{figure}
We validate the planning and control framework in simulation using two representative in-hand manipulation tasks with realistic tool models.
In both tasks, the objects initially lie horizontally on a table (\figref{fig:grasp_sequence}). The nominal tasks are:
\begin{itemize}
\item Wrench: pick up with a prismatic 2-finger grasp~\cite{feix2015grasp_taxonomy} and lift about 0.08~m while gradually transitioning to a power grasp that wraps the index and middle fingers around the wrench, so that a large torque can be applied to turn a nut.
\item Screwdriver: pick up with a prismatic 3-finger grasp, lift about 0.13~m, and rotate $90^\circ$ to a vertical pose while transitioning to a tripod grasp, so that a screw placed vertically on the table can be rotated. In this example, the hand is also lifted about 0.14~m.
\end{itemize}
We use an anthropomorphic robotic hand model with 4 fingers, each consisting of 4~DoF corresponding to the 2~DoF of the human MCP joint and 1~DoF each for the PIP and DIP joints~\cite{hasegawa2022MFH}. The simulator computes the contact forces using the contact model proposed by Todorov~\cite{todorov2011convex}, with the friction cone of friction coefficient $\mu=1$ approximated by an $L=12$-sided pyramid, and then computes the joint accelerations using the unit vector method~\cite{walker1982efficient}, which are integrated by a 4th-order Runge-Kutta integrator with a 1~ms timestep. The low-level controller (Section~\ref{sec:controller}) and grasp sequence manager (\figref{fig:in-hand-manipulation-framework}) run at 100~Hz and 10~Hz, respectively, on separate threads. We train the learning-based planner using Proximal Policy Optimization (PPO)~\cite{schulman2017PPO} with 15 parallel environments, using the implementation in TF-Agents~\cite{TFAgents}. The discount factor is 0.99. For the entropy regularization coefficient and learning rate hyperparameters, we select higher values at the beginning of training (0.5 and $5\times10^{-4}$, respectively) to allow more exploration and faster learning~\cite{ahmed2019entropy}.
We decrease both parameters linearly as the training progresses to promote convergence. To improve robustness, we perform domain randomization~\cite{tobin2017domain} during training by adding random variations in the range of [-0.01, 0.01]~(m) for position and [-0.1, 0.1]~(rad) for orientation to the given initial $(\vc{p}_s,\vc{R}_s)$ and final $(\vc{p}_g,\vc{R}_g)$ object poses. For the wrench example, we train two policies with different external torques around the vertical axis applied at the last sample (i.e.\ loosening and tightening tasks): policy $RL_{-1}$ with $-1$\thinspace Nm and $RL_{1}$ with $1$\thinspace Nm. We train one screwdriver policy without external torque, since the final desired grasp for this task does not depend on the direction of rotation of the screwdriver. Each policy is trained for $10^6$ iterations with 15 episodes each (equivalent to 347 days of experience), which takes about 4 days on a desktop computer with an NVIDIA\textsuperscript{\textregistered} Quadro\textsuperscript{\textregistered} P2200 GPU. Querying the trained policy at run time takes about 1~s. Equation (\ref{reward_weights}) shows the weights used for the reward terms in (\ref{eq:reward}). As the baseline, we compare our method with a model-based approach that plans the optimal sequence of grasps offline using dynamic programming (DP) with the same cost function as the RL. DP takes about 150~s to compute on the same machine; such a long computation time makes this approach unsuitable for real-time use.
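The linear schedule for the entropy-regularization coefficient and the learning rate can be sketched as follows (hypothetical Python; the initial values 0.5 and $5\times10^{-4}$ are from the text, while the final value of zero is an illustrative assumption):

```python
def linear_decay(step, total_steps, start, end=0.0):
    """Linearly interpolate from `start` to `end` as training progresses."""
    frac = min(max(step / total_steps, 0.0), 1.0)
    return start + (end - start) * frac

TOTAL = 10**6  # training iterations, as in the text

def entropy_coef_at(step):
    return linear_decay(step, TOTAL, 0.5)      # high early for exploration

def lr_at(step):
    return linear_decay(step, TOTAL, 5e-4)     # high early for faster learning

print(entropy_coef_at(0))      # 0.5
print(entropy_coef_at(TOTAL))  # 0.0
```

Both schedules start high to encourage exploration and fast learning, then decay to their final values by the end of training.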
\scriptsize \begin{align}\label{reward_weights} w_1 &= \left\{ \begin{array}{ll} 100/\Delta p^*(m,G_m) & \mbox{if } \Delta p^*(m,G_m)\geq 10^{-4}\\ 200 & \mbox{otherwise} \end{array} \right.\\ w_2 &= \left\{ \begin{array}{ll} 100/e^*(m,G_m) & \mbox{if } e^*(m,G_m) \geq 1\\ 50 & \mbox{otherwise} \nonumber \end{array} \right.\\ w_3 &= \left\{ \begin{array}{ll} 2000 & \mbox{if } e^*(m,G_m) < 1 \wedge (||\vc{\tau}_{Em}||>0 \vee ||\vc{f}_{Em}|| >0)\\ 10 & \mbox{if } e^*(m,G_m) < 1 \wedge (||\vc{\tau}_{Em}||=0 \wedge ||\vc{f}_{Em}|| =0)\\ 0 & \mbox{otherwise} \nonumber \end{array} \right.\\ w_4 &= 10 \nonumber \\ w_5 &= \left\{ \begin{array}{ll} \theta_n/2 & \theta_n<100^\circ\\ 10 & \mbox{otherwise} \nonumber \end{array}\right. \end{align} \normalsize
\subsection{Qualitative Evaluation}
When evaluated in simulation, the proposed approach successfully performed the desired tool-use tasks (videos attached). A comparison of the generated grasp sequences for some example tasks is shown in Table~\ref{Tab:grasp sequence}, where n:no\thinspace change, a:add, r:remove, s:slide, T:thumb, I:index, M:middle, R:ring, d:distal\thinspace link, m:middle\thinspace link, and subscripts show the contact candidates. The $-1$ and $1$ subscripts in the policy denote the applied external torque, and g means gaiting\thinspace only (i.e.\ no sliding primitive). See Fig.~\ref{fig:contacts} for the current choice of contact locations on the tools (the wrench is reversed to show the points on the bottom surface). As can be seen, for identical tasks, RL generates sequences similar to those of DP, which suggests that RL is capable of learning a near-optimal policy. In the wrench task, for example, both methods maintain the initial grasp until the sixth sample, when the wrench is elevated enough for the fingers to be placed beneath it without colliding with the floor. When the sliding primitive is enabled, RL carefully decides between gaiting and sliding ($RL_{-1}$).
Adding the sliding primitive speeds up the learning process by 30--50\% by reducing the number of actions to learn. We also observe that the RL planner can produce different final grasps for different external torques, verifying the ability of the planner to choose a grasp suitable for the task. For example, when the external torque applied to the wrench simulates the requirement for loosening the nut, the thumb is placed farther away from the wrench head, whereas for tightening the nut the thumb sits closer to the wrench head. By applying the algorithm to the Allegro hand, we showed that the method is transferable to different hand models. \begin{table} \centering \caption{Commanded grasp changes for different tasks starting from their initial grasps.} \label{Tab:grasp sequence} \resizebox{\linewidth}{!}{ \begin{tabular}{@{\hskip0pt}l@{\hskip2pt}|l@{\hskip2pt}|@{\hskip2pt}l@{\hskip0.5pt}} \hline task & policy & grasp sequence\\ \hline\hline \multirow{4}{*}{\textrm{wrench}} & $DP_{g_{-1}}$ & [$n$, $n$, $n$, $n$, $n$, $a(R_{d_1})$, $r(M_{d_1})$, $a(M_{d_2})$, $n$, $r(I_{d_1})$, $a(I_{d_2})$, $a(I_{m_1})$, $r(M_{d_2})$, $a(M_{d_3})$, $a(M_{m_1})$, $n$]\\ \cline{2-3} & $RL_{g_{-1}}$ & [$n$, $n$, $n$, $n$, $n$, $a(R_{d_1})$, $r(M_{d_1})$, $a(M_{d_2})$, $r(I_{d_1})$, $a(I_{d_2})$, $n$, $n$, $a(I_{m_1})$, $r(M_{d_2})$, $a(M_{d_3})$, $a(M_{m_1})$] \\ \cline{2-3} & $RL_{-1}$ & [$n$, $n$, $n$, $n$, $n$, $a(R_{d_1})$, $s(M_{d_2})$, $n$, $r(I_{d_1})$, $a(I_{d_2})$, $n$, $n$, $a(I_{m_1})$, $s(M_{d_3})$, $a(M_{m_1})$, $n$] \\ \cline{2-3} & $RL_{1}$ & [$n$, $n$, $n$, $n$, $n$, $a(R_{d_1})$, $s(M_{d_2})$, $n$, $r(I_{d_1})$, $a(I_{d_2})$, $a(I_{m_1})$, $s(M_{d_3})$, $a(M_{m_1})$, $n$, $s(T_{d_2})$, $n$]\\ \hline \textrm{screw}& $DP$ & [$n$, $s(M_{d_2})$, $n$, $n$, $n$, $n$, $n$, $r(R_{d_1})$, $s(I_{d_2})$, $n$, $n$, $n$, $n$, $n$, $n$]\\ \cline{2-3} \textrm{driver} & $RL$ & [$n$, $n$, $n$, $n$, $n$, $r(R_{d_1})$, $n$, $n$, $n$, $n$, $n$, $n$, $s(M_{d_2})$, $s(I_{d_2})$, $n$, $n$]\\ \hline \end{tabular} }
\end{table} \begin{figure} \centering \includegraphics[scale=0.09]{figures/contacts.png} \caption{Object contact points} \label{fig:contacts} \end{figure} \subsection{Robustness Evaluation} We analyze the robustness of each method by adding random variations in the ranges of [-0.03,0.03]~(m) for the horizontal position and [-20,20]~(deg) for the yaw rotation of the start pose, and [-0.025, 0.025]~(m) for the horizontal position and [-0.03, 0.01]~(m) for the vertical position of the goal pose. We divide the trials into two groups: {\em medium variation} if all variations are within [-0.01, 0.01]~(m) for position and [-10,10]~(deg) for orientation, and {\em large variation} otherwise. We run 50 trials for each object-method pair (25 in each variation group). A trial is a failure if one or more of the following events occur: 1) the object is dropped, 2) the hand makes an unintended collision with the object or environment, 3) the final object pose has a position error larger than 0.005~m or an orientation error larger than 0.1~rad. \begin{table} \centering \caption{Robustness comparison between the proposed hybrid method and model-based method for different tasks} \label{Tab:success rate} \resizebox{\linewidth}{!}{ \begin{tabular}{l|c|c|c|c|c} \hline \multirow{3}{*}{task} & \multirow{3}{*}{planner} & \multicolumn{2}{c|}{medium variation} & \multicolumn{2}{c}{large variation}\\ \cline{3-6} & & success & orientation error & success & orientation error \\ & & rate & (radians) & rate & (radians) \\\hline \multirow{2}{*}{wrench} & RL\textsubscript{1} & 64\% & 0.060 & 48\% & 0.207 \\ & DP & 40\% & 0.085 & 20\% & 0.341\\ \hline \multirow{2}{*}{screwdriver}& RL & 76\% & 0.068 & 56\% & 0.085\\ & DP & 84\% & 0.103 & 40\% & 0.257\\ \hline \end{tabular} } \end{table} Table~\ref{Tab:success rate} summarizes the success rate and the orientation error at the end of the trajectory under the applied external torque, for each set of 25 trials for each combination of task, planner, and variation 
level. Position errors are negligible because no external force is applied in the current experiments. In both variation groups, RL has higher robustness thanks to real-time adaptation of the grasp sequence as well as domain randomization during training. Naturally, the success rate decreases as variation increases. In addition to not being exposed to variations of such magnitude during planning or training, another possible reason is that some of the desired contact points may no longer be reachable. As a result, the realized grasp has large contact position errors, which in turn can cause large wrench errors and tilting or dropping of the object. Possible solutions include using a larger set of possible contact points or non-prehensile manipulation primitives during the planning phase, and adaptively modifying the contact points during control. \section{Conclusion} \label{sec:conclusion} In this paper, we presented a framework for robust and data-efficient in-hand tool manipulation where, in addition to object re-posing, achieving a final grasp that enables tool use is required. The learning-based grasp sequence planner, infused with knowledge about the physics of the problem and the low-level controller, can successfully infer optimal contact transitions that are robustly realized by the controller, and can react to variations introduced at run time. We conducted simulation experiments on two in-hand manipulation tasks using realistic four-fingered robotic hands and object models, and showed that the approach can successfully be applied to different hands, objects, and tasks. Future work includes comparing performance and data efficiency against end-to-end learning, as well as hardware implementation~\cite{Allegro,hasegawa2022MFH}. \bibliographystyle{IEEEtran}
\section{Introduction} The theory of variable-length codes, one of the most studied areas of coding theory, continues to play an important role not only in the evolution of formal languages, but also in some application areas of computer science such as data compression. The aim of this paper is to continue developing and enriching this theory with new results, along with showing their effectiveness in concrete applications. Specifically, we continue our study of {\it adaptive codes}, which have been recently presented in \cite{t:1,t:2} as a new class of non-standard variable-length codes. Intuitively, an adaptive code of order $n$ associates a codeword to the symbol being encoded depending on the previous $n$ symbols in the input data string. Generalized adaptive codes (GA codes, for short) have also been presented in \cite{t:1,t:2} not only as a new class of non-standard variable-length codes, but also as a natural generalization of adaptive codes of any order. Both classes are described in detail in section 2. Then, we show that adaptive Huffman encodings and Lempel-Ziv encodings are particular cases of encodings by GA codes (sections 3 and 4). In section 5, we show that any $(n,1,m)$ convolutional code satisfying a certain condition can be modelled as an adaptive code of order $m$. This result is exploited further in section 6, where an efficient cryptographic scheme based on convolutional codes is described. An insightful analysis of this cryptographic scheme is provided in the same section. In sections 7 and 8, we extend adaptive codes to $(p,q)$-adaptive codes, and present a new class of variable-length codes, called adaptive time-varying codes. In the remainder of this introductory section, we recall some basic notions and notations used throughout the paper. We denote by $|S|$ the \textit{cardinality} of the set $S$; if $x$ is a string of finite length, then $|x|$ denotes the length of $x$. The \textit{empty string} is denoted by $\lambda$. 
For an alphabet $\Sigma$, we denote by $\Sigma^{*}$ the set $\bigcup_{n=0}^{\infty}\Sigma^{n}$ and by $\Sigma^{+}$ the set $\bigcup_{n=1}^{\infty}\Sigma^{n}$, where $\Sigma^{0}$ is the set $\{\lambda\}$. Also, we denote by $\Sigma^{\leq n}$ the set $\bigcup_{i=0}^{n}\Sigma^{i}$ and by $\Sigma^{\geq n}$ the set $\bigcup_{i=n}^{\infty}\Sigma^{i}$. Let us consider an alphabet $\Delta$, $X$ a finite and nonempty subset of $\Delta^{+}$, and $w\in\Delta^{+}$. A \textit{decomposition of} $w$ over $X$ is any sequence of strings $u_{1}, u_{2}, \ldots, u_{h}$ with $u_{i}\in X$ for all $i$, $1\leq i\leq h$, such that $w=u_{1}u_{2}\ldots u_{h}$. A \textit{code} over $\Delta$ is any nonempty set $C\subseteq\Delta^{+}$ such that each string $w\in\Delta^{+}$ has at most one decomposition over $C$. A \textit{prefix code} over $\Delta$ is any code $C$ over $\Delta$ such that no string in $C$ is a proper prefix of another string in $C$. If $\mathcal{A}$ is an algorithm and $x$ its input, then we denote by $\mathcal{A}(x)$ its output. Also, we denote by $\mathbb{N}$ the set of natural numbers, and by $\mathbb{N}^{*}$ the set of nonzero natural numbers. Finally, let us fix some useful notations which will be used in the description of the algorithms. Let $\mathcal{U}=(u_{1},u_{2},\ldots,u_{k})$ be a $k$-tuple. We denote by $\mathcal{U}.i$ the $i$-th component of $\mathcal{U}$, that is, $\mathcal{U}.i=u_{i}$ for all $i\in\{1,2,\ldots,k\}$. The $0$-tuple is denoted by $()$. The length of a tuple $\mathcal{U}$ is denoted by ${\it Len}(\mathcal{U})$. 
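The prefix condition recalled above can be checked mechanically. The following sketch (an illustrative Python helper with our own naming, not part of the formal development) tests whether a finite set of codewords forms a prefix code:

```python
def is_prefix_code(codewords):
    """Return True if no codeword is a proper prefix of another.

    After lexicographic sorting, any prefix relation in the set shows up
    between adjacent words, so a single linear scan over the sorted list
    suffices.
    """
    words = sorted(set(codewords))
    for u, v in zip(words, words[1:]):
        if v.startswith(u):  # u is a proper prefix of v
            return False
    return True
```

For instance, $\{0, 10, 11\}$ is a prefix code, while $\{0, 01\}$ is not, since $0$ is a proper prefix of $01$.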
If $\mathcal{V}=(v_{1},v_{2},\ldots,v_{b})$, $\mathcal{M}=(m_{1},m_{2},\ldots,m_{r},\mathcal{U})$, $\mathcal{N}=(n_{1},n_{2},\ldots,n_{s},\mathcal{V})$, and $\mathcal{P}=(p_{1},\ldots,p_{i-1},p_{i},p_{i+1},\ldots,p_{t})$ are tuples, and $q$ is an element or a tuple, then we define $\mathcal{P}\vartriangleleft{q}$, $\mathcal{P}\vartriangleright{i}$, $\mathcal{U}\vartriangle{\mathcal{V}}$, and $\mathcal{M}\lozenge{\mathcal{N}}$ by: \begin{itemize} \item $\mathcal{P}\vartriangleleft{q}=(p_{1},\ldots,p_{t},q)$, \item $\mathcal{P}\vartriangleright{i}=(p_{1},\ldots,p_{i-1},p_{i+1},\ldots,p_{t})$, \item $\mathcal{U}\vartriangle{\mathcal{V}}=(u_{1},u_{2},\ldots,u_{k},v_{1},v_{2},\ldots,v_{b})$, \item $\mathcal{M}\lozenge{\mathcal{N}}=(m_{1}+n_{1},m_{2}+1,\ldots,m_{r}+1,n_{2}+1,\ldots,n_{s}+1,\mathcal{U}\vartriangle{\mathcal{V}})$, \end{itemize} where $m_{1},m_{2},\ldots,m_{r},n_{1},n_{2},\ldots,n_{s}$ are integers. \section{Adaptive codes and GA codes} The aim of this section is to briefly review some basic definitions, results, and notations related to adaptive codes and generalized adaptive codes \cite{t:1,t:2}. \begin{definition} \label{def:ac} Let $\Sigma$ and $\Delta$ be two alphabets. A function $\ac{n}$, $n\geq{1}$, is called an \emph{adaptive code of order $n$} if its unique homomorphic extension $\overline{c}:\Sigma^{*}\rightarrow\Delta^{*}$, given by: \begin{itemize} \item $\overline{c}(\lambda)=\lambda$, \item $\overline{c}(\sstring{m})=$ $c(\sigma_{1},\lambda)$ $c(\sigma_{2},\sigma_{1})$ $\ldots$ $c(\sigma_{n-1},\sstring{n-2})$ \newline $c(\sigma_{n},\sstring{n-1})$ $c(\sigma_{n+1},\sstring{n})$ $c(\sigma_{n+2},\sigma_{2}\sigma_{3}\ldots\sigma_{n+1})$ \newline $c(\sigma_{n+3},\sigma_{3}\sigma_{4}\ldots\sigma_{n+2})\ldots$ $c(\sigma_{m},\sigma_{m-n}\sigma_{m-n+1}\ldots\sigma_{m-1})$ \end{itemize} for all strings $\sstring{m}\in\Sigma^{+}$, is injective. 
\end{definition} As it is clearly specified in the definition above, an adaptive code of order $n$ associates a variable-length codeword to the symbol being encoded depending on the previous $n$ symbols in the input data string. Let us take an example in order to better understand this mechanism. \begin{example} Let $\Sigma=\{\ttup{a},\ttup{b}\}$ and $\Delta=\{0,1\}$ be two alphabets, and $\ac{1}$ a function given as in the table below. One can verify that $\overline{c}$ is injective, and according to Definition \ref{def:ac}, it follows that $c$ is an adaptive code of order one. \begin{table}[htbp] \caption{An adaptive code of order one.} \begin{center} \begin{tabular}{|c|c|c|c|} \hline $\Sigma\backslash\Sigma^{\leq{1}}$ & $\ttup{a}$ & $\ttup{b}$ & $\lambda$ \\ \hline $\ttup{a}$ & 0 & 1 & 00 \\ \hline $\ttup{b}$ & 10 & 00 & 11 \\ \hline \end{tabular} \end{center} \end{table} \hspace{0pt} \newline Let $x=\ttup{abaa}\in\Sigma^{+}$ be an input data string. Using the definition above, we encode $x$ by \begin{center} $\overline{c}(x)=c(\ttup{a},\lambda)c(\ttup{b},\ttup{a})c(\ttup{a},\ttup{b})c(\ttup{a},\ttup{a})=001010$. \end{center} \end{example} \begin{example} Let us consider $\Sigma=\{\ttup{a},\ttup{b},\ttup{c}\}$ and $\Delta=\{0,1\}$ two alphabets, and $\ac{2}$ a function given as in the following table. One can easily verify that $\overline{c}$ is injective, and according to Definition \ref{def:ac}, $c$ is an adaptive code of order two. 
\begin{table}[htbp] \caption{An adaptive code of order two.} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $\Sigma\backslash\Sigma^{\leq{2}}$ & $\ttup{a}$ & $\ttup{b}$ & $\ttup{c}$ & $\ttup{aa}$ & $\ttup{ab}$ & $\ttup{ac}$ & $\ttup{ba}$ & $\ttup{bb}$ & $\ttup{bc}$ & $\ttup{ca}$ & $\ttup{cb}$ & $\ttup{cc}$ & $\lambda$ \\ \hline $\ttup{a}$ & 0 & 11 & 10 & 00 & 1 & 10 & 01 & 10 & 11 & 11 & 11 & 0 & 00 \\ \hline $\ttup{b}$ & 10 & 000 & 11 & 11 & 01 & 00 & 00 & 11 & 01 & 101 & 00 & 10 & 11 \\ \hline $\ttup{c}$ & 111 & 01 & 00 & 10 & 00 & 11 & 11 & 00 & 00 & 0 & 10 & 11 & 10 \\ \hline \end{tabular} \end{center} \end{table} \hspace{0pt} \newline Let $x=\ttup{abacca}\in\Sigma^{+}$ be an input data string. Using the definition above, we encode $x$ by \begin{center} $\overline{c}(x)=c(\ttup{a},\lambda)c(\ttup{b},\ttup{a})c(\ttup{a},\ttup{ab})c(\ttup{c},\ttup{ba})c(\ttup{c},\ttup{ac})c(\ttup{a},\ttup{cc})=0010111110$. \end{center} \end{example} Let $\ac{n}$ be an adaptive code of order $n$, $n\geq{1}$. We denote by $C_{c, \sigma_{1}\sigma_{2}\ldots\sigma_{h}}$ the set $\{c(\sigma,\sigma_{1}\sigma_{2}\ldots\sigma_{h}) \mid \sigma\in\Sigma\}$, for all $\sigma_{1}\sigma_{2}\ldots\sigma_{h}\in\Sigma^{\leq{n}}-\{\lambda\}$, and by $C_{c, \lambda}$ the set $\{c(\sigma,\lambda) \mid \sigma\in\Sigma\}$. We write $C_{\sigma_{1}\sigma_{2}\ldots\sigma_{h}}$ instead of $C_{c, \sigma_{1}\sigma_{2}\ldots\sigma_{h}}$, and $C_{\lambda}$ instead of $C_{c, \lambda}$ whenever there is no confusion. Let us denote by $AC(\Sigma,\Delta,n)$ the set \begin{center} $\{\ac{n} \mid c$ is an adaptive code of order $n\}$. \end{center} \begin{theorem} \label{thm:1} Let $\Sigma$ and $\Delta$ be two alphabets, and $\ac{n}$ a function, $n\geq{1}$. If $C_{u}$ is a prefix code, for all $u\in\Sigma^{\leq{n}}$, then $c\in{AC(\Sigma,\Delta,n)}$. 
\end{theorem} \begin{proof} Let us assume that $C_{\sstring{h}}$ is a prefix code, for all $\sstring{h}\in\Sigma^{\leq{n}}$, but $c\notin{AC(\Sigma,\Delta,n)}$. By Definition \ref{def:ac}, the unique homomorphic extension of $c$, denoted by $\overline{c}$, is not injective. This implies that $\exists$ $u\sigma u', u\sigma'u''\in\Sigma^{+}$, with $\sigma,\sigma '\in\Sigma$ and $u,u',u''\in\Sigma^{*}$, such that $\sigma\neq\sigma'$ and \begin{equation} \overline{c}(u\sigma u')=\overline{c}(u\sigma'u''). \end{equation} We can rewrite the equality (1) as \begin{equation} \overline{c}(u)c(\sigma,{P}_{n}(u))\overline{c}(u')= \overline{c}(u)c(\sigma',{P}_{n}(u))\overline{c}(u''), \end{equation} where the function ${P}_{n}(\cdot)$ is given as below. \begin{displaymath} {P}_{n}(u)= \left\{ \begin{array}{ll} \lambda & \textrm{if $u=\lambda$.} \\ u_{1}\ldots u_{q} & \textrm{if $u=u_{1}u_{2}\ldots u_{q}$ and $u_{1},u_{2},\ldots,u_{q}\in\Sigma$ and $q\leq{n}$.} \\ u_{q-n+1}\ldots u_{q} & \textrm{if $u=u_{1}u_{2}\ldots u_{q}$ and $u_{1},u_{2},\ldots,u_{q}\in\Sigma$ and $q>n$.} \end{array} \right. \end{displaymath} By hypothesis, $C_{{P}_{n}(u)}$ is a prefix code and $c(\sigma,{P}_{n}(u)),c(\sigma',{P}_{n}(u))\in{C_{{P}_{n}(u)}}$. Therefore, the set $\{c(\sigma,{P}_{n}(u)),c(\sigma',{P}_{n}(u))\}$ is a prefix code. But the equality (2) holds true if and only if $\{c(\sigma,{P}_{n}(u)),c(\sigma',{P}_{n}(u))\}$ is not a prefix code. Thus, our assumption leads to a contradiction. \end{proof} \begin{definition} \label{def:gac} Let $F:\mathbb{N}^{*}\times\Sigma^{+}\rightarrow\Sigma^{*}$ be a function, where $\mathbb{N}^{*}$ denotes the set $\mathbb{N}-\{0\}$. 
A function $c_{F}:\Sigma\times\Sigma^{*}\rightarrow\Delta^{+}$ is called \emph{generalized adaptive code} (GA code, for short) if its unique homomorphic extension $\overline{c_{F}}:\Sigma^{*}\rightarrow\Delta^{*}$, given by: \begin{itemize} \item $\overline{c_{F}}(\lambda)=\lambda$, \item $\overline{c_{F}}(\sstring{m})=c_{F}(\sigma_{1},F(1,\sstring{m}))\ldots{c_{F}(\sigma_{m},F(m,\sstring{m}))}$ \end{itemize} for all strings $\sstring{m}\in\Sigma^{+}$, is injective. \end{definition} \begin{remark} \label{rmk:1} The function $F$ in Definition \ref{def:gac} is called the \emph{adaptive function} corresponding to the GA code $c_{F}$. Clearly, a GA code $c_{F}$ can be constructed if its adaptive function $F$ is already constructed. \end{remark} \begin{remark} \label{rmk:2} Let $\Sigma$ and $\Delta$ be two alphabets. We denote by $GAC(\Sigma,\Delta)$ the set \begin{center} $\{c_{F}:\Sigma\times\Sigma^{*}\rightarrow\Delta^{+}$ $\mid$ $c_{F}$ is a GA code$\}$. \end{center} \end{remark} The following theorem proves that adaptive codes (of any order) are special cases of GA codes. \begin{theorem} \label{thm:2} Let $\Sigma$ and $\Delta$ be alphabets. Then, $AC(\Sigma,\Delta,n)\subseteq{GAC(\Sigma,\Delta)}$ for all $n\geq{1}$. \end{theorem} \begin{proof} Let $c_{F}\in{AC(\Sigma,\Delta,n)}$ be an adaptive code of order $n$, $n\geq{1}$, and $F:\mathbb{N}^{*}\times\Sigma^{+}\rightarrow\Sigma^{*}$ a function given by: \begin{displaymath} F(i,\sstring{m})= \left\{ \begin{array}{ll} \lambda & \textrm{if $i=1$ or $i>m$.} \\ \sstring{i-1} & \textrm{if $2\leq{i}\leq{m}$ and $i\leq{n+1}$.} \\ \sigma_{i-n}\sigma_{i-n+1}\ldots\sigma_{i-1} & \textrm{if $2\leq{i}\leq{m}$ and $i>n+1$.} \end{array} \right. \end{displaymath} for all $i\geq{1}$ and $\sstring{m}\in{\Sigma^{+}}$. One can verify that $|F(i,\sstring{m})|\leq{n}$, for all $i\geq{1}$ and $\sstring{m}\in{\Sigma^{+}}$. 
According to Definition \ref{def:ac}, the function $\overline{c_{F}}$ is given by: \begin{itemize} \item $\overline{c_{F}}(\lambda)=\lambda$, \item $\overline{c_{F}}(\sstring{m})=$ $c_{F}(\sigma_{1},\lambda)$ $c_{F}(\sigma_{2},\sigma_{1})$ $\ldots$ $c_{F}(\sigma_{n-1},\sstring{n-2})$ \newline $c_{F}(\sigma_{n},\sstring{n-1})$ $c_{F}(\sigma_{n+1},\sstring{n})$ $c_{F}(\sigma_{n+2},\sigma_{2}\sigma_{3}\ldots\sigma_{n+1})$ \newline $c_{F}(\sigma_{n+3},\sigma_{3}\sigma_{4}\ldots\sigma_{n+2})\ldots$ $c_{F}(\sigma_{m},\sigma_{m-n}\sigma_{m-n+1}\ldots\sigma_{m-1})$ \end{itemize} for all strings $\sstring{m}\in\Sigma^{+}$. It is easy to see that \begin{center} $\overline{c_{F}}(\sstring{m})=c_{F}(\sigma_{1},F(1,\sstring{m}))\ldots c_{F}(\sigma_{m},F(m,\sstring{m}))$ \end{center} for all $\sstring{m}\in\Sigma^{+}$, which proves the theorem. \end{proof} The adaptive mechanism in Definition \ref{def:gac} can be illustrated by the figure below. More precisely, the figure captures the idea behind this mechanism: the codeword associated to the current symbol depends on the symbol itself and a sequence of symbols chosen by the adaptive function. 
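Concretely, this mechanism can be executed directly. The sketch below (illustrative Python only; the table is the order-one adaptive code tabulated earlier, viewed as a GA code whose adaptive function returns the previous symbol) reproduces the encoding of the string $\ttup{abaa}$ computed in the first example:

```python
# Order-one adaptive code from the first example: codeword tables indexed
# by the context chosen by the adaptive function F (here: the previous
# symbol, with the empty context "" standing for lambda).
TABLE = {
    "":  {"a": "00", "b": "11"},   # column for the empty context
    "a": {"a": "0",  "b": "10"},   # column for context 'a'
    "b": {"a": "1",  "b": "00"},   # column for context 'b'
}

def F(i, w):
    """Adaptive function: the previous symbol (empty context for i = 1)."""
    return "" if i == 1 else w[i - 2]

def encode(w):
    """Homomorphic extension: concatenate c_F(sigma_i, F(i, w))."""
    return "".join(TABLE[F(i, w)][w[i - 1]] for i in range(1, len(w) + 1))
```

Running `encode` on `"abaa"` yields `00`, `10`, `1`, `0` in turn, i.e. the codeword $001010$ from the first example.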
\begin{figure}[hbtp] \setlength{\unitlength}{1pt} \begin{picture}(340,100) \begin{Large} \linethickness{0.4pt} \put(130.5,38) {$\overbrace{\sigma_{1}\ldots\sigma_{i-1}}\overbrace{\sigma_{i}}\overbrace{\sigma_{i+1}\ldots\sigma_{m}}$} \put(200,33){\line(1,0){10}} \put(205,33){\vector(0,-1){10}} \put(175,23){\line(1,0){60}} \put(175,3){\line(1,0){60}} \put(175,23){\line(0,-1){20}} \put(235,23){\line(0,-1){20}} \put(182,10){\small{ENCODER}} \put(235,13){\vector(1,0){20}} \put(257,10){\normalsize{$c_{F}$}} \put(268,10){\normalsize{$(\sigma_{i},F(i, \sigma_{1}\ldots\sigma_{m}))$}} \put(195,65){\line(1,0){20}} \put(195,85){\line(1,0){20}} \put(195,65){\line(0,1){20}} \put(215,65){\line(0,1){20}} \put(202,72){\small{F}} \put(205,54){\vector(0,1){11}} \put(160,54){\line(0,1){20}} \put(252,54){\line(0,1){20}} \put(160,74){\vector(1,0){35}} \put(252,74){\vector(-1,0){37}} \put(205,85){\line(0,1){2}} \put(205,89){\line(0,1){2}} \put(205,93){\line(0,1){2}} \put(205,95){\line(-1,0){2}} \put(201,95){\line(-1,0){2}} \put(197,95){\line(-1,0){2}} \put(193,95){\line(-1,0){2}} \put(189,95){\line(-1,0){2}} \put(185,95){\line(-1,0){2}} \put(181,95){\line(-1,0){2}} \put(177,95){\line(-1,0){2}} \put(173,95){\line(-1,0){2}} \put(169,95){\line(-1,0){2}} \put(165,95){\line(-1,0){2}} \put(161,95){\line(-1,0){2}}\put(157,95){\line(-1,0){2}}\put(153,95){\line(-1,0){2}} \put(149,95){\line(-1,0){2}}\put(145,95){\line(-1,0){2}}\put(141,95){\line(-1,0){2}} \put(137,95){\line(-1,0){2}}\put(133,95){\line(-1,0){2}}\put(129,95){\line(-1,0){2}} \put(125,95){\line(-1,0){2}}\put(121,95){\line(-1,0){2}}\put(117,95){\line(-1,0){2}} \put(113,95){\line(-1,0){2}}\put(109,95){\line(-1,0){2}}\put(105,95){\line(-1,0){2}} \put(101,95){\line(-1,0){2}}\put(97,95){\line(-1,0){2}} \put(95,95){\line(0,-1){2}} \put(95,91){\line(0,-1){2}} \put(95,87){\line(0,-1){2}} \put(95,83){\line(0,-1){2}} \put(95,79){\line(0,-1){2}} \put(95,75){\line(0,-1){2}} \put(95,71){\line(0,-1){2}} \put(95,67){\line(0,-1){2}} 
\put(95,63){\line(0,-1){2}} \put(95,59){\line(0,-1){2}} \put(95,55){\line(0,-1){2}} \put(95,51){\line(0,-1){2}} \put(95,47){\line(0,-1){2}} \put(95,43){\line(0,-1){2}} \put(95,39){\line(0,-1){2}} \put(95,35){\line(0,-1){2}} \put(95,31){\line(0,-1){2}} \put(95,27){\line(0,-1){2}} \put(95,23){\line(0,-1){2}} \put(95,19){\line(0,-1){2}} \put(95,15){\line(0,-1){2}} \put(95,13){\line(1,0){2}} \put(99,13){\line(1,0){2}} \put(103,13){\line(1,0){2}} \put(107,13){\line(1,0){2}} \put(111,13){\line(1,0){2}} \put(115,13){\line(1,0){2}} \put(119,13){\line(1,0){2}} \put(123,13){\line(1,0){2}} \put(127,13){\line(1,0){2}} \put(131,13){\line(1,0){2}} \put(135,13){\line(1,0){2}} \put(139,13){\line(1,0){2}} \put(143,13){\line(1,0){2}} \put(147,13){\line(1,0){2}} \put(151,13){\line(1,0){2}} \put(155,13){\line(1,0){2}} \put(159,13){\line(1,0){2}} \put(163,13){\line(1,0){2}} \put(167,13){\vector(1,0){8}} \end{Large} \end{picture} \caption{Encoding with a GA code.} \end{figure} \begin{example} \label{exmp:1} Let $\Sigma$ and $\Delta$ be two alphabets, $c_{F}:\Sigma\times\Sigma^{*}\rightarrow\Delta^{+}$ a GA code, and $F:\mathbb{N}^{*}\times\Sigma^{+}\rightarrow\Sigma^{*}$ its adaptive function. Let us consider $F$ given as below. \begin{displaymath} F(i,\sstring{m})= \left\{ \begin{array}{ll} \lambda & \textrm{if $i=1$ or $i>m$.} \\ \sigma_{i-1} & \textrm{if $2\leq{i}\leq{m}$.} \end{array} \right. \end{displaymath} One can trivially verify that the function $c_{F}$ is also an adaptive code of order one. \end{example} \section{GA codes and adaptive Huffman codes} In this section, we prove that adaptive Huffman encodings are particular cases of encodings by GA codes. This result can be exploited further in data compression to develop efficient compression algorithms; for example, the algorithms presented in \cite{t:2} combine adaptive codes with Huffman's classical algorithm. The well-known Huffman algorithm is a two-pass encoding scheme, that is, the input must be read twice. 
The version used in practice is called the \emph{adaptive Huffman algorithm}, which reads the input only once. Intuitively, the encoding of an input data string using the adaptive Huffman algorithm requires the construction of a sequence of {\it Huffman trees}. \newline\indent Let $\Sigma$ be an alphabet, and $w=w_{1}w_{2}\ldots{w_{h}}$ a string over $\Sigma$. Denote by $\mathcal{T}_{0}(w),\mathcal{T}_{1}(w),\ldots,\mathcal{T}_{h}(w)$ the sequence of Huffman trees constructed by the adaptive Huffman algorithm for the input string $w$. The Huffman tree $\mathcal{T}_{0}(w)$ is associated to the alphabet $\Sigma$ (with the assumption that each symbol in $\Sigma$ has frequency $1$). For all $i\in\{1,2,\ldots,h\}$, the Huffman tree $\mathcal{T}_{i}(w)$ (associated to the string $w_{1}w_{2}\ldots{w_{i}}$) is obtained by updating the tree $\mathcal{T}_{i-1}(w)$. \newline\indent The procedure via which this update takes place is called the \textit{sibling transformation}, which can be described as follows. Let $\mathcal{T}_{i}(w)$ be the current tree and $k$ the frequency of $w_{i+1}$; the tree $\mathcal{T}_{i+1}(w)$ is obtained from $\mathcal{T}_{i}(w)$ by applying the following algorithm: compare $w_{i+1}$ with its successors in the tree (from left to right and from bottom to top). If the immediate successor has frequency $k+1$ or greater, then we do not have to change anything. Otherwise, $w_{i+1}$ should be swapped with the last successor which has frequency $k$ or smaller (only if this successor is not its parent). The frequency of $w_{i+1}$ is incremented from $k$ to $k+1$. If $w_{i+1}$ is the root of the tree, then the loop terminates. Otherwise, it continues with the parent of $w_{i+1}$ (for further details on Huffman trees and the adaptive Huffman algorithm, the reader is referred to \cite{ds:1}). 
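A minimal executable sketch of this one-pass scheme is given below. For simplicity it rebuilds a deterministic Huffman tree from the current frequencies before each symbol instead of applying the incremental sibling transformation, so individual codewords may differ from the trees drawn in the next example; what it preserves is the essential invariant that each step uses a prefix code and that encoder and decoder stay synchronized. All function names are our own.

```python
import heapq

def huffman_codes(freq):
    """Deterministic Huffman code table for the given symbol frequencies.

    Ties are broken by a running counter, so an encoder and a decoder that
    evolve the same frequencies always rebuild the same tree.
    """
    heap = [(freq[s], i, s) for i, s in enumerate(sorted(freq))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}

    def walk(node, prefix):
        if isinstance(node, tuple):        # internal node: (left, right)
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                              # leaf: a symbol
            codes[node] = prefix or "0"    # degenerate one-symbol alphabet
    walk(heap[0][2], "")
    return codes

def adaptive_encode(w, alphabet):
    freq = {s: 1 for s in alphabet}        # every symbol starts at frequency 1
    out = []
    for s in w:
        out.append(huffman_codes(freq)[s]) # code of s in the current tree
        freq[s] += 1                       # then update the frequency
    return "".join(out)

def adaptive_decode(bits, alphabet):
    freq = {s: 1 for s in alphabet}
    out, i = [], 0
    while i < len(bits):
        inverse = {v: k for k, v in huffman_codes(freq).items()}
        j = i + 1
        while bits[i:j] not in inverse:    # prefix-freeness: first hit is right
            j += 1
            if j > len(bits):
                raise ValueError("truncated code stream")
        s = inverse[bits[i:j]]
        out.append(s)
        freq[s] += 1
        i = j
    return "".join(out)
```

Because both directions rebuild the same tree from the same evolving frequencies, decoding is the exact inverse of encoding, mirroring the injectivity argument used in the proof below.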
\newline\indent The codeword associated to the symbol $\sigma$ in the Huffman tree $\mathcal{T}_{i}(w)$ is denoted by ${\it code}(\sigma,\mathcal{T}_{i}(w))$, for all $i\in\{0,1,\ldots,h\}$. \begin{theorem} \label{thm:ah} Adaptive Huffman encodings are particular cases of encodings by GA codes. \end{theorem} \begin{proof} Let $\Sigma$ and $\Delta$ be two alphabets, $w$ a string over $\Sigma$, and $F:\mathbb{N}^{*}\times\Sigma^{+}\rightarrow\Sigma^{*}$, $c_{F}:\Sigma\times\Sigma^{*}\rightarrow\Delta^{+}$ two functions. Let us consider the function $F$ given by: \begin{displaymath} F(i,\sstring{m})= \left\{ \begin{array}{ll} \lambda & \textrm{if $i=1$ or $i>m$.} \\ \sstring{i-1} & \textrm{otherwise.} \end{array} \right. \end{displaymath} and the function $c_{F}$ by $c_{F}(\sigma,u)={\it code}(\sigma,\mathcal{T}_{|u|}(u))$, for all $(\sigma,u)\in\Sigma\times\Sigma^{*}$. Let us assume that $\overline{c_{F}}$ is not injective, that is, $\exists$ $u\sigma{v}$, $u\sigma'v'\in\Sigma^{+}$ such that $\sigma,\sigma'\in\Sigma$, $\sigma\neq\sigma'$ and $\overline{c_{F}}(u\sigma{v})=\overline{c_{F}}(u\sigma'v')$. The previous equality can be rewritten by: \begin{equation} \overline{c_{F}}(u){code}(\sigma,\mathcal{T}_{|u|}(u))\overline{c_{F}}(v)= \overline{c_{F}}(u){code}(\sigma',\mathcal{T}_{|u|}(u))\overline{c_{F}}(v'). \end{equation} Due to the prefix property of the set $\{{code}(\sigma,\mathcal{T}_{|u|}(u)),{code}(\sigma',\mathcal{T}_{|u|}(u))\}$, the equality (3) cannot hold true, which leads to the conclusion that our assumption is false. Thus, we conclude that $c_{F}$ is a GA code, which proves the theorem. \end{proof} \begin{remark} \label{rmk:3} If $u$ is a prefix of $w$, then $\mathcal{T}_{i}(u)=\mathcal{T}_{i}(w)$, for all $i\leq{|u|}$. \end{remark} \begin{example} \label{exmp:ah} Let $\Sigma=\{\ttup{a},\ttup{b},\ttup{c},\ttup{d}\}$ be an alphabet, and $w=\ttup{bcabd}\in\Sigma^{+}$. 
Applying the adaptive Huffman algorithm to the input string $w$, we get the following Huffman trees. \setlength{\unitlength}{1pt} \begin{figure}[hbtp] \begin{picture}(453,80) \small{ \put(40,80){\circle*{5}} \put(25,79){\rotatebox{-45}{\line(0,-1){21}}} \put(41.6,79){\rotatebox{45}{\line(0,-1){21}}} \put(23.8,62){\circle*{5}} \put(57.2,62){\circle*{5}} \put(14.5,60.2){\rotatebox{-25}{\line(0,-1){20}}} \put(24.5,60.2){\rotatebox{25}{\line(0,-1){20}}} \put(14.4,41){\circle*{5}} \put(33,41){\circle*{5}} \put(48.5,60.2){\rotatebox{-25}{\line(0,-1){20}}} \put(58.5,60.2){\rotatebox{25}{\line(0,-1){20}}} \put(48.4,41){\circle*{5}} \put(67,41){\circle*{5}} \put(11.4,28){$\ttup{a}$} \put(30,28){$\ttup{b}$} \put(45.4,28){$\ttup{c}$} \put(64,28){$\ttup{d}$} \put(5.4,38){$1$} \put(24,38){$1$} \put(39.4,38){$1$} \put(58,38){$1$} \put(14.8,59){$2$} \put(48.2,59){$2$} \put(31,78){$4$} \put(12.4,48){\footnotesize{$0$}} \put(30,48){\footnotesize{$1$}} \put(46.4,48){\footnotesize{$0$}} \put(64,48){\footnotesize{$1$}} \put(24,69){\footnotesize{$0$}} \put(53,69){\footnotesize{$1$}} \put(20,5){(a) $\mathcal{T}_{0}(w)$} \put(115,80){\circle*{5}} \put(100,79){\rotatebox{-45}{\line(0,-1){21}}} \put(116.6,79){\rotatebox{45}{\line(0,-1){21}}} \put(98.8,62){\circle*{5}} \put(132.2,62){\circle*{5}} \put(89.5,60.2){\rotatebox{-25}{\line(0,-1){20}}} \put(99.5,60.2){\rotatebox{25}{\line(0,-1){20}}} \put(89.4,41){\circle*{5}} \put(108,41){\circle*{5}} \put(123.5,60.2){\rotatebox{-25}{\line(0,-1){20}}} \put(133.5,60.2){\rotatebox{25}{\line(0,-1){20}}} \put(123.4,41){\circle*{5}} \put(142,41){\circle*{5}} \put(86.4,28){$\ttup{a}$} \put(105,28){$\ttup{d}$} \put(120.4,28){$\ttup{c}$} \put(139,28){$\ttup{b}$} \put(80.4,38){$1$} \put(99,38){$1$} \put(114.4,38){$1$} \put(133,38){$2$} \put(89.8,59){$2$} \put(123.2,59){$3$} \put(106,78){$5$} \put(87.4,48){\footnotesize{$0$}} \put(105,48){\footnotesize{$1$}} \put(121.4,48){\footnotesize{$0$}} \put(139,48){\footnotesize{$1$}} \put(99,69){\footnotesize{$0$}} 
\put(128,69){\footnotesize{$1$}} \put(95,5){(b) $\mathcal{T}_{1}(w)$} \put(190,80){\circle*{5}} \put(175,79){\rotatebox{-45}{\line(0,-1){21}}} \put(191.6,79){\rotatebox{45}{\line(0,-1){21}}} \put(173.8,62){\circle*{5}} \put(207.2,62){\circle*{5}} \put(164.5,60.2){\rotatebox{-25}{\line(0,-1){20}}} \put(174.5,60.2){\rotatebox{25}{\line(0,-1){20}}} \put(164.4,41){\circle*{5}} \put(183,41){\circle*{5}} \put(198.5,60.2){\rotatebox{-25}{\line(0,-1){20}}} \put(208.5,60.2){\rotatebox{25}{\line(0,-1){20}}} \put(198.4,41){\circle*{5}} \put(217,41){\circle*{5}} \put(161.4,28){$\ttup{a}$} \put(180,28){$\ttup{d}$} \put(195.4,28){$\ttup{c}$} \put(214,28){$\ttup{b}$} \put(155.4,38){$1$} \put(174,38){$1$} \put(189.4,38){$2$} \put(208,38){$2$} \put(164.8,59){$2$} \put(198.2,59){$4$} \put(181,78){$6$} \put(162.4,48){\footnotesize{$0$}} \put(180,48){\footnotesize{$1$}} \put(196.4,48){\footnotesize{$0$}} \put(214,48){\footnotesize{$1$}} \put(174,69){\footnotesize{$0$}} \put(203,69){\footnotesize{$1$}} \put(170,5){(c) $\mathcal{T}_{2}(w)$} \put(265,80){\circle*{5}} \put(250,79){\rotatebox{-45}{\line(0,-1){21}}} \put(266.6,79){\rotatebox{45}{\line(0,-1){21}}} \put(248.8,62){\circle*{5}} \put(282.2,62){\circle*{5}} \put(239.5,60.2){\rotatebox{-25}{\line(0,-1){20}}} \put(249.5,60.2){\rotatebox{25}{\line(0,-1){20}}} \put(239.4,41){\circle*{5}} \put(258,41){\circle*{5}} \put(273.5,60.2){\rotatebox{-25}{\line(0,-1){20}}} \put(283.5,60.2){\rotatebox{25}{\line(0,-1){20}}} \put(273.4,41){\circle*{5}} \put(292,41){\circle*{5}} \put(236.4,28){$\ttup{d}$} \put(255,28){$\ttup{a}$} \put(270.4,28){$\ttup{c}$} \put(289,28){$\ttup{b}$} \put(230.4,38){$1$} \put(249,38){$2$} \put(264.4,38){$2$} \put(283,38){$2$} \put(239.8,59){$3$} \put(273.2,59){$4$} \put(256,78){$7$} \put(237.4,48){\footnotesize{$0$}} \put(255,48){\footnotesize{$1$}} \put(271.4,48){\footnotesize{$0$}} \put(289,48){\footnotesize{$1$}} \put(249,69){\footnotesize{$0$}} \put(278,69){\footnotesize{$1$}} \put(245,5){(d) $\mathcal{T}_{3}(w)$} 
\put(340,80){\circle*{5}} \put(325,79){\rotatebox{-45}{\line(0,-1){21}}} \put(341.6,79){\rotatebox{45}{\line(0,-1){21}}} \put(323.8,62){\circle*{5}} \put(357.2,62){\circle*{5}} \put(314.5,60.2){\rotatebox{-25}{\line(0,-1){20}}} \put(324.5,60.2){\rotatebox{25}{\line(0,-1){20}}} \put(314.4,41){\circle*{5}} \put(333,41){\circle*{5}} \put(348.5,60.2){\rotatebox{-25}{\line(0,-1){20}}} \put(358.5,60.2){\rotatebox{25}{\line(0,-1){20}}} \put(348.4,41){\circle*{5}} \put(367,41){\circle*{5}} \put(311.4,28){$\ttup{d}$} \put(330,28){$\ttup{a}$} \put(345.4,28){$\ttup{c}$} \put(364,28){$\ttup{b}$} \put(305.4,38){$1$} \put(324,38){$2$} \put(339.4,38){$2$} \put(358,38){$3$} \put(314.8,59){$3$} \put(348.2,59){$5$} \put(331,78){$8$} \put(312.4,48){\footnotesize{$0$}} \put(330,48){\footnotesize{$1$}} \put(346.4,48){\footnotesize{$0$}} \put(364,48){\footnotesize{$1$}} \put(324,69){\footnotesize{$0$}} \put(353,69){\footnotesize{$1$}} \put(320,5){(e) $\mathcal{T}_{4}(w)$} \put(415,80){\circle*{5}} \put(400,79){\rotatebox{-45}{\line(0,-1){21}}} \put(416.6,79){\rotatebox{45}{\line(0,-1){21}}} \put(398.8,62){\circle*{5}} \put(432.2,62){\circle*{5}} \put(389.5,60.2){\rotatebox{-25}{\line(0,-1){20}}} \put(399.5,60.2){\rotatebox{25}{\line(0,-1){20}}} \put(389.4,41){\circle*{5}} \put(408,41){\circle*{5}} \put(423.5,60.2){\rotatebox{-25}{\line(0,-1){20}}} \put(433.5,60.2){\rotatebox{25}{\line(0,-1){20}}} \put(423.4,41){\circle*{5}} \put(442,41){\circle*{5}} \put(386.4,28){$\ttup{d}$} \put(405,28){$\ttup{a}$} \put(420.4,28){$\ttup{c}$} \put(439,28){$\ttup{b}$} \put(380.4,38){$2$} \put(399,38){$2$} \put(414.4,38){$2$} \put(433,38){$3$} \put(389.8,59){$4$} \put(423.2,59){$5$} \put(406,78){$9$} \put(387.4,48){\footnotesize{$0$}} \put(405,48){\footnotesize{$1$}} \put(421.4,48){\footnotesize{$0$}} \put(439,48){\footnotesize{$1$}} \put(399,69){\footnotesize{$0$}} \put(428,69){\footnotesize{$1$}} \put(395,5){(f) $\mathcal{T}_{5}(w)$} } \end{picture} \caption{The Huffman trees associated to $w$: 
$\mathcal{T}_{0}(w)$, $\mathcal{T}_{1}(w)$, $\mathcal{T}_{2}(w)$, $\mathcal{T}_{3}(w)$, $\mathcal{T}_{4}(w)$, and $\mathcal{T}_{5}(w)$.} \end{figure} \hspace{0pt}\newline Let $F:\mathbb{N}^{*}\times\Sigma^{+}\rightarrow\Sigma^{*}$, $c_{F}:\Sigma\times\Sigma^{*}\rightarrow\{0,1\}^{+}$ be constructed as above. Then, we encode $w$ by: $\overline{c_{F}}(\ttup{bcabd})=c_{F}(\ttup{b},\lambda)c_{F}(\ttup{c},\ttup{b})c_{F}(\ttup{a},\ttup{bc})c_{F}(\ttup{b},\ttup{bca})c_{F}(\ttup{d},\ttup{bcab})$ $={\it code}(\ttup{b},\mathcal{T}_{0}(\lambda)){\it code}(\ttup{c},\mathcal{T}_{1}(\ttup{b}))$ ${\it code}(\ttup{a},\mathcal{T}_{2}(\ttup{bc})){\it code}(\ttup{b},\mathcal{T}_{3}(\ttup{bca})){\it code}(\ttup{d},\mathcal{T}_{4}(\ttup{bcab}))=0110001100$. \end{example} \section{GA codes and Lempel-Ziv codes} The aim of this section is to prove that Lempel-Ziv encodings are particular cases of encodings by GA codes. Let $\Sigma$ and $\Delta$ be two alphabets such that $\{0,1,\ldots,9\}\cap\Sigma=\emptyset$. First, we recall the Lempel-Ziv parsing procedure of an input data string $w$, where $w=w_{1}w_{2}\ldots{w_{h}}$ is a string over $\Sigma$. For more details, the reader is referred to \cite{zl:2,zl:1}. \newline\indent The first variable-length block arising from the Lempel-Ziv parsing of the data string $w$ is $w_{1}$. The second block in the parsing is the shortest prefix of $w_{2}\ldots{w_{h}}$ which is not equal to $w_{1}$. Consider that this second block is $w_{2}\ldots{w_{j}}$. Then, the third block will be the shortest prefix of $w_{j+1}\ldots{w_{h}}$ which is not equal to either $w_{1}$ or $w_{2}\ldots{w_{j}}$. Suppose the Lempel-Ziv parsing has produced the first $k$ variable-length blocks $B_{1},B_{2},\ldots,{B_{k}}$ in the parsing, and $w^{(k)}$ is the part of $w$ that is left after $B_{1},B_{2},\ldots,{B_{k}}$ have been removed. 
Then, the next block $B_{k+1}$ in the parsing is the shortest prefix of $w^{(k)}$ which is not equal to any of the preceding blocks $B_{1},B_{2},\ldots,{B_{k}}$ (if there is no such block, then $B_{k+1}=w^{(k)}$ and the Lempel-Ziv parsing procedure terminates). \begin{theorem} \label{thm:lz} Lempel-Ziv encodings are particular cases of encodings by GA codes. \end{theorem} \begin{proof} Let $\Sigma_{1}=\Sigma\cup\{0,1,\ldots,9\}$ be an alphabet, $\sigma_{f}\in\Sigma$ a fixed symbol, and let $F:\mathbb{N}^{*}\times\Sigma_{1}^{+}\rightarrow\Sigma_{1}^{*}$, $c_{F}:\Sigma_{1}\times\Sigma_{1}^{*}\rightarrow\{0,1\}^{*}$ be two functions. \newline\indent Let us consider $F$ given by $F(i,\sstring{m})=i_{1}i_{2}\ldots{i_{q}}\sigma_{f}\sstring{m}$, for all $i\in{\mathbb{N}^{*}}$ and $\sstring{m}\in\Sigma_{1}^{+}$, where $i_{1},i_{2},\ldots,i_{q}\in\{0,1,\ldots,9\}$ are the digits corresponding to $i$ (from left to right). \newline\indent Let $u=u_{1}u_{2}\ldots{u_{p}}$ be a string over $\Sigma_{1}$, that is, $u_{i}\in\Sigma_{1}$ for all $i\in\{1,2,\ldots,p\}$. Consider the following notations. 
\begin{itemize} \item ${\it fixed}(u)= \left\{ \begin{array}{ll} 1 & \textrm{if $p\geq{3}$ and $\exists$ $i\in\{2,3,\ldots,p-1\}$, such that $u_{i}=\sigma_{f}$}\\ & \textrm{and $u_{j}\in\{0,1,\ldots,9\}$ for all $j\in\{1,2,\ldots,i-1\}$.}\\ 0 & \textrm{otherwise.} \end{array} \right.$ \item ${\it left}(u)= \left\{ \begin{array}{ll} u_{1}u_{2}\ldots{u_{r}} & \textrm{if ${\it fixed}(u)=1$, $u_{i}\in\{0,1,\ldots,9\}$ for all}\\ & \textrm{$i\in\{1,2,\ldots,r\}$, and $u_{r+1}=\sigma_{f}$.}\\ \lambda & \textrm{otherwise.} \end{array} \right.$ \item ${\it right}(u)= \left\{ \begin{array}{ll} v & \textrm{if ${\it fixed}(u)=1$ and $u={\it left}(u)\sigma_{f}v$.}\\ \lambda & \textrm{otherwise.} \end{array} \right.$ \item ${\it goodpos}(u)= \left\{ \begin{array}{ll} 1 & \textrm{if ${\it fixed}(u)=1$ and $|{\it left}(u)|+2\leq{{\it left}(u)}\leq{|u|}$.}\\ 0 & \textrm{otherwise.} \end{array} \right.$ \end{itemize} Let us consider $c_{F}$ given by \begin{displaymath} c_{F}(\sigma,\sstring{m})= \left\{ \begin{array}{ll} {\it LZ}(\sigma,\sstring{m}) & \textrm{if ${\it fixed}(\sstring{m})=1$, ${\it goodpos}(\sstring{m})=1$}\\ & \textrm{and $\sigma=\sigma_{{\it left}(\sstring{m})}$.}\\ \lambda & \textrm{otherwise.} \end{array} \right. \end{displaymath} where ${\it LZ}(\sigma,\sstring{m})$ is defined as follows: let $B_{1},B_{2},\ldots,B_{t}$ be the blocks arising from the Lempel-Ziv parsing of the string ${\it right}(\sstring{m})$, and \begin{equation} B_{z}=\sigma_{|{\it left}(\sstring{m})|+2+j_{1}}\ldots\sigma_{|{\it left}(\sstring{m})|+2+j_{2}}, \end{equation} where $z\in\{1,\ldots,t\}$, $0\leq{j_{1}}\leq{j_{2}}\leq{|{\it right}(\sstring{m})|-1}$, and \begin{equation} |{\it left}(\sstring{m})|+2+j_{1}\leq{{\it left}(\sstring{m})}\leq{|{\it left}(\sstring{m})|+2+j_{2}}. \end{equation} If ${\it left}(\sstring{m})=|{\it left}(\sstring{m})|+2+j_{2}$, then let ${\it LZ}(\sigma,\sstring{m})$ be the codeword associated by the Lempel-Ziv data compression algorithm to the block $B_{z}$.
Otherwise, we consider that ${\it LZ}(\sigma,\sstring{m})=\lambda$. One can easily verify that $\overline{c_{F}}(\sstring{m})$ is the encoding of $\sstring{m}$ by the Lempel-Ziv data compression algorithm, for all $\sstring{m}\in\Sigma_{1}^{+}$. Thus, we have obtained that $\overline{c_{F}}$ is injective, which proves the theorem. \end{proof} \begin{example} \label{exmp:lz} Let $\Sigma=\{\ttup{a},\ttup{b},\ttup{c}\}$, $\Sigma_{1}=\Sigma\cup\{0,1,\ldots,9\}$ be two alphabets, and let $F:\mathbb{N}^{*}\times\Sigma_{1}^{+}\rightarrow\Sigma_{1}^{*}$, $c_{F}:\Sigma_{1}\times\Sigma_{1}^{*}\rightarrow\{0,1\}^{*}$ be two functions given as in Theorem \ref{thm:lz} (considering $\sigma_{f}=\ttup{a}$). Also, let $w=\ttup{bcc}7\ttup{ba}\in\Sigma_{1}^{+}$ be an input string. Applying the Lempel-Ziv parsing procedure to the input string $w$, we get the following blocks: $B_{1}=\ttup{b}$, $B_{2}=\ttup{c}$, $B_{3}=\ttup{c}7$, and $B_{4}=\ttup{ba}$. Let us denote by ${\it codeLZ}(B_{i})$ the codeword associated by the Lempel-Ziv encoder to the block $B_{i}$, for all $i\in\{1,2,3,4\}$. One can verify that we get the following results: \begin{itemize} \item ${\it codeLZ}(B_{1})=1011$, \item ${\it codeLZ}(B_{2})=01100$, \item ${\it codeLZ}(B_{3})=100001$, \item ${\it codeLZ}(B_{4})=010111$. \end{itemize} Finally, we encode $w=\ttup{bcc}7\ttup{ba}$ by the GA code $c_{F}$ as shown below. 
\newline\indent $\overline{c_{F}}(w)=c_{F}(\ttup{b},F(1,\ttup{bcc}7\ttup{ba}))c_{F}(\ttup{c},F(2,\ttup{bcc}7\ttup{ba}))c_{F}(\ttup{c},F(3,\ttup{bcc}7\ttup{ba}))$ \newline\indent\hspace{40pt} $c_{F}(7,F(4,\ttup{bcc}7\ttup{ba}))c_{F}(\ttup{b},F(5,\ttup{bcc}7\ttup{ba}))c_{F}(\ttup{a},F(6,\ttup{bcc}7\ttup{ba}))$ \newline\indent\hspace{28.5pt} $=c_{F}(\ttup{b},1\ttup{abcc}7\ttup{ba})c_{F}(\ttup{c},2\ttup{abcc}7\ttup{ba})c_{F}(\ttup{c},3\ttup{abcc}7\ttup{ba})$ \newline\indent\hspace{40pt} $c_{F}(7,4\ttup{abcc}7\ttup{ba})c_{F}(\ttup{b},5\ttup{abcc}7\ttup{ba})c_{F}(\ttup{a},6\ttup{abcc}7\ttup{ba})$ \newline\indent\hspace{28.5pt} $={\it LZ}(\ttup{b},1\ttup{abcc}7\ttup{ba})\cdot{\it LZ}(\ttup{c},2\ttup{abcc}7\ttup{ba})\cdot\lambda\cdot$ ${\it LZ}(7,4\ttup{abcc}7\ttup{ba})\cdot\lambda\cdot{\it LZ}(\ttup{a},6\ttup{abcc}7\ttup{ba})$ \newline\indent\hspace{28.5pt} $={\it codeLZ}(B_{1}){\it codeLZ}(B_{2}){\it codeLZ}(B_{3}){\it codeLZ}(B_{4})$ \newline\indent\hspace{28.5pt} $=101101100100001010111$. \end{example} \section{Adaptive codes and convolutional codes} Convolutional codes \cite{p:h:1} are one of the most widely used channel codes in practical communication systems. These codes are built on a strong mathematical structure and are primarily used for real-time error correction. Convolutional codes convert the entire data stream into one single codeword: the encoded bits depend not only on the current $k$ input bits, but also on past input bits. The same strategy is used by adaptive variable-length codes. The aim of this section is to discuss the connection between adaptive codes and convolutional codes. Specifically, we show how a convolutional code can be modelled as an adaptive code. Before stating the results, let us first present a brief description of convolutional codes.
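For concreteness, the Lempel-Ziv parsing procedure recalled in the previous section can be sketched in Python; this is an illustrative sketch of the parsing step only (the assignment of codewords to blocks is not shown):

```python
def lz_parse(w):
    """Split w into Lempel-Ziv blocks: each block is the shortest
    prefix of the remaining input that differs from every earlier
    block; the final block may repeat an earlier one if the input
    is exhausted first."""
    blocks = []
    i = 0
    while i < len(w):
        j = i + 1
        # grow the candidate block while it coincides with an earlier block
        while w[i:j] in blocks and j < len(w):
            j += 1
        blocks.append(w[i:j])
        i = j
    return blocks
```

On the string $\ttup{bcc}7\ttup{ba}$ of Example \ref{exmp:lz}, this yields the blocks $\ttup{b}$, $\ttup{c}$, $\ttup{c}7$, and $\ttup{ba}$, as above.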
Convolutional codes are commonly specified by three parameters: $n$, $k$, and $m$, where \begin{itemize} \item $n$ is the number of output bits, \item $k$ is the number of input bits, \item and $m$ is the number of memory registers. \end{itemize} The quantity $km$ is called the {\it constraint length}, and represents the number of bits in the encoder memory that affect the generation of the $n$ output bits. Also, the quantity $k/n$ is called the {\it code rate}, and is a measure of the efficiency of the code. A convolutional code with parameters $n$, $k$, $m$ is usually referred to as an $(n,k,m)$ convolutional code. For an $(n,k,m)$ convolutional code, the encoding procedure is entirely defined by $n$ {\it generator polynomials}. Usually, these generator polynomials are represented as binary $(m+1)$-tuples. Also, throughout this section, we consider only $(n,1,m)$ convolutional codes. Let us consider an $(n,1,m)$ convolutional code with $P_{1},P_{2},\ldots,P_{n}$ being its generator polynomials, and let $x=x_{1}x_{2}\ldots{x_{t}}\in\{0,1\}^{+}$ be an input data string. The string $x$ is encoded by $y=y_{1}y_{2}\ldots{y_{nt}}$, where the substring $y_{in+1}\ldots{y_{in+n}}$ encodes the input bit $x_{i+1}$, for all $i\in\{0,1,\ldots,t-1\}$. Precisely, if $i\in\{0,1,\ldots,t-1\}$ and $j\in\{1,2,\ldots,n\}$, then \begin{center} $y_{in+j}=w_{i-i_{1}^{j}+1}\oplus{w_{i-i_{2}^{j}+1}}\oplus\ldots\oplus{w_{i-i_{q(j)}^{j}+1}}$, \end{center} where $\{i_{1}^{j},i_{2}^{j},\ldots,i_{q(j)}^{j}\}=\{z\in\{1,2,\ldots,m+1\} \mid P_{j}.z=1\}$, $i_{1}^{j}\leq{i_{2}^{j}}\leq\ldots\leq{i_{q(j)}^{j}}$, $\oplus$ denotes the modulo-2 addition, and \begin{displaymath} w_{i-l}= \left\{ \begin{array}{ll} x_{i-l+1} & \textrm{if $i-l+1\geq{1}$.}\\ 0 & \textrm{otherwise.} \end{array} \right. \end{displaymath} for all $l\in\{0,1,\ldots,m\}$. \begin{example} \label{exmp:conv} Let us consider a $(2,1,2)$ convolutional code with $P_{1}=(0,1,1),P_{2}=(1,0,1)$ being its generator polynomials. 
This convolutional code can be represented graphically as in the figure below. \begin{figure}[hbtp] \setlength{\unitlength}{1pt} \begin{picture}(400,70)(-40,0) \put(50,40){input} \put(75,42){\vector(1,0){40}} \put(115,32){\framebox(20,20){$m_{1}$}} \put(135,42){\vector(1,0){40}} \put(175,32){\framebox(20,20){$m_{2}$}} \put(195,42){\line(1,0){40}} \put(229.5,60.5){\Large{$\oplus$}} \put(229.5,16.5){\Large{$\oplus$}} \put(235,42){\vector(0,1){17.5}} \put(235,42){\vector(0,-1){17}} \put(155,42){\line(0,1){22}} \put(155,64){\vector(1,0){75.2}} \put(95,42){\line(0,-1){22}} \put(95,20){\vector(1,0){135.2}} \put(239,64){\vector(1,0){40}} \put(239,20){\vector(1,0){40}} \put(282,62){output 1} \put(282,18){output 2} \end{picture} \caption{A $(2,1,2)$ convolutional code.} \end{figure} \hspace{0pt} Let us now describe the encoding mechanism. Let $b$ be the current input bit being encoded, and let $b_{1}$ and $b_{2}$ be the current bits stored in the memory registers $m_{1}$ and $m_{2}$, respectively. Given that $P_{1}=(0,1,1)$, the first output bit is obtained by adding (modulo-2) $b_{1}$ and $b_{2}$. Similarly, given that $P_{2}=(1,0,1)$, the second output bit is obtained by adding (modulo-2) $b$ and $b_{2}$. After both output bits have been obtained, $b$ and $b_{1}$ become the new values stored in the memory registers $m_{1}$ and $m_{2}$, respectively. For example, if $x=0101$ is an input bitstring, one can verify that the output is $00011010$ (for each input bit, the output is obtained by concatenating the two output bits). \end{example} \begin{theorem} \label{thm:conv} Any $(n,1,m)$ convolutional code with $P_{1},P_{2},\ldots,P_{n}$ being its generator polynomials, and satisfying the condition \begin{center} $\{z\in\{1,2,\ldots,n\} \mid P_{z}.1=1\}\neq\emptyset$, \end{center} is an adaptive code of order $m$. \end{theorem} \begin{proof} Let $c:\{0,1\}\times\{0,1\}^{\leq{m}}\rightarrow\{0,1\}^{n}$ be a function. Consider an $(n,1,m)$ convolutional code with $P_{1},P_{2},\ldots,P_{n}$ being its generator polynomials.
Also, let us consider that $c$ is given by: \begin{center} $c(x,x_{1}x_{2}\ldots{x_{p}})=P_{1}[xx_{p}x_{p-1}\ldots{x_{1}}z_{p}^{m}]P_{2}[xx_{p}x_{p-1}\ldots{x_{1}}z_{p}^{m}]\ldots$ $P_{n}[xx_{p}x_{p-1}\ldots{x_{1}}z_{p}^{m}]$, \end{center} for all $x\in\{0,1\}$ and $x_{1}x_{2}\ldots{x_{p}}\in\{0,1\}^{\leq{m}}$, where \begin{itemize} \item $z_{p}^{m}=\underbrace{00\ldots{0}}_{m-p}$, \item and $P_{j}[b_{1}b_{2}\ldots{b_{m+1}}]=b_{i_{1}^{j}}\oplus{b_{i_{2}^{j}}}\oplus\ldots\oplus{b_{i_{q(j)}^{j}}}$, with $\{i_{1}^{j},i_{2}^{j},\ldots,i_{q(j)}^{j}\}=\{z \mid P_{j}.z=1\}$ and $i_{1}^{j}\leq{i_{2}^{j}}\leq\ldots\leq{i_{q(j)}^{j}}$. \end{itemize} Let $b_{1}b_{2}\ldots{b_{q}}\in\{0,1\}^{\leq{m}}$. By hypothesis, there exists $j\in\{1,2,\ldots,n\}$ such that $P_{j}.1=1$. Since both codewords have the same length $n$ and, by the choice of $j$, differ in their $j$-th bit, it follows that \begin{center} $\{c(0,b_{1}b_{2}\ldots{b_{q}}),c(1,b_{1}b_{2}\ldots{b_{q}})\}$ \end{center} is a prefix code. Thus, we have obtained that $C_{u}$ (as defined in section 2) is a prefix code, for all $u\in\{0,1\}^{\leq{m}}$. According to Theorem \ref{thm:1}, $c$ is an adaptive code of order $m$. \end{proof} \section{A cryptographic scheme based on convolutional codes} The results presented in the previous section lead to an efficient data encryption scheme. Specifically, every $(1,1,m)$ convolutional code satisfying the condition in Theorem \ref{thm:conv} can be used for data encryption (and decryption), without any additional information. Let us consider a $(1,1,m)$ convolutional code with $P$ being its generator polynomial. If $P.1=0$ (that is, the condition in Theorem \ref{thm:conv} is not satisfied), then the output bits depend {\it only} on the bits stored in the memory registers. For example, let $b$ be the current input bit, and $b_{1},b_{2},\ldots,b_{m}$ the bits stored in the memory registers before encoding the bit $b$. The output bit $b_{{\it out}}$ depends, in this case, only on the bits $b_{1},b_{2},\ldots,b_{m}$.
This makes the decryption procedure impossible (without any additional information), since the output cannot be uniquely decoded. Thus, we consider only $(1,1,m)$ convolutional codes that satisfy the condition given in Theorem \ref{thm:conv}. Also, we consider that any $(1,1,m)$ convolutional code is completely specified by \begin{itemize} \item $P$, its generator polynomial, \item and a binary $m$-tuple $Q$, where $Q.i$ denotes the bit stored initially in the memory register $m_{i}$, for all $i\in\{1,2,\ldots,m\}$. \end{itemize} {\bf Public and Private Keys.} Let us denote by ${\it Public}$ the set of public keys, and by ${\it Private}$ the set of private keys. There are three parameters in our cryptographic scheme: $m$, $P$, and $Q$. Note that by making $P$ and/or $Q$ available to any user, the parameter $m$ is implicitly made available as well (since $P$ consists of $m+1$ elements, and $Q$ has $m$ elements). Thus, if $P$ and $Q$ are both public keys, then the information can be correctly decoded. Except for the case when both $P$ and $Q$ are public keys, all other cases lead to a powerful cryptographic scheme. The parameters $P$ and $Q$ should not normally be among the public keys, since both $P$ and $Q$ give partial information about the encryption/decryption procedures. Thus, we consider that in practice only the parameter $m$ should be included among the public keys. Keeping all three parameters as private keys increases the security level as well (by a constant factor).
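The attack complexities discussed below can be double-checked numerically; the sketch assumes, as in the analysis that follows, that $P$ contributes $2^{m}$ candidates (its first bit being fixed to $1$) and $Q$ another $2^{m}$:

```python
def attempts_all_private(m):
    """Upper bound on decoding attempts for a naive attack when P, Q,
    and m are all private: try m' = 1, 2, ..., m, and for each m'
    all 2^{m'} candidates for P times 2^{m'} candidates for Q."""
    return sum((2 ** k) * (2 ** k) for k in range(1, m + 1))

# agrees with the closed form (4^{m+1} - 4) / 3 derived below
assert attempts_all_private(100) == (4 ** 101 - 4) // 3
```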
\begin{table}[htbp] \caption{Possible ways of partitioning the keys.} \begin{center} \begin{tabular}{|c|c|c|c|} \hline ${\it Public}$ & ${\it Private}$ & Security level & Complexity \\ \hline $\emptyset$ & $\{P,Q,m\}$ & High & $\mathcal{O}(4^{m})$ \\ $\{m\}$ & $\{P,Q\}$ & High & $\mathcal{O}(4^{m})$ \\ $\{P\}$ & $\{Q\}$ & High & $\mathcal{O}(2^{m})$ \\ $\{Q\}$ & $\{P\}$ & High & $\mathcal{O}(2^{m})$ \\ \hline \end{tabular} \end{center} \end{table} \hspace{0pt} \newline {\bf Security and Complexity.} There are four ways of partitioning the keys, as shown in the table above. Note that if $P$ or $Q$ is a public key, then it does not make sense to include $m$ as a public or private key, since if $P$ or $Q$ is made available then $m$ is implicitly a public key. Let us discuss each case separately. \begin{description} \item[${\it Public}=\emptyset$ and ${\it Private}=\{P,Q,m\}$.] In this case, an unauthorized user has no information about the encryption/decryption procedure. A possible attack cannot be more efficient than a naive search, starting with $m=1$ and trying all possible cases for $P$ and $Q$. Since there are $2^{m}$ possible binary $m$-tuples, we can conclude that the total number of decoding attempts is at most \begin{displaymath} 2^{1}\cdot{2^{1}}+2^{2}\cdot{2^{2}}+\ldots{+}2^{m}\cdot{2^{m}}=\frac{4^{m+1}-4}{3}=\frac{2^{2m+2}-4}{3}. \end{displaymath} For example, if $m=100$, then the total number of decoding attempts is at most \begin{displaymath} \frac{2^{202}-4}{3}\approx{2.1\cdot{10^{60}}}. \end{displaymath} Clearly, the scheme is highly secure in this case. \item[${\it Public}=\{m\}$ and ${\it Private}=\{P,Q\}$.] Even if $m$ is a public key, the security of our scheme is not affected at all. A possible attack must try all possible cases for $P$ and $Q$ (in the worst case). Thus, the total number of decoding attempts is at most $2^{m}\cdot{2^{m}}=2^{2m}$.
For $m=100$, the total number of decoding attempts is at most $2^{200}\approx{1.6\cdot{10^{60}}}$. \item[${\it Public}=\{P\}$ and ${\it Private}=\{Q\}$.] Since only $Q$ is a private key in this case, we can conclude that the total number of decoding attempts is at most $2^{m}$. For $m=100$, $2^{100}\approx{1.2\cdot{10^{30}}}$. \item[${\it Public}=\{Q\}$ and ${\it Private}=\{P\}$.] The total number of decoding attempts is at most $2^{m}$, since only $P$ is a private key in this case. \end{description} {\bf Encryption and Decryption.} A detailed description of the encryption algorithm is provided below. Note that $m$ is a positive integer, $P$ is a binary $(m+1)$-tuple that satisfies the condition in Theorem \ref{thm:conv}, and $Q$ is a binary $m$-tuple. \begin{figure}[ht] \begin{center} {\footnotesize \fbox{ \begin{minipage}{300pt} \begin{tabbing} \hspace*{5mm}\=\hspace{5mm}\=\hspace{5mm}\=\hspace{5mm}\=\hspace{5mm}\=\hspace{5mm}\= \kill {\tt Input:} $m$, $P$, $Q$, and $x=x_{1}x_{2}\ldots{x_{t}}\in\{0,1\}^{+}$\\ {\tt Output:} $y\in\{0,1\}^{+}$\\ \rule[3pt]{1.0\textwidth}{0.3pt}\\ $S\leftarrow\emptyset$; $y\leftarrow\lambda$\\ {\tt For} $i\leftarrow{2}$ to $m+1$ {\tt do}\\ \>\>{\tt If} $P.i=1$ {\tt then}\\ \>\>\> $S\leftarrow{S\cup\{i-1\}}$\\ \>\>{\tt Endif}\\ {\tt Endfor}\\ {\tt For} $i\leftarrow{1}$ to $t$ {\tt do}\\ \>\> $z\leftarrow{x_{i}}$\\ \>\>{\tt For each} $j\in{S}$ {\tt do}\\ \>\>\> $z\leftarrow{z\oplus{Q.j}}$\\ \>\>{\tt Endfor}\\ \>\> $y\leftarrow{y\cdot{z}}$; $Q\leftarrow{Q\vartriangleright{m}}$; $Q\leftarrow{(x_{i})\vartriangle{Q}}$\\ {\tt Endfor} \end{tabbing} \end{minipage} } } \end{center} \caption{Convolutional encryption.} \end{figure} \hspace{0pt} \newline As mentioned in the beginning of this section, the decryption algorithm is based on the equality $P.1=1$. 
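For illustration, both the encryption procedure above and the decryption procedure described next can be condensed into a few lines of Python; the sketch below represents $P$ and $Q$ as 0-indexed lists of bits and checks that decryption inverts encryption when $P.1=1$:

```python
def conv_encrypt(P, Q, x):
    """(1,1,m) convolutional encryption; requires P[0] == 1.
    P is the generator polynomial ((m+1) bits), Q the initial
    register contents (m bits), x the list of plaintext bits."""
    Q = list(Q)
    taps = [j for j in range(1, len(P)) if P[j] == 1]  # the set S
    y = []
    for b in x:
        z = b  # P[0] == 1: the current input bit is always included
        for j in taps:
            z ^= Q[j - 1]
        y.append(z)
        Q = [b] + Q[:-1]  # shift the registers, newest bit first
    return y

def conv_decrypt(P, Q, y):
    """Inverse of conv_encrypt: x_i = y_i XOR (parity of the taps)."""
    Q = list(Q)
    taps = [j for j in range(1, len(P)) if P[j] == 1]
    x = []
    for b in y:
        z = 0
        for j in taps:
            z ^= Q[j - 1]
        x.append(b ^ z)
        Q = [x[-1]] + Q[:-1]
    return x
```

With $P=(1,0,1)$ and $Q=(0,1)$, the bitstring $001$ is encrypted to $101$, matching the example at the end of this section.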
Let $y_{i}$ be the current bit being decoded, $Q$ the content of the memory registers before decoding $y_{i}$, $S=\{i_{1},i_{2},\ldots,i_{j}\}$ the set of indices of those memory registers that contribute to the output bit, and $z=Q.i_{1}\oplus{Q.i_{2}}\oplus\ldots\oplus{Q.i_{j}}$. If $y_{i}=0$, we can conclude that $x_{i}=z$ (since $x_{i}\oplus{z}=0$). Otherwise, if $y_{i}=1$, it follows that $x_{i}=\overline{z}$, where $\overline{z}$ denotes the complement of $z$. A complete description of the algorithm is given below. \begin{figure}[ht] \begin{center} {\footnotesize \fbox{ \begin{minipage}{300pt} \begin{tabbing} \hspace*{5mm}\=\hspace{5mm}\=\hspace{5mm}\=\hspace{5mm}\=\hspace{5mm}\=\hspace{5mm}\= \kill {\tt Input:} $y=y_{1}y_{2}\ldots{y_{t}}\in\{0,1\}^{+}$, $m$, $P$, and $Q$\\ {\tt Output:} $x\in\{0,1\}^{+}$\\ \rule[3pt]{1.0\textwidth}{0.3pt}\\ $S\leftarrow\emptyset$; $x\leftarrow\lambda$\\ {\tt For} $i\leftarrow{2}$ to $m+1$ {\tt do}\\ \>\>{\tt If} $P.i=1$ {\tt then}\\ \>\>\> $S\leftarrow{S\cup\{i-1\}}$\\ \>\>{\tt Endif}\\ {\tt Endfor}\\ {\tt For} $i\leftarrow{1}$ to $t$ {\tt do}\\ \>\> $z\leftarrow{0}$\\ \>\>{\tt For each} $j\in{S}$ {\tt do}\\ \>\>\> $z\leftarrow{z\oplus{Q.j}}$\\ \>\>{\tt Endfor}\\ \>\>{\tt If} $z\neq{y_{i}}$ {\tt then}\\ \>\>\> $z\leftarrow{1}$\\ \>\>{\tt Else}\\ \>\>\> $z\leftarrow{0}$\\ \>\>{\tt Endif}\\ \>\> $x\leftarrow{x\cdot{z}}$; $Q\leftarrow{Q\vartriangleright{m}}$; $Q\leftarrow{(z)\vartriangle{Q}}$\\ {\tt Endfor} \end{tabbing} \end{minipage} } } \end{center} \caption{Convolutional decryption.} \end{figure} \begin{example} Consider a $(1,1,2)$ convolutional code with $P=(1,0,1)$ and $Q=(0,1)$. Graphically, this code is represented as in the figure below.
\begin{figure}[hbtp] \setlength{\unitlength}{1pt} \begin{picture}(400,60)(-40,20) \put(50,40){input} \put(75,42){\vector(1,0){40}} \put(115,32){\framebox(20,20){$m_{1}$}} \put(135,42){\vector(1,0){40}} \put(175,32){\framebox(20,20){$m_{2}$}} \put(195,42){\vector(1,0){40.2}} \put(234.5,38.5){\Large{$\oplus$}} \put(95,42){\line(0,1){22}} \put(95,64){\line(1,0){145.2}} \put(240.2,64){\vector(0,-1){17}} \put(245,42){\vector(1,0){40}} \put(287,40){output} \end{picture} \caption{An $(1,1,2)$ convolutional code.} \end{figure} \hspace{0pt} Initially, the memory register $m_{1}$ stores the bit $0$ ($=Q.1$), and the memory register $m_{2}$ stores the bit $1$ ($=Q.2$). Let $x=001\in\{0,1\}^{+}$ be an input bitstring. Using the encryption algorithm, we encode $x$ by $y=101$. Given that $P.1=1$, we can use the convolutional decryption algorithm to decode $y$ into $x$ (using the private keys $m$, $P$, and $Q$). \end{example} \section{$(p,q)$-adaptive codes} In order to have more flexibility when developing applications based on adaptive codes, we introduce a natural generalization of adaptive codes, called $(p,q)$-adaptive codes. For example, extending the algorithms presented in \cite{t:2} to $(p,q)$-adaptive codes is expected to give better results. Let us give a formal definition. \begin{definition} \label{def:pqac} Let $\Sigma$ and $\Delta$ be alphabets. 
A function $c:\Sigma^{q}\times\Sigma^{\leq{p}}\rightarrow\Delta^{+}$ is called a $(p,q)$-\emph{adaptive code} if its unique homomorphic extension $\overline{c}:\Sigma^{*}\rightarrow\Delta^{*}$, given by: \begin{itemize} \item $\overline{c}(\lambda)=\lambda$, \item $\overline{c}(\sstring{m})=$ $c(\sigma_{1}\ldots{\sigma_{q}},\lambda)$ $c(\sigma_{2}\ldots{\sigma_{q+1}},\sigma_{1})$ $\ldots$ $c(\sigma_{p+1}\ldots{\sigma_{p+q}},\sigma_{1}\ldots{\sigma_{p}})$ \newline $c(\sigma_{p+2}\ldots{\sigma_{p+q+1}},\sigma_{2}\ldots\sigma_{p+1})$ $\ldots$ $c(\sigma_{m-q+1}\ldots{\sigma_{m}},\sigma_{m-q-p+1}\ldots\sigma_{m-q})$ \end{itemize} for all strings $\sstring{m}\in\Sigma^{+}$, is injective. \end{definition} Developing applications based on $(p,q)$-adaptive codes is not a subject of this paper. The concept is presented here just to show how much flexibility we get when using various generalizations of adaptive codes. Let us give an example. \begin{example} Let $\Sigma=\{\ttup{a},\ttup{b}\}$, $\Delta=\{0,1\}$ be two alphabets, and $c:\Sigma^{2}\times\Sigma^{\leq{1}}\rightarrow\Delta^{+}$ a function given as in the table below. One can verify that $\overline{c}$ is injective, and according to Definition \ref{def:pqac}, $c$ is a $(1,2)$-adaptive code. \begin{table}[htbp] \caption{A $(1,2)$-adaptive code.} \begin{center} \begin{tabular}{|c|c|c|c|} \hline $\Sigma^{2}\backslash\Sigma^{\leq{1}}$ & $\ttup{a}$ & $\ttup{b}$ & $\lambda$ \\ \hline $\ttup{aa}$ & 0 & 11 & 00 \\ \hline $\ttup{ab}$ & 10 & 101 & 11 \\ \hline $\ttup{ba}$ & 111 & 01 & 10 \\ \hline $\ttup{bb}$ & 110 & 00 & 01 \\ \hline \end{tabular} \end{center} \end{table} \hspace{0pt} \newline Let $x=\ttup{ababa}\in\Sigma^{+}$ be an input data string. Using the definition above, we encode $x$ by \begin{center} $\overline{c}(x)=c(\ttup{ab},\lambda)c(\ttup{ba},\ttup{a})c(\ttup{ab},\ttup{b})c(\ttup{ba},\ttup{a})=11111101111$.
\end{center} \end{example} \section{Adaptive codes and time-varying codes} Time-varying codes have been recently introduced in \cite{tme:1} as a proper extension of L-codes \cite{m:s:w}. Intuitively, a time-varying code associates a codeword to the symbol being encoded depending on its position in the input data string. The connection to gsm-codes and SE-codes has also been discussed in \cite{tme:1}. Several characterization results for time-varying codes can be found in \cite{tmte:1}. Let us now give a formal definition. \begin{definition} \label{def:tvc} Let $\Sigma$ and $\Delta$ be two alphabets. A function $c:\Sigma\times\mathbb{N}^{*}\rightarrow\Delta^{+}$ is called a \emph{time-varying code} if its unique homomorphic extension $\overline{c}:\Sigma^{*}\rightarrow\Delta^{*}$, given by: \begin{itemize} \item $\overline{c}(\lambda)=\lambda$, \item $\overline{c}(\sstring{m})=c(\sigma_{1},1)c(\sigma_{2},2)\ldots{c(\sigma_{m},m)}$ \end{itemize} for all strings $\sstring{m}\in\Sigma^{+}$, is injective. \end{definition} {\bf Motivation.} This section is intended to introduce a new class of variable-length codes, called {\it adaptive time-varying codes}. Combining adaptive codes with time-varying codes can be useful when the input string consists of substrings with different characteristics. Let $x=u_{1}u_{2}\ldots{u_{t}}\in\Sigma^{+}$ be an input string, where $u_{1},u_{2},\ldots,u_{t}$ are substrings with different characteristics. Instead of associating an adaptive code to $x$, it is desirable to associate an adaptive code to each substring $u_{i}$. This technique can certainly be exploited further in data compression to improve the results. Combining adaptive codes with time-varying codes leads to the following encoding mechanism: the codeword associated to the current symbol being encoded depends not only on the previous symbols in the input string, but also on the position of the current symbol in the input string. A formal definition is given below.
\begin{definition} \label{def:tvac} Let $\Sigma$ and $\Delta$ be alphabets. A function $c:\Sigma\times\Sigma^{\leq{n}}\times\mathbb{N}^{*}\rightarrow\Delta^{+}$ is called an \emph{adaptive time-varying code of order $n$} if its unique homomorphic extension $\overline{c}:\Sigma^{*}\rightarrow\Delta^{*}$, given by: \begin{itemize} \item $\overline{c}(\lambda)=\lambda$, \item $\overline{c}(\sstring{m})=$ $c(\sigma_{1},\lambda,1)$ $c(\sigma_{2},\sigma_{1},2)$ $\ldots$ $c(\sigma_{n-1},\sstring{n-2},n-1)$ \newline $c(\sigma_{n},\sstring{n-1},n)$ $c(\sigma_{n+1},\sstring{n},n+1)$ $c(\sigma_{n+2},\sigma_{2}\sigma_{3}\ldots\sigma_{n+1},n+2)$ \newline $c(\sigma_{n+3},\sigma_{3}\sigma_{4}\ldots\sigma_{n+2},n+3)\ldots$ $c(\sigma_{m},\sigma_{m-n}\sigma_{m-n+1}\ldots\sigma_{m-1},m)$ \end{itemize} for all strings $\sstring{m}\in\Sigma^{+}$, is injective. \end{definition} \begin{example} Let $\Sigma=\{\ttup{a},\ttup{b}\}$, $\Delta=\{0,1\}$ be two alphabets, and let $c:\Sigma\times\Sigma^{\leq{2}}\times\mathbb{N}^{*}\rightarrow\Delta^{+}$ be a function given by: \begin{displaymath} c(\sigma,u,i)= \left\{ \begin{array}{ll} {\it zero[i]} & \textrm{if $\sigma=\ttup{a}$.}\\ {\it one[i]} & \textrm{if $\sigma=\ttup{b}$.} \end{array} \right. \end{displaymath} for all $(\sigma,u,i)\in\Sigma\times\Sigma^{\leq{2}}\times\mathbb{N}^{*}$, where \begin{itemize} \item ${\it zero[i]}=\underbrace{00\ldots{0}}_{i}$, \item and ${\it one[i]}=\underbrace{11\ldots{1}}_{i}$. \end{itemize} One can verify that $\overline{c}$ is injective, and according to Definition \ref{def:tvac}, $c$ is an adaptive time-varying code of order two. For example, the string $x=\ttup{abaa}$ is encoded by \begin{displaymath} \overline{c}(x)=c(\ttup{a},\lambda,1)c(\ttup{b},\ttup{a},2)c(\ttup{a},\ttup{ab},3)c(\ttup{a},\ttup{ba},4)=0110000000.
\end{displaymath} \end{example} \section{Conclusions and further work} Adaptive codes associate variable-length codewords to symbols being encoded depending on the previous symbols in the input data string. This class of codes has been presented in \cite{t:2} as a new class of non-standard variable-length codes. Generalized adaptive codes (GA codes, for short) have been also presented in \cite{t:2}, not only as a new class of non-standard variable-length codes, but also as a natural generalization of adaptive codes of any order. In this paper, we contributed the following results. First, we proved that adaptive Huffman encodings and Lempel-Ziv encodings are particular cases of encodings by GA codes (sections 3 and 4). In section 5, we proved that any $(n,1,m)$ convolutional code satisfying a certain condition can be modelled as an adaptive code of order $m$. This result was exploited further in section 6, where an efficient cryptographic scheme based on convolutional codes is described. An insightful analysis of this cryptographic scheme was provided in the same section. In sections 7 and 8, we extended adaptive codes to $(p,q)$-adaptive codes, and presented a new class of variable-length codes, called adaptive time-varying codes. Further work in this area is intended to establish new interesting connections between adaptive codes and other classes of codes, along with showing their effectiveness in concrete applications. Future directions related to adaptive codes also include the data compression algorithms recently presented in \cite{t:2}. For example, combining the extensions described in sections 7 and 8 with the algorithms presented in \cite{t:2} may lead to better results. \bibliographystyle{fundam} \begin{small}
\section{Introduction} The goal of reinforcement learning (RL) is to learn an optimal behavior within an unknown dynamic environment, usually modeled as a Markov decision process (MDP), through trial and error \cite{sutton1998reinforcement}. Over the past years, deep RL (DRL) has achieved great successes. It has been shown in practice to master various complex problems \cite{mnih_playing_2013,mnih_human-level_2015}. To a large extent, these successes can be credited to the incorporation of experience replay and target networks, which stabilize the network training \cite{mnih2016asynchronous,mnih_playing_2013,mnih_human-level_2015,schaul_prioritized_2015,van2016deep,wang2015dueling}. Approaches like \cite{Bloembergen2011ETS,matignon2007,matignon2012independent,panait2006lenient,wei2016lenient} have been proposed, extending Q-learning to address the coordination problems in cooperative multiagent systems. They are able to achieve coordination in relatively simple cooperative multiagent systems. However, none of them has been combined with deep learning techniques. Recently, increasingly wide attention has been drawn to employing DRL in multiagent environments. Unfortunately, these multiagent DRL algorithms still suffer from two intrinsic difficulties in interactive environments \cite{gupta2017cooperative,lanctot2017unified,matignon2012independent}: stochasticity due to the noisy reward signals, and non-stationarity due to the dynamicity of coexisting agents. The stochasticity introduces additional biases in estimation, while the non-stationarity harms the effectiveness of experience replay, which is crucial for stabilizing deep Q-networks. These two characteristics result in the lack of theoretical convergence guarantees of most multiagent DRL algorithms and amplify the difficulty of finding the optimal Nash equilibria, especially in cooperative multiagent problems.
This work focuses on learning algorithms of independent learners (ILs) in cooperative multiagent systems. Here, we assume that agents are unable to observe other agents' actions and rewards \cite{claus1998dynamics}; they share a common reward function and learn to maximize the common expected discounted reward (a.k.a. return). To handle the stochastic and non-stationary challenges in multiagent systems, we propose the weighted double deep Q-network (WDDQN) with two auxiliary mechanisms, the lenient reward network and the scheduled replay strategy, to help ILs in finding the optimal policy that maximizes the common return. Our contributions are three-fold. First, we extend weighted double Q-learning (WDQ) \cite{zhangweighted}, a state-of-the-art traditional RL method, to the multiagent DRL settings. Second, we introduce a lenient reward network inspired by lenient Q-learning \cite{palmer2017lenient,panait2006lenient}. Third, we modify the existing prioritized experience replay strategy to stabilize and speed up the learning process in complex multiagent problems with raw visual inputs. Empirical results demonstrate that, on a fully cooperative multiagent problem, WDDQN with the new mechanisms indeed contributes to speeding up the algorithm's convergence, decreasing instability, and helping ILs to find an optimal policy simultaneously. \section{Preliminaries} This section briefly introduces the definition of cooperative Markov games, Q-learning, and its variants. \subsection{Cooperative Markov Game} Markov (stochastic) games, as an extension of repeated games and MDPs, provide a commonly used framework for modeling interactions among agents. They can be formalized as a tuple $\langle N,S,\mathbf{A},Tr,R_1,\ldots,R_N,\gamma\rangle$. Here, $N$ is the number of players (or agents), $S$ is the set of states, $\mathbf{A}=A_1 \times \ldots
\times A_N$ is the joint action set, where $A_i$ is the action space of player $i$, $Tr$ is the transition function $S\times \mathbf{A}\times S\rightarrow [0,1]$ such that $\forall s \in S, \forall a \in \mathbf{A}, \sum_{s^\prime \in S}Tr(s,\mathit{a}, s^\prime) = 1$, $R_i$ is the reward function $S\times \mathbf{A}\rightarrow \mathbb{R}$ for player $i$, and $\gamma \in \left[0, 1\right] $ is a discount factor. The state $s$ is assumed to be observable for all players. A fully cooperative Markov game is a specific type of Markov games where all agents receive the same reward under the same outcome, and thus share the same best-interest action. \subsection{Q-learning and Its Variants} \subsubsection{Q-learning} is based on the core idea of temporal difference (TD) learning \cite{sutton1988learning} and is well suited for solving sequential decision-making problems \cite{claus1998dynamics,watkins1989learning}. Q-learning tries to find an accurate estimator of the Q-values, i.e. $Q_t(s,a)$, for state-action pairs \cite{claus1998dynamics}. Each Q-value is an estimate of the discounted sum of future rewards that can be obtained at time $t$ through selecting action $a$ in state $s$. The iterative update formula is outlined in Equation \ref{eq:q-learning}: \begin{equation}\label{eq:q-learning} Q(s,a) \leftarrow Q(s,a) + \alpha [r + \gamma \max_{a^\prime}Q(s^\prime, a^\prime) - Q(s,a)], \end{equation} where $ r $ is the immediate reward and $\alpha \in [0,1)$ is the learning rate. The update always chooses the action $a^\prime$ with the maximum Q-value in the successor state $s^\prime$ and uses the stored Q-value of this action as the bootstrap target. Once the process terminates, an optimal policy can be obtained by selecting the action with the maximum Q-value in each state \cite{bellman1958dynamic}. However, Q-learning uses a single estimator to estimate $E\{\max_{a^\prime}Q(s^\prime, a^\prime)\}$, which has been proved to be greater than or equal to $\max_{a^\prime}E\{Q(s^\prime, a^\prime)\}$ \cite{smith2006optimizer}.
Thus, a positive bias always exists in the single estimator. \subsubsection{Deep Q-Network (DQN)} extends Q-learning with a neural network to solve complex problems with extensive state spaces. It uses an online neural network parameterized by $\theta$ to approximate the vector of action values $Q(s,\cdot;\theta)$ for each state $s$, and a target network parameterized by $\theta'$, which is periodically copied from $\theta$ to reduce oscillation during training. The neural network is optimized by minimizing the difference between the predicted value $Q(s_t, a_t; \theta_{t})$ and the target value $Y^{Q}_{t} = r_{t+1} + \gamma\max_a Q(s_{t+1}, a; \theta_t^\prime)$, using experienced samples $(s_t, a_t, r_{t+1}, s_{t+1})$ drawn from a replay memory. To minimize this difference, the parameters of the network are updated in the direction of the target value $Y_t^Q$ using the following formula: \begin{equation} \label{eq:gradient-update} \theta_{t+1} = \theta_{t} + \alpha \mathbb{E}[(Y_t^Q - Q(s_t, a_t; \theta_{t}))\nabla_{\theta_{t}}Q(s_t, a_t; \theta_{t})], \end{equation} where $ \nabla_{\theta_{t}}Q(s_t, a_t; \theta_{t}) $ is the gradient. Both the replay memory and the target network help DQN to stabilize learning and can dramatically improve its performance. However, like tabular Q-learning, using the single maximum estimator is prone to overestimation, leading to poor performance in many situations. \subsubsection{Double Q-learning} uses the double estimator to ease the overestimation. The double estimator selects the action with the maximum Q-value and evaluates the Q-values of different actions separately in turn \cite{hasselt2010double}.
The double Q-learning algorithm stores two Q-values, denoted $ Q^U $ and $ Q^V $, and replaces the estimated value $\max_{a^\prime}Q(s^\prime, a^\prime)$ in Equation \ref{eq:q-learning} with the combination $Q^U(s^\prime, \arg\max_{a^\prime}Q^V(s^\prime, a^\prime))$. Unfortunately, Hasselt \cite{hasselt2010double} proved that though the double estimator overcomes the overestimation issue, a negative bias is introduced at the same time, which may harm the resulting algorithm's performance and effectiveness. \subsubsection{Double DQN} incorporates the idea of double Q-learning into DQN to avoid the overestimation \cite{van2016deep}. It uses two Q-networks $Q(s,a;\theta)$ and $Q(s,a;\theta^\prime)$: one for selecting actions and the other for estimating the target Q-value. At each step the Q-network $Q(s,a;\theta)$ is updated using the following target value: \begin{equation}\label{eq5} Y_t^{Q} \equiv R_{t+1} + \gamma Q(s_{t+1}, \arg\max_a Q(s_{t+1}, a; \theta_t); \theta_t^\prime). \end{equation} By leveraging the above two Q-networks to select and evaluate the Q-values symmetrically in turn, this algorithm takes advantage of the double estimator to reduce the overestimation of Q-values and leads to better performance in a variety of complex RL scenarios. \subsubsection{Weighted Double Q-learning (WDQ)} uses a dynamic heuristic value $ \beta $, which depends on a constant $ c $, to balance between the overestimation of the single estimator and the underestimation of the double estimator during the iterative Q-value update process: \begin{equation}\label{eq10} Q(s,a)^{U,WDQ} = \beta Q^U(s,a^*) + (1-\beta)Q^V(s,a^*), \end{equation} where a linear combination of $Q^U$ and $Q^V$ is used for updating the Q-value. When $a^*$ is chosen by $Q^U$, i.e., $a^* \in \arg\max_aQ^U(s,a)$, $Q^U(s,a^*)$ will be positively biased and $Q^V(s,a^*)$ will be negatively biased, and vice versa. $\beta \in \left[ 0,1\right]$ balances between the positive and negative biases.
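As a concrete illustration, the weighted estimate in Equation \ref{eq10} can be sketched in tabular form as follows (a sketch with our own naming; $\beta$ is taken here as a given constant rather than the dynamic heuristic depending on $c$):

```python
def wdq_value(q_u, q_v, s, beta):
    """Weighted double estimate for state s when updating Q^U:
    a* is selected by Q^U, and its value is a beta-weighted mix of
    Q^U (positively biased) and Q^V (negatively biased). The
    symmetric update of Q^V swaps the roles of the two tables."""
    a_star = max(q_u[s], key=q_u[s].get)
    return beta * q_u[s][a_star] + (1 - beta) * q_v[s][a_star]

# Toy example: Q^U prefers action 'b'; the mix tempers its estimate.
q_u = {'s1': {'a': 1.0, 'b': 2.0}}
q_v = {'s1': {'a': 0.5, 'b': 1.0}}
print(wdq_value(q_u, q_v, 's1', beta=0.5))  # 0.5*2.0 + 0.5*1.0 = 1.5
```

Setting $\beta = 1$ recovers the single estimator and $\beta = 0$ the double estimator, which is exactly the interpolation the heuristic exploits.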
Experiments on tabular MDP problems show that more accurate value estimation can indeed boost Q-learning's performance. However, it is still not clear whether this idea can be extended to the end-to-end DRL framework to handle high-dimensional problems. \subsubsection{Lenient Q-learning} \cite{potter1994cooperative} updates the policies of multiple agents towards an optimal joint policy simultaneously by letting each agent adopt an optimistic disposition in the initial exploration phase. This has been empirically shown to be efficient at increasing the likelihood of discovering the optimal joint policy in stochastic environments and at preventing agents from gravitating towards a sub-optimal joint policy \cite{bloembergen2015evolutionary,palmer2017lenient,panait2006lenient,wei2016lenient}. During training, lenient agents keep track of the temperature $T_t(s,a)$ for each state-action pair $(s,a)$ at time $ t $, which is initially set to a defined maximum temperature value and used for measuring the leniency $ l(s,a) $ as follows: \begin{equation} l(s_t, a_t) = 1 - e^{-K * T_t(s_t, a_t)}, \end{equation} where $ K $ is a constant determining how the temperature affects the decay in leniency. As suggested by \cite{wei2016lenient}, $T_t(s_t, a_t)$ is decayed using a discount factor $\kappa \in [0, 1]$, i.e., $T_{t+1}(s_t, a_t)=\kappa T_t(s_t, a_t)$. Given the TD error $\delta = Y_t^Q - Q_t(s_t, a_t;\theta_t)$, the iterative update formula of lenient Q-learning is defined as follows: \begin{equation} \label{eq:lenient-q} Q(s_t, a_t) = \left\{ {\begin{array}{*{20}{l}} Q(s_t, a_t) + \alpha\delta & {\text{ if } \delta > 0 \text{ or } x > l(s_t, a_t), }\\ Q(s_t, a_t) &{\text{ otherwise.}} \end{array}} \right. \end{equation} The random variable $x \sim U(0,1)$ ensures that a negative update $\delta$ is performed with probability $1-l(s_t, a_t)$.
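A minimal tabular sketch of the lenient update in Equation \ref{eq:lenient-q} (illustrative; function names and defaults are ours) is:

```python
import math
import random

def leniency(temperature, k=2.0):
    # l(s,a) = 1 - exp(-K * T(s,a)): high temperature -> high leniency.
    return 1.0 - math.exp(-k * temperature)

def lenient_update(q, s, a, delta, temperature, alpha=0.1, k=2.0):
    """Positive TD errors are always applied; negative ones are
    ignored with probability l(s,a), i.e. applied only when the
    draw x ~ U(0,1) exceeds the current leniency."""
    x = random.random()
    if delta > 0 or x > leniency(temperature, k):
        q[(s, a)] = q.get((s, a), 0.0) + alpha * delta
    return q

q = lenient_update({}, 's0', 'a0', delta=1.0, temperature=1.0, alpha=0.5)
print(q[('s0', 'a0')])  # positive delta is always applied: 0.5
```

Early in training the temperature is high, so the leniency is close to 1 and negative updates are almost always ignored; as the temperature decays the agent gradually becomes average-reward seeking again.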
Because initial state-action pairs are visited more often than later ones, the temperature values for states close to the initial state can decay rapidly. One solution to address this is to fold the average temperature $\bar{T}(s^\prime) = \frac{1}{|A|} \sum_{a_i\in A}{T(s^\prime, a_i)}$ of the next state $s^\prime$ into the temperature being decayed for $(s_t, a_t)$ \cite{wei2016lenient}, as below: \begin{equation} \label{eq:lenient-temperature} T_{t+1}(s_t, a_t) = \kappa*\left\{ {\begin{array}{*{20}{l}} T_t(s_t, a_t) & {\text{\textup{ if} $s'$ \textup{is terminal,}}}\\ (1-\eta)*T_t(s_t, a_t)+\eta \bar{T}_t(s') &{\text{ otherwise,}} \end{array}} \right. \end{equation} where $\eta$ is a constant controlling the extent to which $\bar{T}(s^\prime)$ is folded in. We absorb this notion of forgiveness into our lenient reward network, explained later, to boost convergence in cooperative Markov games. \section{Weighted Double Deep Q-Networks} In this section, we introduce a new multiagent DRL algorithm, weighted double deep Q-networks (WDDQN), with two auxiliary mechanisms, i.e., the lenient reward approximation and the scheduled replay strategy, to achieve efficient coordination in stochastic multiagent environments. In these environments, rewards can be extremely stochastic due to the environment's inherent characteristics and the continuous change of the coexisting agents' behaviors. For the stochastic rewards caused by the environment, WDDQN uses the combination of the weighted double estimator and the reward approximator to reduce the estimation error. As for the non-stationary coexisting agents, we incorporate the leniency from lenient Q-learning \cite{palmer2017lenient,panait2006lenient} into the reward approximator to provide an optimistic estimation of the expected reward $r(s,a)$ under each state-action pair.
In addition, directly applying prioritized experience replay \cite{schaul_prioritized_2015} in multiagent DRL leads to poor performance, as stored transitions can become outdated because agents update their policies simultaneously. To address this, we propose a scheduled replay strategy that enhances the benefit of prioritization by dynamically adjusting the priority of each transition sample. In the remainder of this section, we describe these facets in detail. \subsection{Network Architecture} WDDQN, outlined in Algorithm \ref{alg1}, is adapted from WDQ by leveraging a neural network as the Q-value approximator to handle problems with high-dimensional state spaces. The overall network architecture of the algorithm is depicted in Fig. \ref{fig-reward-network}. To reduce the estimation bias, WDDQN uses the combination of two estimators, represented as deep Q-networks $Q^U$ and $Q^V$ with the same architecture, to select the action $a = \arg\max_{a'}\frac{Q^U(s,a') + Q^V(s,a')}{2}$ (line 5). Besides, the target $Q^{\tt Target}(s,a)$ (lines 12 and 17) used for Q-value updating in back-propagation is replaced with a weighted combination as well (lines 11 and 16). Intuitively, the combination balances between the overestimation and the underestimation. In addition, we also propose to use a reward approximator and an efficient scheduled replay strategy in WDDQN to achieve bias reduction and efficient coordination in multiagent stochastic environments.
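The action-selection step (line 5 of Algorithm \ref{alg1}) can be sketched in tabular form as follows (a sketch with dictionaries standing in for the deep Q-networks; names are ours):

```python
import random

def select_action(q_u, q_v, s, actions, eps=0.1, rng=random):
    """Epsilon-greedy over the average of the two estimators:
    a = argmax_a (Q^U(s,a) + Q^V(s,a)) / 2, with a random action
    taken with probability eps for exploration."""
    if rng.random() < eps:
        return rng.choice(actions)
    return max(actions, key=lambda a: (q_u[(s, a)] + q_v[(s, a)]) / 2.0)

q_u = {('s', 'left'): 1.0, ('s', 'right'): 0.0}
q_v = {('s', 'left'): 0.0, ('s', 'right'): 2.0}
# Averaged values: left -> 0.5, right -> 1.0, so greedy picks 'right'.
print(select_action(q_u, q_v, 's', ['left', 'right'], eps=0.0))
```

Averaging the two estimators for action selection keeps the policy consistent with the weighted targets used for the updates.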
\begin{figure}[h] \centering \includegraphics[width=.8\linewidth]{./reward_network.png} \caption{Network Architecture of WDDQN} \label{fig-reward-network} \end{figure} \begin{algorithm}[h] \caption{WDDQN} \label{alg1} \begin{algorithmic}[1] \State The maximum number of episodes: $Max_E$, the maximum number of steps: $Max_S$, global memory: $D^G$, episodic memory: $D^E$, reward network: $R^{N}$, deep Q-networks: $Q^U$ and $Q^V$ \For{episode = 1 to $Max_E$} \State Initialize $D^E$ \For{step = 1 to $Max_S$} \State $a \gets \arg\max_{a'}\frac{Q^U(s,a') + Q^V(s,a')}{2}$ (with $\varepsilon$-greedy) \State Execute $a$ and store transitions into $D^E$ \State Sample mini-batch $(s,a,r,s')$ of transitions from $D^G$ \State Update $Q^U$ or $Q^V$ randomly \If {update $Q^U$} \State $a^* \gets \arg\max_aQ^U(s',a)$ \State {$Q_U^w(s', a^*) \gets \beta Q^U(s',a^*) + (1-\beta) Q^V(s',a^*)$} \State $Q^{\tt Target}(s,a) \gets R^{N}(s,a) + Q_U^w(s', a^*)$ \State{Update network $Q^U$ towards $Q^{\tt Target}$} \Else \State $a^* \gets \arg\max_aQ^V(s',a)$ \State {$Q_V^w(s', a^*) \gets \beta Q^V(s',a^*) +(1-\beta) Q^U(s',a^*)$} \State $Q^{\tt Target}(s,a) \gets R^{N}(s,a) + Q_V^w(s', a^*)$ \State{Update network $Q^V$ towards $Q^{\tt Target}$} \EndIf \State{Update $R^{N}$ according to transitions in $D^G$} \EndFor \State{Store $D^E$ into $D^G$} \EndFor \end{algorithmic} \end{algorithm} \subsection{Lenient Reward Network} To reduce noise in stochastic rewards, we use a reward network, which is a neural network estimator, to approximate the reward function $R(s,a)$ explicitly. The reward network reduces the bias in the immediate reward $ r $ yielded by stochastic environments by averaging all rewards observed for each distinct $(s,a)$ pair, and it is trained using the transitions stored in the experience replay during online interaction.
When updating the network, instead of using the reward $r$ in the transition $(s,a,r,s^\prime)$ from experience memory, WDDQN uses the reward estimated by the reward network (lines 12 and 17). In addition to stochasticity, in a cooperative multiagent environment, the coexisting agents introduce additional bias to $ r $ as well. The mis-coordination of coexisting teammates may lower the reward $r$ for $(s, a^*)$ even though the agent has adopted the optimal action. To address this, we use a lenient reward network (LRN), enhanced with the lenient concept in \cite{potter1994cooperative}, to allow the reward network to be optimistic during the initial exploration phase. The LRN is updated periodically (line 20) as follows: \begin{equation} \label{eq:lenient-reward} R_{t+1}(s_t, a_t) = \left\{{ \begin{array}{*{20}{l}} R_t(s_t, a_t) + \alpha\delta &{\textup{ if }\delta > 0 \text{ or } x > l(s_t, a_t),} \\ R_t(s_t, a_t) &{\textup{ otherwise,}} \end{array}}\right. \end{equation} where $ R_t(s_t, a_t) $ is the reward approximation of state $ s $ and action $ a $ at time $ t $, and $\delta = \bar{r}_t^{(s,a)} - R_t(s_t,a_t)$ is the TD error between $ R_t(s_t, a_t) $ and the target reward $ \bar{r}_t^{(s,a)} = \frac{1}{n} \sum_{i = 1}^{n}{r_i^{(s,a)}} $ obtained by averaging all immediate rewards $ r_i^{(s,a)} $ observed for the pair $ (s,a) $ in experience memory. Note that $l(s_t,a_t)$ inherits from Equation \ref{eq:lenient-q} and has the same meaning; it is gradually decayed each time a state-action pair $(s,a)$ is visited. Consequently, the LRN contributes to reducing bias through reward approximation and can help agents find optimal joint policies in cooperative Markov games. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{./srp.png} \caption{Comparison between the prioritized experience replay and the scheduled replay strategy: each dot represents a sample $(s,a,r,s^\prime)$, and a trajectory consists of an ordered sequence of samples.
The x-axis represents the order in which each sample enters the replay memory and the y-axis is the priority of each sample. } \label{fig:variant} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{./pacman-training.png} \caption{Comparisons of DDQN, WDDQN w.o. LRN+SRS and WDDQN on pacman with 4 different sizes. The X-axis is the number of training episodes and the Y-axis is the ratio of the minimum number of steps to the goal to the number of steps that the agent actually used during training.} \label{fig:pacman-training} \end{figure*} \subsection{Scheduled Replay Strategy} Prioritized experience replay (PER) can improve the DQN algorithm's training efficiency by assigning samples different priorities according to their TD error. Samples with higher priorities are more likely to be chosen for network training. However, in stochastic multiagent environments, due to the noisy rewards and the continuous behavior changes of coexisting agents, PER may deteriorate the algorithm's convergence and perform poorly. Given a transition $(s,a,r,s^\prime,d)$ with an extremely biased reward $ r $, PER will treat it as an important sample because of its large TD error and will frequently select it to update the network, even though this is incorrect due to the large noise in $ r $ at the beginning. To address this, we replace $ r $ with the estimation $ R^{N}(s,a)$ from the LRN to correct the TD error, so that PER can distinguish truly important samples. Another potential problem is that PER gives all samples in a new trajectory the same priority, thus making all new samples indistinguishable in importance. To be specific, in Fig. \ref{fig:variant}, the sample with the maximum priority is shown as a red dot. PER assigns all samples (blue dots) in the latest trajectory an identical priority \footnote{See OpenAI source code for details: https://github.com/openai/baselines.}.
However, in cooperative multiagent environments, the trajectories in which agents succeed in cooperating are relatively rare, and within these trajectories the samples closer to the terminal state are even more valuable than the ones far from it. Besides, the estimate $Q(s,a) = r + \gamma Q(s^\prime, a^*) $ far from the terminal state can deteriorate further if the bootstrapped action value $Q(s^\prime, a^*)$ is already highly inaccurate, since inaccurate estimation propagates through the whole sequence of contiguous samples. These two traits explain why samples that are close to the terminal state should be used frequently for network training. To this end, we develop a scheduled replay strategy (SRS) using a precomputed rising schedule $ [w_0,w_1,...,w_{n-1}] $ of size $ n $ to assign different priorities according to the sample's position $ i $ in a trajectory with $n$ samples. The values $ w_i = e^{\rho_c*{u^i}} $ are computed using a constant $ \rho_c $ and a rising rate $ u > 1 $, for each $i$, $0 \leqslant i < n$. The priority $ p_i $ assigned to the sample with index $ i $ is obtained by multiplying the current maximum priority $ p_{\max} $ in experience memory (the priority of the red point in Fig. \ref{fig:variant}) by $ w_i $: \[ p_i = p_{\max} \times w_i \] The SRS assigns higher priority to samples near the terminal state (the green dot in Fig. \ref{fig:variant}) to ensure they are more likely to be sampled for network training. In this way, the estimation bias of $ Q(s,a) $ near the terminal state is expected to decrease rapidly. This can significantly speed up convergence and improve training performance, as experimentally verified in the following section. \section{Experiments} Empirical evaluation is conducted to verify the effectiveness of WDDQN in terms of reducing bias and achieving coordination in stochastic multiagent domains.
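The scheduled replay priorities described above can be sketched as follows (the defaults $\rho_c = 0.2$ and $u = 1.1$ are the values reported in the experiments section):

```python
import math

def srs_priorities(p_max, n, rho_c=0.2, u=1.1):
    """Scheduled replay priorities for a trajectory of n samples:
    w_i = exp(rho_c * u**i) rises with the position i, so samples
    near the terminal state receive priority p_i = p_max * w_i."""
    return [p_max * math.exp(rho_c * u**i) for i in range(n)]

p = srs_priorities(p_max=1.0, n=5)
# Priorities rise monotonically towards the end of the trajectory.
print(all(p[i] < p[i + 1] for i in range(len(p) - 1)))  # True
```

Because $u > 1$, the exponent $\rho_c u^i$ grows with $i$, so later samples in a trajectory are strictly prioritized over earlier ones.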
First, we present comparisons of double DQN (DDQN) and WDDQN with/without LRN and SRS, denoted by WDDQN and WDDQN w.o. LRN+SRS, in terms of bias reduction, learning speed and performance on a gridworld game with raw visual input. Then, we use a cooperative Markov game to investigate WDDQN's effectiveness in finding an optimal cooperative policy. A discussion of the benefits of WDDQN, LRN and SRS is given at the end. \begin{figure}[h] \centering \begin{minipage}[h]{.49\linewidth} \centerline{\includegraphics[width =.5\textwidth]{./pacman.png}} \caption{Gridworld game.} \label{fig:picman} \end{minipage} \hfill \begin{minipage}[h]{.49\linewidth} \centerline{\includegraphics[width =.5\textwidth]{./predator.png}} \caption{Predator game.} \label{fig:predator} \end{minipage} \vfill \end{figure} \vspace{-0.5cm} \begin{table}[h] \caption{Network architectures in WDDQN} \label{tab:architecture} \centering \small \begin{tabular}{cccc} \hline Network & Visual input & Filters in Conv. 1/2/3 & Units in F.C. \\ \hline DQN & 84 * 84 * 3 & 32/64/64 & 512 \\ LRN & 84 * 84 * 3 & 16/16/16 & 128\\ \hline \end{tabular} \end{table} We set the constant $ c $ in $\beta$ to 0.1 in WDDQN, and the parameters $ K, \kappa, \eta $ in lenient Q-learning to 2, 0.95 and 0.6, respectively. Besides, the learning rate $\alpha$ for network training of DDQN and lenient Q-learning is set to 0.0001. Table \ref{tab:architecture} depicts the architectures of the deep Q-networks and the LRN in WDDQN. We use three hidden convolution layers (with rectifier non-linearities between each two consecutive layers) and a fully-connected hidden layer. The output layer of the DQN and the LRN is a fully-connected linear layer with a single output for each valid action value $Q(s,a)$ and reward $R(s,a)$, respectively. For exploration, $ \epsilon $-greedy is adopted with $ \epsilon $ annealed linearly from 1 to 0.01 over the first 10000 steps. We use the Adam algorithm with a 0.0001 learning rate and minibatches of size 32.
We trained for a total of 2500 episodes and used a replay memory of the 8192 most recent frames. Last, for fairness, $ K, \kappa, \eta $ in the LRN are the same as in lenient Q-learning, while $\rho_c$ and $u$ in the SRS are set to 0.2 and 1.1, respectively. \begin{figure*}[t] \centering \subfloat[deterministic rewards.]{\includegraphics[width=0.49\textwidth]{./predator-exp1.png}\label{fig:sf1}} \hfill \subfloat[stochastic rewards.]{\includegraphics[width=0.49\textwidth]{./predator-exp2.png}\label{fig:sf2}} \caption{(Left) Comparisons of WDDQN and its variants using the predator game with deterministic rewards; and (right) comparisons of WDDQN and other algorithms using the predator game with stochastic rewards. Note that each point on the x-axis consists of 50 episodes, and the y-axis is the corresponding averaged reward. The shaded area ranges from the lowest reward to the highest reward within the 50 episodes.} \label{fig:exp-pre} \end{figure*} \subsection{Pacman-like Grid World} The first experiment is an $n \times n$ pacman-like grid-world problem (Fig. \ref{fig:picman}), where the agent starts at $s_0$ (the top-left cell) and moves towards the goal cell (the pink dot at the bottom-right cell) using only four actions: \{north, south, east, west\}. Every movement leads the agent to move one cell in the corresponding direction, except that a collision with the edge of the grid results in no movement. The agent searches for the goal cell, which may appear randomly at any position in the grid world. The agent receives a stochastic reward of -30 or 40 with equal probability for any action that enters the goal and ends an episode. At a non-goal state, choosing north or west yields a reward of -10 or +6, and south or east yields -8 or +6. The environment is extremely noisy due to the uncertainty in the reward function.
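For concreteness, the reward structure just described can be sketched as below (a sketch; the paper states equal probability only for the goal reward, so the 50/50 split for the non-goal rewards is our assumption):

```python
import random

def step_reward(at_goal, action, rng=random):
    """Stochastic rewards of the pacman-like grid world. The goal
    reward is -30 or 40 with equal probability; non-goal rewards
    depend on the chosen direction (equal split assumed here)."""
    if at_goal:
        return rng.choice([-30, 40])   # ends the episode, mean +5
    if action in ('north', 'west'):
        return rng.choice([-10, 6])
    return rng.choice([-8, 6])         # 'south' or 'east'

random.seed(0)
rewards = [step_reward(False, 'north') for _ in range(1000)]
print(set(rewards) <= {-10, 6})  # True
```

Note how the means of all outcomes are small relative to their spread, which is exactly what makes the single estimator's max-based bootstrap unreliable here.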
Empirical results in Figure \ref{fig:pacman-training} demonstrate that, under extremely stochastic environments, DDQN takes a long time to optimize its policy, while WDDQN w.o. LRN+SRS and WDDQN need far fewer episodes to obtain a better policy due to the weighted double estimator. DDQN and WDDQN w.o. LRN+SRS oscillate too frequently to converge to an optimal policy, while WDDQN performs steadily and smoothly because of the use of LRN. Another finding is that the training speed of WDDQN is faster than the others, which is attributed to the SRS. In general, WDQ does not work as well here as in relatively simple RL problems, and both DDQN and WDDQN w.o. LRN+SRS may not converge even after a very long training time. By contrast, as shown in Fig. \ref{fig:pacman-training}, WDDQN learns efficiently and steadily due to the use of both LRN and SRS. \subsection{Cooperative Markov Game} In this section, we consider the two-predator pursuit problem, a more complex cooperative problem first defined in \cite{benda1985optimal}; here we redefine it in a simplified way. The robots in Figure \ref{fig:predator} represent two agents trying to enter the goal state at the same time. The cell with letter S is a suboptimal goal with a reward of +10, while G is a globally optimal goal with a reward of +80. There is a thick wall (in gray) in the middle that separates the area into two zones. In each episode, the two agents start at the bottom-left cell and the bottom-right cell, respectively, and try to reach the green goal cell together. Each agent has four actions: \{north, south, east, west\}. Every movement leads the agent to move one cell in the corresponding direction, except that a collision with the edge of the grid or the thick wall results in no movement. A reward of 0 is received whenever entering a non-goal state. The agents receive a positive reward for any action that enters the goal together and ends an episode; otherwise a negative reward of -1 is received upon miscoordination.
There are two types of cooperative policies: moving towards the suboptimal goal cell S or the globally optimal cell G, as shown in Fig. \ref{fig:predator}. In the remainder of this section, we investigate whether WDDQN and related algorithms can find cooperative policies, especially the optimal one. \subsubsection{Evaluation on WDDQN} Our goal is to train two agents simultaneously to coordinate in order to obtain higher rewards. The performance of WDDQN w.o. LRN+SRS, WDDQN(LRN)\footnote{WDDQN(LRN) uses only LRN and is identical to WDDQN w.o. SRS.}, and WDDQN in terms of the average reward is depicted in Figure \ref{fig:exp-pre}\subref{fig:sf1}. As the convergence of WDDQN w.o. LRN+SRS is no longer guaranteed under the neural network representation, it is not surprising that it fails to find the cooperative policy by directly combining WDQ with a neural network. By contrast, WDDQN(LRN), thanks to the LRN, achieves coordination more quickly and finds the optimal policy after a period of exploration. By leveraging the SRS, WDDQN shows an even more promising result: the optimal policy is learned much faster than by the other two. \subsubsection{Evaluation Against Other Algorithms} Here, we compare WDDQN against DDQN, a DRL algorithm, and lenient Q-learning, a multiagent RL algorithm, on the same game, except that the agent receives a reward of +10 or +100 with probability 60\% or 40\% at goal S and a deterministic reward of +80 at goal G. Goal S is still suboptimal as its average reward is 46. This slight adjustment may affect an algorithm's performance by misleading the agent into converging to the suboptimal goal, where a higher reward may appear accidentally. Results in terms of the average reward are depicted in Fig. \ref{fig:exp-pre}\subref{fig:sf2}, where two dashed lines indicate the optimal and suboptimal policies with expected rewards of 80 and 46, respectively.
Both WDDQN and lenient Q-learning outperform DDQN in terms of convergence speed and average reward in all experiments, which confirms the infeasibility of directly applying DRL algorithms to multiagent problems. Note that WDDQN, due to the use of both LRN and SRS, is more stable, performs better and is more likely to find the optimal solution, with an average reward of 80, than lenient Q-learning, with an average reward of 46, in such a stochastic multiagent environment. \section{Conclusion} This paper proposes WDDQN with the lenient reward network and the scheduled replay strategy to boost training efficiency, stability and convergence in stochastic multiagent environments with raw image inputs, stochastic rewards, and large state spaces. Empirically, WDDQN performs better than WDDQN w.o. LRN+SRS, DDQN and lenient Q-learning in terms of average reward and convergence rate on the pacman and two-predator pursuit domains. One downside of our approach is that it uses only one agent to explore the large-scale RL problem and train the network at the same time. This can significantly slow down the exploration procedure and affect WDDQN's performance and efficiency. This could be remedied in practice by accelerating the training procedure of WDDQN via asynchronous training, as used in the A3C algorithm \cite{mnih2016asynchronous}. We leave this investigation to future work. \bibliographystyle{named}
\section{Introduction} \label{section.01} \textit{We motivate the study of topological vertices from a 2D conformal field theory point of view.} \subsection{A web of relations} The relation between 2D conformal field theories, which describe critical surface phenomena, and exact solutions of 2D statistical mechanical models, which describe off-critical surface phenomena, has been well understood since the 1970s. More recently, 4D, 5D and 6D instanton and topological string partition functions, as well as other topics in modern mathematical physics, were related to 2D conformal field theories in terms of \textit{dualities}, and one can study any of these topics from the viewpoint of any of the others \footnote{\, For a review of recent developments, see \cite{teschner.review, pestun.zabzine.review} }. In the following, we motivate the present work from the viewpoint of 2D conformal field theory. \subsection{From 2D correlation functions to plane partitions} In 1984, Belavin, Polyakov and Zamolodchikov showed that correlation functions in 2D conformal field theories split into sums of products of structure constants, chiral conformal blocks, and anti-chiral conformal blocks. In 2009, Alday, Gaiotto and Tachikawa conjectured \cite{alday.gaiotto.tachikawa}, and Alba, Fateev, Litvinov and Tarnopolskiy proved \cite{alba.fateev.litvinov.tarnopolskiy}, that in the presence of an extra Heisenberg algebra, a 2D conformal block splits into products of 4D Nekrasov partition functions \footnote{\, Alday \textit{et al.} also conjecture 4D interpretations of the structure constants as well as other aspects of 2D correlation functions, but in the present work, we focus on the conformal blocks. }, which are limits of 5D instanton partition functions.
These 5D instanton partition functions split into products of topological vertices that are themselves 5D partition functions \footnote{\, Topological vertices are 5D partition functions in the sense that they depend on $R$, the radius of the $M$-theory circle. Gluing topological vertices leads to $R$-deformed 2D conformal blocks. In 2D terms, $R$ is an off-critical deformation parameter, and the critical 2D conformal blocks are obtained in the $R \rightarrow 0$ limit. }. Since a topological vertex has a combinatorial interpretation as a generating function of weighted plane partitions that satisfy specific boundary conditions \cite{okounkov.reshetikhin, okounkov.reshetikhin.vafa}, the (difficult) analytic problem of computing correlation functions in 2D conformal field theory is recast as a (hopefully) simpler exercise in algebraic combinatorics. \subsection{The algebraic combinatoric point of view} Viewing the 2D correlation functions in terms of algebraic combinatorial objects, which in this case are plane partitions with specific weights and specific boundary conditions, allows one to study more general classes of them. One way to do that is to change the weights while maintaining computability. All known topological vertices, starting from the original vertex $\mathcal{O}_{\, Y_1 Y_2 Y_3} \left\lgroup x \right\rgroup$, which depends on three Young diagrams $Y_1, Y_2$ and $Y_3$, and a single parameter $x$ \footnote{\, In this work, as in \cite{foda.wu.02}, we use $x$ and $y$ for the parameters of the refined vertex, $x = \exp \left\lgroup - R \epsilon_1 \right\rgroup$ and $y = \exp \left\lgroup R \epsilon_2 \right\rgroup$, where $\epsilon_1$ and $\epsilon_2$ are Nekrasov's deformation parameters \cite{nekrasov}, and $R$ is the radius of the M-theory circle. We reserve $q$ and $t$ for the Macdonald deformation parameters \cite{macdonald.book}.
}, and leads to conformal blocks in conformal field theories with integral central charges \cite{aganagic.klemm.marino.vafa}, to the refined vertex $\mathcal{R}_{\, Y_1 Y_2 Y_3} \left\lgroup x, y \right\rgroup$, which depends on an additional refinement parameter $y$, and leads to conformal blocks in conformal field theories with generic central charges \cite{awata.kanno.01, awata.kanno.02, iqbal.kozcaz.vafa}, to the Macdonald vertex $\mathcal{M}_{\, Y_1 Y_2 Y_3} \left\lgroup x, y \, | \, q, t \right\rgroup$ which depends on two additional Macdonald parameters $q$ and $t$ \cite{foda.wu.02}, and leads to conformal blocks in the presence of vertex-operator condensates \cite{foda.manabe}, are generating functions of plane partitions that are given different weights. \subsection{The present work} Following Saito's construction of an elliptic version of Ding-Iohara-Miki algebra \cite{ding.iohara, miki}, using \textit{two} commuting Heisenberg algebras, one deformed by $q$, and the other by $1/q$ \cite{saito.01, saito.02, saito.03}, we construct an \textit{elliptic} vertex $\mathcal{E}_{\, \pmb{Y}_1 \pmb{Y}_2 Y_3}$ $ \left\lgroup \, x, \, y \, | \, \pmb{q} \, \right\rgroup$, where $\pmb{Y}_1 = \left\lgroup Y_{\, 1 \, A}, Y_{\, 1 \, B} \right\rgroup$ and $\pmb{Y}_2 = \left\lgroup Y_{\, 2 \, A}, Y_{\, 2 \, B} \right\rgroup$ are pairs of Young diagrams, $Y_3$ is a single Young diagram, and $\pmb{q} = \left\lgroup q, 1/q \right\rgroup$, where $q$ is a deformation parameter. 
\subsubsection{Two components} $\mathcal{E}$ is a product of two components, \begin{equation} \mathcal{E}_{\, \pmb{Y}_{1 } \pmb{Y}_{2 } Y_3} \left\lgroup \, x, \, y \, | \, \pmb{q} \right\rgroup = \mathcal{M}_{\, Y_{\, 1 \, A} Y_{\, 2 \, A} Y_3} \left\lgroup \, x, \, y \, | \, q \, \right\rgroup \, \mathcal{M}_{\, Y_{\, 1 \, B} Y_{\, 2 \, B} Y_3} \left\lgroup \, 1/x, \, 1/y \, | \, 1/q \, \right\rgroup, \end{equation} where $\mathcal{M}_{\, Y_{\, 1 \, A} Y_{\, 2 \, A} Y_3} \left\lgroup \, x, \, y \, | \, q \, \right\rgroup$ is a Macdonald vertex with Macdonald parameters $q \neq 1$, and $t = 0$, and refinement parameters $x$ and $y$, and $\mathcal{M}_{\, Y_{\, 1 \, B} Y_{\, 2 \, B} Y_3} \left\lgroup \, 1/x, \, 1/y \, | \, 1/q \, \right\rgroup$ is a Macdonald vertex with Macdonald parameters $1/q \neq 1$, and $t = 0$, and refinement parameters $1/x$ and $1/y$. The Young diagrams $Y_{\, 1 \, A}$ and $Y_{\, 1 \, B}$ that label the initial non-preferred legs of the component Macdonald vertices are independent, the Young diagrams $Y_{\, 2 \, A}$ and $Y_{\, 2 \, B}$ that label the final non-preferred legs are also independent, but the same Young diagram $Y_3$ labels the (common) preferred leg of both component vertices \footnote{\, These terms will be defined when we construct the elliptic vertex explicitly in section \ref{section.09}. }. The original Macdonald vertex depends on two Macdonald parameters $q$ and $t$ and has basically the same structure as the refined topological vertex, but with the Schur functions replaced by Macdonald functions. The component Macdonald vertices depend on a single Macdonald parameter $q$ or $1/q$, and the second Macdonald parameter $t=0$. In this case, the Macdonald functions are one-parameter deformations of the Schur functions called $q$-Whittaker functions \cite{ gerasimov.lebedev.oblezin.01, gerasimov.lebedev.oblezin.02, gerasimov.lebedev.oblezin.03, borodin.corwin, borodin.petrov, borodin.wheeler}.
\subsubsection{The refined vertex limit} In the limit $q \rightarrow 0$, \begin{multline} \mathcal{M}_{\, Y_{\, 1 \, A} Y_{\, 2 \, A} Y_3} \left\lgroup \, x, \, y \, | \, q \, \right\rgroup \rightarrow \mathcal{R}_{\, Y_{\, 1 \, A} Y_{\, 2 \, A} Y_3} \left\lgroup \, x, \, y \, \right\rgroup, \quad \mathcal{M}_{\, Y_{\, 1 \, B} Y_{\, 2 \, B} Y_3} \left\lgroup \, 1/x, \, 1/y \, | \, 1/q \, \right\rgroup \rightarrow 1, \\ \mathcal{E}_{\, \pmb{Y}_{1 } \pmb{Y}_{2 } Y_3} \left\lgroup \, x, \, y \, | \, \pmb{q} \right\rgroup \rightarrow \mathcal{R}_{\, Y_{\, 1 \, A} Y_{\, 2 \, A} Y_3} \left\lgroup \, x, \, y \, \right\rgroup, \end{multline} where $\mathcal{R}_{\, Y_{\, 1 \, A} Y_{\, 2 \, A} Y_3} \left\lgroup \, x, \, y \, \right\rgroup$ is a refined vertex \footnote{\, In this work, we always refer to the formulation of the refined vertex in \cite{iqbal.kozcaz.vafa} }. In the limit $q \rightarrow \infty$, \begin{multline} \mathcal{M}_{\, Y_{\, 1 \, A} Y_{\, 2 \, A} Y_3} \left\lgroup \, x, \, y \, | \, q \, \right\rgroup \rightarrow 1, \quad \mathcal{M}_{\, Y_{\, 1 \, B} Y_{\, 2 \, B} Y_3} \left\lgroup \, 1/x, \, 1/y \, | \, 1/q \, \right\rgroup \rightarrow \mathcal{R}_{\, Y_{\, 1 \, B} Y_{\, 2 \, B} Y_3} \left\lgroup \, 1/x, \, 1/y \, \right\rgroup, \\ \mathcal{E}_{\, \pmb{Y}_{1 } \pmb{Y}_{2 } Y_3} \left\lgroup \, x, \, y \, | \, \pmb{q} \right\rgroup \rightarrow \mathcal{R}_{\, Y_{\, 1 \, B} Y_{\, 2 \, B} Y_3} \left\lgroup \, 1/x, \, 1/y \, \right\rgroup \end{multline} In this sense, the elliptic vertex $\mathcal{E}_{\, \pmb{Y}_{1 } \pmb{Y}_{2 } Y_3} \left\lgroup \, x, \, y \, | \, \pmb{q} \right\rgroup$ is a one-parameter deformation of the refined vertex that interpolates $\mathcal{R}_{\, Y_{\, 1 \, A} Y_{\, 2 \, A} Y_3} \left\lgroup \, x, \, y \, \right\rgroup$ and $\mathcal{R}_{\, Y_{\, 1 \, B} Y_{\, 2 \, B} Y_3} \left\lgroup \, 1/x, \, 1/y \, \right\rgroup$, which are equivalent in the sense that they lead to the same 4D and 5D instanton partition functions and the 
same 2D conformal blocks, up to an overall normalization. \subsubsection{Twisted vertices} We also introduce a \textit{twisted} version of $\mathcal{E}$, which we call $\mathcal{E}^{\, \star}$, and which depends on \textit{twisted} $q$-Whittaker functions that we define. \subsubsection{6D instanton partition functions from non-periodic web diagrams} Starting from a non-periodic web diagram, constructed by gluing refined topological vertices, such that the corresponding partition function is a 5D instanton partition function, and replacing the refined vertices by $\mathcal{E}$ and $\mathcal{E}^{\, \star}$ alternately, we reproduce the 6D version of the 5D instanton partition function that we start with, while keeping the connectivity of the web diagram intact. These 6D instanton partition functions are the same as those obtained by taking traces, that is, by identifying opposite external legs and summing over the intermediate states to form periodic web diagrams, then computing their partition functions, as first proposed in the work of Hollowood, Iqbal and Vafa \cite{hollowood.iqbal.vafa} \footnote{\, The literature on 6D instanton partition functions and related topics has grown rapidly over the past few years. The present work was motivated by the recent works of Iqbal, Kozcaz and Yau \cite{iqbal.kozcaz.yau} and Nieri \cite{nieri}. }. \subsection{Two routes} We came to the elliptic vertex $\mathcal{E}$ \textit{via} two different routes. \subsubsection{The Iqbal-Kozcaz-Vafa refined vertex route} \label{route.01} In \cite{foda.wu.02}, the first co-author, together with Jian-Feng Wu, proposed a Mac\-donald-type deformation of the refined vertex of Iqbal, Kozcaz and Vafa \cite{iqbal.kozcaz.vafa}, in terms of the Macdonald parameters $q$ and $t$, and noted its connection to the Ding-Iohara-Miki algebra \cite{ding.iohara, miki, feigin.hashizume.hoshino.shiraishi.yanagida}.
In \cite{foda.wu.03}, an elliptic extension of the Macdonald vertex of \cite{foda.wu.02}, based on Saito's elliptic extension of the Ding-Iohara-Miki algebra, was obtained. This extension introduces a parameter $p$, and represents the initial and final states of the new vertex in terms of pairs of $p$-deformed Macdonald functions \footnote{\, Saito's elliptic extension of the Ding-Iohara-Miki algebra makes use of two Heisenberg algebras, and the corresponding states are naturally represented in terms of pairs of symmetric functions \cite{wu.private.communication}. }. The properties of these $p$-deformed Macdonald functions were not completely understood, and to work with them, one had to conjecture that they satisfy suitable Cauchy-type identities. However, setting $q=t$, the $p$-deformed Macdonald functions reduce to $p$-deformed Schur functions, whose properties were also not completely understood, but which appeared to be more amenable to analysis \cite{foda.unpublished}. \subsubsection{The Awata-Feigin-Shiraishi refined vertex route} \label{route.02} In \cite{zhu.01}, the second co-author proposed a Saito-type elliptic extension of the refined vertex of Awata, Feigin and Shiraishi \cite{awata.feigin.shiraishi}, and in \cite{zhu.02}, he obtained another version in a form related to that of Iqbal, Kozcaz and Vafa \cite{iqbal.kozcaz.vafa}, where the initial and final states are also labelled by pairs of $p$-deformed Schur functions. This elliptic vertex is exactly that of \cite{foda.wu.03}, restricted to the $p$-deformed Schur functions of \cite{foda.unpublished}.
\subsubsection{$q$-Whittaker functions} The starting point of the present work is the observation that the $p$-deformed Schur functions of sections \ref{route.01} and \ref{route.02} are $q$-Whittaker functions obtained from Macdonald functions in the limit $t \rightarrow 0, q \neq 0$ \footnote{\, The $q$-Whittaker functions are dual to the Hall-Littlewood functions in the sense that the former are obtained in the limit $t \rightarrow 0, q \neq 0$ of Macdonald functions, while the latter are obtained in the limit $q \rightarrow 0, t \neq 0$. }, where the Macdonald parameter $q$ plays the role of Saito's elliptic deformation parameter $p$ \footnote{\, In Saito's work \cite{saito.01, saito.02, saito.03}, and in the present work, whenever the parameters $p$ and $q$ are both non-zero, they appear on equal footing. }. This observation allows us to use the tools of Macdonald functions, in the limit $t \rightarrow 0, q \neq 0$, to put our derivations on a solid footing. \subsection{Outline of contents} In section \ref{section.02}, we recall basic facts related to $q$-Whittaker functions and their Cauchy identities, then in section \ref{section.03}, we introduce an involution that we use to define twisted $q$-Whittaker functions and their Cauchy identities. In section \ref{section.04}, we recall Saito's pair of $pqt$-Heisenberg algebras and pair of $pqt$-vertex operators. In each pair, one component depends on $p, q$ and $t$, and the other on $1/p, 1/q$ and $1/t$. We show that Macdonald's parameter $q$ and Saito's parameter $p$ appear in all expressions on equal footing, so that setting $q=t$, or $p=t$, all dependence on the equated parameters disappears, and the remaining parameter can be identified with the parameter that deforms Schur functions to $q$-Whittaker functions.
In section \ref{section.05}, we take the $p \rightarrow t$ limit of Saito's $pqt$-Heisenberg algebras and $pqt$-vertex operators, to obtain a pair of $q$-deformed Heisenberg algebras, and a pair of $q$-deformed vertex operators, such that in each pair, one component depends on $q$ and the other depends on $1/q$. In section \ref{section.06}, we recall the Heisenberg/power sum correspondence which allows us to derive useful operator-valued identities for one of the Heisenberg algebras of section \ref{section.05}, then in section \ref{section.07}, we do the same for the second Heisenberg algebra. In section \ref{section.08}, we define pairs of $q$-Whittaker functions and derive their Cauchy identities. In section \ref{section.09}, we construct the elliptic vertex, and in section \ref{section.10}, we show that gluing copies of this vertex produces an elliptic version of the strip partition function \cite{iqbal.kashani-poor}, so that gluing copies of the latter produces the 6D instanton partition functions of \cite{hollowood.iqbal.vafa}. Section \ref{section.11} includes a number of comments. \subsection{Notations and other conventions} \subsubsection{Sets} $\pmb{\iota }$, and similarly $\bj$, is the set of non-zero natural numbers $ \left\lgroup 1, 2, \cdots \right\rgroup$, $\pmb{x} = \left\lgroup x_1, x_2, \cdots \right\rgroup$ and $\pmb{y} = \left\lgroup y_1, y_2, \cdots \right\rgroup$ are sets of (possibly infinitely-many) variables, $\pmb{a}_{-} = \left\lgroup a_{-1}, a_{-2}, \cdots \right\rgroup$ and $\pmb{a}_{+} = \left\lgroup a_{ 1}, a_{ 2}, \cdots \right\rgroup$ are the free-boson creation and annihilation mode operators. \subsubsection{Pairs of Young diagrams and pairs of variables} $\pmb{Y} = \left\lgroup Y_1, Y_2 \right\rgroup$ is a pair of Young diagrams, $\pmb{\emptyset} = \left\lgroup \emptyset, \emptyset \right\rgroup$ is a pair of empty Young diagrams, and $\pmb{q}$ is the pair $ \left\lgroup q, 1/q \right\rgroup$. 
\subsubsection{Number of elements in sets and number of cells in Young diagrams} $| \, \pmb{x} \, |, \, | \, \pmb{y} \, |, \, \cdots,$ are the numbers of elements in the sets $\pmb{x}, \, \pmb{y}, \, \cdots,$ and $| \, \pmb{Y} \, | = | \, Y_1 \, | + | \, Y_2 \, |$, where $| \, Y_i \, |$ is the number of cells in the Young diagram $Y_i$. \subsubsection{Primed variables and transpose Young diagrams} To simplify the notation, we use the primed variables $x^{\, \prime}, y^{\, \prime}, p^{\, \prime}, q^{\, \prime}, t^{\, \prime}, \cdots$, for the inverse variables $1/x, 1/y, 1/p, 1/q, 1/t, \cdots$, and the primed set variable $\, \pmb{x}^{\, \prime} = \left\lgroup x_1^{\, \prime}, x_2^{\, \prime}, \cdots \right\rgroup$ for the set of inverse variables $ \left\lgroup 1/x_1, 1/x_2, \cdots \right\rgroup$. The Young diagram $Y^{\, \prime}$ is the transpose of the Young diagram $Y$. We use $W^{\, \prime}_q$ for the dual $q$-Whittaker symmetric function as in section \ref{macdonald.q.whittaker}. \subsubsection{Macdonald and $q$-Whittaker symmetric functions} \label{macdonald.q.whittaker} We use $P_{\, Y} \left\lgroup \pmb{x} \right\rgroup$ and $Q_{\, Y} \left\lgroup \pmb{x} \right\rgroup$ for the Macdonald and dual Macdonald symmetric functions. Each of these functions is labelled by a Young diagram $Y$, depends on two parameters $q$ and $t$, and is symmetric in a (possibly infinite) set of variables $\pmb{x} = \left\lgroup x_1, x_2, \cdots \right\rgroup$. We use $W_{\, q \, Y} \left\lgroup \pmb{x} \right\rgroup$ and $W^{\, \prime}_{\, q \, Y} \left\lgroup \pmb{x} \right\rgroup$ for the $q$-Whittaker and dual $q$-Whittaker symmetric functions. Each of these functions is labelled by a Young diagram $Y$, depends on a parameter $q$, and is symmetric in a (possibly infinite) set of variables $\pmb{x} = \left\lgroup x_1, x_2, \cdots \right\rgroup$ \footnote{\, We show the dependence on the variable $q$ explicitly because this will often be a dependence on $q^{\prime} = 1/q$. }.
\subsubsection{Pairs of $q$-Whittaker symmetric functions} We use $\pmb{W}^{ }_{\, \pmb{q} \, \pmb{Y}} \left\lgroup \pmb{x} \right\rgroup$ and $\pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{Y}} \left\lgroup \pmb{x} \right\rgroup$, $\pmb{q} = \left\lgroup q, 1/q \right\rgroup$, $\pmb{Y} = \left\lgroup Y_1, Y_2 \right\rgroup$, for a pair of $q$-Whittaker and a pair of dual $q$-Whittaker symmetric functions. The first symmetric function in a pair depends on a parameter $q$, a Young diagram $Y_1$, and is symmetric in a (possibly infinite) set of variables $\pmb{x} = \left\lgroup x_1, x_2, \cdots \right\rgroup$, while the second symmetric function in the pair depends on a parameter $1/q$, a Young diagram $Y_2$, and is symmetric in a (possibly infinite) set of variables $\pmb{x}^{\, \prime} = \left\lgroup x_1^{\, \prime}, x_2^{\, \prime}, \cdots \right\rgroup$. \subsubsection{Parameters} Our refinement parameters $ \left\lgroup x, y\right\rgroup$ are the parameters $ \left\lgroup q, t \right\rgroup$ in \cite{iqbal.kozcaz.vafa} \footnote{\, More precisely, our $x$ is $t$, and our $y$ is $q$ in \cite{iqbal.kozcaz.vafa} }. Our $q$-Whittaker deformation parameter (which will be called either $q$ or $1/q$) is the first Macdonald parameter $q$, while the second Macdonald parameter $t=0$. 
\subsubsection{Exponentiated sequences} \label{sequences} Given a Young diagram $Y$ that consists of an infinite sequence of rows $Y = \left\lgroup y_1, y_2, \cdots \right\rgroup$, such that only finitely-many rows have non-zero length, together with an infinite sequence of integers $\pmb{\iota } = \left\lgroup 1, 2, \cdots \right\rgroup$, and two variables $u$ and $v$, we define the exponentiated sequences $u^{\, \pmb{\iota }}$, $v^{\, \pm Y}$, $\cdots$, as, \begin{equation} u^{\, \pmb{\iota }} = \left\lgroup u, u^2, \cdots \right\rgroup, \quad u^{\, \pmb{\iota } - 1} = \left\lgroup 1, u, \cdots \right\rgroup, \quad v^{\, \pm Y} = \left\lgroup v^{\pm y_1}, v^{\pm y_2}, \cdots \right\rgroup, \quad \cdots, \end{equation} and the products of exponentiated sequences $u^{\, \pmb{\iota }}\, v^{\, \pm Y}$, $u^{\, \pmb{\iota } - 1}\, v^{\, \pm Y}$, $\cdots$, as, \begin{equation} u^{\, \pmb{\iota }}\, v^{\, \pm Y} = \left\lgroup u \, v^{\, \pm y_1}, u^2\, v^{\, \pm y_2} \cdots \right\rgroup, \quad u^{\, \pmb{\iota } - 1}\, v^{\, \pm Y} = \left\lgroup v^{\, \pm y_1}, u\, v^{\, \pm y_2} \cdots \right\rgroup, \quad \cdots \end{equation} \subsubsection{More on sequences} \label{more.on.sequences} Let $\pmb{x} = \left\lgroup x_1, \cdots, x_m \right\rgroup$ be a set of $m$ variables, and $Y = \left\lgroup y_1, \cdots, y_n \right\rgroup$ be a Young diagram that consists of $n$ non-zero parts, such that $n \leqslant m$. The notation, \begin{equation} \pmb{x}_{\pmb{\iota }}^Y = \left\lgroup x_{\iota_1}^{y_1}, \, \cdots, \, x_{\iota_n}^{y_n} \right\rgroup, \label{sum} \end{equation} where $\pmb{\iota } = \left\lgroup \iota_1, \cdots, \iota_n \right\rgroup$, is defined as follows. 
\textbf{ 1.} Consider the set of $m$ integers, ${\pmb m} = \left\lgroup 1, \cdots, m \right\rgroup$, for example, ${\pmb m} = \left\lgroup 1, 2, 3, 4 \right\rgroup$, \textbf{ 2.} Choose a subset of $n$ integers ${\pmb n} \subseteq {\pmb m}$, for example, ${\pmb n} = \left\lgroup 1, 2, 4 \right\rgroup$, \textbf{ 3.} Consider a specific permutation $\pmb{\iota }$ of ${\pmb n}$, for example, $\pmb{\iota } = \left\lgroup 2, 4, 1 \right\rgroup$. \textbf{ 4.} The set on the right hand side of Equation \ref{sum} is obtained by starting with the set $ \left\lgroup x_{\iota_1}, \, \cdots, \, x_{\iota_n} \right\rgroup$, and raising its elements sequentially to the powers $ \left\lgroup y_1, \, \cdots, \, y_n \right\rgroup$ \footnote{\, In applications of this notation, for example to the definition of the monomial symmetric functions, one sums over all permutations $\pmb{\iota }$ of all possible distinct subsets ${\pmb n}$ of the same cardinality. }. \subsubsection{Products on sequences} \label{products.on.sequences} We will use the notation, \begin{multline} \frac{1} {1\, -\, \pmb{x} \, \pmb{y} \, q^{\, n}} = \prod_{i,\, j = \, 1}^\infty \left\lgroup \frac{1} {1\, -\, x_i\, y_j \, q^{\, n}} \right\rgroup, \quad 1\, + \, \pmb{x} \, \pmb{y} \, q^{\, n} = \prod_{i,\, j = \, 1}^\infty \left\lgroup 1\, + \, x_i\, y_j \, q^{\, n} \right\rgroup, \\ \phi_{\, q \, \pm} \left\lgroup \pmb{x} \right\rgroup = \prod_{i=1}^{\, \infty} \phi_{\, q \, \pm} \left\lgroup x_{\, i} \right\rgroup, \quad \Gamma_{\, q \, a \, \pm} \left\lgroup \pmb{x} \right\rgroup = \prod_{i \, =\, 1}^\infty \Gamma_{\, q \, a \, \pm} \left\lgroup x_i \right\rgroup, \quad \Gamma^{\, \pm}_{\, q \, b \, \pm} \left\lgroup \pmb{x} \right\rgroup = \prod_{i \, =\, 1}^\infty \Gamma^{\, \pm}_{\, q \, b \, \pm} \left\lgroup x_i \right\rgroup, \label{abbreviation} \end{multline} where $\phi_{\, q \, \pm} \left\lgroup x_{\, i}^{\, \prime} \right\rgroup$, \textit{etc.} are two-boson vertex operators, and $\Gamma_{\, q \, a \, 
-} \left\lgroup x_i \right\rgroup$, and $\Gamma^{\, \pm}_{\, q \, b \, -} \left\lgroup x_i \right\rgroup$ are one-boson vertex operators, to be defined in section \ref{section.04}. \subsubsection{Products on almost theta functions} We will also use, \begin{equation} \theta_{\, q} \left\lgroup \pmb{x}, \, \pmb{y} \right\rgroup = \prod_{i,\, j = 1}^\infty \theta_{\, q} \left\lgroup x_i \, y_j \right\rgroup, \quad \theta_{\, q} \left\lgroup x_i \, y_j \right\rgroup = \prod_{n = 0}^\infty \left\lgroup 1\, -\, x_i \, y_j \, q^{\, n } \right\rgroup \left\lgroup 1\, -\, x^{\, \prime}_i \, y^{\, \prime}_j \, q^{ \, n+1 } \right\rgroup, \label{abbreviation.02} \end{equation} that is, $\theta_{\, q} \left\lgroup x \right\rgroup$ is a Jacobi theta function $\Theta \left\lgroup x \, | \, q \right\rgroup$, up to an $x$-independent factor, \begin{multline} \theta_{\, q} \left\lgroup x \right\rgroup = \Theta \left\lgroup x \, | \, q \right\rgroup / \left\lgroup q \, | \, q \right\rgroup, \\ \Theta \left\lgroup x \, | \, q \right\rgroup = \prod_{n = 0}^\infty \left\lgroup 1\, -\, q^{\, n + 1 } \right\rgroup \left\lgroup 1\, -\, x \, q^{\, n } \right\rgroup \left\lgroup 1\, -\, x^{\, \prime}\, q^{ \, n + 1 } \right\rgroup, \quad \left\lgroup q \, | \, q \right\rgroup = \prod_{n = 1}^\infty \left\lgroup 1\, - \, q^{\, n} \right\rgroup \end{multline} \section{The $q$-Whittaker functions} \label{section.02} \textit{Starting from the properties of the Macdonald functions, which depend on $q$ and $t$, we take the limit $t \rightarrow 0$ to obtain the corresponding properties of the $q$-Whittaker functions, which depend on $q$ only. } \smallskip \subsection{The monomial symmetric functions} $m_{\, Y} \left\lgroup \pmb{x} \right\rgroup$, where $\pmb{x} = \left\lgroup x_1, x_2, \cdots \right\rgroup$, indexed by a Young diagram $Y$, is \footnote{\, Ch. I, p.
18, Equation 2.1, in \cite{macdonald.book} }, \begin{equation} m_{\, Y} \left\lgroup \pmb{x} \right\rgroup = \sum_{\pmb{\iota }} \pmb{x}_{\pmb{\iota }}^Y, \end{equation} where the sum runs over all distinct permutations of the set $\pmb{\iota }$, which is defined as in section \ref{more.on.sequences}. For example, \begin{equation} m_\emptyset \left\lgroup \pmb{x} \right\rgroup = 1, \, \, m_1 \left\lgroup \pmb{x} \right\rgroup = \sum_i x_i, \, \, m_2 \left\lgroup \pmb{x} \right\rgroup = \sum_i x_i^2, \, \, \cdots, \, \, m_{443} \left\lgroup x_1, \cdots, x_5 \right\rgroup = \sum_{\pmb{\iota }} x_i^4 \, x_j^4 \, x_k^3, \end{equation} where the sum in the last example is over all distinct permutations $\pmb{\iota }$, of all distinct subsets ${\pmb n} \subseteq {\pmb m} = \left\lgroup 1, \cdots, 5 \right\rgroup$, such that the cardinality $| \, {\pmb n} \, | = 3$, and $i \neq j \neq k \in {\pmb m}$, as defined in section \ref{more.on.sequences}. \subsection{Power-sum symmetric functions} $p_n \left\lgroup \pmb{x} \right\rgroup$, where $\pmb{x} = \left\lgroup x_1, x_2, \cdots \right\rgroup$, indexed by an integer $n \in \left\lgroup 0, 1, \cdots \right\rgroup$, is \footnote{\, Ch. I, p. 23, in \cite{macdonald.book} }, \begin{equation} p_0 \left\lgroup \pmb{x} \right\rgroup = 1, \quad p_n \left\lgroup \pmb{x} \right\rgroup = \sum_i x_i^{\, n} = m_n \left\lgroup \pmb{x} \right\rgroup, \quad n = 1, 2, \cdots, \end{equation} and $p_{\, Y} \left\lgroup \pmb{x} \right\rgroup$, indexed by a Young diagram $Y = \left\lgroup y_1, y_2, \cdots \right\rgroup$, is \footnote{\, Ch. I, p.
24, in \cite{macdonald.book} }, \begin{equation} p_{\, Y} \left\lgroup \pmb{x} \right\rgroup\, =\, p_{\, y_1} \left\lgroup \pmb{x} \right\rgroup\, p_{\, y_2} \left\lgroup \pmb{x} \right\rgroup\, \cdots \end{equation} \subsection{$q$-Whittaker functions as $t \rightarrow 0$ limits of Macdonald functions} Consider the ring of symmetric functions in the variables $\pmb{x} = \left\lgroup x_1, x_2, \cdots \right\rgroup$, with coefficients in the field of rational functions in two variables $ \left\lgroup q, t\right\rgroup$. In this ring, the Macdonald functions $P_{\, Y} \left\lgroup \pmb{x} \right\rgroup$, each labelled by a Young diagram $Y$, and the dual Macdonald functions $Q_{\, Y} \left\lgroup \pmb{x} \right\rgroup$, each also labelled by a Young diagram $Y$, form two orthogonal bases. In the limit $t \rightarrow 0$, the coefficients of the ring of symmetric functions are in the field of rational functions in a single variable $q$, the Macdonald functions $P_{\, Y} \left\lgroup \pmb{x} \right\rgroup$ reduce to the $q$-Whittaker functions, which we denote by $W_{\, q \, Y} \left\lgroup \pmb{x} \right\rgroup$, and the dual Macdonald functions $Q_{\, Y} \left\lgroup \pmb{x} \right\rgroup$ reduce to the dual $q$-Whittaker functions, which we denote by $W^{\, \prime}_{\, q \, Y} \left\lgroup \pmb{x} \right\rgroup$. These functions were introduced by Gerasimov, Lebedev and Oblezin \cite{gerasimov.lebedev.oblezin.01, gerasimov.lebedev.oblezin.02, gerasimov.lebedev.oblezin.03}, and further studied in \cite{borodin.corwin, borodin.petrov, borodin.wheeler}. In the rest of this section, we deduce the properties of and relations satisfied by $W_{\, q \, Y} \left\lgroup \pmb{x} \right\rgroup$ and $W^{\, \prime}_{\, q \, Y} \left\lgroup \pmb{x} \right\rgroup$ by taking the $t \rightarrow 0$ limit of the corresponding properties and relations satisfied by $P_{\, Y} \left\lgroup \pmb{x} \right\rgroup$ and $Q_{\, Y} \left\lgroup \pmb{x} \right\rgroup$.
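As a concrete illustration of the definitions above, the monomial and power-sum symmetric functions are straightforward to evaluate numerically on a finite set of variables. The following Python sketch is a spot-check only (the helper names are ours, not from the references); it verifies, for instance, that $p_n = m_n$ for one-row diagrams.

```python
from itertools import permutations

def monomial(Y, xs):
    """m_Y(xs): sum of x_{i_1}^{y_1} * ... * x_{i_n}^{y_n} over all distinct
    assignments of the exponents Y to indices, each monomial counted once."""
    seen, total = set(), 0.0
    for idx in permutations(range(len(xs)), len(Y)):
        key = tuple(sorted(zip(idx, Y)))  # dedupe permutations of equal exponents
        if key not in seen:
            seen.add(key)
            term = 1.0
            for i, y in zip(idx, Y):
                term *= xs[i] ** y
            total += term
    return total

def power_sum(Y, xs):
    """p_Y(xs) = p_{y_1}(xs) * p_{y_2}(xs) * ..., with p_n(xs) = sum_i x_i^n."""
    out = 1.0
    for y in Y:
        out *= sum(x ** y for x in xs)
    return out

xs = [0.3, 0.7, 1.1]
assert abs(power_sum([2], xs) - monomial([2], xs)) < 1e-12  # p_n = m_n
```

The `key` built from sorted (index, exponent) pairs implements the requirement that the sum runs over \textit{distinct} permutations only, exactly as in the $m_{443}$ example above.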
\subsection{The $q$-inner product of the power-sum symmetric functions} \label{power.sum.inner.product.macdonald.basis} From the orthogonality of the power-sum symmetric functions in the ring of symmetric functions with coefficients in the field of rational functions in $q$ and $t$ \footnote{\, Ch. VI, p. 225, Equation 4.11, in \cite{macdonald.book} }, the power-sum symmetric functions in the ring of symmetric functions with coefficients in the field of rational functions in $q$ are orthogonal with respect to the $q$-inner product, \begin{equation} \langle\, p_{\, Y_1} \left\lgroup \pmb{x} \right\rgroup\, |\, p_{\, Y_2} \left\lgroup \pmb{x} \right\rgroup \rangle_{\, q} = z_{\, q \, Y_1}\, \delta_{Y_1 Y_2}, \quad z_{\, q \, Y} = \left\lgroup 1^{n_1} \left\lgroup n_1 ! \right\rgroup 2^{n_2} \left\lgroup n_2 ! \right\rgroup \cdots \right\rgroup \prod_{i=1}^{y_1^{\, \prime}} \left\lgroup 1 - q^{\, y_i} \right\rgroup, \label{young.diagram.power.sum.inner.product.a} \end{equation} where $n_{\, r}$ is the number of rows of length $r$ in $Y$, and $y^{\, \prime}_1$ is the length of the first row in $Y^{\, \prime}$, that is, the number of non-zero rows in $Y$. This inner product can be understood as follows \footnote{\, Ch. I, p. 75--76, in \cite{macdonald.book} }. For every power-sum symmetric function $p_{\, Y} \left\lgroup \pmb{x} \right\rgroup$, there is a differential operator $D_{\, Y} \left\lgroup \pmb{x} \right\rgroup$ in $\pmb{x} = \left\lgroup x_1, x_2, \cdots \right\rgroup$, such that acting with $D_{\, Y} \left\lgroup \pmb{x} \right\rgroup$ on $p_{\, Y} \left\lgroup \pmb{x} \right\rgroup$, then setting $x_1 = x_2 = \cdots = 0$, one obtains the right hand side of the first of Equations \ref{young.diagram.power.sum.inner.product.a}. 
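The normalization $z_{\, q \, Y}$ in Equation \ref{young.diagram.power.sum.inner.product.a} is easy to evaluate directly from the row multiplicities of $Y$. The following Python sketch is our own checking device (not part of any cited construction), and reduces to the classical $z_{\, Y}$ at $q = 0$.

```python
from math import factorial
from collections import Counter

def z_q(Y, q):
    """z_{q,Y} = (1^{n_1} n_1! 2^{n_2} n_2! ...) * prod_i (1 - q^{y_i}),
    where n_r is the number of rows of length r in the Young diagram Y."""
    z = 1.0
    for r, n in Counter(Y).items():
        z *= (r ** n) * factorial(n)
    for y in Y:                    # one factor per non-zero row of Y
        z *= 1.0 - q ** y
    return z

# At q = 0 this reduces to the classical z_Y, e.g. z_{(2,1,1)} = 2 * 1^2 * 2! = 4:
assert z_q((2, 1, 1), 0.0) == 4.0
```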
\subsection{A $q$-identity} The power-sum symmetric functions $p_n \left\lgroup \pmb{x} \right\rgroup$ satisfy the $q$-identity, \begin{equation} \exp \left\lgroup \sum_{n=1}^{\infty} \, \frac{1}{n} \frac{1}{ \left\lgroup 1 - q^{\, n} \right\rgroup} \, p_n \left\lgroup \pmb{x} \right\rgroup p_n \left\lgroup \pmb{y} \right\rgroup \right\rgroup = \prod_{n = \, 0}^\infty \left\lgroup \frac{ 1 }{ 1\, -\, \pmb{x} \, \pmb{y} \, q^{\, n}} \right\rgroup, \label{an.exponential.is.a.product} \end{equation} which follows from expanding the exponent on the left hand side, then using, \begin{equation} \exp \left\lgroup - \sum_{n = 1}^{\infty} \frac{x^{\, n}}{n} \right\rgroup = \exp \left\lgroup \log \left\lgroup 1 - x \right\rgroup \right\rgroup = 1 - x, \end{equation} to resum the result of the expansion in the form of the right hand side. \subsection{The $q$-Whittaker function} From the definition of the Macdonald function $P_{\, Y} \left\lgroup \, \pmb{x} \right\rgroup$ \footnote{\, Ch. VI, p. 322, in \cite{macdonald.book} }, we obtain the $q$-Whittaker function $W_{\, q \, Y} \left\lgroup\, \pmb{x}\, \right\rgroup$, $Y = \left\lgroup y_1, y_2, \cdots \right\rgroup$, as the unique symmetric function in $\pmb{x} = \left\lgroup x_1, x_2, \cdots \right\rgroup$, $| \, \pmb{x} \, | \geqslant y_1^{\, \prime}$, that satisfies two properties. \subsubsection{The expansion in terms of monomial symmetric functions} \begin{equation} W_{\, q \, Y_1} \left\lgroup\, \pmb{x}\, \right\rgroup = m_{\, Y_1} \left\lgroup \pmb{x} \right\rgroup \, + \, \sum_{Y_1 \succ Y_2} \, u_{\, q \, Y_1 \, Y_2} \, m_{\, Y_2} \left\lgroup \pmb{x} \right\rgroup, \end{equation} where $m_{\, Y} \left\lgroup \pmb{x} \right\rgroup$ is the monomial symmetric function in $\pmb{x}$ labelled by $Y$, $Y_1 \succ Y_2$ indicates that $Y_1$ dominates $Y_2$ in the natural partial ordering of Young diagrams \footnote{\, Ch. I, p. 
7, in \cite{macdonald.book} }, and the coefficients $u_{\, q \, Y_1 \, Y_2}$ are rational functions in $q$. \subsubsection{The orthogonality relation} \begin{equation} \langle W_{\, q \, Y_1} \left\lgroup\, \pmb{x}\, \right\rgroup\, |\, W_{\, q \, Y_2} \left\lgroup\, \pmb{x}\, \right\rgroup \rangle = 0, \quad \textit{for} \quad Y_1 \neq \, Y_2 \label{macdonald.orthogonality} \end{equation} \subsection{The dual $q$-Whittaker function} From the definition of the dual Macdonald function $Q_{\, Y} \left\lgroup \, \pmb{x} \, \right\rgroup$ \footnote{\, Ch. VI, p. 322, in \cite{macdonald.book} }, the dual $q$-Whittaker function $W^{\, \prime}_{\, q \, Y} \left\lgroup\, \pmb{x}\, \right\rgroup$ is defined in terms of $W_{\, q \, Y} \left\lgroup \, \pmb{x}\, \right\rgroup$ as \footnote{\, Ch. VI, p. 323, Equation 4.12, and p. 339, Equation 6.19, in \cite{macdonald.book} }, \begin{equation} W^{\, \prime}_{\, q \, Y} \left\lgroup\, \pmb{x}\, \right\rgroup = b_{\, Y} \, W_{\, q \, Y} \left\lgroup\, \pmb{x}\, \right\rgroup, \quad b_{\, Y} = \prod^{\, \prime}_{\square \, \in\, Y} \left\lgroup \frac{ 1 }{ 1\, -\,q^{\, A_{\square \, Y}^+} } \right\rgroup, \label{dual.macdonald} \end{equation} where the prime on the product indicates that the product is restricted to cells $\square \, \in\, Y$ with leg-length $L_{\, \square} = 0$. This is obtained as follows. 
For generic $t$, the coefficient $b_{\, Y}$ in Equation \ref{dual.macdonald} is replaced by the Macdonald coefficient, \begin{equation} b^{\, q \, t}_{\, Y} = \prod_{\square \, \in\, Y} \left\lgroup \frac{ 1\, -\,q^{\, A_{\square \, Y}} \, t^{\, L_{\square \, Y}^+} }{ 1\, -\,q^{\, A_{\square \, Y}^+} \, t^{\, L_{\square \, Y}} } \right\rgroup \end{equation} In the limit $t \rightarrow 0$, the numerator $1\, -\,q^{\, A_{\square \, Y}} \, t^{\, L_{\square \, Y}^+} \rightarrow 1$, for all $\square \, \in\, Y$, while the denominator $1\, -\,q^{\, A_{\square \, Y}^+} \, t^{\, L_{\square \, Y}} \rightarrow 1$, for all $\square \, \in\, Y$ such that $L_{\square \, Y} \geqslant 1$, and the corresponding factors trivialize to 1. All non-trivial contributions are due to $\square \, \in\, Y$ such that $L_{\square \, Y} = 0$. \subsection{$q$-Whittaker Cauchy identities} From the Cauchy identities for $P_{\, Y} \left\lgroup \, \pmb{x} \, \right\rgroup$ and $Q_{\, Y} \left\lgroup \, \pmb{x} \, \right\rgroup$ \footnote{\, Ch. VI, p. 324, Equation 4.13, and p. 329, Equation 5.4, in \cite{macdonald.book} }, $W_{\, q \, Y} \left\lgroup \, \pmb{x} \, \right\rgroup$ and $W^{\, \prime}_{\, q \, Y} \left\lgroup\, \pmb{x}\, \right\rgroup$ satisfy the Cauchy identity, \begin{equation} \sum_{\, Y} W_{\, q \, Y} \left\lgroup\, \pmb{x}\, \right\rgroup\, W^{\, \prime}_{\, q \, Y} \left\lgroup\, \pmb{y}\, \right\rgroup = \prod_{n = \, 0}^\infty \left\lgroup \frac{1} {1\, -\, \pmb{x} \, \pmb{y} \, q^{\, n}} \right\rgroup \label{pqt.macdonald.cauchy.identity} \end{equation} \subsection{The structure constants} From the product of two Macdonald functions \footnote{\, Ch. VI, p.
343, Equation $7.1^{\, \prime}$, in \cite{macdonald.book} }, the product of two $q$-Whittaker functions can be expanded in the form, \begin{equation} W_{\, q \, Y_1} \left\lgroup\, \pmb{x}\, \right\rgroup \, W_{\, q \, Y_2} \left\lgroup\, \pmb{x}\, \right\rgroup = \sum_{\, Y_3} f_{\, Y_1 \, Y_2 \, Y_3} \, W_{\, q \, Y_3} \left\lgroup\, \pmb{x}\, \right\rgroup, \label{product.01} \end{equation} which can be used as a definition of the $q$-dependent structure constants $f_{\, Y_1 \, Y_2 \, Y_3}$. Similarly, from the product of two dual Macdonald functions \footnote{\, Ch. VI, p. 344, in \cite{macdonald.book} }, the product of two dual $q$-Whittaker functions can be expanded as, \begin{equation} W^{\, \prime}_{\, q \, Y_1} \left\lgroup\, \pmb{x}\, \right\rgroup \, W^{\, \prime}_{\, q \, Y_2} \left\lgroup\, \pmb{x}\, \right\rgroup = \sum_{\, Y_3} f_{\, Y_1^{\, \prime}\, Y_2^{\, \prime} \, Y_3^{\, \prime}} \, W^{\, \prime}_{\, q \, Y_3} \left\lgroup\, \pmb{x}\, \right\rgroup \label{product.02} \end{equation} From Equations \ref{product.01} and \ref{product.02}, and the corresponding relations for Macdonald functions \footnote{\, Ch. VI, p. 344, Equation 7.3, in \cite{macdonald.book} }, \begin{equation} f_{\, Y_1^{\, \prime} \, Y_2^{\, \prime} \, Y_3^{\, \prime}} = \left\lgroup \frac{ b_{\, Y_3} }{ b_{\, Y_1} \, b_{\, Y_2} } \right\rgroup\, f_{\, Y_1 \, Y_2 \, Y_3} \, , \end{equation} where $b_{\, Y}$ is defined in Equation \ref{dual.macdonald}. Similarly \footnote{\, Ch. VI, p.
343, Equation 7.1, in \cite{macdonald.book} }, the structure constant $f_{\, Y_1 \, Y_2 \, Y_3}$ can be written as an inner product, \begin{equation} f_{\, Y_1 \, Y_2 \, Y_3} = \langle\, W^{\, \prime}_{\, q \, Y_3} \left\lgroup\, \pmb{x}\, \right\rgroup \, |\, W_{\, q \, Y_1} \left\lgroup\, \pmb{x}\, \right\rgroup \, W_{\, q \, Y_2} \left\lgroup\, \pmb{x}\, \right\rgroup \, \rangle \end{equation} \subsection{Skew $q$-Whittaker functions} From the definitions of the skew Macdonald function $P_{Y_1 / Y_2} \left\lgroup \, \pmb{x} \, \right\rgroup$ and the skew dual Macdonald function $Q_{Y_1 / Y_2} \left\lgroup \, \pmb{x} \, \right\rgroup$ \footnote{\, Ch. VI, p. 344, Equation $7.6^{\, \prime}$, and Ch. VI, p. 344, Equation 7.5, in \cite{macdonald.book} }, the skew $q$-Whittaker function $W_{\, q \, Y_1 / Y_2} \left\lgroup\, \pmb{x}\, \right\rgroup$ is defined in terms of the skew dual $q$-Whittaker function as, \begin{equation} W_{\, q \, Y_1 / Y_2} \left\lgroup\, \pmb{x}\, \right\rgroup = \left\lgroup \frac{ b_{\, Y_2}} { b_{\, Y_1}} \right\rgroup W^{\, \prime}_{\, q \, Y_1 / Y_2} \left\lgroup\, \pmb{x}\, \right\rgroup, \end{equation} while the skew dual $q$-Whittaker function $W^{\, \prime}_{\, q \, Y_1 / Y_2} \left\lgroup\, \pmb{x}\, \right\rgroup$ is defined in terms of the dual (non-skewed) $q$-Whittaker function as, \begin{equation} W^{\, \prime}_{\, q \, Y_1 / Y_2} \left\lgroup\, \pmb{x}\, \right\rgroup = \sum_{\, Y_3} f_{\, Y_2\, Y_3 \, Y_1} \, W^{\, \prime}_{\, q \, Y_3} \left\lgroup\, \pmb{x}\, \right\rgroup \end{equation} \subsection{Skew $q$-Whittaker Cauchy identities} From the Cauchy identities for skew Macdonald functions \footnote{\, Ch. VI, p. 352, and p. 
352, in \cite{macdonald.book} }, the skew $q$-Whittaker functions satisfy the Cauchy identities, \begin{equation} \sum_{\, Y} W^{ }_{\, q \, Y / Y_1} \left\lgroup\,\pmb{x}\, \right\rgroup \, W^{\, \prime}_{\, q \, Y / Y_2} \left\lgroup\,\pmb{y}\, \right\rgroup = \prod_{ n = \, 0}^\infty \left\lgroup \frac{1} {1\, -\, \pmb{x} \, \pmb{y} \, q^{\, n}} \right\rgroup \sum_{\, Y} \, W^{ }_{\, q \, Y_2 / Y} \left\lgroup\,\pmb{x}\, \right\rgroup\, W^{\, \prime}_{\, q \, Y_1 / Y} \left\lgroup\,\pmb{y}\, \right\rgroup, \label{pqt.cauchy.identity.skew.01} \end{equation} and other Cauchy identities that involve skew Hall-Littlewood functions that need not concern us here. \section{Twisted $q$-Whittaker functions} \label{section.03} \textit{We define an involution that we call \textit{\lq a twist\rq} that acts on the $q$-Whittaker functions to generate \lq twisted $q$-Whittaker functions\rq, then consider Cauchy identities that involve the $q$-Whittaker functions and their twisted versions. The reason why we need these specific Cauchy identities will be clear in the sequel.} \subsection{A twist} Consider the twist $\pmb{\iota }$, which acts on the power sum symmetric functions as, \begin{equation} \pmb{\iota } \, . \, p_n \left\lgroup \pmb{x} \right\rgroup = \left\lgroup -1 \right\rgroup^{\, n-1} \, p_n \left\lgroup \pmb{x} \right\rgroup, \quad n = 1, 2, \cdots, \label{involution} \end{equation} which, \textit{in the absence of the Macdonald $q$ and $t$ parameters}, is identical to the involution $\omega$ in the theory of symmetric functions \footnote{\, Ch. I, p. 21 \cite{macdonald.book}. }, and acts on Schur functions as, \begin{equation} \pmb{\iota } \, . \, s_{\lambda } \left\lgroup \pmb{x} \right\rgroup = \omega \, . \, s_{\lambda } \left\lgroup \pmb{x} \right\rgroup = s_{\lambda^{\,\prime}} \left\lgroup \pmb{x} \right\rgroup \end{equation} In the presence of Macdonald $q$ and $t$ parameters, the natural generalization of $\omega$ acts as, \begin{equation} \omega \, . 
\, p_n \left\lgroup \pmb{x} \right\rgroup = \left\lgroup -1 \right\rgroup^{\, n-1} \, \left\lgroup \frac{ 1 - q^{\, n} }{ 1 - t^{\, n} } \right\rgroup \, p_n \left\lgroup \pmb{x} \right\rgroup, \quad n \geqslant 1, \label{involution.m} \end{equation} and acts on a $q$-Whittaker function with a parameter $q$ to give a Hall-Littlewood function also with a parameter $q$ (rather than $t$) \cite{borodin.wheeler}. The action of our twist $\pmb{\iota }$ remains the same in the presence of one or both Macdonald parameters, and it acts on $q$-Whittaker functions to generate \textit{twisted $q$-Whittaker functions}. \subsection{Twisted $q$-Whittaker functions} The involution $\pmb{\iota }$ acts on the $q$-Whittaker functions $W^{ }_{\, q \, Y}$ and the dual $q$-Whittaker functions $W^{\, \prime}_{\, q \, Y}$ to produce the twisted $q$-Whittaker functions $W^{ \, \star}_{\, q \, Y}$ and the twisted dual $q$-Whittaker functions $W^{\, \prime \, \star}_{\, q \, Y}$ \, , \begin{equation} \pmb{\iota } \, . \, W_{\, q \, Y} \left\lgroup \pmb{x} \right\rgroup = W^{\, \star}_{\, q \, Y} \left\lgroup \pmb{x} \right\rgroup, \quad \pmb{\iota } \, . \, W^{\, \prime}_{\, q \, Y} \left\lgroup \pmb{x} \right\rgroup = W^{\, \prime \, \star}_{\, q \, Y} \left\lgroup \pmb{x} \right\rgroup, \label{twisted.whittaker} \end{equation} where $W^{\, \star}_{\, q \, Y} \left\lgroup \pmb{x} \right\rgroup$ and $W^{\, \prime \, \star}_{\, q \, Y} \left\lgroup \pmb{x} \right\rgroup$ are defined by expanding $W_{\, q \, Y} \left\lgroup \pmb{x} \right\rgroup$ and $W^{\, \prime}_{\, q \, Y} \left\lgroup \pmb{x} \right\rgroup$ in the power sum functions $p_{\, n} \left\lgroup \pmb{x} \right\rgroup$ and acting on the latter as in Equation \ref{involution}.
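As a concrete sanity check (an illustration only, not part of the argument), the action of $\pmb{\iota}$ in Equation \ref{involution} can be tested on a small Schur function expanded in the power-sum basis; the symbols \texttt{p1}, \texttt{p2} below are our own names for $p_1, p_2$:

```python
# Sanity check of the twist in Equation (involution): iota . p_n = (-1)^(n-1) p_n.
# In the q -> 0 limit, iota coincides with the classical involution omega, which
# sends s_Y to s_{Y'}. We verify iota . s_(2) = s_(1,1) in the power-sum basis.
import sympy as sp

p1, p2 = sp.symbols('p1 p2')

def iota(expr):
    # only even power sums change sign, since (-1)^(n-1) = -1 for even n
    return expr.subs(p2, -p2)

s2  = (p1**2 + p2) / 2     # s_(2) expanded in power sums
s11 = (p1**2 - p2) / 2     # s_(1,1) expanded in power sums

assert sp.expand(iota(s2) - s11) == 0        # iota . s_(2) = s_(1,1)
assert sp.expand(iota(iota(s2)) - s2) == 0   # iota squares to the identity
```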
\subsubsection{Remark} In the limit $q \rightarrow 0$, both $W^{ }_{\, q \, Y} \left\lgroup \pmb{x} \right\rgroup$ and $W^{\, \prime}_{\, q \, Y} \left\lgroup \pmb{x} \right\rgroup$ reduce to the same Schur function $s_{\, Y^{ }} \left\lgroup \pmb{x} \right\rgroup$ labelled by the Young diagram $Y$, and their twisted versions $W^{\, \star}_{\, q \, Y} \left\lgroup \pmb{x} \right\rgroup$ and $W^{\, \prime \, \star}_{\, q \, Y} \left\lgroup \pmb{x} \right\rgroup$ reduce to the same Schur function $s_{\, Y^{\, \prime}} \left\lgroup \pmb{x} \right\rgroup$ labelled by the transpose Young diagram $Y^{\, \prime}$. \subsection{More Cauchy identities} Starting from the identity, \begin{equation} \exp \left\lgroup \sum_{n=1}^{\infty} \, \frac{1}{n} \frac{ 1 }{ \left\lgroup 1 - q^{\, n} \right\rgroup } \, p_n \left\lgroup \pmb{x} \right\rgroup p_n \left\lgroup \pmb{y} \right\rgroup \right\rgroup = \prod_{\, n = 0}^{\infty} \frac{ 1 }{ \left\lgroup 1 - \, \pmb{x} \, \pmb{y} \, q^{\, n} \right\rgroup }, \end{equation} and the identities obtained by applying the involution \ref{involution} on the power sum functions $p_n \left\lgroup \pmb{x} \right\rgroup$ in the Cauchy identity \ref{pqt.cauchy.identity.skew.01}, \begin{multline} \exp \left\lgroup \sum_{n=1}^{\infty} \, \frac{1}{n} \frac{ 1 }{ \left\lgroup 1 - q^{\, n} \right\rgroup } \, \pmb{\iota } \, . \, p_n \left\lgroup \pmb{x} \right\rgroup p_n \left\lgroup \pmb{y} \right\rgroup \right\rgroup = \\ \exp \left\lgroup \sum_{n=1}^{\infty} \, \frac{1}{n} \frac{ 1 }{ \left\lgroup 1 - q^{\, n} \right\rgroup } \, p_n \left\lgroup \pmb{x} \right\rgroup \pmb{\iota } \, . \, p_n \left\lgroup \pmb{y} \right\rgroup \right\rgroup = \prod_{n=0}^{\infty} \, \left\lgroup 1 + \, \pmb{x} \, \pmb{y} \, q^{\, n} \right\rgroup, \end{multline} and, \begin{equation} \exp \left\lgroup \sum_{n=1}^{\infty} \, \frac{1}{n} \frac{ 1 }{ \left\lgroup 1 - q^{\, n} \right\rgroup } \, \pmb{\iota } \, . \, p_n \left\lgroup \pmb{x} \right\rgroup \pmb{\iota } \, .
\, p_n \left\lgroup \pmb{y} \right\rgroup \right\rgroup = \prod_{n=0}^{\infty} \, \frac{ 1 }{ \left\lgroup 1 - \, \pmb{x} \, \pmb{y} \, q^{\, n} \right\rgroup }, \end{equation} we obtain, \begin{equation} \sum_{\, Y} W^{\, \star}_{\, q \, Y / Y_1} \left\lgroup\,\pmb{x}\, \right\rgroup \, W^{\, \prime}_{\, q \, Y / Y_2} \left\lgroup\,\pmb{y}\, \right\rgroup = \prod_{ n = \, 0}^\infty \left\lgroup 1 + \, \pmb{x} \, \pmb{y} \, q^{\, n} \right\rgroup \sum_{\, Y} \, W^{\, \star}_{\, q \, Y_2 / Y} \left\lgroup\,\pmb{x}\, \right\rgroup\, W^{\, \prime}_{\, q \, Y_1 / Y} \left\lgroup\,\pmb{y}\, \right\rgroup \,, \label{pqt.cauchy.identity.skew.02} \end{equation} \begin{equation} \sum_{\, Y} W^{\, \star}_{\, q \, Y / Y_1} \left\lgroup\,\pmb{x}\, \right\rgroup \, W^{\, \prime\, \star}_{\, q \, Y / Y_2} \left\lgroup\,\pmb{y}\, \right\rgroup = \prod_{ n = \, 0}^\infty \left\lgroup \frac{1}{1 - \, \pmb{x} \, \pmb{y} \, q^{\, n}} \right\rgroup \sum_{\, Y} \, W^{\, \star}_{\, q \, Y_2 / Y} \left\lgroup\,\pmb{x}\, \right\rgroup\, W^{\, \prime \, \star}_{\, q \, Y_1 / Y} \left\lgroup\,\pmb{y}\, \right\rgroup \label{pqt.cauchy.identity.skew.03} \end{equation} Replacing $q$ with $q^{\prime}$, \begin{equation} \sum_{\, Y} W^{ }_{\, q^\prime \, Y / Y_1} \left\lgroup\,\pmb{x}\, \right\rgroup \, W^{\, \prime}_{\, q^\prime \, Y / Y_2} \left\lgroup\,\pmb{y}\, \right\rgroup = \prod_{ n = \, 0}^\infty \left\lgroup 1\, -\, \pmb{x} \, \pmb{y} \, q^{\, n+1} \right\rgroup \sum_{\, Y} \, W^{ }_{\, q^\prime \, Y_2 / Y} \left\lgroup\,\pmb{x}\, \right\rgroup\, W^{\, \prime}_{\, q^\prime \, Y_1 / Y} \left\lgroup\,\pmb{y}\, \right\rgroup, \label{pqt.cauchy.identity.skew.prime.01} \end{equation} \begin{equation} \sum_{\, Y} W^{\, \star}_{\, q^\prime \, Y / Y_1} \left\lgroup\,\pmb{x}\, \right\rgroup \, W^{\, \prime}_{\, q^\prime \, Y / Y_2} \left\lgroup\,\pmb{y}\, \right\rgroup = \prod_{ n = \, 0}^\infty \left\lgroup \frac{1} {1 + \, \pmb{x} \, \pmb{y} \, q^{\, n+1}} \right\rgroup \sum_{\, Y} \, W^{\, 
\star}_{\, q^\prime \, Y_2 / Y} \left\lgroup\,\pmb{x}\, \right\rgroup\, W^{\, \prime}_{\, q^\prime \, Y_1 / Y} \left\lgroup\,\pmb{y}\, \right\rgroup \,, \label{pqt.cauchy.identity.skew.prime.02} \end{equation} \begin{equation} \sum_{\, Y} W^{\, \star}_{\, q^\prime \, Y / Y_1} \left\lgroup\,\pmb{x}\, \right\rgroup \, W^{\, \prime\, \star}_{\, q^\prime \, Y / Y_2} \left\lgroup\,\pmb{y}\, \right\rgroup = \prod_{ n = \, 0}^\infty \left\lgroup 1 - \, \pmb{x} \, \pmb{y} \, q^{\, n+1} \right\rgroup \sum_{\, Y} \, W^{\, \star}_{\, q^\prime \, Y_2 / Y} \left\lgroup\,\pmb{x}\, \right\rgroup\, W^{\, \prime \, \star}_{\, q^\prime \, Y_1 / Y} \left\lgroup\,\pmb{y}\, \right\rgroup, \label{pqt.cauchy.identity.skew.prime.03} \end{equation} where we used \begin{equation} \exp \left\lgroup \pm \sum_{n=1}^{\infty} \, \frac{1}{1-q^{\, \prime \, n}} \frac{x^{\, n} y^{\, n}}{n} \right\rgroup = \exp \left\lgroup \mp \sum_{n=1}^{\infty} \, \frac{q^{\, n}}{1-q^{ \, n}} \frac{x^{\, n} y^{\, n}}{n} \right\rgroup =\prod_{n=0}^\infty \left\lgroup 1 - \, x \, y \, q^{\, n + 1}\right\rgroup^{\, \pm 1}, \end{equation} to express the factors in Cauchy identities in terms of $q$. \section{$pqt$-Free bosons and $pqt$-vertex operators} \label{section.04} \textit{We recall basic facts related to Saito's $pqt$-Heisenberg algebras and $pqt$-vertex operators and note that Saito's deformation parameter $p$ appears on equal footing with Macdonald's parameter $q$. Setting $p=t$, all dependence on $p$ and on $t$ disappears, and the remaining parameter $q$ deforms Schur functions into $q$-Whittaker functions. } \medskip \subsection{Two $pqt$-Heisenberg algebras} Saito's free-boson realization of the elliptic extension of the Ding-Iohara-Miki algebra is based on \textit{two} $pqt$-Heisenberg algebras \footnote{\, Our notation is slightly different from, but equivalent to Saito's notation. 
}, \begin{equation} [a_m, a_n] = m \left\lgroup 1-p^{\, |m|} \right\rgroup \left\lgroup \frac{1 - q^{\, |m|}} {1 - t^{\, |m|}} \right\rgroup \delta_{m+n, 0} \, , \quad [b_m, b_n] = - m \left\lgroup {1 - p^{\, \prime \, |m|}} \right\rgroup \left\lgroup \frac{1 - q^{\, \prime \, |m|}} {1 - t^{\, \prime \, |m|}} \right\rgroup \delta_{m+n, 0} \, , \label{two.heisenbergs} \end{equation} where $p^{\, \prime}, \cdots,$ stand for $1/p, \cdots$. The oscillators $a_{\, n}$ and $b_{\, n}$, $n = 1, 2, \cdots$ act as creation operators on the left vacuum state $\langle \, 0 \, |$, and as annihilation operators on the right vacuum state $| \, 0 \, \rangle$, while $a_{\, n}$ and $b_{\, n}$, $n = - 1, - 2, \cdots$ act as annihilation operators on the left vacuum state $\langle \, 0 \, |$, and as creation operators on the right vacuum state $| \, 0 \, \rangle$. \subsubsection{Remark} One should consider the writing of the second Heisenberg algebra in Equation \ref{two.heisenbergs} as short-hand notation. When performing computations, and particularly expansions in the various parameters, one should work in terms of the variables $p < 1$, $q < 1$, and $t < 1$, rather than their inverses, which also makes it clear that the minus sign on the right hand side of the $b$-operator commutator is due to notation, and that both algebras have the same signature. \subsection{Two-boson $pqt$-vertex operators} From the $pqt$-Heisenberg algebras, Saito defines two-boson $pqt$-vertex operators \footnote{\, Equations (3.9) and (3.8) respectively, in \cite{saito.01}, but in different notation. In particular, our $\phi_{\, +}^{\, p \, q \, t} \left\lgroup x \right\rgroup$ and $\phi_{\, -}^{\, p \, q \, t} \left\lgroup x \right\rgroup$ are Saito's $\phi^{\, \star} \left\lgroup p; x \right\rgroup$ and $\phi \left\lgroup p; x \right\rgroup$, respectively, and our $b_n$ is Saito's $- b_n$.
}, \begin{multline} \phi_{\, \pm}^{\, p \, q \, t} \left\lgroup x \right\rgroup = \\ \exp \left\lgroup \sum_{n = 1}^{\infty} \left\lgroup \frac{ 1 }{ 1 - p^{\, n} } \right\rgroup \left\lgroup \frac{ 1 - t^{\, n} }{ 1 - q^{\, n} } \right\rgroup \frac{x^{\, \mp \, n}}{n} \, a_{\, \pm \, n} \right\rgroup \, \exp \left\lgroup \sum_{n = 1}^{\infty} \left\lgroup \frac{ 1 }{ 1 - p^{\, \prime \, n} } \right\rgroup \left\lgroup \frac{ 1 - t^{\, \prime \, n} }{ 1 - q^{\, \prime \, n} } \right\rgroup \frac{ x^{\, \prime \, \mp \, n} }{ n } \, b_{\, \pm \, n} \right\rgroup, \label{phi.plus.minus} \end{multline} where $x^{\, \prime}, \cdots$, stand for $1/x, \cdots$. \subsection{Two equivalent specializations} In Equations \ref{two.heisenbergs} and \ref{phi.plus.minus}, $p$ is Saito's elliptic deformation parameter, $q$ and $t$ are Macdonald parameters, $p$ and $q$ appear on equal footing, and the following two specializations lead to identical results, up to a renaming of the parameters. One can either \textbf{ 1.} set $q \rightarrow t$, so that all dependence on $q$ and on $t$ disappears, to work in terms of Schur functions that are deformed by Saito's elliptic deformation parameter $p$, or \textbf{ 2.} set $p \rightarrow t$, so that all dependence on $p$ and on $t$ disappears, to work in terms of Schur functions that are deformed by Macdonald's parameter $q$, which are $q$-Whittaker functions. In the following, we choose specialization \textbf{2} because that makes it clear that we are dealing with $q$-Whittaker functions whose properties follow directly from those of Macdonald functions.
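The claim that the two specializations coincide up to a renaming of parameters can be checked directly on the structure constants of the $a$-type algebra in Equation \ref{two.heisenbergs}; the following sympy computation is an illustration only:

```python
# Check that the two specializations of the a-type structure constant in
# Equation (two.heisenbergs), m (1 - p^m)(1 - q^m)/(1 - t^m), agree up to
# the renaming p <-> q. Illustrative sketch, not part of the derivation.
import sympy as sp

m, p, q, t = sp.symbols('m p q t', positive=True)
c = m * (1 - p**m) * (1 - q**m) / (1 - t**m)

spec1 = sp.simplify(c.subs(q, t))   # specialization 1: q -> t
spec2 = sp.simplify(c.subs(p, t))   # specialization 2: p -> t

assert sp.simplify(spec1 - m * (1 - p**m)) == 0    # Saito's p survives
assert sp.simplify(spec2 - m * (1 - q**m)) == 0    # Macdonald's q survives
assert sp.simplify(spec1.subs(p, q) - spec2) == 0  # identical after renaming
```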
\section{$q$-Free bosons and $q$-vertex operators} \label{section.05} \textit{We set $p \rightarrow t$ in Saito's $pqt$-Heisenberg algebras and $pqt$-vertex operators, so that all dependence on these parameters disappears, and we end up with pairs of expressions that depend on $q$ and on $1/q$.} \subsection{Two $q$-Heisenberg algebras} Consider the $q$-Heisenberg algebras, which are specializations of Saito's $pqt$-Heisenberg algebras, \begin{equation} [a_m, a_n] = m \left\lgroup 1-q^{\, |m|} \right\rgroup \delta_{m+n, 0} \, , \quad [b_m, b_n] = - m \left\lgroup {1 - q^{\, \prime \, |m|}} \right\rgroup \delta_{m+n, 0} \, , \quad [a_m, b_n] = 0, \label{pqt.heisenberg.algebras} \end{equation} and the $q$-vertex operators, which are specializations of Saito's $pqt$-vertex operators, \begin{equation} \phi_{\, q \, \pm} \left\lgroup x \right\rgroup = \exp \left\lgroup \sum_{n=1}^{\infty} \, \left\lgroup \frac{ 1 }{ 1 - q^{\, n} } \right\rgroup \frac{x^{\, \mp \, n}}{n} \, a_{\, \pm \, n} \right\rgroup \, \exp \left\lgroup \sum_{n=1}^{\infty} \, \left\lgroup \frac{ 1 }{ 1 - q^{\, \prime \, n} } \right\rgroup \frac{x^{\, \prime \, \mp \, n}}{n} \, b_{\, \pm \, n} \right\rgroup \label{phi} \end{equation} \subsubsection{Remark} Because the oscillators in Equation \ref{phi} have the same signs, when $\phi_{\, q \, \pm}$ acts on a pair of Young diagrams $ \left\lgroup Y_1, Y_2 \right\rgroup$, it generates a new pair of Young diagrams $ \left\lgroup W_1, W_2 \right\rgroup$, such that $Y_1$ and $W_1$ are interlacing, and $Y_2$ and $W_2$ are also interlacing. \subsection{One-boson $q$-vertex operators} We define the one-boson $q$-vertex operators \footnote{\, The exponentials in the definitions in Equation \ref{gamma.vertex.operators} differ by a minus sign from those typically used in the literature. The latter were followed in \cite{foda.wu.02}.
This amounts to a redefinition of the creation and annihilation operators $a_n, n \in \mathbbm{Z}$, which does not change the Heisenberg algebra. }, \begin{equation} \Gamma_{\, q \, a \, \pm} \left\lgroup x \right\rgroup = \exp \left\lgroup \sum_{n = 1}^{\infty} \left\lgroup \frac{ 1 }{ 1 - q^{\, n} } \right\rgroup \frac{ x^{\, \mp \, n} }{ n} \, a_{\pm \, n} \right\rgroup, \quad \Gamma_{\, q^{\, \prime} \, b \, \pm} \left\lgroup x \right\rgroup = \exp \left\lgroup \sum_{n = 1}^{\infty} \left\lgroup \frac{ 1 }{ 1 - q^{\, \prime \, n} } \right\rgroup \frac{ x^{\, \mp \, n} }{ n } \, b_{\pm \, n} \right\rgroup, \label{gamma.vertex.operators} \end{equation} which factorize the two-boson $q$-vertex operators as \footnote{\, $\Gamma_{\, q \, a \pm} \left\lgroup x \right\rgroup$ and $\Gamma_{\, q^{\, \prime} \, b \pm} \left\lgroup x \right\rgroup$ are defined in exactly the same way apart from the fact that $\Gamma_{\, q \, a \pm} \left\lgroup x \right\rgroup$ depends on the $a$-oscillators and the parameter $q$, while $\Gamma_{\, q^{\, \prime} \, b \pm} \left\lgroup x \right\rgroup$ depends on the $b$-oscillators and the inverse parameter $1/q$. }, \begin{equation} \phi_{\, q \, +} \left\lgroup x \right\rgroup = \Gamma_{\, q \, a \, +} \left\lgroup x \right\rgroup \Gamma_{\, q^{\, \prime} \, b \, +} \left\lgroup x^{\, \prime} \right\rgroup, \quad \phi_{\, q \, -} \left\lgroup x \right\rgroup = \Gamma_{\, q \, a \, -} \left\lgroup x \right\rgroup \Gamma_{\, q^{\, \prime} \, b \, - } \left\lgroup x^{\, \prime} \right\rgroup, \label{phi.vertex.operators} \end{equation} and satisfy the commutation relations \footnote{\, To prove the commutation relations in Equations \ref{gamma.commutator.01} and \ref{gamma.commutator.02}, we assume that $q < 1$, so that $q^{\, \prime} > 1$, and all expansions and resummations must be made with respect to $q$.
}, \begin{multline} \Gamma_{\, q \, a \, +} \left\lgroup x^{\, \prime} \right\rgroup \Gamma_{\, q \, a \, -} \left\lgroup y \right\rgroup = \\ \exp \left\lgroup \sum_{n = 1}^{\infty} \frac{ 1 }{ 1-q^{\, n} } \frac{ x^{\, n} y^{\, n} }{ n } \right\rgroup \Gamma_{\, q \, a \, -} \left\lgroup y \right\rgroup \, \Gamma_{\, q \, a \, +} \left\lgroup x^{\, \prime} \right\rgroup \, = \frac{ 1 }{ \left\lgroup x \,y \, | \, q \right\rgroup_{\infty} } \, \Gamma_{\, q \, a \, -} \left\lgroup y \right\rgroup \, \Gamma_{\, q \, a \, +} \left\lgroup x^{\, \prime} \right\rgroup, \label{gamma.commutator.01} \end{multline} \begin{multline} \Gamma_{\, q^{\, \prime} \, b \, +} \left\lgroup x \right\rgroup \Gamma_{\, q^{\, \prime} \, b \, -} \left\lgroup y^{\, \prime} \right\rgroup = \\ \exp \left\lgroup - \sum_{n = 1}^{\infty} \frac{1}{1-q^{\, \prime \, n}} \frac{x^{\, \prime \, n} y^{\, \prime \, n}}{n} \right\rgroup \Gamma_{\, q^{\, \prime} \, b \, -} \left\lgroup y^{\, \prime} \right\rgroup \, \Gamma_{\, q^{\, \prime} \, b \, +} \left\lgroup x \right\rgroup = \left\lgroup x^{\, \prime} \, y^{\, \prime} \, | \, q^{\, \prime} \right\rgroup_{\infty} \, \Gamma_{\, q^{\, \prime} \, b \, -} \left\lgroup y^{\, \prime} \right\rgroup \, \Gamma_{\, q^{\, \prime} \, b \, +} \left\lgroup x \right\rgroup \, = \\ \exp \left\lgroup \sum_{n = 1}^{\infty} \frac{ q^{\, n} }{ 1 - q^{\, n}} \frac{ x^{\, \prime \, n} y^{\, \prime \, n} }{ n } \right\rgroup \Gamma_{\, q^{\, \prime} \, b \, -} \left\lgroup y^{\, \prime} \right\rgroup \, \Gamma_{\, q^{\, \prime} \, b \, +} \left\lgroup x \right\rgroup = \frac{1}{ \left\lgroup x^{\, \prime} \, y^{\, \prime} \, q \, | \, q \right\rgroup_{\infty}} \, \Gamma_{\, q^{\, \prime} \, b \, -} \left\lgroup y^{\, \prime} \right\rgroup \, \Gamma_{\, q^{\, \prime} \, b \, +} \left\lgroup x \right\rgroup, \label{gamma.commutator.02} \end{multline} so that, \begin{equation} \langle \, 0 \, | \, \phi_{\, q \, +} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup \, \phi_{\, 
q \, -} \left\lgroup \pmb{y} \right\rgroup \, | \, 0 \, \rangle = \frac{ 1 }{ \theta_{\, q} \left\lgroup \pmb{x}, \, \pmb{y} \right\rgroup } \label{elliptic.macdonald.kernel} \end{equation} \subsection{An inverse vertex operator} We also need to define the inverse vertex operator, \begin{equation} \Gamma^{\, \prime}_{\, q^{\, \prime} \, b \, -} \left\lgroup x \right\rgroup = \exp \left\lgroup \, - \, \sum_{n = 1}^{\infty} \left\lgroup \frac{ 1 }{ 1 - q^{\, \prime \, n} } \right\rgroup \frac{ x^{\, n} }{ n } \, b_{\, - \, n} \right\rgroup, \label{inverse.gamma.vertex.operators} \end{equation} which satisfies the commutation relation, \begin{multline} \Gamma_{\, q^{\, \prime}\, b \, +} \left\lgroup x \right\rgroup \Gamma^{\, \prime}_{\, q^{\, \prime}\, b \, -} \left\lgroup y^{\, \prime} \right\rgroup = \\ \exp \left\lgroup + \sum_{n = 1}^{\infty} \frac{1}{1-q^{\, \prime \, n}} \frac{x^{\, \prime \, n} y^{\, \prime \, n}}{n} \right\rgroup \Gamma^{\, \prime}_{\, q^{\, \prime}\, b \, -} \left\lgroup y^{\, \prime} \right\rgroup \, \Gamma_{\, q^{\, \prime}\, b \, +} \left\lgroup x \right\rgroup = \frac{1}{ \left\lgroup x^{\, \prime} \, y^{\, \prime} \, | \, q^{\, \prime} \right\rgroup_{\infty}} \, \Gamma^{\, \prime}_{\, q^{\, \prime}\, b \, -} \left\lgroup y^{\, \prime} \right\rgroup \, \Gamma_{\, q^{\, \prime}\, b \, +} \left\lgroup x \right\rgroup \, = \\ \exp \left\lgroup -\sum_{n = 1}^{\infty} \frac{ q^{\, n} }{ 1 - q^{\, n}} \frac{ x^{\, \prime \, n} y^{\, \prime \, n} }{ n } \right\rgroup \Gamma^{\, \prime}_{\, q^{\, \prime}\, b \, -} \left\lgroup y^{\, \prime} \right\rgroup \, \Gamma_{\, q^{\, \prime}\, b \, +} \left\lgroup x \right\rgroup = \left\lgroup x^{\, \prime} \, y^{\, \prime} \, q \, | \, q \right\rgroup_{\infty} \, \Gamma^{\, \prime}_{\, q^{\, \prime}\, b \, -} \left\lgroup y^{\, \prime} \right\rgroup \, \Gamma_{\, q^{\, \prime}\, b \, +} \left\lgroup x \right\rgroup \label{gamma.commutator.02-minus} \end{multline} $\Gamma^{\, \prime}_{\, q^{\, \prime} \, 
b \, -} \left\lgroup x \right\rgroup$, as well as the vertex operator obtained from it by the action of the involution $\pmb{\iota }$, will be the only inverse vertex operators that we need. \section{The power-sum/Heisenberg correspondence. The $a$-Heisenberg algebra} \label{section.06} \textit{Starting from the $q$-Whittaker Cauchy identities, we obtain identities that involve operator-valued $q$-Whittaker functions that act on $q$-Whittaker states. In this section, we consider the $a$-Heisenberg algebra that depends on $a_{\, \pm \, n}$ only. } \smallskip \subsection{An isomorphism} Comparing the inner product of power-sum symmetric functions in the $q$-Whittaker basis, Equation \ref{young.diagram.power.sum.inner.product.a}, and the inner product of the right and left-states, Equation \ref{macdonald.orthogonality}, we deduce that the power-sum symmetric function basis is isomorphic to the Fock space spanned by the left-states $\langle \, a_{\, Y}\, |$, as well as that spanned by the right-states $|\, a_{\, Y}\, \rangle$, where $Y$ is a partition. 
In the left states, on which the operators $a_{ n}, n = 1, 2, \cdots,$ act as creation operators, we have the correspondence, \begin{equation} p_n \left\lgroup \pmb{x} \right\rgroup \rightleftharpoons \, a_{ n}, \quad n \geqslant 1 \label{power.sum.heisenberg.correspondence} \end{equation} In the right states, on which the operators $a_{ n}, n = -1, -2, \cdots,$ act as creation operators, we have the correspondence, \begin{equation} p_n \left\lgroup \pmb{x} \right\rgroup \rightleftharpoons \, a_{-n}, \quad n \geqslant 1 \label{power.sum.heisenberg.correspondence.dual} \end{equation} \subsection{Operator-valued $q$-Whittaker functions and Cauchy identities} Since the power-sum symmetric functions form a complete basis, we expand the $q$-Whittaker functions in terms of the power-sum symmetric functions, then formally replace the latter with Heisenberg generators to obtain operator-valued $q$-Whittaker functions that act either on left- or on right-states that are labelled by $q$-Whittaker functions.
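A toy realization of this correspondence on right states (our own illustration, with hypothetical assignments: $a_{-n}$ acting as multiplication by $p_n$, and $a_n$ as $n \left\lgroup 1 - q^{\, n} \right\rgroup \partial / \partial p_n$) reproduces the $a$-commutator of Equation \ref{pqt.heisenberg.algebras}:

```python
# Toy power-sum realization of the a-Heisenberg algebra (illustration only):
# a_{-n} acts as multiplication by p_n, and a_n as n (1 - q^n) d/dp_n, n >= 1.
# This reproduces [a_n, a_{-n}] = n (1 - q^n), Equation (pqt.heisenberg.algebras).
import sympy as sp

q = sp.Symbol('q')
p1, p2, p3 = sp.symbols('p1 p2 p3')
pvar = {1: p1, 2: p2, 3: p3}

def a(n, F):
    if n > 0:                                    # annihilation on polynomials
        return n * (1 - q**n) * sp.diff(F, pvar[n])
    return pvar[-n] * F                          # creation: multiply by p_n

F = p1**2 * p2 + 3 * p3                          # an arbitrary test state

for n in (1, 2, 3):
    comm = a(n, a(-n, F)) - a(-n, a(n, F))
    assert sp.expand(comm - n * (1 - q**n) * F) == 0
```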
\subsection{The action of vertex operators on $q$-Whittaker states} From Equations \ref{an.exponential.is.a.product} and \ref{pqt.cauchy.identity.skew.01}, \begin{multline} \exp \left\lgroup \sum_{n=1}^\infty \frac{1}{n} \left\lgroup \frac{1}{1\, -\,q^{\, n}} \right\rgroup p_n \left\lgroup\, \pmb{x}\, \right\rgroup\, p_n \left\lgroup\, \pmb{y}\, \right\rgroup\, \right\rgroup \sum_{\, Y} W_{\, q \, Y_1 / Y} \left\lgroup\, \pmb{x} \, \right\rgroup\, W^{\, \prime}_{\, q \, Y_2 / Y} \left\lgroup\, \pmb{y} \, \right\rgroup \\ = \sum_{\, Y} W_{\, q \, Y / Y_2} \left\lgroup\, \pmb{x} \, \right\rgroup\, W^{\, \prime}_{\, q \, Y / Y_1} \left\lgroup\, \pmb{y} \, \right\rgroup \label{step.02} \end{multline} \subsubsection{The action of $\Gamma_{\, q \, a \, +}$ on a left-state} Using the power-sum/Heisenberg correspondence, Equation \ref{power.sum.heisenberg.correspondence}, on $p_n \left\lgroup \pmb{x} \right\rgroup$ in Equation \ref{step.02}, we obtain the operator-valued $q$-Whittaker Cauchy identity, \begin{multline} \exp \left\lgroup \sum_{n=1}^\infty \frac{1}{n} \left\lgroup \frac{1}{1\, -\,q^{\, n}} \right\rgroup a_n\, p_n \left\lgroup\, \pmb{y}\, \right\rgroup\, \right\rgroup \sum_{\, Y} W_{\, q \, Y_1 / Y} \left\lgroup\, \pmb{a}_{+}\, \right\rgroup\, W^{\, \prime}_{\, q \, Y_2 / Y} \left\lgroup\, \pmb{y} \, \right\rgroup \\ = \sum_{\, Y} W_{\, q \, Y / Y_2} \left\lgroup\, \pmb{a}_{+}\, \right\rgroup\, W^{\, \prime}_{\, q \, Y / Y_1} \left\lgroup\, \pmb{y} \, \right\rgroup \label{step.03} \end{multline} From the definition of the $\Gamma_{\, q \, a \, +}$ vertex operators, Equation \ref{gamma.vertex.operators}, \begin{equation} \Gamma_{\, q \, a \, +} \left\lgroup \pmb{y}^{\, \prime} \right\rgroup \sum_{\, Y} W_{\, q \, Y_1 / Y} \left\lgroup\, \pmb{a}_{+} \, \right\rgroup\, W^{\, \prime}_{\, q \, Y_2 / Y} \left\lgroup\, \pmb{y} \, \right\rgroup = \sum_{\, Y} W_{\, q \, Y / Y_2} \left\lgroup\, \pmb{a}_{+} \, \right\rgroup\, W^{\, \prime}_{\, q \, Y / Y_1}
\left\lgroup\, \pmb{y} \, \right\rgroup \label{step.04} \end{equation} where we have used the notation in section \ref{products.on.sequences}. Acting with each side of Equation \ref{step.04} on a left vacuum state, \begin{equation} \sum_{\, Y} \langle\, W_{\, q \, Y_1 / Y}\, |\, W^{\, \prime}_{\, q \, Y_2 / Y} \left\lgroup\, \pmb{y} \, \right\rgroup \Gamma_{\, q \, a \, +} \left\lgroup \pmb{y}^{\, \prime} \right\rgroup = \sum_{\, Y} \langle\, W_{\, q \, Y / Y_2}\, |\, W^{\, \prime}_{\, q \, Y / Y_1} \left\lgroup\, \pmb{y}\, \right\rgroup, \label{step.05} \end{equation} where $\langle\, W_{\, q \, Y_1 / Y_2}\,|$ is a state in the free-boson Fock space obtained by the action of the operator-valued $q$-Whittaker function labelled by the skew Young diagram $Y_1 / Y_2$, that is, by definition, \begin{equation} \langle\, \emptyset\, |\, W_{\, q \, Y_1 / Y_2} \left\lgroup\, \pmb{a}_{+}\, \right\rgroup\, = \langle\, W_{\, q \, Y_1 / Y_2}\, | \label{macdonald.action.01} \end{equation} Setting $Y_2 = \emptyset$ in Equation \ref{step.05}, we force $Y = \emptyset$ in the sum on the left hand side, \begin{equation} \langle\, W_{\, q \, Y_1}\, |\, \Gamma_{\, q \, a \, +} \left\lgroup \pmb{y}^{\, \prime} \right\rgroup = \sum_{\, Y} \langle\, W_{\, q \, Y}\, |\, W^{\, \prime}_{\, q \, Y / Y_1} \left\lgroup\, \pmb{y}\, \right\rgroup \label{step.06} \end{equation} \subsubsection{The action of\, $\Gamma_{\, q \, a \, -}$\, on a right-state} Using the power-sum/Heisenberg correspondence, Equation \ref{power.sum.heisenberg.correspondence.dual}, on $p_n \left\lgroup \pmb{y} \right\rgroup$ in Equation \ref{step.02}, we obtain the operator-valued $q$-Whittaker Cauchy identity, \begin{multline} \exp \left\lgroup \sum_{n=1}^\infty \frac{ 1 }{ n } \left\lgroup \frac{ 1 }{ 1\, -\,q^{\, n} } \right\rgroup p_n \left\lgroup\, \pmb{x}\, \right\rgroup\, a_{-n} \right\rgroup \sum_{\, Y} W_{\, q \, Y_1 / Y} \left\lgroup \, \pmb{x} \, \right\rgroup\, W^{\, \prime}_{\, q \, Y_2 / Y}
\left\lgroup \, \pmb{a}_{-}\, \right\rgroup \\ = \sum_{\, Y} W_{\, q \, Y / Y_2} \left\lgroup\, \pmb{x} \, \right\rgroup\, W^{\, \prime}_{\, q \, Y / Y_1} \left\lgroup\, \pmb{a}_{-}\, \right\rgroup \label{step.03.repeated} \end{multline} From the definition of the $\Gamma_{\, q \, a \, -}$ vertex operators in Equation \ref{gamma.vertex.operators}, \begin{equation} \Gamma_{\, q \, a \, -} \left\lgroup \pmb{x} \right\rgroup \sum_{\, Y} W_{\, q \, Y_1 / Y} \left\lgroup\, \pmb{x} \, \right\rgroup\, W^{\, \prime}_{\, q \, Y_2 / Y} \left\lgroup\, \pmb{a}_{-}\, \right\rgroup = \sum_{\, Y} W_{\, q \, Y / Y_2} \left\lgroup\, \pmb{x} \, \right\rgroup\, W^{\, \prime}_{\, q \, Y / Y_1} \left\lgroup\, \pmb{a}_{-}\, \right\rgroup \label{step.04.repeated} \end{equation} Acting with each side of Equation \ref{step.04.repeated} on a right vacuum state, \begin{equation} \Gamma_{\, q \, a \, -} \left\lgroup \pmb{x} \right\rgroup \sum_{\, Y} W_{\, q \, Y_1 / Y} \left\lgroup\, \pmb{x} \, \right\rgroup\, |\, W^{\, \prime}_{\, q \, Y_2 / Y} \, \rangle = \sum_{\, Y} W_{\, q \, Y / Y_2} \left\lgroup\, \pmb{x} \, \right\rgroup\, |\, W^{\, \prime}_{\, q \, Y / Y_1}\, \rangle, \label{step.05.repeated} \end{equation} where $|\, W^{\, \prime}_{\, q \, Y_1 / Y_2} \, \rangle$ is a state in the free boson Fock space obtained by the action of the operator-valued $q$-Whittaker function labelled by the skew Young diagram $Y_1 / Y_2$, that is, by definition, \begin{equation} W^{\, \prime}_{\, q \, Y_1 / Y_2} \left\lgroup\, \pmb{a}_{-}\, \right\rgroup\, |\, \emptyset\, \rangle = |\, W^{\, \prime}_{\, q \, Y_1 / Y_2} \, \rangle \label{macdonald.action.02} \end{equation} Setting $Y_1 = \emptyset$ in Equation \ref{step.05.repeated}, we force $Y = \emptyset$ in the sum on the left hand side, \begin{equation} \Gamma_{\, q \, a \, -} \left\lgroup \pmb{x} \right\rgroup |\, W^{\, \prime}_{\, q \, Y_2} \, \rangle = \sum_{\, Y} W_{\, q \, Y/Y_2} \left\lgroup\, \pmb{x} \, \right\rgroup\, |\, W^{\, \prime}_{\, q 
\, Y}\, \rangle \label{step.06.repeated} \end{equation} \subsubsection{ The action of \, $\Gamma_{\, q \, a \, -}$ on a left-state, and \, $\Gamma_{\, q \, a \, +}$ on a right-state} Using Equations \ref{step.06} and \ref{step.06.repeated}, then Equation \ref{pqt.cauchy.identity.skew.01}, \begin{multline} \langle\, W_{\, q \, Y_1}\, |\, \Gamma_{\, q \, a \, +} \left\lgroup\, \pmb{x}^{\, \prime} \right\rgroup \Gamma_{\, q \, a \, -} \left\lgroup\, \pmb{y} \right\rgroup \, |\, W^{\, \prime}_{\, q \, Y_2 }\, \rangle = \\ \sum_{\, Y} W^{\, \prime}_{\, q \, Y / Y_1}\, \left\lgroup \pmb{x} \right\rgroup\, W_{\, q \, Y / Y_2 }\, \left\lgroup \pmb{y} \right\rgroup = \prod_{n \, = \, 0}^\infty \left\lgroup \frac{ 1 }{ 1\, -\, \pmb{x} \, \pmb{y} \, q^{\, n} } \right\rgroup \sum_{\, Y} W^{\, \prime}_{\, q \, Y_2 / Y} \left\lgroup \pmb{x} \right\rgroup\, W_{\, q \, Y_1 / Y} \left\lgroup \pmb{y} \right\rgroup \label{version.01} \end{multline} Using the $q$-vertex operator commutation relation, Equation \ref{gamma.commutator.01}, then inserting a complete set of orthonormal states, \begin{multline} \langle\, W_{\, q \, Y_1} \, |\, \Gamma_{\, q \, a \, +} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup \Gamma_{\, q \, a \, -} \left\lgroup \pmb{y} \right\rgroup \, |\, W^{\, \prime}_{\, q \, Y_2}\, \rangle = \\ \prod_{n \, = \, 0}^\infty \left\lgroup \frac{ 1 }{ 1\, -\, \pmb{x} \, \pmb{y} \, q^{\, n} } \right\rgroup \langle\, W_{\, q \, Y_1} \, |\, \Gamma_{\, q \, a \, -} \left\lgroup \pmb{y} \right\rgroup \, \Gamma_{\, q \, a \, +} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup \, | \, W^{\, \prime}_{\, q \, Y_2}\, \rangle = \\ \prod_{n \, = \, 0}^\infty \left\lgroup \frac{ 1 }{ 1\, - \, \pmb{x} \, \pmb{y} \, q^{\, n}} \right\rgroup \sum_{\, Y} \langle \, W_{\, q \, Y_1} \, | \, \prod_{j = 1}^\infty \Gamma_{\, q \, a \, -} \left\lgroup y_j \right\rgroup \, | \, W^{\, \prime}_{\, q \, Y}\, \rangle\, \langle\, W_{\, q \, Y}\, |\, \prod_{i\, = \, 1}^\infty \Gamma_{\, q \, a \, +}
\left\lgroup x^{\, \prime}_i \right\rgroup \, | \, W^{\, \prime}_{\, q \, Y_2}\, \rangle \label{version.02} \end{multline} Comparing the right hand sides of Equations \ref{version.01} and \ref{version.02}, we obtain the two identities, \begin{multline} \left\lgroup \langle\, W_{\, q \, Y_1}\, | \, \Gamma_{\, q \, a \, -} \left\lgroup \pmb{y} \right\rgroup\, \right\rgroup\, |\, W^{\, \prime}_{\, q \, Y_2} \, \rangle = W_{\, q \, Y_1 / Y_2} \left\lgroup \pmb{y} \right\rgroup, \\ \langle \, W_{\, q \, Y_1} \, |\, \left\lgroup \Gamma_{\, q \, a \, +} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup\, |\, W^{\, \prime}_{\, q \, Y_2}\, \rangle\, \right\rgroup = W^{\, \prime}_{\, q \, Y_2 / Y_1} \left\lgroup\, \pmb{x} \,\right\rgroup, \label{two.preliminary.identities} \end{multline} where the brackets indicate the state acted on by the vertex operators. Since the states $\langle\, W_{\, q \, Y_1}\, |$ form a basis of left-states, the states $|\, W^{\, \prime}_{\, q \, Y_2}\, \rangle$ form a basis of right-states, and given the $q$-inner product, Equation \ref{young.diagram.power.sum.inner.product.a}, \begin{equation} \langle\, W_{\, q \, Y_1}\, | \, \Gamma_{\, q \, a \, -} \left\lgroup \pmb{y} \right\rgroup \, = \sum_{\, Y} \langle\, W_{\, q \, Y}\, |\, \alpha_{\, Y} \left\lgroup \pmb{y} \right\rgroup, \quad \Gamma_{\, q \, a \, +} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup\, |\, W^{\, \prime}_{\, q \, Y_2}\, \rangle\, = \sum_{\, Y}\, \beta_{\, Y} \left\lgroup \pmb{x} \right\rgroup\, |\, W^{\, \prime}_{\, q \, Y}\, \rangle, \label{expanding} \end{equation} where $\alpha_{\, Y} \left\lgroup \pmb{y} \right\rgroup$ and $\beta_{\, Y} \left\lgroup \pmb{x} \right\rgroup$ are expansion coefficients that carry the dependence on $\pmb{x}$ and $\pmb{y}$, while the expansion is in the set of Young diagrams $Y$.
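The infinite product prefactor in Equations \ref{version.01} and \ref{version.02} comes from the resummation $\exp \left\lgroup \sum_{n \geqslant 1} \left\lgroup \pmb{x} \, \pmb{y} \right\rgroup^{n} / \left\lgroup n \left\lgroup 1 - q^{\, n} \right\rgroup \right\rgroup \right\rgroup = \prod_{n \geqslant 0} \left\lgroup 1 - \pmb{x} \, \pmb{y} \, q^{\, n} \right\rgroup^{-1}$, which, for single variables, is easy to confirm numerically (sample values chosen only for illustration):

```python
# Numerical confirmation of exp( sum_{n>=1} z^n / (n (1 - q^n)) )
#   = prod_{n>=0} 1 / (1 - z q^n), with z playing the role of x*y.
# Series and product truncated at 200 terms; illustration only.
import math

q, z = 0.3, 0.2   # sample values with |q| < 1, |z| < 1

lhs = math.exp(sum(z**n / (n * (1 - q**n)) for n in range(1, 200)))

rhs = 1.0
for n in range(200):
    rhs /= 1 - z * q**n

assert abs(lhs - rhs) < 1e-12
```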
Using Equation \ref{two.preliminary.identities}, we determine $\alpha_{\, Y} \left\lgroup \pmb{y} \right\rgroup$ and $ \beta_{\, Y} \left\lgroup \pmb{x} \right\rgroup$, \begin{multline} \langle\, W_{\, q \, Y_1}\, |\, \Gamma_{\, q \, a \, -} \left\lgroup \pmb{y} \right\rgroup = \sum_{\, Y} \langle\, W_{\, q \, Y} \, |\, W_{\, q \, Y_1 / Y} \left\lgroup \pmb{y} \right\rgroup, \\ \Gamma_{\, q \, a \, +} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup\, |\, W^{\, \prime}_{\, q \, Y_1}\, \rangle = \sum_{\, Y} W^{\, \prime}_{\, q \, Y_1 / Y} \left\lgroup\, \pmb{x} \,\right\rgroup\, |\, W^{\, \prime}_{\, q \, Y} \rangle \label{two.identities} \end{multline} \subsection{The action of vertex operators on twisted $q$-Whittaker states} Replacing the $q$-Whittaker functions with their twisted version, we obtain the following identities for the action of the vertex operators, \begin{multline} \left\lgroup \langle\, W^{\, \star}_{\, q \, Y_1}\, | \, \Gamma_{\, q \, a \, -} \left\lgroup \pmb{y} \right\rgroup\, \right\rgroup\, |\, W^{\, \prime\, \star}_{\, q \, Y_2} \, \rangle = W^{\, \star}_{\, q \, Y_1 / Y_2} \left\lgroup \pmb{y} \right\rgroup, \\ \langle \, W^{\, \star}_{\, q \, Y_1} \, |\, \left\lgroup \Gamma_{\, q \, a \, +} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup\, |\, W^{\, \prime\, \star}_{\, q \, Y_2}\, \rangle\, \right\rgroup = W^{\, \prime\, \star}_{\, q \, Y_2 / Y_1} \left\lgroup\, \pmb{x} \,\right\rgroup, \label{two.preliminary.identities.star} \end{multline} \begin{multline} \langle\, W^{\, \star}_{\, q \, Y_1}\, |\, \Gamma_{\, q \, a \, -} \left\lgroup \pmb{y} \right\rgroup = \sum_{\, Y} \langle\, W^{\, \star}_{\, q \, Y} \, |\, W^{\, \star}_{\, q \, Y_1 / Y} \left\lgroup \pmb{y} \right\rgroup, \\ \Gamma_{\, q \, a \, +} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup\, |\, W^{\, \prime\, \star}_{\, q \, Y_1}\, \rangle = \sum_{\, Y} W^{\, \prime\, \star}_{\, q \, Y_1 / Y} \left\lgroup\, \pmb{x} \,\right\rgroup\, |\, W^{\, \prime\, \star}_{\, q \, Y} \rangle 
\label{two.identities.star} \end{multline} \section{The power-sum/Heisenberg correspondence. The $b$-Heisenberg algebra} \label{section.07} \textit{Starting from the $q$-Whittaker Cauchy identities, we obtain identities that involve operator-valued $q$-Whittaker functions that act on $q$-Whittaker states. In this section, we consider the $b$-Heisenberg algebra that depends on $b_{\, \pm \, n}$ only. } \smallskip \subsection{An isomorphism} Due to the extra minus sign in the commutator of $b_{\pm n}$ bosons, the correspondence between the power-sum functions and oscillators is given by, \begin{equation} p_n \left\lgroup \pmb{x} \right\rgroup \rightleftharpoons \, b_{ n}, \quad n \geqslant 1, \label{power.sum.heisenberg.correspondence.b} \end{equation} for the power-sum functions in the left states, and \begin{equation} p_n \left\lgroup \pmb{x} \right\rgroup \rightleftharpoons \, -b_{-n}, \quad n \geqslant 1, \label{power.sum.heisenberg.correspondence.dual.b} \end{equation} for the power-sum functions in the right states \footnote{\, The minus sign on the right hand side of Equation \ref{power.sum.heisenberg.correspondence.dual.b} follows from the minus sign on the right hand side of the commutator of the $b$-oscillators, Equation \ref{two.heisenbergs}, and the fact that we write all expressions that involve $b$-oscillators in terms of $q^{\, \prime}$ to maintain similarity to those that involve $a$-oscillators. }. 
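The sign bookkeeping in Equations \ref{power.sum.heisenberg.correspondence.b} and \ref{power.sum.heisenberg.correspondence.dual.b} can be illustrated with a toy realization of the $b$-oscillators on polynomials in the power sums. The sketch below is purely illustrative: the normalization $m \left\lgroup 1 - q^{\, \prime\, m} \right\rgroup$ chosen for $b_m$ is an assumption made for this example, not the normalization fixed in Equation \ref{two.heisenbergs}; only the overall minus sign of the commutator, the point of the footnote above, is being demonstrated.

```python
# Toy realization of the b-oscillators on polynomials in p_1, p_2, ...
# A state is a dict {exponent tuple: coefficient}.  As an assumption for this
# illustration only, b_{-m} acts as multiplication by -p_m (the dual
# correspondence) and b_m acts as m (1 - q'^m) d/dp_m; the commutator
# [b_m, b_{-m}] then acts as the NEGATIVE scalar -m (1 - q'^m).

qp = 0.7  # numeric stand-in for q'

def b_minus(f, m):
    """b_{-m} f = -p_m f, per the dual power-sum correspondence."""
    out = {}
    for exp, c in f.items():
        e = list(exp) + [0] * max(0, m - len(exp))
        e[m - 1] += 1
        out[tuple(e)] = out.get(tuple(e), 0.0) - c
    return out

def b_plus(f, m):
    """b_m f = m (1 - q'^m) d f / d p_m (assumed normalization)."""
    out = {}
    for exp, c in f.items():
        if len(exp) >= m and exp[m - 1] > 0:
            e = list(exp)
            k = e[m - 1]
            e[m - 1] -= 1
            out[tuple(e)] = out.get(tuple(e), 0.0) + c * k * m * (1.0 - qp ** m)
    return out

def commutator(f, m):
    """[b_m, b_{-m}] f = b_m (b_{-m} f) - b_{-m} (b_m f)."""
    out = dict(b_plus(b_minus(f, m), m))
    for exp, c in b_minus(b_plus(f, m), m).items():
        out[exp] = out.get(exp, 0.0) - c
    return out

state = {(2, 1): 3.0}  # the monomial 3 p_1^2 p_2
scalar = commutator(state, 2)[(2, 1)] / state[(2, 1)]  # equals -2 (1 - q'^2)
```

Whatever normalization is used, the multiplication/derivation realization makes the relative minus sign between the two correspondences manifest.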
\subsection{The action of operator-valued $q$-Whittaker functions on states} Starting from the Cauchy identity for skew $q$-Whittaker functions, Equation \ref{pqt.cauchy.identity.skew.01}, with $q$ replaced by $q^{\, \prime}$, we use Equation \ref{an.exponential.is.a.product} to obtain, \begin{multline} \exp \left\lgroup \sum_{n=1}^\infty \frac{ 1 }{ n } \left\lgroup \frac{ 1 }{ 1 \, - \, q^{\, \prime\, n} } \right\rgroup p_n \left\lgroup\, \pmb{x}\, \right\rgroup\, p_n \left\lgroup\, \pmb{y}\, \right\rgroup\, \right\rgroup \sum_{\, Y} W_{\, q^{\, \prime} \, Y_1 / Y} \left\lgroup\, \pmb{x} \, \right\rgroup\, W^{\, \prime}_{\, q^{\, \prime} \, Y_2 / Y} \left\lgroup\, \pmb{y} \, \right\rgroup \\ = \sum_{\, Y} W_{\, q^{\, \prime} \, Y / Y_2} \left\lgroup\, \pmb{x} \, \right\rgroup\, W^{\, \prime}_{\, q^{\, \prime} \, Y / Y_1} \left\lgroup\, \pmb{y} \, \right\rgroup \label{step.02.b} \end{multline} \subsubsection{The action of $\Gamma_{\, q \, b \, +}$ on a left-state} Using the power-sum/Heisenberg correspondence, Equation \ref{power.sum.heisenberg.correspondence.b}, on the power sum function $p_n \left\lgroup \pmb{x} \right\rgroup$, on both sides of Equation \ref{step.02.b}, we introduce free-boson mode operators that act as creation operators on a left-state, to obtain the operator-valued $q$-Whittaker Cauchy identity, \begin{multline} \exp \left\lgroup \sum_{n=1}^\infty \frac{ 1 }{ n } \left\lgroup \frac{ 1 }{ 1 \, - \, q^{\, \prime \, n} } \right\rgroup b_n \, p_n \left\lgroup \, \pmb{y} \, \right\rgroup \, \right\rgroup \sum_{\, Y} W_{\, q^{\, \prime} \, Y_1 / Y} \left\lgroup\, \pmb{b}_{+}\, \right\rgroup\, W^{\, \prime}_{\, q^{\, \prime} \, Y_2 / Y} \left\lgroup\, \pmb{y} \, \right\rgroup \\ = \sum_{\, Y} W_{\, q^{\, \prime} \, Y / Y_2} \left\lgroup\, \pmb{b}_{+}\, \right\rgroup\, W^{\, \prime}_{\, q^{\, \prime} \, Y / Y_1} \left\lgroup\, \pmb{y} \, \right\rgroup, \label{step.03.b} \end{multline} From the definition of the $\Gamma_{\, q^{\, \prime} \, b \, +}$ 
vertex operators, Equation \ref{gamma.vertex.operators}, \begin{equation} \Gamma_{\, q^{\, \prime} \, b \, +} \left\lgroup \pmb{y}^{\, \prime} \right\rgroup \sum_{\, Y} W_{\, q^{\, \prime} \, Y_1 / Y} \left\lgroup \, \pmb{b}_{+} \, \right\rgroup\, W^{\, \prime}_{\, q^{\, \prime} \, Y_2 / Y} \left\lgroup\, \pmb{y} \, \right\rgroup = \sum_{\, Y} W_{\, q^{\, \prime} \, Y / Y_2} \left\lgroup\, \pmb{b}_{+} \, \right\rgroup\, W^{\, \prime}_{\, q^{\, \prime} \, Y / Y_1} \left\lgroup\, \pmb{y} \, \right\rgroup \label{step.04.b} \end{equation} Acting with each side of Equation \ref{step.04.b} on a left vacuum state, \begin{equation} \sum_{\, Y} \langle\, W_{\, q^{\, \prime} \, Y_1 / Y} \, | \, W^{\, \prime}_{\, q^{\, \prime} \, Y_2 / Y} \left\lgroup\, \pmb{y} \, \right\rgroup \Gamma_{\, q^{\, \prime} \, b \, +} \left\lgroup \pmb{y}^{\, \prime} \right\rgroup = \sum_{\, Y} \langle\, W_{\, q^{\, \prime} \, Y / Y_2} \, | \, W^{\, \prime}_{\, q^{\, \prime} \, Y / Y_1} \left\lgroup\, \pmb{y}\, \right\rgroup, \label{step.05.b} \end{equation} where $\langle\, W_{\, q^{\, \prime} \, Y_1 / Y_2}\,|$ is a state in the free-boson Fock space obtained by the action of the operator-valued $q$-Whittaker function labelled by the skew Young diagram $Y_1 / Y_2$, \begin{equation} \langle\, \emptyset\, |\, W_{\, q^{\, \prime} \, Y_1 / Y_2} \left\lgroup\, \pmb{b}_{+}\, \right\rgroup\, = \langle\, W_{\, q^{\, \prime} \, Y_1 / Y_2}\, | \label{macdonald.action.01.b} \end{equation} Setting $Y_2 = \emptyset$ in Equation \ref{step.05.b}, we force $Y = \emptyset$ in the sum on the left hand side, \begin{equation} \langle\, W_{\, q^{\, \prime} \, Y_1}\, |\, \Gamma_{\, q^{\, \prime} \, b \, +} \left\lgroup \pmb{y}^{\, \prime} \right\rgroup = \sum_{\, Y} \langle\, W_{\, q^{\, \prime} \, Y}\, |\, W^{\, \prime}_{\, q^{\, \prime} \, Y / Y_1} \left\lgroup \, \pmb{y} \, \right\rgroup \label{step.06.b} \end{equation} \subsubsection{The action of\, $\Gamma^{\, \prime}_{\, q \, b \, - }$\, on a right-state} Using 
the power-sum/Heisenberg correspondence, Equation \ref{power.sum.heisenberg.correspondence.dual.b}, on $p_n \left\lgroup \pmb{y} \right\rgroup$, we introduce free-boson mode operators that act as creation operators on a right-state, \begin{multline} \exp \left\lgroup -\sum_{n=1}^\infty \frac{ 1 }{ n } \left\lgroup \frac{ 1 }{ 1 \, - \, q^{\, \prime\, n} } \right\rgroup p_n \left\lgroup\, \pmb{x}\, \right\rgroup\, b_{-n} \right\rgroup \sum_{\, Y} W_{\, q^{\, \prime} \, Y_1 / Y} \left\lgroup\, \pmb{x} \, \right\rgroup\, W^{\, \prime}_{\, q^{\, \prime} \, Y_2 / Y} \left\lgroup\, \pmb{b}_{-}\, \right\rgroup \\ = \sum_{\, Y} W_{\, q^{\, \prime} \, Y / Y_2} \left\lgroup\, \pmb{x} \, \right\rgroup\, W^{\, \prime}_{\, q^{\, \prime} \, Y / Y_1} \left\lgroup\, \pmb{b}_{-}\, \right\rgroup \label{step.03.repeated.b} \end{multline} From the definition of the $\Gamma^{\, \prime}_{\, q \, b \, - }$ vertex operator, Equation \ref{inverse.gamma.vertex.operators}, \begin{equation} \Gamma^{\, \prime}_{\, q^{\, \prime} \, b \, - } \left\lgroup \pmb{x} \right\rgroup \sum_{\, Y} W_{\, q^{\, \prime} \, Y_1 / Y} \left\lgroup\, \pmb{x} \, \right\rgroup\, W^{\, \prime}_{\, q^{\, \prime} \, Y_2 / Y} \left\lgroup\, \pmb{b}_{-}\, \right\rgroup = \sum_{\, Y} W_{\, q^{\, \prime} \, Y / Y_2} \left\lgroup\, \pmb{x} \, \right\rgroup\, W^{\, \prime}_{\, q^{\, \prime} \, Y / Y_1} \left\lgroup\, \pmb{b}_{-}\, \right\rgroup \label{step.04.repeated.b} \end{equation} Acting with each side of Equation \ref{step.04.repeated.b} on a right vacuum state, \begin{equation} \Gamma^{\, \prime}_{\, q^{\, \prime} \, b \, - } \left\lgroup \pmb{x} \right\rgroup \sum_{\, Y} W_{\, q^{\, \prime} \, Y_1 / Y} \left\lgroup\, \pmb{x} \, \right\rgroup\, |\, W^{\, \prime}_{\, q^{\, \prime} \, Y_2 / Y} \, \rangle = \sum_{\, Y} W_{\, q^{\, \prime} \, Y / Y_2} \left\lgroup\, \pmb{x} \, \right\rgroup\, |\, W^{\, \prime}_{\, q^{\, \prime} \, Y / Y_1}\, \rangle, \label{step.05.repeated.b} \end{equation} where $|\, W^{\, 
\prime} \, Y_1 / Y_2} \, \rangle$ is a state in the free boson Fock space obtained by the action of the operator-valued $q$-Whittaker function labelled by the skew Young diagram $Y_1 / Y_2$, \begin{equation} W^{\, \prime}_{\, q^{\, \prime} \, Y_1 / Y_2} \left\lgroup\, \pmb{b}_{-}\, \right\rgroup\, |\, \emptyset\, \rangle = |\, W^{\, \prime}_{\, q^{\, \prime} \, Y_1 / Y_2} \, \rangle \label{macdonald.action.02.b} \end{equation} Setting $Y_1 = \emptyset$ in Equation \ref{step.05.repeated.b}, we force $Y = \emptyset$ in the sum on the left hand side, \begin{equation} \Gamma^{\, \prime}_{\, q^{\, \prime} \, b \, - } \left\lgroup \pmb{x} \right\rgroup |\, W^{\, \prime}_{\, q^{\, \prime} \, Y_2} \, \rangle = \sum_{\, Y} W_{\, q^{\, \prime} \, Y/Y_2} \left\lgroup\, \pmb{x} \, \right\rgroup\, |\, W^{\, \prime}_{\, q^{\, \prime} \, \, Y}\, \rangle \label{step.06.repeated.b} \end{equation} \subsubsection{ The action of \, $\Gamma^{\, \prime}_{\, q \, b \, -}$ on a left-state, and \, $\Gamma_{\, q \, b \, +}$ on a right-state} Using Equations \ref{step.06.b} and \ref{step.06.repeated.b}, then Equation \ref{pqt.cauchy.identity.skew.01}, \begin{multline} \langle\, W_{\, q^{\, \prime} \, Y_1}\, |\, \Gamma_{\, q^{\, \prime} \, b \, +} \left\lgroup\, \pmb{x}^{\, \prime} \right\rgroup \Gamma^{\, \prime}_{\, q^{\, \prime} \, b \, -} \left\lgroup\, \pmb{y} \right\rgroup\, |\, W^{\, \prime}_{\, q^{\, \prime} \, Y_2 }\, \rangle = \\ \sum_{\, Y} W^{\, \prime}_{\, q^{\, \prime} \, Y / Y_1}\, \left\lgroup \pmb{x} \right\rgroup\, W_{\, q^{\, \prime} \, Y / Y_2 }\, \left\lgroup \pmb{y} \right\rgroup = \prod_{n \, = \, 0}^\infty \left\lgroup \frac{ 1 }{ 1\, -\, \pmb{x} \, \pmb{y} \, q^{\, \prime \, n} } \right\rgroup \sum_{\, Y} W^{\, \prime}_{\, q^{\, \prime} \, Y_2 / Y} \left\lgroup \pmb{x} \right\rgroup\, W^{ }_{\, q^{\, \prime} \, Y_1 / Y} \left\lgroup \pmb{y} \right\rgroup \label{version.01.b} \end{multline} Using the $q$-vertex operator commutation relation, Equation 
\ref{gamma.commutator.02-minus}, the left hand side of Equation \ref{version.01.b} can be re-written as, \begin{multline} \langle\, W_{\, q^{\, \prime} \, Y_1} \, |\, \Gamma_{\, q^{\, \prime} \, b \, +} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup \Gamma^{\, \prime}_{\, q^{\, \prime} \, b \, -} \left\lgroup \pmb{y} \right\rgroup \, |\, W^{\, \prime}_{\, q^{\, \prime} \, Y_2}\, \rangle = \\ \prod_{n \, = \, 0}^\infty \left\lgroup \frac{ 1 }{ 1\, -\, \pmb{x} \, \pmb{y} \, q^{\, \prime\, n} } \right\rgroup \langle\, W_{\, q^{\, \prime} \, Y_1} \, | \, \Gamma^{\, \prime}_{\, q^{\, \prime} \, b \, -} \left\lgroup \pmb{y} \right\rgroup \Gamma_{\, q^{\, \prime} \, b \, +} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup \, \, | \, W^{\, \prime}_{\, q^{\, \prime} \, Y_2}\, \rangle = \\ \prod_{n \, = \, 0}^\infty \left\lgroup \frac{ 1 }{ 1\, -\, \pmb{x} \, \pmb{y} \, q^{\, \prime\, n} } \right\rgroup \sum_{\, Y} \langle\, W_{\, q^{\, \prime} \, Y_1} \, | \, \Gamma^{\, \prime}_{\, q^{\, \prime} \, b \, - } \left\lgroup \pmb{y} \right\rgroup \, | \, W^{\, \prime}_{\, q^{\, \prime} \, Y}\, \rangle\, \langle\, W_{\, q^{\, \prime} \, Y} \, | \, \Gamma_{\, q^{\, \prime} \, b \, +} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup \, |\, W^{\, \prime}_{\, q^{\, \prime} \, Y_2}\, \rangle \label{version.02.b} \end{multline} Comparing the right hand side of Equation \ref{version.01.b} and that of Equation \ref{version.02.b}, \begin{multline} \left\lgroup \langle\, W_{\, q^{\, \prime} \, Y_1} \, | \, \Gamma^{\, \prime}_{\, q^{\, \prime} \, b \, - } \left\lgroup \pmb{y} \right\rgroup \, \right\rgroup\, |\, W^{\, \prime}_{\, q^{\, \prime} \, Y_2} \, \rangle = W_{\, q^{\, \prime} \, Y_1 / Y_2} \left\lgroup \pmb{y} \right\rgroup, \\ \langle \, W_{\, q^{\, \prime} \, Y_1} \, |\, \left\lgroup \Gamma_{\, q \, b \, +} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup \, | \, W^{\, \prime}_{\, q^{\, \prime} \, Y_2}\, \rangle\, \right\rgroup = W^{\, \prime}_{\, q^{\, \prime} \, Y_2 / Y_1} 
\left\lgroup\, \pmb{x} \,\right\rgroup, \label{two.preliminary.identities.b} \end{multline} where the brackets indicate the state acted on by the vertex operators. Since the states $\langle\, W_{\, q^{\, \prime} \, Y_1}\, |$ form a basis of left-states, and the states $|\, W^{\, \prime}_{\, q^{\, \prime} \, Y_2}\, \rangle$ form a basis of right-states, and given the $q$-inner product Equation \ref{young.diagram.power.sum.inner.product.a} with $q$ replaced by $q^{\prime}$, \begin{equation} \langle\, W_{\, q^{\, \prime} \, Y_1} \, | \, \Gamma^{\, \prime}_{\, q^{\, \prime} \, b \, - } \left\lgroup \pmb{y} \right\rgroup \, = \sum_{\, Y} \langle\, W_{\, q^{\, \prime} \, \, Y} \, |\, \alpha_{\, Y} \left\lgroup \pmb{y} \right\rgroup, \quad \Gamma_{\, q^{\, \prime} \, b \, +} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup\, |\, W^{\, \prime}_{\, q^{\, \prime} \, Y_2} \, \rangle = \sum_{\, Y}\, \beta_{\, Y} \left\lgroup \pmb{x} \right\rgroup\, |\, W^{\, \prime}_{\, q^{\, \prime} \, Y}\, \rangle, \label{expanding.b} \end{equation} where $\alpha_{\, Y} \left\lgroup \pmb{y} \right\rgroup$ and $\beta_{\, Y} \left\lgroup \pmb{x} \right\rgroup$ are expansion coefficients that carry the dependence on the variables $\pmb{x}$ and $\pmb{y}$, while the expansion is in the set of Young diagrams $Y$. 
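The infinite product $\prod_{n \, = \, 0}^{\infty} \left\lgroup 1 \, - \, \pmb{x}\, \pmb{y}\, q^{\, \prime\, n} \right\rgroup^{-1}$ that appears as the commutation factor in Equations \ref{version.01.b} and \ref{version.02.b} is the $q$-exponential $1 / \left\lgroup \pmb{x}\, \pmb{y};\, q^{\, \prime} \right\rgroup_{\infty}$, which Euler's identity expands as $\sum_{r \, \geqslant \, 0} z^{\, r} / \left\lgroup q^{\, \prime};\, q^{\, \prime} \right\rgroup_r$ with $z = \pmb{x}\, \pmb{y}$. A quick numeric sanity check of this standard fact (sample values of $q$ and $z$ are our choice):

```python
# Numeric check of Euler's identity
#   sum_{r >= 0} z^r / (q; q)_r  =  prod_{n >= 0} 1 / (1 - z q^n),
# the shape of the commutation factor between the two vertex operators.

q, z = 0.35, 0.4  # sample values with |q| < 1 and |z| < 1

def q_pochhammer(a, q, n):
    """(a; q)_n = prod_{k=0}^{n-1} (1 - a q^k)."""
    out = 1.0
    for k in range(n):
        out *= 1.0 - a * q ** k
    return out

lhs = sum(z ** r / q_pochhammer(q, q, r) for r in range(80))
rhs = 1.0 / q_pochhammer(z, q, 400)  # truncation of the infinite product
```

Both truncations converge geometrically, so the two sides agree to machine precision.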
Using Equation \ref{two.preliminary.identities.b}, we determine $\alpha_{\, Y} \left\lgroup \pmb{y} \right\rgroup$ and $ \beta_{\, Y} \left\lgroup \pmb{x} \right\rgroup$, \begin{multline} \langle\, W_{\, q^{\, \prime} \, Y_1}\, |\, \Gamma^{\, \prime}_{\, q^{\, \prime} \, b \, - } \left\lgroup \pmb{y} \right\rgroup = \sum_{\, Y} \langle\, W_{q^{\, \prime} \, Y} \, | \, W_{\, q^{\, \prime} \, Y_1 / Y} \left\lgroup \pmb{y} \right\rgroup, \\ \Gamma_{\, q^{\, \prime} \, b \, +} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup \, | \, W^{\, \prime}_{\, q^{\, \prime} \, Y_1}\, \rangle = \sum_{\, Y} W^{\, \prime}_{\, q^{\, \prime} \, Y_1 / Y} \left\lgroup\, \pmb{x} \,\right\rgroup \, | \, W^{\, \prime}_{q^{\, \prime}\, Y} \rangle \label{two.identities.b} \end{multline} \subsection{Twisted $q$-Whittaker function identities} Replacing the $q$-Whittaker functions with their twisted versions, \begin{multline} \left\lgroup \langle\, W^{\, \star}_{\, q^{\, \prime} \, Y_1}\, | \, \Gamma^{\, \prime}_{\, q^{\, \prime} \, b \, - } \left\lgroup \pmb{y} \right\rgroup\, \right\rgroup \, | \, W^{\, \prime\, \star}_{\, q^{\, \prime} \, Y_2} \, \rangle = W^{\, \star}_{\, q^{\, \prime} \, Y_1 / Y_2} \left\lgroup \pmb{y} \right\rgroup, \\ \langle \, W^{\, \star}_{\, q^{\, \prime} \, Y_1} \, | \, \left\lgroup \Gamma_{\, q^{\, \prime} \, b \, +} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup \, |\, W^{\, \prime\, \star}_{\, q^{\, \prime} \, Y_2}\, \rangle \, \right\rgroup = W^{\, \prime\, \star}_{\, q^{\, \prime} \, Y_2 / Y_1} \left\lgroup\, \pmb{x} \,\right\rgroup, \label{two.preliminary.identities.b.star} \end{multline} \begin{multline} \langle\, W^{\, \star}_{\, q^{\, \prime} \, Y_1}\, |\, \Gamma^{\, \prime}_{\, q^{\, \prime} \, b \, - } \left\lgroup \pmb{y} \right\rgroup = \sum_{\, Y} \langle\, W^{\, \star}_{q^{\, \prime} \,Y} \, | \, W^{\, \star}_{\, q^{\, \prime} \, Y_1 / Y} \left\lgroup \pmb{y} \right\rgroup, \\ \Gamma_{\, q^{\, \prime} \, b \, +} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup \, | \, 
W^{\, \prime\, \star}_{\, q^{\, \prime} \, Y_1}\, \rangle = \sum_{\, Y} W^{\, \prime\, \star}_{\, q^{\, \prime} \, Y_1 / Y} \left\lgroup \, \pmb{x} \,\right\rgroup\, |\, W^{\, \prime\, \star}_{q^{\, \prime} \, Y} \rangle \label{two.identities.b.star} \end{multline} Applying the twist $\pmb{\iota }$, Equation \ref{involution}, on $p_n \left\lgroup \pmb{y} \right\rgroup$ on both sides of Equation \ref{two.preliminary.identities.b}, renaming the variables, and using, \begin{equation} \pmb{\iota } \, . \, \Gamma^{\, \prime}_{\, q^{\, \prime} \, b \, - } \left\lgroup \, x \right\rgroup = \Gamma^{ }_{\, q^{\, \prime} \, b \, - } \left\lgroup - \, x \right\rgroup, \end{equation} \begin{multline} \left\lgroup \langle\, W_{\, q^{\, \prime} \, Y_1} \, | \, \Gamma_{\, q^{\, \prime} \, b \, - } \left\lgroup - \pmb{x} \right\rgroup \, \right\rgroup\, |\, W^{\, \prime}_{\, q^{\, \prime} \, Y_2} \, \rangle = W^{\star}_{\, q^{\, \prime} \, Y_1 / Y_2} \left\lgroup \pmb{x} \right\rgroup, \\ \langle\, W_{\, q^{\, \prime} \, Y_1}\, |\, \Gamma_{\, q^{\, \prime} \, b \, - } \left\lgroup - \pmb{x} \right\rgroup = \sum_{\, Y} \langle\, W_{q^{\, \prime} \, Y} \, | \, W^{\star}_{\, q^{\, \prime} \, Y_1 / Y} \left\lgroup \pmb{x} \right\rgroup \label{two.identities.b-twist} \end{multline} \section{$q$-Whittaker pairs and operator-valued identities} \label{section.08} \textit{We define pairs of $q$-Whittaker functions, and derive their Cauchy identities. 
} \medskip \subsection{$q$-Whittaker and twisted $q$-Whittaker pairs} For a pair of Young diagrams $\pmb{Y} = \left\lgroup Y_1, Y_2 \right\rgroup$ and a pair $\pmb{R} = \left\lgroup R_1, R_2 \right\rgroup$, we define the $q$-Whittaker pairs, \begin{equation} \pmb{W}_{\, \pmb{q} \, \pmb{Y}/\pmb{R}} \left\lgroup \pmb{x} \right\rgroup = W_{\, q \, Y_1/R_1} \left\lgroup \pmb{x} \right\rgroup \, W^{\, \star}_{\, q^{\, \prime} \, Y_2/R_2} \left\lgroup -\pmb{x}^{\, \prime} \right\rgroup, \quad \pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{Y}/\pmb{R}} \left\lgroup \pmb{x} \right\rgroup = W^{\, \prime}_{\, q \, Y_1/R_1} \left\lgroup \pmb{x} \right\rgroup \, W^{\, \prime}_{\, q^{\, \prime} \, Y_2/R_2} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup \end{equation} and their twisted versions, \begin{equation} \pmb{W}^{\, \star}_{\, \pmb{q} \, \pmb{Y}/\pmb{R}} \left\lgroup \pmb{x} \right\rgroup = W^{\, \star}_{\, q \, Y_1/R_1} \left\lgroup \pmb{x} \right\rgroup \, W_{\, q^{\, \prime} \, Y_2/R_2} \left\lgroup -\pmb{x}^{\, \prime} \right\rgroup, \quad \pmb{W}^{\, \prime\, \star}_{\, \pmb{q} \, \pmb{Y}/\pmb{R}} \left\lgroup \pmb{x} \right\rgroup = W^{\, \prime \, \star}_{\, q \, Y_1/R_1} \left\lgroup \pmb{x} \right\rgroup \, W^{\, \prime \, \star}_{\, q^{\, \prime} \, Y_2/R_2} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup \end{equation} \subsection{Cauchy identities for $q$-Whittaker pairs} From the Cauchy identities in Equations \ref{pqt.cauchy.identity.skew.01}, \ref{pqt.cauchy.identity.skew.prime.02}, \ref{pqt.cauchy.identity.skew.03}, and applications of the involution $\pmb{\iota }$, \begin{equation} \sum_{\pmb{Y}} \pmb{W}^{ }_{\, \pmb{q} \, \pmb{Y}/\pmb{R}} \left\lgroup \pmb{x} \right\rgroup \, \pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{Y}/\pmb{S}} \left\lgroup \pmb{y} \right\rgroup = \frac{ 1 }{ \theta_q \left\lgroup \pmb{x} \, \pmb{y} \right\rgroup } \sum_{\pmb{W}} \pmb{W}^{ }_{\, \pmb{q} \, \pmb{S}/\pmb{W}} \left\lgroup \pmb{x} \right\rgroup \pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{R}/\pmb{W}} \left\lgroup \pmb{y} \right\rgroup \label{double-normal-normal-Cauchy} \end{equation} \begin{equation} \sum_{\pmb{Y}} \pmb{W}^{\, \star}_{\, \pmb{q} \, \pmb{Y}/\pmb{R}} \left\lgroup 
\pmb{x} \right\rgroup \, \pmb{W}^{\, \prime\, \star}_{\, \pmb{q} \, \pmb{Y}/\pmb{S}} \left\lgroup \pmb{y} \right\rgroup = \frac{ 1 }{ \theta_q \left\lgroup \pmb{x} \, \pmb{y} \right\rgroup } \sum_{\pmb{W}} \pmb{W}^{ \star}_{\, \pmb{q} \, \pmb{S}/\pmb{W}} \left\lgroup \pmb{x} \right\rgroup \pmb{W}^{\, \prime\, \star}_{\, \pmb{q} \, \pmb{R}/\pmb{W}} \left\lgroup \pmb{y} \right\rgroup \label{double-star-star-Cauchy} \end{equation} \begin{equation} \sum_{\pmb{Y}} \pmb{W}^{ }_{\, \pmb{q} \, \pmb{Y}/\pmb{R}} \left\lgroup \pmb{x} \right\rgroup \, \pmb{W}^{\, \prime\, \star}_{\, \pmb{q} \, \pmb{Y}/\pmb{S}} \left\lgroup \pmb{y} \right\rgroup = \theta_q \left\lgroup - \pmb{x} \, \pmb{y} \right\rgroup \sum_{\pmb{W}} \pmb{W}^{ }_{\, \pmb{q} \, \pmb{S}/\pmb{W}} \left\lgroup \pmb{x} \right\rgroup \pmb{W}^{\, \prime\, \star}_{\, \pmb{q} \, \pmb{R}/\pmb{W}} \left\lgroup \pmb{y} \right\rgroup \label{double-normal-star-Cauchy} \end{equation} \begin{equation} \sum_{\pmb{Y}} \pmb{W}^{\star }_{\, \pmb{q} \, \pmb{Y}/\pmb{R}} \left\lgroup \pmb{x} \right\rgroup \, \pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{Y}/\pmb{S}} \left\lgroup \pmb{y} \right\rgroup = \theta_q \left\lgroup - \pmb{x} \, \pmb{y} \right\rgroup \sum_{\pmb{W}} \pmb{W}^{ \star }_{\, \pmb{q} \, \pmb{S}/\pmb{W}} \left\lgroup \pmb{x} \right\rgroup \pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{R}/\pmb{W}} \left\lgroup \pmb{y} \right\rgroup \label{double-star-normal-Cauchy} \end{equation} \section{The elliptic vertex} \label{section.09} \textit{We construct an elliptic extension of the refined topological vertex using $q$-Whittaker functions, and a twisted version of the same vertex using twisted $q$-Whittaker functions. } \medskip We construct the elliptic vertex $\mathcal{E}_{\, \pmb{Y}_1 \pmb{Y}_2 Y_3} \left\lgroup x, y \right\rgroup$ in five steps. 
\subsection{Step \textbf{ 1.} From $\pmb{Y}_3$ to an infinite sequence of vertex operators} We consider the \textit{finite} Young diagram $Y_3$ that labels the preferred leg of the vertex that we wish to construct, see Figure \ref{young.diagram}, position it as in Figure \ref{tilted.young.diagram}, and consider its \textit{infinite profile}, which consists of upward and downward segments $ \left\lgroup \diagup, \diagdown \right\rgroup$, that (scanning the profile from $- \infty$ on the far left to $\infty$ on the far right) are all upward sufficiently far to the left, and all downward sufficiently far to the right, as indicated in Figure \ref{young.maya}. We map this infinite profile to a Maya diagram \cite{miwa.jimbo.date.book}, that consists of white and black stones $ \left\lgroup \Circle, \CIRCLE \right\rgroup$, then we map the latter to an infinite sequence of $q$-vertex operators, that we denote by $\prod_{Maya \left\lgroup Y_3 \right\rgroup} \phi_{\, q \, \pm}$ \footnote{\, The vertex operators $\phi_{\, q \, \pm}$ are defined in Equation \ref{phi.vertex.operators} in terms of one-boson vertex operators that are \textit{not inverted}. 
}, using the bijections \footnote{\, We could have skipped the intermediate step of mapping to a Maya diagram, but we prefer to keep it because it can be useful in related contexts.}, \begin{figure} \begin{center} \begin{tikzpicture}[scale=.8485] \draw [thin] (-1.5,8.0)--(2.5,8.0); \draw [thin] (-1.5,7.0)--(2.5,7.0); \draw [thin] (-1.5,6.0)--(1.5,6.0); \draw [thin] (-1.5,5.0)--(1.5,5.0); \draw [thin] (-1.5,4.0)--(0.5,4.0); \draw [thin] (-1.5,4.0)--(-1.5,8.0); \draw [thin] (-0.5,4.0)--(-0.5,8.0); \draw [thin] ( 0.5,4.0)--( 0.5,8.0); \draw [thin] ( 1.5,5.0)--( 1.5,8.0); \draw [thin] ( 2.5,7.0)--( 2.5,8.0); \end{tikzpicture} \end{center} \caption{ \it A Young diagram $Y = \left\lgroup 4, 3, 3, 2 \right\rgroup$ } \label{young.diagram} \end{figure} \begin{figure} \begin{center} \begin{tikzpicture}[scale=.6] \draw [thin] (-1.5,1.0)--(5.5,8.0)--(12.5,1.0); \draw [thin] ( 2.5,3.0)--(6.5,7.0); \draw [thin] ( 3.5,2.0)--(7.5,6.0); \draw [thin] ( 5.5,2.0)--(8.5,5.0); \draw [thin] ( 8.5,3.0)--(9.5,4.0); \draw [thin] ( 1.5,4.0)--(3.5,2.0); \draw [thin] ( 2.5,5.0)--(5.5,2.0); \draw [thin] ( 3.5,6.0)--(6.5,3.0); \draw [thin] ( 4.5,7.0)--(8.5,3.0); \draw [very thick] (-1.5,1.0)--(1.5,4.0); \draw [very thick] (-1.45,0.95)--(1.5,3.9); \draw [very thick] ( 1.5,4.0)--(3.5,2.0); \draw [very thick] ( 1.5,3.9)--(3.5,1.9); \draw [very thick] ( 3.5,2.0)--(4.5,3.0); \draw [very thick] ( 3.5,1.9)--(4.5,2.9); \draw [very thick] ( 4.5,3.0)--(5.5,2.0); \draw [very thick] ( 4.5,2.9)--(5.5,1.9); \draw [very thick] ( 5.5,2.0)--(7.5,4.0); \draw [very thick] ( 5.5,1.9)--(7.5,3.9); \draw [very thick] ( 7.5,4.0)--(8.5,3.0); \draw [very thick] ( 7.5,3.9)--(8.5,2.9); \draw [very thick] ( 8.5,3.0)--(9.5,4.0); \draw [very thick] ( 8.5,2.9)--(9.5,3.9); \draw [very thick] ( 9.5,4.0)--(12.5,1.0); \draw [very thick] ( 9.5,3.9)--(12.45,0.95); \end{tikzpicture} \end{center} \caption{ \it A tilted Young diagram $Y$ and its infinite profile indicated by a heavy line. 
} \label{tilted.young.diagram} \end{figure} \begin{figure} \begin{center} \begin{tikzpicture}[scale=.6] \draw [thin] (-1.5,1.0)--(5.5,8.0)--(12.5,1.0); \draw [thin] ( 2.5,3.0)--(6.5,7.0); \draw [thin] ( 3.5,2.0)--(7.5,6.0); \draw [thin] ( 5.5,2.0)--(8.5,5.0); \draw [thin] ( 8.5,3.0)--(9.5,4.0); \draw [thin] ( 1.5,4.0)--(3.5,2.0); \draw [thin] ( 2.5,5.0)--(5.5,2.0); \draw [thin] ( 3.5,6.0)--(6.5,3.0); \draw [thin] ( 4.5,7.0)--(8.5,3.0); \draw [very thick] (-1.5,1.0)--(1.5,4.0); \draw [very thick] (-1.45,0.95)--(1.5,3.9); \draw [very thick] ( 1.5,4.0)--(3.5,2.0); \draw [very thick] ( 1.5,3.9)--(3.5,1.9); \draw [very thick] ( 3.5,2.0)--(4.5,3.0); \draw [very thick] ( 3.5,1.9)--(4.5,2.9); \draw [very thick] ( 4.5,3.0)--(5.5,2.0); \draw [very thick] ( 4.5,2.9)--(5.5,1.9); \draw [very thick] ( 5.5,2.0)--(7.5,4.0); \draw [very thick] ( 5.5,1.9)--(7.5,3.9); \draw [very thick] ( 7.5,4.0)--(8.5,3.0); \draw [very thick] ( 7.5,3.9)--(8.5,2.9); \draw [very thick] ( 8.5,3.0)--(9.5,4.0); \draw [very thick] ( 8.5,2.9)--(9.5,3.9); \draw [very thick] ( 9.5,4.0)--(12.5,1.0); \draw [very thick] ( 9.5,3.9)--(12.45,0.95); \foreach \iota in {0,...,6} { \draw [dashed, red] (\iota - .5, \iota + 1.5)--(\iota - .5, 0); } \foreach \j in {7,...,12} { \draw [dashed, red](\j -.5, 13.5 -\j)--(\j - .5, 0); } \node [left] at (-1.3,0) {$\cdots$}; \node [right] at (12.3,0) {$\cdots$}; \foreach \x in {0,...,2} { \draw (\x - 1,0) circle (0.3); \draw [fill=black!50] (\x + 10,0) circle (0.3); } \draw [fill=black!50] (2,0) circle (0.3); \draw [fill=black!50] (3,0) circle (0.3); \draw (4,0) circle (0.3); \draw [fill=black!50] (5,0) circle (0.3); \draw (6,0) circle (0.3); \draw (7,0) circle (0.3); \draw [fill=black!50] (8,0) circle (0.3); \draw (9,0) circle (0.3); \foreach \a in {6, 5, ..., 1} { \node [below] at (- \a + 5.8, -.3) {$- \a$}; } \foreach \a in {0, 1, ..., 5} { \node [below] at ( \a + 6.0, -.3) {$ \a$}; } \end{tikzpicture} \end{center} \caption{ \it The tilted Young diagram, its 
infinite profile, and the corresponding Maya diagram, which gives the Young diagram/Maya diagram correspondence for $Y = \left\lgroup 4, 4, 3, 1 \right\rgroup$. The integer below a stone is its position in the Maya diagram. The apex of the inverted Young diagram is located between positions $-1$ and $0$. } \label{young.maya} \end{figure} \begin{equation} \diagup \rightleftharpoons \Circle \rightleftharpoons \phi_{\, q \, -}, \quad \quad \diagdown \rightleftharpoons \CIRCLE \rightleftharpoons \phi_{\, q \, +} \end{equation} \subsection{Step \textbf{ 2.} Choosing the arguments of the vertex operators} We choose the arguments of the $q$-vertex operators to be, \begin{equation} \phi_{\, q \, +} \left\lgroup x^{ - i}\, y^{ y^{\, \prime}_{3, i} } \right\rgroup \, , \quad \phi_{\, q \, -} \left\lgroup y^{j - 1}\, x^{- y_{3, j}} \right\rgroup, \label{full.arguments} \end{equation} where $y_{3, j}$ is the length of the $j$-th column of the Young diagram $Y_3$ that labels the preferred leg of the vertex, and $y^{\, \prime}_{3, i}$ is the length of the $i$-th column of the transpose Young diagram $Y^{\, \prime}_3$. 
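The Young diagram/Maya diagram step can be made concrete with a short script. The encoding below is our reading of Figure \ref{young.maya} (a black stone at position $i - 1 - Y_i$ for the $i$-th row, with $Y_i = 0$ beyond the last row), not notation fixed in the text; under this reading it reproduces the stone pattern shown there for $Y = \left\lgroup 4, 4, 3, 1 \right\rgroup$, and the map is invertible:

```python
# Young diagram <-> Maya diagram, in the convention of Figure "young.maya":
# white stones far to the left, black stones far to the right, and a black
# stone at position i - 1 - Y_i for each row i >= 1 (Y_i = 0 beyond the last
# row).  This encoding is our reading of the figure.

def young_to_maya(Y, lo, hi):
    """Colours 'w'/'b' of the stones at positions lo..hi for the partition Y."""
    blacks, i = set(), 1
    while True:
        part = Y[i - 1] if i <= len(Y) else 0
        pos = i - 1 - part          # strictly increasing in i
        if pos > hi:
            break
        blacks.add(pos)
        i += 1
    return {k: ('b' if k in blacks else 'w') for k in range(lo, hi + 1)}

def maya_to_young(black_positions):
    """Invert the map: the i-th black stone (sorted) sits at i - 1 - Y_i."""
    Y = []
    for i, b in enumerate(sorted(black_positions), start=1):
        part = i - 1 - b
        if part > 0:
            Y.append(part)
    return Y

stones = young_to_maya([4, 4, 3, 1], -7, 6)
blacks = sorted(k for k, c in stones.items() if c == 'b')
```

Mapping white stones to $\phi_{\, q \, -}$ and black stones to $\phi_{\, q \, +}$ then produces the operator sequence of Step 1.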
\subsubsection{Example} The Young diagram/Maya diagram correspondence in Figure \ref{young.maya} leads to the vertex-operator sequence, \begin{multline} \left\lgroup \prod_{Maya \left\lgroup Y_3 \right\rgroup} \phi_{\, q \, \pm} \right\rgroup = \cdots \phi_{\, q \, +} \left\lgroup x^{-5} \right\rgroup \phi_{\, q \, -} \left\lgroup x^{-4} \right\rgroup \phi_{\, q \, -} \left\lgroup y \, x^{-4} \right\rgroup \phi_{\, q \, +} \left\lgroup x^{-4}\, y^{ 2} \right\rgroup \phi_{\, q \, -} \left\lgroup y^2 \, x^{-3} \right\rgroup \\ \phi_{\, q \, +} \left\lgroup x^{-3}\, y^3\right\rgroup \phi_{\, q \, +} \left\lgroup x^{-2}\, y^3\right\rgroup \phi_{\, q \, -} \left\lgroup y^3 \, x^{-1} \right\rgroup \phi_{\, q \, +} \left\lgroup x^{-1}\, y^4 \right\rgroup \phi_{\, q \, -} \left\lgroup y^4 \right\rgroup \cdots \label{vertex.operator.sequence} \end{multline} \subsection{Step \textbf{ 3.} From the infinite sequence of vertex operators to an expectation value} We evaluate the sequence $\prod_{Maya \left\lgroup Y_3 \right\rgroup} \phi_{\, q \, \pm}$ between a left-state that corresponds to a $q$-Whittaker pair labelled by a pair of Young diagrams $\pmb{Y}_1$ \footnote{\, The $q$-Whittaker pairs that label the left states were created by the action of pairs of one-boson vertex operators, one of which, the one that depends on the $b$-oscillators, is inverted. 
} and a right-state labelled by a dual $q$-Whittaker pair labelled by a pair of Young diagrams $\pmb{Y}_2$, \begin{equation} \mathcal{E}^{\, unnorm}_{\, \pmb{Y}_1 \, \pmb{Y}_2 \, Y_3} \left\lgroup x, y \right\rgroup = \langle\, \pmb{W}_{\, \pmb{q} \, \pmb{Y}_1} \, | \, \left\lgroup \prod_{ \textit{Maya} \left\lgroup Y_3 \right\rgroup} \phi_{\, q \, \pm} \right\rgroup \, |\, \pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{Y}_2} \rangle \label{unnormalized.elliptic.vertex} \end{equation} To evaluate the expectation value in Equation \ref{unnormalized.elliptic.vertex}, we \lq order\rq\, the sequence in Equation \ref{unnormalized.elliptic.vertex} \textit{via} an infinite number of commutations that put all $\phi_{\, q \, +}$ vertex operators on the right, and all $\phi_{\, q \, -}$ vertex operators on the left. From Equation \ref{elliptic.macdonald.kernel}, \begin{multline} \phi_{\, q \, +} \left\lgroup x^{ - i}\, y^{ y^{\, \prime}_{3, i}} \right\rgroup \, \phi_{\, q \, -} \left\lgroup y^{j - 1}\, x^{- y_{3, j}} \right\rgroup = \\ \prod_{m, \, n \, = \, 1}^\infty \left\lgroup \frac{ 1 }{ \theta_{\, q} \left\lgroup x^{\, - \, y_{\, 3, \, n} + m} \, y^{- y^{\, \prime \, +}_{\, 3, \, m} + n} \right\rgroup } \right\rgroup \phi_{\, q \, -} \left\lgroup y^{\, j \, - \, 1} \, x^{\, - \, y^{ }_{\, 3, \, j}} \right\rgroup\, \phi_{\, q \, +} \left\lgroup x^{\, - \, i} \, y^{\, y^{\, \prime}_{\, 3, \, i}} \right\rgroup, \label{q.t.vertex.operator.commutation.relation.with.arguments} \end{multline} where $y^+_{\, 3,\, i} = y_{\, 3,\, i} + 1$, and $y_{\, 3, \, i}$ is the length of the $i$-column in $Y_3$. 
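As a cross-check on Step 2, the exponent pairs in the arguments of Equation \ref{full.arguments} can be generated mechanically from the row lengths of $Y_3$. The sketch below records each argument $x^{\, a}\, y^{\, b}$ as the pair $\left\lgroup a, b \right\rgroup$ and takes $Y_3 = \left\lgroup 4, 3, 3, 2 \right\rgroup$ as in Figure \ref{young.diagram}, with $y_{3, j}$ read as the $j$-th column length of $Y_3$ and $y^{\, \prime}_{3, i}$ as the $i$-th row length (our reading of the text); it reproduces the arguments listed in Equation \ref{vertex.operator.sequence}:

```python
# Arguments of phi_{q +}(x^{-i} y^{y'_{3,i}}) and phi_{q -}(y^{j-1} x^{-y_{3,j}})
# as exponent pairs (a, b) meaning x^a y^b.  y_{3,j} is taken to be the j-th
# column length of Y_3, and y'_{3,i} the i-th column length of the transpose,
# i.e. the i-th row length of Y_3.

def conjugate(rows):
    """Column lengths of a partition given by its row lengths."""
    if not rows:
        return []
    return [sum(1 for p in rows if p >= c) for c in range(1, rows[0] + 1)]

def phi_arguments(Y3_rows, n):
    """Exponent pairs for the first n phi_{q +} and phi_{q -} operators."""
    cols = conjugate(Y3_rows)                                   # y_{3, j}
    get = lambda seq, k: seq[k - 1] if k <= len(seq) else 0
    plus = [(-i, get(Y3_rows, i)) for i in range(1, n + 1)]     # x^{-i} y^{y'_{3,i}}
    minus = [(-get(cols, j), j - 1) for j in range(1, n + 1)]   # x^{-y_{3,j}} y^{j-1}
    return plus, minus

plus, minus = phi_arguments([4, 3, 3, 2], 5)
```

The pairs match the arguments $x^{-1} y^4, \ldots, x^{-5}$ and $x^{-4}, y\, x^{-4}, \ldots, y^4$ appearing in Equation \ref{vertex.operator.sequence}.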
Since $\phi_{\, q \, +}$ is attached to a segment $\diagup$ in the extended profile of $Y_3$, and $\phi_{\, q \, -}$ is attached to an adjacent segment $\diagdown$ to the right of the former, the commutation relation, Equation \ref{q.t.vertex.operator.commutation.relation.with.arguments}, describes replacing the adjacent pair $\diagup \diagdown$ with the pair $\diagdown \diagup$, that is, adding a cell to $Y_3$ to generate a Young diagram that is larger by one cell. The exponents that appear in the factor on the right hand side of Equation \ref{q.t.vertex.operator.commutation.relation.with.arguments} have simple interpretations, \begin{equation} y_{\, 3, \, i} - j = L_{\, \square}, \quad y_{\, 3, \, j}^{\, \prime} - i = A_{\, \square}, \end{equation} where $A_{\square}$ and $L_{\square}$ are the arm-length and the leg-length of the cell $\square$ that is added to $Y_3$ \textit{ via} the commutation in Equation \ref{q.t.vertex.operator.commutation.relation.with.arguments} to generate a larger Young diagram; that is, $\square \notin Y_3$. 
Inserting the sequence $\prod_{Maya \left\lgroup Y_3 \right\rgroup} \phi_{\, q \, \pm}$ between a left-state $\langle\, \pmb{W}_{\, \pmb{q} \, \pmb{Y}_1}\, |$, and a right-state $|\, \pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{Y}_2}\, \rangle$, then commuting the (infinitely-many) $\phi_{\, q \, +}$ vertex operators to the right of the $\phi_{\, q \, -}$ vertex operators, \begin{multline} \langle\, \pmb{W}_{\, \pmb{q} \, \pmb{Y}_1}\, |\, \left\lgroup \prod_{i=1}^\infty \phi_{\, q \, +} \left\lgroup x^{ - i}\, y^{ y^{\, \prime}_{ 3, i}} \right\rgroup \right\rgroup \, \left\lgroup \prod_{j=1}^\infty \phi_{\, q \, -} \left\lgroup y^{j - 1}\, x^{- y_{3, j}} \right\rgroup \right\rgroup \, | \, \pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{Y}_2}\, \rangle\, = \\ \left\lgroup \prod_{\square \notin Y_3} \frac{ 1 }{ \theta_{\, q} \, \left\lgroup x^{- L_\square}\, y^{- A^{\, +}_\square} \right\rgroup } \right\rgroup \, \langle\, \pmb{W}_{\, \pmb{q} \, \pmb{Y}_1}\, | \left\lgroup \prod_{j=1}^\infty \phi_{\, q \, -} \left\lgroup y^{j - 1}\, x^{- y_{3, j}} \right\rgroup \right\rgroup\, \left\lgroup \prod_{i=1}^\infty \phi_{\, q \, +} \left\lgroup x^{ - i}\, y^{ y^{\, \prime}_{ 3, i}} \right\rgroup \right\rgroup |\, \pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{Y}_2}\, \rangle \label{result.of.step.03} \end{multline} \subsection{Step \textbf{ 4.} From an expectation value to the unnormalized elliptic vertex} Using the identities, \begin{equation} \langle \, \pmb{W}_{\, \pmb{q} \, \pmb{Y}_1} \, | \, \phi_{\, q \, -} \left\lgroup \pmb{y} \right\rgroup = \sum_{\, \pmb{Y}} \langle \, \pmb{W}_{\, \pmb{q} \, \pmb{Y}} \, | \, \pmb{W}_{\, \pmb{q} \, \pmb{Y}_1 / \pmb{Y}} \left\lgroup \pmb{y} \right\rgroup, \, \, \phi_{\, q \, +} \left\lgroup \pmb{x}^{\, \prime} \right\rgroup \, | \, \pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{Y}_1} \, \rangle = \sum_{\, \pmb{Y}} \pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{Y}_1 / \pmb{Y}} \left\lgroup \, \pmb{x} \, \right\rgroup \, | \, \pmb{W}^{\, \prime}_{\, 
\pmb{q} \, \pmb{Y}} \rangle, \label{two.identities.combine} \end{equation} in Equation \ref{result.of.step.03}, we obtain the \textit{unnormalized} elliptic vertex, \begin{multline} \mathcal{E}^{\, unnorm}_{\, \pmb{Y}_1 \, \pmb{Y}_2 \, Y_3} \left\lgroup \pmb{x}, \, \pmb{y} \right\rgroup = \langle \, \pmb{W}_{\, \pmb{q} \, \pmb{Y}_1} \, | \, \left\lgroup \prod_{i=1}^\infty \phi_{\, q +} \left\lgroup x^{ - i}\, y^{ y^{\, \prime}_{ 3, i}} \right\rgroup \right\rgroup\, \left\lgroup \prod_{j=1}^\infty \phi_{\, q \, -} \left\lgroup y^{j - 1}\, x^{- y_{3, j}} \right\rgroup \right\rgroup\, |\, \pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{Y}_2}\, \rangle\, = \\ \left\lgroup \prod_{\square \notin Y_3} \frac{ 1 }{ \theta_{\, q} \left\lgroup x^{- L_\square} \, y^{- A^{\, +}_\square} \right\rgroup } \right\rgroup \, \sum_{\, \pmb{Y}} \pmb{W}_{\, \pmb{Y}_1 / \pmb{Y}} \left\lgroup y^{\, \bj - 1}\, x^{\, - Y_3} \right\rgroup \pmb{W}^{\, \prime}_{\, \pmb{Y}_2 / \pmb{Y}} \left\lgroup x^{\, \pmb{\iota } }\, y^{\, - Y_3^{\, \prime} } \right\rgroup, \label{macdonald.top.vertex.step.02} \end{multline} where $\pmb{\iota } = \left\lgroup 1, 2, \cdots \right\rgroup$, $\bj = \left\lgroup 1, 2, \cdots \right\rgroup$, and the arguments in $\pmb{W}_{\, \pmb{Y}_1 / Y} \left\lgroup y^{\, \bj - 1}\, x^{\, -Y_3^{\, \prime}} \right\rgroup$ and $\pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{Y}_2 / Y} \left\lgroup x^{\, \pmb{\iota } }\, y^{\, -Y_3 } \right\rgroup$, should be understood in the sense of Section \ref{sequences}. 
\subsection{Step \textbf{ 5.} From the unnormalized to the normalized elliptic topological vertex} To normalize the expression in Equation \ref{macdonald.top.vertex.step.02} such that $\mathcal{E}_{\, \pmb{\emptyset} \, \pmb{\emptyset} \, \emptyset} = 1$, we divide it by the elliptic version of the $xy$-refined $q$-MacMahon partition function, \begin{equation} \mathcal{M} \left\lgroup x, y, q \right\rgroup = \prod_{i,\, j \, = \, 1}^\infty \, \frac{ 1 }{ \theta_{\, q} \left\lgroup x^{ i}\, y^{j-1} \right\rgroup }, \label{x.y.q.t.macmahon.function.01} \end{equation} and use the identity, \begin{equation} \left\lgroup \prod_{\square \notin Y_3} \frac{ 1 }{ 1\, -\, x^{- L_\square}\, y^{- A^{\, +}_\square}\,q^{\, n} } \right\rgroup \, \left\lgroup \prod_{i, j = 1}^\infty \frac{ 1 }{ 1 - x^i \, y^{\,j-1}\,q^{\, n} } \right\rgroup^{-1} = \left\lgroup \prod_{\square \in Y_3} \frac{ 1 }{ 1 -\,x^{\, L^+_\square} \, y^{\,A_\square} \, q^{\, n} } \right\rgroup, \end{equation} which follows from Equations \textbf{2.8} and \textbf{2.11} in \cite{awata.kanno.02}.
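The two infinite products on the left hand side of this identity differ in only finitely many factors, so the identity can be checked numerically using truncated products. The Python sketch below (helper names ours) checks one power of $q^{\, n}$ at a time, that is the $\left\lgroup 1 - x \right\rgroup$-type factors, using the conventions $L_\square = y_{\, 3, \, i} - j$ and $A_\square = y^{\, \prime}_{\, 3, \, j} - i$, and assuming $A^{\, +}_\square = A_\square + 1$, $L^{\, +}_\square = L_\square + 1$:

```python
def conjugate(Y, n):
    """First n columns of the conjugate of the Young diagram Y."""
    return [sum(1 for row in Y if row >= j) for j in range(1, n + 1)]

def check_complement_identity(Y, x, y, N=40):
    """Truncated check of
       prod_{all (i,j)} (1 - x^i y^(j-1)) / prod_{(i,j) not in Y} (1 - x^(-L) y^(-A-1))
       == prod_{(i,j) in Y} 1 / (1 - x^(L+1) y^A),
       with L = Y_i - j and A = Y'_j - i."""
    Yp = conjugate(Y, N)
    rows = Y + [0] * (N - len(Y))
    full, comp = 1.0, 1.0
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            full *= 1 - x**i * y**(j - 1)
            if j > rows[i - 1]:                        # cell (i, j) outside Y
                L, A = rows[i - 1] - j, Yp[j - 1] - i  # both negative here
                comp *= 1 - x**(-L) * y**(-(A + 1))
    lhs = full / comp
    rhs = 1.0
    for i in range(1, len(Y) + 1):
        for j in range(1, Y[i - 1] + 1):               # cell (i, j) inside Y
            L, A = Y[i - 1] - j, Yp[j - 1] - i
            rhs /= 1 - x**(L + 1) * y**A
    return lhs, rhs

lhs, rhs = check_complement_identity([3, 1], 0.1, 0.15)
assert abs(lhs - rhs) < 1e-10
```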
The final, normalised elliptic vertex $\mathcal{E}_{\, \pmb{Y}_1\,\pmb{Y}_2\, Y_3} \left\lgroup x, y \right\rgroup$ is, \begin{equation} \mathcal{E}_{\, \pmb{Y}_1\,\pmb{Y}_2\, Y_3} \left\lgroup x, y \right\rgroup = \frac{ \mathcal{E}^{\, unnorm}_{\pmb{Y}_1 \,\pmb{Y}_2 \,Y_3} \left\lgroup x, y \right\rgroup }{ \mathcal{E}^{\, unnorm}_{\emptyset\,\emptyset\,\emptyset} \left\lgroup x, y \right\rgroup }, \label{normalized.macdonald.vertex} \end{equation} where, \begin{empheq}[box=\fbox]{equation} \mathcal{E}_{\, \pmb{Y}_1 \, \pmb{Y}_2 \, Y_3} \left\lgroup x, y \right\rgroup = \prod_{\square \in Y_3} \frac{ 1 }{ \theta_q \left\lgroup x^{\,L^+_{\square \, Y_3}} \, y^{\,A_{\square \, Y_3}} \right\rgroup } \sum_{\, \pmb{Y}} \pmb{W}^{ }_{\, \pmb{q} \, \pmb{Y}_1 / \pmb{Y}} \left\lgroup y^{\, \bj -1}\, x^{- Y_3} \right\rgroup \pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{Y}_2 / \pmb{Y}} \left\lgroup x^{\, \pmb{\iota } }\, y^{ - Y_3^{\, \prime} } \right\rgroup, \label{macdonald.vertex} \end{empheq} $\pmb{\iota } = \left\lgroup 1, 2, \cdots \right\rgroup$, and the arguments in $\pmb{W}^{ }_{\, \pmb{Y}_1 / Y} \left\lgroup y^{\, \bj -1}\, x^{-Y_3} \right\rgroup$ and $\pmb{W}^{\, \prime}_{\, q \, \pmb{Y}_2 / Y} \left\lgroup x^{\, \pmb{\iota } }\, y^{-Y_3^{\, \prime} } \right\rgroup$ are in the sense of Section \ref{sequences}. \subsection{The twisted version of the vertex} We define the twisted version of the vertex $\mathcal{E}^{\, \star}$, in the same way that we defined $\mathcal{E}$, but with the choice of arguments in Step \textbf{2} changed to, \begin{equation} \phi_{\, q \, +} \left\lgroup x^{\, - \, j \, + \, 1}\, y^{\, y^{\, \prime}_{\, 3, \, j}} \right\rgroup, \quad \phi_{\, q \, -} \left\lgroup y^{\, i }\, x^{\, - \, y^{ }_{\, 3, \, i}} \right\rgroup, \label{full.arguments.twisted} \end{equation} and calculate the expectation value in the twisted basis, Step \textbf{3}, as \footnote{\, We can alternatively use the vertex operators, $\pmb{\iota } \, . 
\, \phi_{\, q \, +} \left\lgroup x^{-j + 1}\, y^{ y^{\, \prime}_{3, j}} \right\rgroup$, and $\pmb{\iota } \, . \, \phi_{\, q \, -} \left\lgroup y^{ i}\, x^{ -y_{3, i} } \right\rgroup$, and the usual $q$-Whittaker basis to obtain the same result. } , \begin{equation} \mathcal{E}^{\, \star \, unnorm}_{\, \pmb{Y}_1 \, \pmb{Y}_2 \, Y_3} \left\lgroup x, y \right\rgroup = \langle\, \pmb{W}^{\, \star}_{\, \pmb{q} \, \pmb{Y}_1} \, |\, \left\lgroup \prod_{ \textit{ Maya} \left\lgroup Y_3 \right\rgroup} \phi_{\, q \, \pm} \right\rgroup \, |\, \pmb{W}^{\, \prime\, \star}_{\, \pmb{q} \, \pmb{Y}_2} \rangle \label{unnormalized.elliptic.vertex.twisted} \end{equation} A parallel calculation leads to the following expression for the twisted version of the elliptic vertex, \begin{empheq}[box=\fbox]{equation} \mathcal{E}^{\, \star}_{\, \pmb{Y}_1 \, \pmb{Y}_2 \, Y_3} \left\lgroup x, y \right\rgroup = \prod_{\square \in Y_3} \frac{ 1 }{ \theta_q \left\lgroup \,x^{\,L_{\square \, Y_3}}\, y^{\,A^{\, +}_{\square \, Y_3}} \right\rgroup } \sum_{\, \pmb{Y}} \pmb{W}^{\, \star}_{\, \pmb{q} \, \pmb{Y}_1 / \pmb{Y}} \left\lgroup y^{\, \pmb{\iota } }\, x^{- Y_3} \right\rgroup \pmb{W}^{\, \prime\, \star}_{\, \pmb{q} \, \pmb{Y}_2 / \pmb{Y}} \left\lgroup x^{\, \bj -1 }\, y^{ - Y_3^{\, \prime} } \right\rgroup \label{macdonald.vertex.twisted} \end{empheq} \subsection{The $q \rightarrow 0$ limit} In the $q \rightarrow 0$ limit, the $q$-Whittaker function $W_{q \, Y_1/Y_2}(x)$ reduces to the Schur function $s_{Y_1/Y_2} \left\lgroup \pmb{x} \right\rgroup$. As we can see from the expression of vertex operators, equation \ref{gamma.vertex.operators}, $\Gamma_{q^{\,\prime} \, b \pm}(x^{\, \prime})$ goes to $1$ in the $q \rightarrow 0$ limit and, \begin{equation} W^{\, \star}_{q^{\, \prime} \, \, Y_1/Y_2}\rightarrow \delta_{Y_1,Y_2}, \end{equation} in this limit. This trivializes the $q^{\, \prime}$-dependent part of the partition function, as long as we consider toric diagrams with trivial external legs. 
We can effectively drop the $q^{\, \prime}$-part in the elliptic vertex when we compute the partition function in the limit. Therefore, we can simply replace, \begin{equation} \pmb{W}_{\pmb{q} \, \pmb{Y} / \pmb{R}} \left\lgroup \pmb{x} \right\rgroup \rightarrow s_{Y_1/R_1} \left\lgroup \pmb{x} \right\rgroup, \quad \pmb{W}^{\, \prime}_{\pmb{q} \, \pmb{Y} / \pmb{R}} \left\lgroup \pmb{x} \right\rgroup \rightarrow s_{Y_1/R_1} \left\lgroup \pmb{x} \right\rgroup, \quad \theta_q \left\lgroup x \right\rgroup \rightarrow \left\lgroup 1-x \right\rgroup, \end{equation} and the elliptic vertex reduces to the refined vertex in \cite{iqbal.kozcaz.vafa}, \begin{equation} \mathcal{E}^{ }_{\, \pmb{Y}_1 \, \pmb{Y}_2 \, Y_3} \left\lgroup x, y \right\rgroup \rightarrow \mathcal{R}_{\, Y^{\, \prime}_{1A}\, Y_{\, 2 \, A} \, Y_3 } \left\lgroup y, x \right\rgroup, \quad \mathcal{E}^{\, \star}_{\, \pmb{Y}_1 \, \pmb{Y}_2 \, Y_3} \left\lgroup x, y \right\rgroup \rightarrow \mathcal{R}_{\, Y_{\, 1 \, A} \, Y^{\, \prime}_{2A}\, Y^{\, \prime}_3} \left\lgroup x, y \right\rgroup \end{equation} \subsection{Equivalence with the elliptic Awata-Feigin-Shiraishi vertex}\label{s:equiv-AFS} The vertex operators used to construct $\mathcal{E}$ are equivalent to those used to construct the elliptic Awata-Feigin-Shiraishi vertex of \cite{zhu.01}. To see this, we focus on the simple case of $\mathcal{E}_{\, \pmb{Y}_1 \, \pmb{Y}_2 \, \emptyset} \left\lgroup x, y \right\rgroup$. In this case, the corresponding AFS vertex is the normal-ordered product \footnote{\, We use the notation in Equation \ref{abbreviation} for infinite products of vertex operators, the definition of the two-boson vertex operators in terms of single-boson vertex operators, Equation \ref{phi.vertex.operators}, and the definition of the single-boson vertex operators, Equation \ref{gamma.vertex.operators}.
}, \begin{equation} \Phi_\emptyset \left\lgroup 1 \right\rgroup = \, : \, \phi_{\, - \, q} \left\lgroup x^{\, \bj } \right\rgroup \, \phi_{\, + \, q} \left\lgroup y^{\, - \pmb{\iota } + 1} \right\rgroup \, : \, \label{afs.vertex} \end{equation} The infinite products of two-boson vertex operators on the right hand side of Equation \ref{afs.vertex} can be evaluated in the form, \begin{multline} \phi_{\, + \, q} \left\lgroup y^{\, - \pmb{\iota } + 1} \right\rgroup = \exp \left\lgroup \sum_{n=1}^{\infty} \, \frac{ 1 }{ n } \frac{ 1 }{ \left\lgroup 1 - y^{\, n} \right\rgroup \, \left\lgroup 1 - q^{\, n} \right\rgroup } \, a_{n} \right\rgroup \exp \left\lgroup \sum_{n=1}^{\infty} \, \frac{ 1 }{ n } \frac{ 1 }{ \left\lgroup 1 - y^{\, \prime \, n} \right\rgroup \, \left\lgroup 1 - q^{\, \prime \, n} \right\rgroup } \, b_n \right\rgroup, \\ \phi_{\, - \, q} \left\lgroup x^{\, \bj} \right\rgroup = \exp \left\lgroup \, - \, \sum_{n=1}^{\infty} \, \frac{ 1 }{ n } \frac{ 1 }{ \left\lgroup 1 - x^{\, \prime \, n} \right\rgroup \left\lgroup 1 - q^{\, n} \right\rgroup } \, a_{-n} \right\rgroup \exp \left\lgroup \, - \, \sum_{n=1}^{\infty} \, \frac{ 1 }{ n } \frac{ 1 }{ \left\lgroup 1 - x^{\, \, n} \right\rgroup \left\lgroup 1 - q^{\, \prime \, n} \right\rgroup } \, b_{-n} \right\rgroup \end{multline} Setting, \begin{multline} a_n = a^{\, Saito}_n, \quad a_{-n} = \left\lgroup \frac{ 1 - x^{\, \prime \, n} }{ 1 - y^{\, \prime \, n} } \right\rgroup a^{\, Saito}_{\, - n}, \quad n = 1, 2, \cdots, \\ b_n = - b^{\, Saito}_n, \quad b_{\, - n} = - \left\lgroup \frac{ 1 - x^{\, n} }{ 1 - y^{\, n} } \right\rgroup \, b^{\, Saito}_{\, - n}, \quad n = 1, 2, \cdots, \label{redef} \end{multline} where $a^{\, Saito}_n$ and $b^{\, Saito}_n$ are the $pqt$-Heisenberg generators, Equation \ref{two.heisenbergs}, used in \cite{saito.01, saito.02, saito.03} and in \cite{zhu.01}, with the parameters reset as, \begin{equation} p \rightarrow q, \quad q \rightarrow y^{\, \prime}, \quad t \rightarrow x^{\, 
\prime}, \end{equation} where the parameters on the left are Saito's and the parameters on the right are those used in the present work, we obtain \begin{multline} \Phi_\emptyset \left\lgroup 1 \right\rgroup = \\ : \, \exp \left\lgroup \sum_{n \neq 0} \frac{ 1 }{ n } \, \frac{ 1 }{ \left\lgroup 1 - y^{\, n} \right\rgroup \left\lgroup 1 - q^{\, | \, n \, |} \right\rgroup } \, a^{\, Saito}_{n} \right\rgroup \, \exp \left\lgroup \, - \, \sum_{n \neq 0} \frac{ 1 }{ n } \, \frac{ 1 }{ \left\lgroup 1 - y^{\, \prime \, n} \right\rgroup \left\lgroup 1 - q^{\, \prime \, | \, n \, |} \right\rgroup } \, b^{\, Saito}_n \right\rgroup \, : \end{multline} which is the correct expression for the elliptic AFS vertex for $Y_3 = \emptyset$ \cite{zhu.01}. With the mapping in Equation \ref{redef}, one can repeat the argument in \cite{awata.feigin.shiraishi} to prove the equivalence. We do not do this here, as it is not the main point of this paper. \section{The 6D strip partition function} \label{section.10} \textit{We compute the 4-vertex strip partition function obtained by gluing four elliptic vertices, and show that the result is a 6D strip partition function.} \subsection{The rules of gluing} When drawing web diagrams, we take all horizontal legs to be the preferred legs. Having done that, the set of all vertices splits into two disjoint subsets, one with all preferred legs pointing to the right, and the other with all preferred legs pointing to the left, as in Figure \ref{fig:assign-vertex}. We take the first subset to consist of elliptic vertices, and the second to consist of twisted vertices. We glue elliptic vertices only to twisted vertices and \textit{vice versa}. \subsubsection{The choice of K\"ahler parameters} \label{choice.kahler.parameters} We form strips by gluing elliptic vertices along their non-preferred legs. Each elliptic vertex non-preferred leg is effectively a pair of refined vertex non-preferred legs, and each of the latter is assigned a Young diagram.
Since the assigned Young diagrams are independent, we can in principle use a pair of independent K\"ahler parameters $ \left\lgroup Q_{\, A}, Q_{\, B} \right\rgroup$ to glue. However, to recover the 6D partition functions computed in the literature, including \cite{hollowood.iqbal.vafa, iqbal.kozcaz.yau, nieri}, we need to choose the K\"ahler parameter pair as $ \left\lgroup Q, Q^{\, \prime} \right\rgroup$, where $Q^{\, \prime} = 1 / Q$, consistently with all choices of all parameter pairs made so far. At this stage, we have no justification for this choice other than reproducing the known 6D partition functions. \begin{figure} \centering \begin{tikzpicture} \draw [very thick] (0,0)--(1,0); \draw (0,0)--(0,1); \draw (0,0)--(-0.71,-0.71); \node at (1,0) [right] {$Y_3$}; \node at (0,1) [above] {$\pmb{Y}_1$}; \node at (-0.71,-0.71) [left] {$\pmb{Y}_2$}; \node at (0,0.2) [left] {$\mathcal{E}_{\, \pmb{Y}_1 \, \pmb{Y}_2 \, Y_3} \left\lgroup x, y \right\rgroup$}; \end{tikzpicture} \hskip 3cm \begin{tikzpicture} \draw (0,0)--(0.71,0.71); \draw [very thick] (0,0)--(-1,0); \draw (0,0)--(0,-1); \node at (0,-1) [below] {$\pmb{Y}_2$}; \node at (-1,0) [left] {$Y_3$}; \node at (0.71,0.71) [right] {$\pmb{Y}_1$}; \node at (0,-0.2) [right] {$\mathcal{E}^{\, \star}_{\, \pmb{Y}_1 \, \pmb{Y}_2 \, Y_3} \left\lgroup x, y \right\rgroup $}; \end{tikzpicture} \caption{The assignment of the elliptic vertex and its twisted version to toric diagrams. 
The preferred legs are shown as thick horizontal lines.} \label{fig:assign-vertex} \end{figure} \begin{figure} \begin{center} \begin{tikzpicture} \draw (0,0)--(0.71,0.71); \draw (0,0)--(-1,0); \draw (0,0)--(0,-1); \draw (0.71,0.71)--(0.71,1.71); \draw (0.71,0.71)--(1.71,0.71); \draw (0,-1)--(1,-1); \draw (0,-1)--(-0.71,-1.71); \draw (-0.71,-1.71)--(-1.71,-1.71); \draw (-0.71,-1.71)--(-0.71,-2.71); \node at ( 0.40, 2.00) [right] {$\pmb{\emptyset}$}; \node at ( 1.80, 0.70) [right] {$W_1$}; \node at ( 0.40, 0.20) [right] {$\pmb{Y_1}$}; \node at (-1.00, 0.10) [left] {$V_1$}; \node at ( 0.00, -0.50) [right] {$\pmb{Y_2}$}; \node at ( 1.00, -1.00) [right] {$W_2$}; \node at (-0.40,-1.60) [right] {$\pmb{Y_3}$}; \node at (-1.80,-1.60) [left] {$V_2$}; \node at (-1.00,-3.00) [right] {$\pmb{\emptyset}$}; \end{tikzpicture} \end{center} \caption{The strip obtained by gluing 4 vertices along their non-preferred legs. $V_1$, $V_2$, $W_1$ and $W_2$ are single diagrams. $\pmb{Y_1}$, $\pmb{Y_2}$ and $\pmb{Y_3}$ are pairs of Young diagrams. $\pmb{\emptyset}$ is a pair of empty Young diagrams.} \label{fig:strip-2} \end{figure} \subsection{Example. 
The 4-vertex strip partition functions} The partition function of the 4-vertex strip in Figure \ref{fig:strip-2} is, \begin{multline} Z_{\, 4-strip} = \sum_{\pmb{Y}_{\, 1, \, 2, \, 3}} \prod_{\ell = 1}^3 \left\lgroup - Q_\ell \right\rgroup^{| \pmb{Y}_\ell |} \mathcal{E}^{ }_{\, \pmb{\emptyset} \, \pmb{Y}_1 \, W_1} \left\lgroup x, y \right\rgroup \mathcal{E}^{\, \star}_{\, \pmb{Y}_1 \, \pmb{Y}_2 \, V_1} \left\lgroup x, y \right\rgroup \mathcal{E}^{ }_{\, \pmb{Y}_2 \, \pmb{Y}_3 \, W_2} \left\lgroup x, y \right\rgroup \mathcal{E}^{\, \star}_{\, \pmb{Y}_3 \, \pmb{\emptyset} \, V_2} \left\lgroup x, y \right\rgroup \\ = \prod_{k = 1}^2 \frac{ 1 }{ \theta_q \left\lgroup x^{\, L^{\, +}_{\square, \, W_k}} \, y^{\, A^{ }_{\square, \, W_k}} \right\rgroup \theta_q \left\lgroup x^{\, L^{ }_{\square, \, V_k}} \, y^{\, A^{\, +}_{\square, \, V_k}} \right\rgroup } \, \sum_{\pmb{Y}_{1, 2, 3}} \sum_{\pmb{R}_{1,2}} \prod_{\ell =1}^3 \left\lgroup - Q_\ell \right\rgroup^{| \pmb{Y}_\ell |} \\ \pmb{W}^{\, \prime }_{\, \pmb{q} \, \pmb{Y}_1 } \left\lgroup x^{\, \pmb{\iota }} y^{\, - W^{\, \prime}_1} \right\rgroup \left\lgroup \pmb{W}^{\, \star }_{\, \pmb{q} \, \pmb{Y}_1/\pmb{R}_1} \left\lgroup y^{\, \bj} x^{\, - V^{ }_1} \right\rgroup \, \pmb{W}^{\, \prime \, \star}_{\, \pmb{q} \, \pmb{Y}_2 / \pmb{R}_1} \left\lgroup x^{\, \pmb{\iota } - 1} y^{\, - V^{\, \prime}_1} \right\rgroup \right\rgroup \\ \left\lgroup \pmb{W}^{ }_{\, \pmb{q} \, \pmb{Y}_2 / \pmb{R}_2} \left\lgroup y^{\, \pmb{\iota } - 1} x^{\, - W^{ }_2} \right\rgroup \, \pmb{W}^{\, \prime }_{\, \pmb{q} \, \pmb{Y}_3 / \pmb{R}_2} \left\lgroup x^{\, \bj } y^{\, - W^{\, \prime}_2} \right\rgroup \right\rgroup \, \pmb{W}^{\, \star }_{\, \pmb{q} \, \pmb{Y}_3 } \left\lgroup y^{\, \bj } x^{\, - V_2 } \right\rgroup \label{4.strip} \end{multline} where $|\pmb{Y}| = |Y_A| - |Y_B|$ because of our choice of K\"ahler parameters, see section \ref{choice.kahler.parameters}. 
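Step \textbf{1} below relies on the homogeneity property $Q^{\, | \pmb{Y} | \, - \, | \pmb{X} |} \, \pmb{W}_{\pmb{Y} / \pmb{X}} \left\lgroup \pmb{x} \right\rgroup = \pmb{W}_{\pmb{Y} / \pmb{X}} \left\lgroup Q \, \pmb{x} \right\rgroup$. In the $q \rightarrow 0$ limit, where the $q$-Whittaker functions reduce to skew Schur functions, this property can be checked directly. The Python sketch below (helper names and parameter values ours) does this using the Jacobi-Trudi determinant $s_{\, \lambda / \mu} = \det \left\lgroup h_{\, \lambda_i - \mu_j - i + j} \right\rgroup$:

```python
from itertools import combinations_with_replacement
from math import prod

def h(k, xs):
    """Complete homogeneous symmetric polynomial h_k evaluated at xs."""
    if k < 0:
        return 0.0
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def det(M):
    """Determinant by Laplace expansion (fine for the small matrices used here)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def skew_schur(lam, mu, xs):
    """s_{lam/mu}(xs) via the Jacobi-Trudi determinant det(h_{lam_i - mu_j - i + j})."""
    n = len(lam)
    mu = mu + [0] * (n - len(mu))
    return det([[h(lam[i] - mu[j] - i + j, xs) for j in range(n)] for i in range(n)])

# Homogeneity of degree |lam| - |mu|: Q^(|lam|-|mu|) s_{lam/mu}(x) = s_{lam/mu}(Q x).
lam, mu, xs, Q = [3, 2], [1], [0.2, 0.3, 0.5], 1.7
lhs = Q ** (sum(lam) - sum(mu)) * skew_schur(lam, mu, xs)
rhs = skew_schur(lam, mu, [Q * x for x in xs])
assert abs(lhs - rhs) < 1e-9 * abs(lhs)
```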
\subsection{Using the Cauchy identities} We compute the strip partition function in Equation \ref{4.strip} in 4 steps using the Cauchy identities derived in section \ref{section.07}. \subsubsection{Step 1} Using the identity $Q^{\, | \pmb{Y} | \, - \, | \pmb{X} |} \, \pmb{W}_{\pmb{Y} / \pmb{X}} \, \left\lgroup \pmb{x} \right\rgroup = \pmb{W}_{\pmb{Y} / \pmb{X}} \, \left\lgroup Q \, \pmb{x} \right\rgroup$, where $\pmb{X}$ and $\pmb{Y}$ are partition pairs, and $\pmb{x}$ is a set of variables, which follows from the properties of the $q$-Whittaker functions that make $\pmb{W}$, and the Cauchy identity \ref{double-star-normal-Cauchy} to perform the sum over $\pmb{Y}_1$ and over $\pmb{Y}_3$ \footnote{\, To relate the notation $\pmb{Y}_1$ and $\pmb{Y}_3$ for the Young diagrams that are summed out to that of the Young diagrams that are introduced, we use $\pmb{R}_1$ and $\pmb{R}_3$ for the latter, and there is no $\pmb{R}_2$ }, \begin{multline} Z_{\, 4-strip} = \prod_{k = 1}^2 \frac{ 1 }{ \theta_q \left\lgroup x^{\, L^{\, +}_{\, \square, \, W_k}} \, y^{\, A^{ }_{\, \square, \, W_k}} \right\rgroup \theta_q \left\lgroup x^{\, L^{ }_{\, \square, \, V_k}} \, y^{\, A^{\, +}_{\, \square, \, V_k}} \right\rgroup } \, \sum_{\pmb{Y}_2} \sum_{\pmb{R}_1 \, \pmb{R}_3} \left\lgroup - Q_2 \right\rgroup^{| \pmb{Y}_2 |} \, \times \\ \prod_{i, j = 1}^\infty \theta_q \left\lgroup Q_1 \, x^{\, i - V_{\, 1, \, j}} \, y^{\, j- W^{\, \prime}_{\, 1, \, i}} \right\rgroup \pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{R}_1} \left\lgroup - Q_1 \, x^{\, \pmb{\iota }} \, y^{\, - W^{\, \prime}_1} \right\rgroup \, \pmb{W}^{\, \prime \, \star}_{\, \pmb{q} \, \pmb{Y}_2 / \pmb{R}_1} \left\lgroup x^{\, \pmb{\iota } - 1} y^{\, - V^{\, \prime}_1} \right\rgroup \\ \pmb{W}^{ }_{\, \pmb{q} \, \pmb{Y}_2 / \pmb{R}_3} \left\lgroup y^{\, \pmb{\iota } - 1} x^{\, - W^{ }_2} \right\rgroup \, \prod_{i, j = 1}^\infty \theta_q \left\lgroup Q_3 \, x^{\, i - V_{2, \, j}} \, y^{\, j - W^{\, \prime}_{2, \, i}} \right\rgroup 
\pmb{W}^{\, \star }_{\, \pmb{q} \, \pmb{R}_3} \left\lgroup - Q_3 \, y^{\, \bj} \, x^{\, - \, V_2} \right\rgroup \\ = \prod_{k = 1}^2 \frac{ 1 }{ \theta_q \left\lgroup x^{\, L^{\, +}_{\, \square, \, W_k}} \, y^{\, A^{ }_{\, \square, \, W_k}} \right\rgroup \theta_q \left\lgroup x^{\, L^{ }_{\, \square, \, V_k}} \, y^{\, A^{\, +}_{\, \square, \, V_k}} \right\rgroup } \\ \prod_{i, j = 1}^\infty \theta_q \left\lgroup Q_1 \, x^{\, i - V_{\, 1, \, j}} \, y^{\, j- W^{\, \prime}_{\, 1, \, i}} \right\rgroup \, \prod_{i, j = 1}^\infty \theta_q \left\lgroup Q_3 \, x^{\, i - V_{2, \, j}} \, y^{\, j - W^{\, \prime}_{2, \, i}} \right\rgroup \\ \sum_{\pmb{Y}_2} \sum_{\pmb{R}_1 \, \pmb{R}_3} \left\lgroup - Q_2 \right\rgroup^{| \pmb{Y}_2 |} \, \pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{R}_1} \left\lgroup - Q_1 \, x^{\, \pmb{\iota }} \, y^{\, - W^{\, \prime}_1} \right\rgroup \, \pmb{W}^{\, \prime \, \star}_{\, \pmb{q} \, \pmb{Y}_2 / \pmb{R}_1} \left\lgroup x^{\, \pmb{\iota } - 1} y^{\, - V^{\, \prime}_1} \right\rgroup \\ \pmb{W}^{ }_{\, \pmb{q} \, \pmb{Y}_2 / \pmb{R}_3} \left\lgroup y^{\, \pmb{\iota } - 1} x^{\, - W^{ }_2} \right\rgroup \, \pmb{W}^{\, \star }_{\, \pmb{q} \, \pmb{R}_3} \left\lgroup - Q_3 \, y^{\, \bj} \, x^{\, - \, V_2} \right\rgroup \end{multline} \subsubsection{Step 2} Using the Cauchy identity \ref{double-normal-star-Cauchy} to perform the sum over $\pmb{Y}_2$, \begin{multline} Z_{\, 4-strip} = \prod_{k = 1}^2 \frac{ 1 }{ \theta_q \left\lgroup x^{\, L^{\, +}_{\square, \, W_k}} \, y^{\, A^{ }_{\, \square, \, W_k}} \right\rgroup \theta_q \left\lgroup x^{\, L^{ }_{\square, \, V_k}} \, y^{\, A^{\, +}_{\, \square, \, V_k}} \right\rgroup } \\ \prod_{i, j = 1}^\infty \theta_q \left\lgroup Q_1 \, x^{\, i - V_{\, 1, \, j}} \, y^{\, j - W^{\, \prime}_{1, \, i}} \right\rgroup \, \prod_{i, j = 1}^\infty \theta_q \left\lgroup Q_2 x^{\, i - 1 - W_{\, 2, \, j}} y^{\, j - 1 - V^{\, \prime}_{\, 1, \, i}} \right\rgroup \, \prod_{i, j = 1}^\infty \theta_q \left\lgroup Q_3 \, x^{\, i - 
V_{2, \, j}} \, y^{\, j - W^{\, \prime}_{2, \, i}} \right\rgroup \\ \sum_{\pmb{S}} \, \sum_{\pmb{R}_{\, 1, \, 3}} \, \left\lgroup - Q_2 \right\rgroup^{| \pmb{R}_1 | } \, \pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{R}_1} \left\lgroup - Q_1 \, x^{\, \pmb{\iota }} \, y^{\, - W^{\, \prime}_1} \right\rgroup \, \pmb{W}^{ }_{\, \pmb{q} \, \pmb{R}_1 / \pmb{S}} \left\lgroup y^{\, \pmb{\iota } - 1} \, x^{\, - W^{ }_2} \right\rgroup \\ \pmb{W}^{\, \prime \, \star}_{\, \pmb{q} \, \pmb{R}_3 / \pmb{S}} \left\lgroup - Q_2 \, x^{\, \pmb{\iota } - 1} \, y^{\, - V^{\, \prime}_1} \right\rgroup \, \pmb{W}^{\, \star }_{\, \pmb{q} \, \pmb{R}_3} \left\lgroup - Q_3 \, y^{\, \bj} \, x^{\, - \, V_2} \right\rgroup, \end{multline} \subsubsection{Step 3} Using the Cauchy identities \ref{double-normal-normal-Cauchy} and \ref{double-star-star-Cauchy} to sum over $\pmb{R}_1$ and over $\pmb{R}_3$, \begin{multline} Z_{\, 4-strip} = \prod_{k = 1}^2 \frac{ 1 }{ \theta_q \left\lgroup x^{\, L^{\, +}_{\square, \, W_k}} \, y^{\, A^{ }_{\square, \, W_k}} \right\rgroup \theta_q \left\lgroup x^{\, L^{ }_{\square, \, V_k}} \, y^{\, A^{\, +}_{\square, \, V_k}} \right\rgroup } \\ \frac{ \prod_{i, j = 1}^\infty \theta_q \left\lgroup Q_1 \, x^{\, i - V_{1, \, j}} \, y^{\, j - W^{\, \prime}_{1, \, i}} \right\rgroup \, \prod_{i, j = 1}^\infty \theta_q \left\lgroup Q_{\, 2 } \, x^{\, i - 1 - W_{\, 2, \, j}} \, y^{\, j - 1 - V^{\, \prime}_{\, 1, \, i}} \right\rgroup \, \prod_{i, j = 1}^\infty \theta_q \left\lgroup Q_3 \, x^{\, i - V_{\, 2, \, j}} \, y^{\, j - W^{\, \prime}_{\, 2, \, i}} \right\rgroup }{ \prod_{i, j = 1}^\infty \theta_q \left\lgroup Q_{\, 1 \, 2} \, x^{i - W_{\, 2, \, j}} \, y^{j - 1 - W^\prime_{\, 1, \, i}} \right\rgroup \, \prod_{i, j = 1}^\infty \theta_q \left\lgroup Q_{\, 2 \, 3} \, x^{\, i - 1 - V_{\, 2, \, j}} \, y^{\, j - V^{\, \prime}_{\, 1, \, i}} \right\rgroup } \\ \sum_{\pmb{S}} \, \pmb{W}^{\, \prime}_{\, \pmb{q} \, \pmb{S}} \left\lgroup Q_{\, 1 \, 2} \, x^{\, \pmb{\iota }} \, y^{\, - W^{\, 
\prime}_1} \right\rgroup \, \pmb{W}^{\, \star }_{\, \pmb{q} \, \pmb{S}} \left\lgroup - Q_{23} \, y^{\, \bj} \, x^{\, - \, V_2} \right\rgroup, \end{multline} where $Q_{i j} = Q_i \, Q_j$ \subsubsection{Step 4} Using the Cauchy identity \ref{double-star-normal-Cauchy} to perform the sum over $\pmb{S}$, \begin{multline} Z_{\, 4-strip} = \prod_{k = 1}^2 \frac{ 1 }{ \theta_q \left\lgroup x^{\, L^{\, +}_{\square, \, W_k}} \, y^{\, A^{\, }_{\square, \, W_k}} \right\rgroup \theta_q \left\lgroup x^{\, L^{ }_{\square, \, V_k}} \, y^{\, A^{\, +}_{\square, \, V_k}} \right\rgroup } \\ \frac{ N^{\, \prime}_{\, W_1 \, V_1} \left\lgroup Q_1 \, y \right\rgroup N^{\, \prime}_{\, V_1 \, W_2} \left\lgroup Q_2 \, x^{\, \prime} \right\rgroup N^{\, \prime}_{\, W_2 \, V_2} \left\lgroup Q_3 \, y \right\rgroup }{ N^{\, \prime}_{\, W_1 \, W_2} \left\lgroup Q_{\, 1 \, 2} \right\rgroup N^{\, \prime}_{\, V_1 \, V_2} \left\lgroup Q_{\, 2 \, 3} y \, x^{\, \prime} \right\rgroup } \, N^{\, \prime}_{\, W_1 \, V_2} \left\lgroup Q_{\, 1 \, 2 \, 3}\, y \right\rgroup, \\ N^{\, \prime}_{\, Y_1 \, Y_2} \left\lgroup Q \right\rgroup = \prod_{i, j = 1}^\infty \theta_q \left\lgroup Q \, y^{ - Y^{\, \prime}_{1,j} + i - 1} \, x^{-Y_{2,i} + j} \right\rgroup, \, Q_{\, 1 \, 2 \, 3} = Q_1 \, Q_2 \, Q_3, \label{partition-function-comp} \end{multline} which agrees with that in \cite{hollowood.iqbal.vafa, iqbal.kozcaz.yau, nieri}. 
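In the $q \rightarrow 0$ limit, where $\theta_q \left\lgroup x \right\rgroup \rightarrow 1 - x$, the factors $N^{\, \prime}_{\, Y_1 \, Y_2} \left\lgroup Q \right\rgroup$ can be evaluated by truncating the double product, and the normalized ratio $N^{\, \prime}_{\, Y_1 \, Y_2} \left\lgroup Q \right\rgroup / N^{\, \prime}_{\, \emptyset \, \emptyset} \left\lgroup Q \right\rgroup$ stabilizes as soon as the truncation exceeds the sizes of the diagrams, since far-away cells contribute identical factors to numerator and denominator. A Python sketch (helper names and parameter values ours):

```python
def n_prime_ratio_q0(Y1, Y2, Q, x, y, N=40):
    """Truncated N'_{Y1 Y2}(Q) / N'_{empty empty}(Q) at q = 0,
       where theta_q(z) -> 1 - z."""
    Y1p = [sum(1 for r in Y1 if r >= j) for j in range(1, N + 1)]  # conjugate of Y1
    Y2r = Y2 + [0] * (N - len(Y2))
    ratio = 1.0
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            num = 1 - Q * y**(i - 1 - Y1p[j - 1]) * x**(j - Y2r[i - 1])
            den = 1 - Q * y**(i - 1) * x**j
            ratio *= num / den
    return ratio

# The normalized ratio is insensitive to the truncation once N is large enough:
r1 = n_prime_ratio_q0([2, 1], [1], 0.3, 0.2, 0.25, N=30)
r2 = n_prime_ratio_q0([2, 1], [1], 0.3, 0.2, 0.25, N=45)
assert abs(r1 - r2) < 1e-8 * abs(r1)
```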
\subsubsection{Remark} The product $N^{\, \prime}_{\, Y_1 \, Y_2} \left\lgroup Q \right\rgroup$ in Equation \ref{partition-function-comp} is $N^{\, }_{\, Y_1 \, Y_2} \left\lgroup Q \, | \, q_1, q_2, q \right\rgroup$, $q_1 = y^{\, \prime}, q_2 = x$, in \cite{awata.kanno.02}, up to a factor, \begin{equation} N^{\, \prime}_{\, Y_1 \, Y_2} \left\lgroup Q \right\rgroup = \left\lgroup \prod_{i,j=1}^\infty \theta_q \left\lgroup Q \, y^{\, i - 1} \, x^{\, j} \right\rgroup \right\rgroup \, N^{ }_{\, Y_1 \, Y_2} \left\lgroup Q \, | \, y^{\, \prime}, x, q \right\rgroup \end{equation} \subsubsection{Remark} In computing the 6D instanton partition function in Equation \ref{partition-function-comp}, we used the variables $x^{\, \pmb{\iota }} y^{\, - \, Y^{\, \prime}_3}$ and $y^{\, \bj - 1} \, x^{\, -Y_3}$ in Equation \ref{macdonald.vertex} instead of the variables $x^{\, -\rho} \, y^{-Y^{\, \prime}_3}$ and $y^{\, - \, \rho} \, x^{\, -Y_3}$ in \cite{iqbal.kozcaz.vafa}, where $x^{\, - \, \rho} \, y^{\, Y} = \left\lgroup x^{\, 1/2} \, y^{\, y_1}, \, x^{\, 3/2} \, y^{\, y_2}, \, x^{\, 5/2} \, y^{\, y_3}, \cdots \right\rgroup$. To compensate for these differences when comparing our results with those obtained using the refined topological vertex of \cite{iqbal.kozcaz.vafa}, we need to rewrite the K\"ahler parameters in Equation \ref{partition-function-comp} as, \begin{equation} Q_{\, 2 i } = \left\lgroup y x^{\, \prime} \right\rgroup^{ 1/2} \, Q_{\, 2 i }^{\, IKV}, \quad Q_{\, 2 i - 1} = \left\lgroup x y^{\, \prime} \right\rgroup^{ 1/2} \, Q_{\, 2 i - 1}^{\, IKV}, \quad i = 1, 2, \cdots, \end{equation} where $Q_{\, i}^{\, IKV}, i = 1, 2, \cdots,$ is identified with the corresponding K\"ahler parameter in \cite{iqbal.kozcaz.vafa}. \subsubsection{More general strip partition functions} Strip partition functions with more vertices can be calculated, using the same Cauchy identities as in the 4-vertex case. 
The result of these computations is that the partition function of an $N$-vertex strip, $N=6, 8, \cdots,$ computed using elliptic topological vertices, with empty top and bottom external legs, is equal to the partition function of the same strip computed using refined topological vertices, with the top and bottom legs identified. Gluing copies of these strips, computed either way, along their horizontal preferred legs, we obtain 6D instanton partition functions. \section{Comments} \label{section.11} \subsection{The $q \rightarrow 0$ limit} In the limit $q \rightarrow 0$, all $\theta_q \left\lgroup x \right\rgroup \rightarrow 1-x$, and the 6D partition function in Equation \ref{partition-function-comp} reduces to the 5D partition function of the corresponding toric diagram computed using the refined topological vertex. From the viewpoint of compactification, $q\rightarrow 0$ corresponds to the length of the compactification circle going to infinity, which forces the vertical external legs to be trivial. \subsection{The $R \rightarrow 0$ limit} Another interesting limit is obtained by taking the radius of the $M$-theory circle $R \rightarrow 0$, while keeping $q$ finite, to obtain a 5D partition function \cite{foda.gavrylenko}. A study of this 5D partition function, its relation to that obtained in the limit $q \rightarrow 0$, and the possible interplay of these two limits is beyond the scope of the present work. \subsection{The other involution and other Cauchy identities} In addition to the Cauchy identities that involve $q$-Whittaker functions and their twisted versions, there are identities that involve $q$-Whittaker functions and Hall-Littlewood functions.
These identities are obtained by the action of the Macdonald involution $\omega$, Equation \ref{involution.m}, that changes the $q$-Whittaker functions labelled by a Young diagram $Y$ and a parameter $q$ to a Hall-Littlewood function labelled by the transpose diagram $Y^{\, \prime}$ and the same parameter $q$ \cite{borodin.wheeler}. We did not consider $\omega$ and the resulting Cauchy identities because they do not lead to the 6D instanton partition functions that we wish to compute. Instead, we considered the involution $\pmb{\iota }$, Equation \ref{involution} that leads to the twisted $q$-Whittaker functions and the Cauchy identities that lead to the 6D instanton partition functions. In this sense, our construction of $\mathcal{E}$ aimed in a specific direction and is not the only construction possible. It is not clear what the construction that uses $\omega$ gives \footnote{\, One can make a similar remark regarding our choice of the K\"ahler parameters in section \ref{choice.kahler.parameters}, which was motivated by producing the 6D partition functions in the literature, including \cite{hollowood.iqbal.vafa, iqbal.kozcaz.yau, nieri}, and only that. }. \subsection{The 2D interpretation} In addition to their interpretation as 6D instanton partition functions, the partition functions obtained by gluing copies of $\mathcal{E}$ and $\mathcal{E}^{\, \star}$ have a natural interpretation as elliptic deformations of 2D conformal blocks. We expect that these 2D elliptic conformal blocks are related to the $n$-point local height probabilities in off-critical exactly-solved statistical mechanical models, with elliptic Boltzmann weights, studied in \cite{jimbo.miki.miwa.nakayashiki, lukyanov.pugai}. \subsection{Two extensions that need physical interpretation} \subsubsection{Both methods can work in parallel} Using the elliptic vertex can be combined with the trace method. 
One can compute an elliptic partition function using the elliptic vertex, then take the trace over all possible external states on the doubled non-preferred legs. It is not clear what the result means. \subsubsection{The Macdonald parameter $t$} One advantage of using $\mathcal{E}$, as opposed to taking traces as in \cite{hollowood.iqbal.vafa}, is that it makes it obvious that there is room for one more parameter, namely the second Macdonald parameter $t$. We have not switched this parameter on because it does not appear in the 6D instanton partition function results of \cite{hollowood.iqbal.vafa}. We could have easily switched $t$ on, but we would have no interpretation for what that means \footnote{\, In \cite{sulkowski}, Sulkowski showed that switching on the Macdonald parameter $t$ in the topological string partition function on $\mathcal{C}^3$ produces the topological string partition function of the conifold, with the parameter $t$ parameterizing the size of the sphere $P^1$. We expect that introducing $t$ in more general topological string partition functions will have a related effect. }. \subsection{The Clavelli-Shapiro trace reduction method} In \cite{clavelli.shapiro}, Clavelli and Shapiro proposed a method to reduce the evaluation of a trace of exponentials of Heisenberg generators, of the type that appears in string theory and in the present work, to the evaluation of a single vacuum expectation value of exponentials of \textit{two} Heisenberg generators \footnote{\, Apart from a slight change of notation, this outline follows section \textbf{C.1}, p. 522, of \cite{clavelli.shapiro}. }.
Given a single Heisenberg annihilation operator $a$ and its conjugate creation operator $a^{\, \dagger}$, a product $\mathcal{O} \left\lgroup a, a^{\, \dagger} \right\rgroup$ of exponentials in $a$ and $a^{\, \dagger}$, and a parameter $x < 1$, we wish to evaluate the trace, \begin{equation} Tr \left\lgroup \, x^{\, a^{\, \dagger} \, a} \mathcal{O} \left\lgroup a, a^{\, \dagger} \right\rgroup \right\rgroup = \sum_{n = 0}^\infty \, \langle \, n \, | \, x^{\, a^{\, \dagger} \, a} \, \mathcal{O} \, \left\lgroup a, a^{\, \dagger} \right\rgroup \, | \, n \, \rangle, \quad [a, a^{\, \dagger}] = 1, \label{trace.heisenberg.01} \end{equation} where $\langle \, n \, |$ and $| \, n \, \rangle$ are the states created by the action of $n$ copies of $a$ and of $a^{\, \dagger}$ on the vacuum states $\langle \, 0 \, |$ and $| \, 0 \, \rangle$ respectively. In \cite{clavelli.shapiro}, Clavelli and Shapiro noticed that by introducing a second Heisenberg algebra, generated by $b$ and $b^{\, \dagger}$, that commutes with the first, generated by $a$ and $a^{\, \dagger}$, the infinite sum over states on the right hand side of Equation \ref{trace.heisenberg.01} becomes, \begin{equation} \sum_{n = 0}^\infty \, \langle \, n \, | \, x^{\, a^{\, \dagger} \, a} \, \mathcal{O} \, \left\lgroup a, a^{\, \dagger} \right\rgroup \, | \, n \, \rangle = \langle \, 0 \, | \, e^{\, b \, a} x^{\, a^{\, \dagger} \, a} \, \mathcal{O} \, \left\lgroup a, a^{\, \dagger} \right\rgroup \, e^{\, a^{\, \dagger} \, b^{\, \dagger}} | \, 0 \, \rangle, \label{trace.heisenberg.02} \end{equation} and the trace is reduced to computing a single expectation value of operators in a pair of Heisenberg algebras.
Following that, Clavelli and Shapiro show that the right hand side of Equation \ref{trace.heisenberg.02} can be written in the form, \begin{multline} \langle \, 0 \, | \, e^{\, b \, a} x^{\, a^{\, \dagger} \, a} \, \mathcal{O} \, \left\lgroup a, a^{\, \dagger} \right\rgroup \, e^{\, a^{\, \dagger} \, b^{\, \dagger}} | \, 0 \, \rangle = \frac{ 1 }{ 1 - x } \, \langle \, 0 \, | \, \mathcal{O} \left\lgroup c, c^{\, \dagger} \right\rgroup \, | \, 0 \, \rangle, \\ c = \frac{ a }{ 1 - x } + b^{\, \dagger}, \quad c^{\, \dagger} = a^{\, \dagger} - \frac{ b }{ 1 - 1 / x }, \label{trace.heisenberg.03} \end{multline} and the trace of an operator that depends on a single Heisenberg algebra and a weight (propagator-type) parameter $x$ is reduced to a vacuum expectation value of the same operator, now depending on two Heisenberg algebras that are deformed in a specific way using the parameter $x$. The above single-mode result readily extends to the case of a Heisenberg algebra with infinitely-many modes \footnote{\, Equation C.4, page 522, in \cite{clavelli.shapiro}. }, and the conclusion is that traces over exponentials of free fields can be re-written as vacuum expectation values in twice the number of free fields. \subsubsection{The elliptic vertex \textit{versus} taking traces, and similarities with the Clavelli-Shapiro method} \label{similarities} The relation between a result expressed as a trace and the same result expressed without a trace but with twice the number of fields is precisely the relation between the computation of the 6D instanton partition functions in terms of the refined vertex and traces, as in \cite{hollowood.iqbal.vafa}, and the computation of the same objects in terms of the elliptic vertex proposed in the present work, which is essentially a deformation of a doubling of the refined vertex.
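As a concrete sanity check of the single-mode identity in Equation \ref{trace.heisenberg.03}, both sides can be compared numerically in truncated Fock spaces. The sketch below is our illustration, not part of \cite{clavelli.shapiro}; the operator $\mathcal{O} = e^{\, \alpha \, a^{\, \dagger}} e^{\, \beta \, a}$ and all numerical values are arbitrary choices, and truncation errors are negligible at this size.

```python
import numpy as np
from scipy.linalg import expm

def osc(D):
    """Annihilation operator a in a D-dimensional truncated Fock space."""
    return np.diag(np.sqrt(np.arange(1.0, D)), 1)

D = 25                          # truncation dimension
x, alpha, beta = 0.3, 0.4, 0.25

# left hand side: Tr( x^{a^dag a} e^{alpha a^dag} e^{beta a} ) in one algebra
a1 = osc(D)
lhs = np.trace(np.diag(x ** np.arange(D)) @ expm(alpha * a1.T) @ expm(beta * a1))

# right hand side: vacuum expectation value with the deformed modes c, c^dag
a = np.kron(osc(D), np.eye(D))          # first Heisenberg algebra
b = np.kron(np.eye(D), osc(D))          # second, commuting Heisenberg algebra
c = a / (1 - x) + b.T                   # c      = a/(1-x) + b^dag
cd = a.T - b / (1 - 1 / x)              # c^dag  = a^dag - b/(1-1/x)
vac = np.zeros(D * D)
vac[0] = 1.0                            # the doubled vacuum |0,0>
rhs = (vac @ expm(alpha * cd) @ expm(beta * c) @ vac) / (1 - x)
```

For this particular $\mathcal{O}$ both sides reduce to the Laguerre generating function, $\sum_n x^n L_n(-\alpha\beta) = e^{\, \alpha \beta x / (1-x)} / (1 - x)$, which provides an independent check of the numerics.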
Even the deformation of the pair of Heisenberg algebras in Equation \ref{trace.heisenberg.03} is identical to the one that appears in the present work, and insofar as these computations are concerned, using the elliptic vertex as in the present work is related to taking traces as in \cite{iqbal.kozcaz.yau, nieri} \textit{via} a Clavelli-Shapiro trace reduction. By using the elliptic vertex, the effect of the compactification is local and can be traced to the deformation of each vertex in the strip, unlike in the case of taking traces. For that reason, it is possible that, while both methods lead to the same 6D partition functions, using the elliptic vertex may be better suited to studies of the algebraic structures that underlie the elliptic deformation, particularly in 2D integrable models. \section*{Acknowledgements} We thank F Benini, J-E Bourgine, P Gavrylenko, A Hanay, C Kozcaz, Kimyoung Lee, M Manabe, Y Matsuo, V Mitev, E Pomoni, J Shiraishi, Y Tachikawa, O Warnaar, Jian-Feng Wu, F Yagi, and G Zafrir for discussions on this work and on related topics. We thank the organizers of \textit{\lq Combinatorics, Statistical Mechanics, and Conformal Field Theory\rq} for hospitality at the Australian Mathematical Research Institute, MATRIX, Creswick, Victoria, and the organizers of \textit{\lq Supersymmetric Quantum Field Theories in the Non-perturbative Regime\rq} for hospitality at the Galileo Galilei Institute for Theoretical Physics, Arcetri, Firenze. OF thanks Prof A Dabholkar of the Abdus Salam International Center for Theoretical Physics, Trieste, Italy, and Profs K Lechner, M Matone and D Sorokin of the Physics Department, University of Padova, Italy for hospitality at various stages of this work. OF is supported by a Special Studies Program grant from the Faculty of Science, University of Melbourne, and the Australian Research Council. RDZ is supported by a JSPS fellowship for young students.
\section{Introduction} Anderson localization (AL) is a single-particle, disorder-induced effect which leads to exponential localization of particles' eigenfunctions \cite{Anderson1958,Krame1993,VanTiggelen1999,MuellerDelande:Houches:2009,Lagendijk2009}. In his groundbreaking work, Anderson considered a non-interacting electron gas in a tight-binding model in the presence of on-site disorder. Since then, AL has been investigated in many different models, including off-diagonal disorder \cite{Eilmes1998,Reza2010,Biddle2011}, disorder correlations \cite{Moura1998,Piraud2013,Kosior2015,Major2016}, random fluxes \cite{Kalmeyer1993,Sheng2000}, localization in the momentum space of classically chaotic systems \cite{Fishman,Garreau16} and, recently, localization in the time domain \cite{sacha2015,sacha2016}. Interest in AL was renewed after the first experimental observation of the phenomenon in ultracold atomic gases \cite{Billy2008,Aspect2009,Modugno2010,Inguscio2008}. Although the Anderson model was created to describe electronic gases, AL is difficult to observe in metals due to electron-phonon and electron-electron interactions. On the contrary, interatomic interactions can be switched off in a system of ultracold atomic gases trapped in optical lattice potentials \cite{Chin2010}. An optical lattice serves as an artificial, phononless crystalline structure, whose geometry and properties can be easily changed \cite{Jaksch2005,Bloch2008}. Therefore, in recent years, ultracold atoms in optical lattices have become a very important toolbox used to test diverse physical models and phenomena \cite{feynmann1982,Buluta2009,Hauke2012}. The dimensionality of a system plays an important role in the context of AL \cite{Krame1993,VanTiggelen1999,Lagendijk2009}. In particular, in three-dimensional (3D) space a phase transition occurs at a critical energy, called the mobility edge, separating localized and extended states \cite{Abrahams,Mott1987}.
It is therefore natural to investigate the AL phenomenon in systems with non-integer dimension, i.e. in fractals \cite{Nakayama1994,Garcia2010}. Whereas in Euclidean space it is sufficient to define a single notion of dimension, in the case of fractals one needs to distinguish: the dimension of the embedding Euclidean space $D$, the Hausdorff dimension $d_H$ and the spectral dimension $d_s$ \cite{Nakayama1994,Rammal1983}. While the Hausdorff dimension describes how the number of sites scales with the system size, the spectral dimension is related to a random walk on the lattice: the number of distinct sites $S_n$ visited by a random walker in $n$ steps scales as $S_n \propto n^{d_s/2}$, provided that $d_s < 2$. Previous studies showed that it is the spectral dimension $d_s$ that is relevant for AL and that $d_s=2$ is the lower critical dimension, below which all states are localized in the presence of a disorder potential \cite{Schreiber1996}. \begin{figure}[bt] \begin{center} \resizebox{0.9\columnwidth}{!}{\includegraphics{fr_zoom2.png}} \caption{(color online) An example of a minimal random fractal lattice (RFL) mapped onto the 2D square lattice for $\eta=1$, see text. The solid blue lines indicate that the neighboring sites are linked, i.e. quantum tunneling between these sites is possible. Red dotted lines represent the lack of quantum tunneling between nearest neighbors. By adding links to the lattice (i.e. replacing a number of red dotted lines with blue lines) one increases the spectral dimension but leaves the Hausdorff dimension unchanged. } \label{fractal_pic} \end{center} \end{figure} Another important theoretical model in the study of localization properties is the quantum percolation (QP) model \cite{Kirkpatrick1972,Nakanishi2009,Schubert2005,Schubert2009}. The Anderson model describes particles in the presence of potential disorder (purely diagonal), while QP involves binary kinetic disorder (purely off-diagonal).
In QP models the disorder comes from random geometry: a QP lattice, which is a subset of the $D$-dimensional lattice, arises from the removal of a number of sites or links with a probability $q$ (the only parameter of the model). Despite their simplicity, QP models still raise controversies. The main concern is the question of the existence of a localization-delocalization transition in 2D models for $q>0$ (see \cite{Islam2008,Gong2009,Dillon2014} and references therein). In particular, this issue might be important in the context of applications of QP models in the description of transport properties in e.g. manganite films \cite{Zhang2002}, granular metals \cite{Feigelman2004} and doped semiconductors \cite{Drchal2004}. In the present paper we investigate eigenfunction localization in random fractal lattices (RFL), i.e. in fractal objects with random site connectivity in the absence of any diagonal disorder, Fig.~\ref{fractal_pic}. We would like to stress that QP systems are not fractal objects: in the QP case, only the infinite cluster of a percolated lattice at the percolation threshold is a fractal object \cite{Christensen2005}. Here, on the contrary, we consider a family of lattices with a well defined Hausdorff dimension $d_H$ \cite{Niemeyer1984}. Starting with the minimal, connected lattice (i.e. a lattice without loops) and adding links between nearest neighbors, we can increase the spectral dimension $d_s$ of a lattice while keeping the Hausdorff dimension $d_H$ fixed. Therefore, the main focus of this paper is to investigate the presence or absence of localization in RFLs while changing the spectral and Hausdorff dimensions independently. It is worth noting that the theoretical model investigated here can be realized in ultracold-atom laboratories, where the lattice geometry can be shaped nearly arbitrarily \cite{Bakr2009,Gaunt2012,Bijnen2015}. This paper is organized as follows.
In Section~\ref{secii} we describe the growth algorithm of random fractal lattices and how the spectral dimension changes when new lattice links are created. In Section~\ref{seciii} we focus on the localization properties of RFLs and on their dependence on the spectral and Hausdorff dimensions. In particular, we analyze superlocalization resonances and the formation of an energy gap which emerges in the system for small spectral dimension. In Section~\ref{seciv} we investigate transmission probabilities through the system and the quantum evolution of initially localized particles. Finally, in Section~\ref{secv} we conclude. \section{Random fractal lattices}\label{secii} \begin{figure}[tb] \begin{center} \resizebox{0.8\columnwidth}{!}{\includegraphics{bladzenie.pdf}} \caption{The plot illustrates the change of the spectral dimension of RFLs with the Hausdorff dimension fixed at $d_H=1.75\pm0.02$ when the number of lattice links is increased. The dimensionless parameter $p$ is the fraction of links added to the minimal, singly connected lattice. A parameter value $p=0$ represents RFLs with the minimal number of links, i.e. no link can be removed without disconnecting a part of the lattice. Conversely, $p=1$ represents RFLs with the maximal number of links, i.e. no link can be added without creating an extra lattice site. The value of the spectral dimension $d_s$ for a given $p$ was obtained by fitting the exponent in $S_n \propto n^{d_s/2}$, where $S_n$ is the number of distinct sites visited by a random walker, and averaging over $2000$ independent RFLs and $500$ realizations of random walks of length $2500$.} \label{spectral_dim} \end{center} \end{figure} We consider a family of lattices that we call \emph{random fractal lattices} (RFLs), which first arose in a model of dielectric breakdown \cite{Niemeyer1984}. RFLs are lattices with random site connectivity and with well-defined Hausdorff ($d_H$) and spectral ($d_s$) dimensions.
The Hausdorff dimension, or capacity dimension, describes how the number of sites scales with the system size. In other words, if a lattice in Fig.~\ref{fractal_pic} is a fractal object, then the number of sites inside a sphere of radius $r$ is proportional to $r^{d_H}$, where in general $d_H$ is a noninteger exponent \cite{Christensen2005}. On the other hand, the spectral dimension is related to a random walk on the lattice. The number of distinct sites $S_n$ covered in $n$ steps of a random walk is proportional to $n^{d_s/2}$, if $d_s < 2$. From a general analysis, the spectral dimension is never larger than the Hausdorff dimension \cite{Nakayama1994,Rammal1983}. The RFLs under consideration are embedded in the 2D Euclidean space and have Hausdorff and spectral dimensions smaller than 2. A minimal RFL, i.e. a lattice with the smallest number of lattice links, is a singly connected lattice generated by the growth algorithm defined in Ref.~\cite{Niemeyer1984}. In a nutshell, a new lattice site $(i',j')$ is chosen and linked to the existing lattice at site $(i,j)$ with the probability \begin{equation}\label{probability} P\Big((i,j)\rightarrow (i',j') \Big)= \frac{(\phi_{i',j'})^\eta}{\sum(\phi_{i',j'})^\eta}, \end{equation} where the summation goes over all possible choices, $\eta$ is a free parameter and $\phi_{i,j}$ is a function fulfilling the discrete Laplace equation: \begin{equation} \phi_{i,j}=\frac{1}{4}\left(\phi_{i,j+1}+\phi_{i,j-1}+\phi_{i+1,j}+\phi_{i-1,j}\right). \label{laplace} \end{equation} Initially, we set $\phi_{i,j}=0.5$ everywhere. When a site $(i',j')$ is connected to the lattice, the value of $\phi_{i',j'}$ is set to zero. Before linking another lattice site, the values of $\phi$ in the neighborhood of $(i',j')$ need to be updated [typically in 5-20 iterations of Eq.~(\ref{laplace})]. The algorithm is stopped after reaching $N$ lattice sites.
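The growth step can be summarized in a short sketch. The code below is our illustration of Eqs.~(\ref{probability}) and (\ref{laplace}); the grid size, number of relaxation sweeps, and random seed are arbitrary choices, and the Laplace equation is relaxed by plain Jacobi sweeps over the whole grid rather than only locally around the new site.

```python
import random

def grow_minimal_rfl(n_sites, eta=1.0, size=41, relax_iters=10, seed=0):
    """Grow a minimal RFL: new sites are attached with probability ~ phi^eta."""
    rng = random.Random(seed)
    phi = [[0.5] * size for _ in range(size)]   # phi = 0.5 everywhere at start
    c = size // 2
    cluster, links = {(c, c)}, []
    phi[c][c] = 0.0                             # phi vanishes on the cluster
    while len(cluster) < n_sites:
        # candidate bonds: (cluster site, empty nearest-neighbour site)
        cands = [((i, j), (i + di, j + dj))
                 for (i, j) in cluster
                 for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 < i + di < size - 1 and 0 < j + dj < size - 1
                 and (i + di, j + dj) not in cluster]
        weights = [phi[ii][jj] ** eta for _, (ii, jj) in cands]
        src, new = rng.choices(cands, weights=weights, k=1)[0]
        cluster.add(new)
        links.append((src, new))                # only this single link is created
        phi[new[0]][new[1]] = 0.0
        # a few Jacobi sweeps of the discrete Laplace equation off the cluster
        for _ in range(relax_iters):
            for i in range(1, size - 1):
                for j in range(1, size - 1):
                    if (i, j) not in cluster:
                        phi[i][j] = 0.25 * (phi[i][j + 1] + phi[i][j - 1]
                                            + phi[i + 1][j] + phi[i - 1][j])
    return cluster, links
```

Since each new site is attached by exactly one link, the resulting lattice is loop-free; links between already-present nearest neighbors can then be added to raise $d_s$ at fixed $d_H$.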
The lattices grown in this manner have a nonuniform geometry, both in the occurrence of lattice sites and of lattice links, as in Fig.~\ref{fractal_pic}. In particular, two neighboring lattice sites are not necessarily connected, and closed loops are forbidden, i.e. a minimal RFL is created. By adding links between nearest neighbors to a given minimal RFL, one opens up new possibilities for a random walker to explore and therefore increases the spectral dimension $d_s$ of the system while keeping the Hausdorff dimension $d_H$ intact. The Hausdorff dimension of RFLs depends on the value of the parameter $\eta$ \cite{Niemeyer1984}. For example, setting $\eta=1$ one can generate a minimal RFL with the Hausdorff dimension $d_H=1.75\pm0.02$ and the spectral dimension $d_s=1.33\pm0.03$. Adding links to a minimal RFL results in an increase of the spectral dimension, Fig.~\ref{spectral_dim}, while the Hausdorff dimension $d_H$ remains unchanged. \section{Localization properties}\label{seciii} In the following we analyze solutions of the Schr\"odinger equation \begin{equation} E\psi_{(i,j)}=-\sum_{i',j'}\psi_{(i',j')}, \label{schrod} \end{equation} where $(i,j)$ denotes a position on an RFL embedded in the 2D Euclidean space and the sum runs over the nearest neighbor sites $(i',j')$ linked to $(i,j)$. We set the tunneling amplitude of a particle between neighboring sites and the Planck constant equal to unity. \subsection{Analysis of energy level statistics} In our analysis we investigate two distinct scenarios: \begin{itemize} \item First, we fix the Hausdorff dimension by setting $\eta=1$ in \eqref{probability}, which corresponds to $d_H=1.75\pm0.02$. Then, we change the spectral dimension in the range between 1.33 and 1.55 by adding links to the minimal RFLs, see Fig.~\ref{spectral_dim}. \item In the second scenario, we do the opposite, i.e.
we fix the spectral dimension ($d_s\approx1.35$ or $d_s\approx1.5$) and change the Hausdorff dimension by varying the parameter $\eta$ in \eqref{probability}. \end{itemize} \begin{figure}[t] \begin{center} \resizebox{0.85\columnwidth}{!}{\includegraphics{phase_cutsv3.png}} \caption{(color online) A plot of the averaged ratio of consecutive energy level spacings $r(E)$ for RFLs with the fixed Hausdorff dimension $d_H=1.75\pm0.02$ (top panel), and two cuts for the extreme cases (middle panel). The vertical axis in the top panel shows the impact of the spectral dimension $d_s$, through the dimensionless parameter $p$, see Fig.~\ref{spectral_dim}, on the level statistics. With increasing $d_s$ the system gradually delocalizes. For low values of $p$ (corresponding to $d_s\approx 1.35$) a narrow energy gap emerges near $E=0$, see the discussion in the main text. The localization is not much influenced by a change of the Hausdorff dimension (bottom panel). RFL systems with different values of $d_H$ and very similar $d_s$ possess similar localization properties, showing that it is the spectral dimension $d_s$ that is the relevant dimension in this context. All RFLs analyzed here consist of $N=5000$ sites. } \label{phase_diag} \end{center} \end{figure} In order to explore the presence or absence of localization of the eigenfunctions of a particle on RFLs, we use a convenient method based on the energy level statistics obtained via direct diagonalization of finite systems \cite{Oganesyan2007,Pal2010,Tang2015}. Since localized states usually have very small overlaps (as they may be localized in different parts of the system), the energy levels can be nearly degenerate, and the localization of eigenstates can be observed directly from the spectrum.
Therefore, we expect that in the localized phase the energy level statistics follow the Poisson distribution, while in the delocalized phase they follow the Wigner-Dyson distribution \cite{Oganesyan2007,Pal2010,Tang2015}. Having the ordered spectrum of energy levels $\{E_i\}$, we can calculate the quantity \begin{equation}\label{ri_eq} r_i = \frac{\min(\delta_i,\delta_{i-1})}{\max(\delta_i,\delta_{i-1})}, \end{equation} where $\delta_i=E_i-E_{i-1}$. Next, we average the results over 2000 realizations of RFLs and over neighboring energies, $r(E)=\langle r_i \rangle$. The distinction between the two regimes is possible since for the Poisson distribution $r(E)\approx 0.3863$ (localized phase) and for the Wigner-Dyson distribution $r(E) \approx 0.5359$ (delocalized phase) \cite{Oganesyan2007,Pal2010,Tang2015}. To investigate the localization properties of wavefunctions under a change of the spectral dimension of RFLs, we plot $r(E)$ in Fig.~\ref{phase_diag} (top panel). The vertical axis expresses the spectral dimension via the dimensionless parameter $p$, see Fig.~\ref{spectral_dim}. The panel illustrates the strong dependence of the localization properties on the spectral dimension $d_s$. While increasing the spectral dimension we observe a smooth transition from the localized to the delocalized phase, which is evident in the middle panel of Fig.~\ref{phase_diag}, where the averaged ratio $r(E)$ is plotted for two extreme values of $d_s$. Furthermore, we observe a nonmonotonic dependence of $r(E)$ on energy (i.e. the localization is stronger near the edges and the center of the spectrum), which has also been observed in QP models \cite{Schubert2005,Gong2009}. In the bottom panel of Fig.~\ref{phase_diag} we present $r(E)$ for a few different Hausdorff dimensions $d_H$ and very similar spectral dimensions $d_s$. The data for different $d_H$ do not differ significantly apart from the edges of the spectra.
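The gap-ratio statistic in Eq.~\eqref{ri_eq} is straightforward to compute from any ordered spectrum. The following sketch is our illustration; the synthetic spectra (uncorrelated levels and the eigenvalues of a random real symmetric matrix) are stand-ins for the RFL spectra and reproduce the two reference values quoted above.

```python
import numpy as np

def mean_gap_ratio(energies, tol=1e-12):
    """Averaged ratio r = <min(d_i, d_{i-1}) / max(d_i, d_{i-1})>, Eq. (r_i)."""
    d = np.diff(np.sort(np.asarray(energies)))
    d = d[d > tol]                      # discard (near-)degenerate levels
    return float(np.mean(np.minimum(d[1:], d[:-1]) / np.maximum(d[1:], d[:-1])))

rng = np.random.default_rng(0)

# uncorrelated levels: Poisson statistics, r close to 2 ln 2 - 1 ~ 0.3863
r_poisson = mean_gap_ratio(rng.uniform(0.0, 1.0, 50000))

# eigenvalues of a random real symmetric (GOE) matrix: r close to ~0.53
A = rng.normal(size=(1000, 1000))
r_goe = mean_gap_ratio(np.linalg.eigvalsh((A + A.T) / np.sqrt(2.0)))
```

A virtue of this statistic is that it requires no unfolding of the spectrum, since the ratio of consecutive spacings is insensitive to the local density of states.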
The results show that, similarly to the AL models \cite{Schreiber1996}, it is the spectral dimension $d_s$ that is the relevant dimension in the context of the localization properties of the system. \subsection{Energy gap and superlocalization resonances} In Fig.~\ref{phase_diag} we can observe a peculiar narrow energy gap $\Delta$ around $E=0$ for $d_s \lesssim 1.4$, which eventually closes up when the spectral dimension is increased. We find that the value of the energy gap $\Delta$ for the minimal RFLs (i.e. $d_s=1.33\pm0.03$) of size $N=5000$ is \begin{equation}\label{energy_gap_value} \Delta = 0.114\pm 0.017. \end{equation} The energy gap \eqref{energy_gap_value} decreases slightly in larger systems; however, it seems that $\Delta$ survives in the thermodynamic limit, see Fig.~\ref{energy_gap_fig}. In order to illustrate the energy gap more clearly, we show in Fig.~\ref{pr} the participation ratio $PR(E)$, \begin{equation} PR(E) = \sum_{(i,j)} \Big|\langle (i,j) | \psi(E)\rangle \Big|^4, \end{equation} where $ |\psi(E)\rangle $ is an eigenstate corresponding to an energy $E$ and $|(i,j) \rangle$ is a state localized at a lattice site denoted by $(i,j)$ in the 2D Euclidean space. The participation ratio is yet another measure of localization \cite{Krame1993,VanTiggelen1999}: the inverse of $PR$ estimates the number of fractal points on which an eigenstate is localized. \begin{figure}[tb] \begin{center} \resizebox{0.8\columnwidth}{!}{\includegraphics{gap.pdf}} \caption{The change of the energy gap $\Delta$ near $E=0$ with the increasing size of the system $N$ for the minimal RFLs. The values were averaged over 200 realizations. The error bar indicates one standard deviation, and the black dashed lines represent the minimal and the maximal value obtained in 200 realizations.
The numerical values were obtained via direct diagonalization.} \label{energy_gap_fig} \end{center} \end{figure} \begin{figure}[tb] \begin{center} \resizebox{0.8\columnwidth}{!}{\includegraphics{pr_v4.pdf}} \caption{A plot of the participation ratio $PR(E)$ of the minimal RFLs (averaged over 2000 realizations). The participation ratio $PR$ confirms the existence of the energy gap $\Delta$. In addition, a further structure is revealed: we observe very narrow superlocalization resonances for a discrete set of degenerate energies, of which $E_r=0,\pm\frac{\sqrt{5}\pm1}{2}, \pm1, \pm\sqrt{2}$ are the most dominant. For the resonant eigenenergies $E_r$ the system reduces to a few very small clusters, see Fig.~\ref{superlocalization_fig}. Note that the energy gap $\Delta$ is averaged over many realizations and a single-realization value might differ, see Fig.~\ref{energy_gap_fig}. Due to this fact, the averaged $PR(E)$ is smeared around $E=\Delta$.} \label{pr} \end{center} \end{figure} What is most striking in Fig.~\ref{pr} is the emergence of peaks related to superlocalization resonances, also observed in the QP model \cite{Schubert2005}. The resonances appear for a discrete set of energies $E_r$ and are dominant for $E_r=0,\pm\frac{\sqrt{5}\pm1}{2}, \pm1, \pm\sqrt{2}$. The presence of the resonances is not visible in Fig.~\ref{phase_diag} because in Eq.~\eqref{ri_eq} the degenerate levels are discarded to avoid divergence. The name \emph{superlocalization} stems from the fact that the eigenstates localize on very small (a few lattice sites) disjoint clusters. For example, a zero energy state can be localized on two sites only, as long as a certain building block appears on the lattice boundary. In Fig.~\ref{superlocalization_fig} (a) we see a block with one vertex and four lattice sites. Note that its zero energy eigenstate has non-zero values on two sites only, and the other two sites form an \emph{,,empty leg''}.
Now notice that the structure in Fig.~\ref{superlocalization_fig} (c) must have similar zero energy eigenstates, because connecting an empty lattice to an empty leg of a block like the one in Fig.~\ref{superlocalization_fig}(a) does not change a zero energy eigenstate localized on two sites. Therefore, if the lattice geometry allows for small blocks (like e.g. in panel (a) or (b) of Fig.~\ref{superlocalization_fig}), then some eigenstates of the lattice coincide with those of small blocks and superlocalization resonances emerge. If some blocks occur frequently in the lattice, the corresponding superlocalization resonances can be extremely degenerate. For example, for the minimal RFLs with $d_H=1.75\pm0.02$ about 10\% of eigenstates have zero eigenenergy. The zero energy manifold is thus extended over a substantial number of lattice points, which is related to the appearance of the energy gap in the spectrum. That is, the zero energy manifold is large, and other eigenstates with non-vanishing overlap on the manifold must necessarily possess different energies. The four-site structure shown in Fig.~\ref{superlocalization_fig}(a) also has non-zero energy eigensolutions, for instance corresponding to $E=-\sqrt{3}$. However, such eigenstates do not have an ,,empty leg'' and therefore, if the block is connected to a large empty lattice, these eigenstates are disturbed. Nevertheless, it is possible to build a 3-vertex block, see Fig.~\ref{superlocalization_fig}(b), with eigenstates corresponding to $E=-\sqrt{3}$ which possess ,,empty legs'' and are not disturbed when the block is attached to a large empty lattice. Such a 3-vertex structure is far less common in RFLs, which explains the low abundance of the $E=-\sqrt{3}$ superlocalization resonance (less than 1\textperthousand).
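The spectrum of such a small block can be verified directly. The sketch below is our illustration; we assume the 4-site block of Fig.~\ref{superlocalization_fig}(a) is a hub linked to three leg sites, which is consistent with the quoted eigenenergies $E=\pm\sqrt{3}$ and with a zero-energy eigenstate living on two sites only.

```python
import numpy as np

# hub (site 0) linked to three leg sites 1, 2, 3; H = -A as in Eq. (schrod)
A = np.zeros((4, 4))
for leg in (1, 2, 3):
    A[0, leg] = A[leg, 0] = 1.0
H = -A

evals = np.sort(np.linalg.eigvalsh(H))          # -sqrt(3), 0, 0, +sqrt(3)

# zero-energy eigenstate supported on two legs with opposite signs;
# the hub and the third leg (the ,,empty leg'') carry no amplitude
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
residual = np.linalg.norm(H @ psi)              # vanishes
```

Attaching further empty sites to the empty leg extends $H$ without disturbing $\psi$, which is why such states survive unchanged on the full lattice.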
\begin{figure}[tb] \begin{center} \resizebox{0.8\columnwidth}{!}{\includegraphics{resonances.png}} \caption{(color online) Sample building blocks responsible for the superlocalization resonances: an eigenstate of a 4-site block corresponding to $E=0$ (a) and an eigenstate of a 10-site block related to $E=\sqrt{3}$ (b). A dot represents a non-zero value of the eigenstate $\psi_i$ on a given site: the dot's size scales with $|\psi_i|$ and red/blue color represents the plus/minus sign of $\psi_i$. Notice that the state of block (a) has the same energy as an eigenstate of the lattice in panel (c) (connecting empty sites to an empty leg of block (a) does not change its energy). } \label{superlocalization_fig} \end{center} \end{figure} \section{Transmission and quantum evolution}\label{seciv} In this section we investigate the transport properties of a quantum particle on RFLs: the transmission probability through the lattice and the evolution of a particle initially localized on a single lattice site. Here, we focus on small systems with only 500 lattice sites because smaller systems are closer to the experimental reality in ultracold atomic gases. \begin{figure}[tb] \begin{center} \resizebox{0.8\columnwidth}{!}{\includegraphics{transm_en.pdf}} \resizebox{0.8\columnwidth}{!}{\includegraphics{transmisja.pdf}} \caption{ (color online) The energy dependent transmission probability of a quantum particle between the most distant sites of a lattice of 500 sites for $\eta=1$ (top panel) and the same transmission probability averaged over the corresponding energy spectra (bottom panel). The curves in the top panel were plotted for $p=0.95$ (solid red), $p=0.55$ (black dashed) and $p=0.05$ (blue dotted). We observe that the transmission probability strongly increases as we add links to the lattice, which is in agreement with Fig.~\ref{phase_diag}.
One can also see the suppression of transport for $p=0.05$ around $E=0$, which corresponds to the energy gap in the system, and for the resonant energies $E_r$ (most prominently for $E=\pm1$). The data were averaged over 2000 realizations.} \label{transmisja_fig} \end{center} \end{figure} \begin{figure}[bt] \begin{center} \resizebox{0.9\columnwidth}{!}{\includegraphics{ewolucja.png}} \caption{(color online) The evolution of a quantum particle in a sample RFL for the two extreme cases: the minimal number of links (a) and the maximal number of links (b) in a given fractal geometry. We chose a system of 500 lattice sites generated for $\eta=1$, Eq.~\eqref{probability}. The initial state was localized on a single lattice site. Panels (a) and (b) present the probability densities of finding a particle at different evolution times. Panels (c) and (d) present the time averaged densities for $50<t<150$.} \label{ewolucja_fig} \end{center} \end{figure} The transmission probability from site $r$ to site $r'$ of a~quantum particle with energy $E$ is defined \cite{Krame1993} as: \begin{equation} t(r,r',E)= \left\langle \left| \langle r |G^+(E)| r'\rangle \right|^2 \right\rangle, \end{equation} where $\langle ..\rangle$ denotes an average over different realizations of the RFL and $G^+(E)=\lim_{\epsilon\rightarrow 0_+}\left(E+i\epsilon\,-H\right)^{-1}$ is the retarded one-particle Green's function \cite{Economou2006}. We plot the transmission probability through the considered lattices, i.e. between their most distant sites, in Fig.~\ref{transmisja_fig}. The top panel presents the dependence of the transmission probability on energy for different spectral dimensions of RFLs, whereas the bottom panel shows the transmission probability averaged over the entire energy spectrum for different numbers of links in the system.
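In practice the $0_+$ limit is replaced by a small finite broadening, and the required Green's function column is obtained from a single linear solve rather than a full matrix inversion. A minimal sketch follows (our illustration; the 6-site chain and all parameter values are arbitrary stand-ins for an RFL Hamiltonian):

```python
import numpy as np

def transmission(H, r, rp, E, broadening=1e-3):
    """t(r, r', E) = |<r| (E + i*broadening - H)^{-1} |r'>|^2 for one sample."""
    rhs = np.zeros(H.shape[0], dtype=complex)
    rhs[rp] = 1.0                               # unit source at site r'
    g = np.linalg.solve((E + 1j * broadening) * np.eye(H.shape[0]) - H, rhs)
    return float(abs(g[r]) ** 2)

# toy stand-in for an RFL: a 6-site chain with hopping -1, cf. Eq. (schrod)
N = 6
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = -1.0

t_in_band = transmission(H, 0, N - 1, E=0.5)    # energy inside the band
t_off = transmission(H, 0, N - 1, E=10.0)       # far off the band: suppressed
```

Averaging such values over disorder realizations gives $t(r,r',E)$; in the paper this quantity is evaluated between the two most distant sites of each 500-site RFL.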
In agreement with the results presented in Fig.~\ref{phase_diag}, we observe a drastic reduction (by 5 orders of magnitude) of the transport when decreasing the number of links in the system (bottom panel). Furthermore, we can see a number of strong dips in the plot of $t(E)$ (top panel), especially for $p=0.05$. These dips correspond to the energy gap around $E=0$ and to the superlocalization resonances at discrete degenerate energies, see Fig.~\ref{pr}. The most pronounced dips are related to $E=0$ (about 10\% of all energy levels correspond to $E=0$), $E=\pm1$ (4\%) and $E=\pm \sqrt{2}$ (1\%). Note that an increase of $p$ (black dashed and red solid curves in the top panel) significantly narrows the dip around $E=0$ because the gap closes. Furthermore, the transport properties of a quantum particle can be investigated more directly by solving the time dependent Schr\"odinger equation \begin{equation} i\partial_t \psi_{(i,j)} = -\sum_{i',j'} \psi_{(i',j')}, \end{equation} cf. Eq.~(\ref{schrod}). In panels (a) and (b) of Fig.~\ref{ewolucja_fig} we present snapshots of the evolution of a quantum particle in lattices for the two extreme cases: the minimal and the maximal number of links for a given geometry. In panels (c) and (d) the time averaged results are shown. Starting from the same initial state, fully localized on a single lattice site, we obtain opposite results: the probability density of finding a particle is either localized around the initial state or explores the whole lattice. \section{Conclusions}\label{secv} We have investigated the localization and transport of a quantum particle in lattices with a fractal structure. The lattices consist of points that form a connected cluster. The sites of the lattices are generated so that their fractal (Hausdorff) dimension $d_H$ is controlled.
Independently, one can control the spectral dimension $d_s$ of the system by choosing how many nearest neighbor sites are linked to a given lattice point. This allows us to analyze how the localization properties vary under independent changes of the Hausdorff and spectral dimensions. Analysis of the energy level statistics and of the participation ratio of eigenstates shows that while the localization properties depend very weakly on $d_H$, they change strongly with $d_s$. For the smallest spectral dimension of the systems we observe strong localization of eigenstates. With an increase of $d_s$, eigenstates lose their localization properties and become extended over the entire finite lattices that we consider. Disorder in our systems stems from a non-uniform distribution of lattice points and from their random connections. When $d_s$ approaches $d_H$, all nearest neighbor sites become connected and the randomness is related to the non-uniform distribution of lattice points only. The latter introduces dephasing that is too weak, and the eigenstates do not localize. We also observe eigenstates that are strongly localized on small parts of the random fractal lattices. The smaller the part of the fractal, the higher the chance for such eigenstates to occur. The zero energy eigenstates can occupy two sites only, and consequently they form the largest degenerate manifold. At low spectral dimension there are so many of them that an energy gap around $E=0$ is created. The presence of strongly localized eigenstates is imprinted in the transport properties of the systems, i.e. the particle transmission probability drops at the corresponding energies. \section*{Acknowledgments} We are grateful to Marcin P\l{}odzie\'{n} for encouraging discussions. AK acknowledges support of the National Science Centre, Poland via project DEC-2015/17/N/ST2/04006. KS acknowledges support of the National Science Centre, Poland via project No.~2015/19/B/ST2/01028.
\section*{Acknowledgement} This work was supported by the UW Reality Lab, Facebook, Google, and Futurewei. \bibliographystyle{splncs04} \section{Problem Definition} We start with a video of a scene, taken by a stationary camera. As objects -- people, cars, bicycles -- pass through the scene, they occlude and are occluded by scene elements, pass into and out of shadowed regions, cast shadows into the scene, and, due to perspective, appear larger or smaller in an image depending on their position in the scene. From the video sequence, we seek to extract occlusion layering, shadowing, and position-dependent scale to enable realistically compositing new objects (of similar classes) into the scene. We design a fully automatic pipeline to tackle this problem. Our key idea is that the occurrence and motion of existing objects (aka {\em scene probes}) through the video is the primary cue for inferring properties of the scene. These properties include depth, occlusion ordering, lighting, and shadows. Unlike some related methods, our pipeline does not require active scanning for shadow matting~\cite{chuang2003shadow}, or manual annotation for layering~\cite{brostow1999motion}. Furthermore, our pipeline does not require the camera to be calibrated. \section{Technical Approach} \subsection{People as Occlusion Probes} \label{sec:geometry} Occlusion is key for realistic image composition. An inserted object should be occluded by the foreground and occlude the background properly. Cars driving on the street are occluded by the trees on the sidewalk nearer to the viewer, and road signs occlude people walking behind them. We propose a method to estimate the occlusion order by {\em analyzing the occlusion relationships between people, other moving objects in the video, and static scene structures and objects}. 
Our method records the occlusion relationship between each object and the scene to yield an occlusion map $\tilde{z}(x,y)$, similar to a depth map, for determining which pixels of an object occlude or are occluded by the scene, depending on the location of the object. To make the problem tractable, we approximate the scene as a single ground plane, with moving objects and occluders represented as planar sprites (on vertical planes parallel to the image plane) that are in contact with the ground plane. Based on this simplification, we can assume a monotonic relationship between object location and occlusion order; the closer the object, the lower in the image its ground contact occurs. \subsubsection{Algorithm} We first calculate a median image within a local temporal window of one second to serve as a background plate; we have found the one-second window to work well for scenes that are not densely crowded, and with objects (especially people) moving at a natural pace. For each frame in this temporal window, we apply Mask-RCNN~\cite{he2017mask} to estimate segmentation masks for people, cars, bikes, trucks, buses, and related categories. For each individual object $O_i$, Mask-RCNN returns a binary mask $M_i$, and we record the lowest point $y_i$ of the mask. We refine this mask to avoid accidental inclusion of background pixels: each pixel in $M_i$ whose color difference with the median image is greater than a threshold is assigned to the refined mask $M'_i$. Now we construct the occlusion map. We set the image origin $(x,y) = (0,0)$ at the lower left corner of the image. The key idea is that if an object $O_i$ with bottom pixel $y_i$ occludes a background pixel, then another object $O_j$ with $y_j < y_i$ is likely to be closer to the camera and would then also occlude this pixel. 
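As an illustration, the background-plate and mask-refinement steps above can be sketched in a few lines (a minimal numpy sketch; the function names and the threshold value are illustrative, not the authors' implementation):

```python
import numpy as np

def median_background(frames):
    """Per-pixel median over a temporal window of frames, shape (T, H, W, C)."""
    return np.median(frames, axis=0)

def refine_mask(mask, frame, background, thresh=30.0):
    """Drop mask pixels whose color barely differs from the background plate,
    removing accidentally included background pixels from the detector mask."""
    diff = np.linalg.norm(frame.astype(float) - background.astype(float), axis=-1)
    return mask & (diff > thresh)
```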
We initialize the occlusion map with $\tilde{z}(x,y) = -1$ at all pixels and then iteratively update the map for each object $O_i$: \begin{equation} \tilde{z}(x,y) = \begin{cases} y_i,& \text{if } (x,y) \in M'_i \text{ and } y_i > \tilde{z}(x,y)\\ \tilde{z}(x,y), & \text{otherwise}. \end{cases} \end{equation} To create a new composite, we initialize image $I_{\rm comp}$ with one of the median images. For a new object $O_j$ (e.g., cropped from another photo) with mask $M_j$ and bottom coordinate $y_j$, we update $I_{\rm comp}$: \begin{equation} I_{\rm comp}(x,y) = \begin{cases} O_j(x,y), & \text{if } (x,y) \in M_j \text{ and } y_j < \tilde{z}(x,y)\\ I_{\rm comp}(x,y), & \text{otherwise}. \end{cases} \end{equation} where $O_j(x,y)$ is the color of the object at a given pixel $(x,y)$. Note that this composite image lacks shadows cast by $O_j$. Further, if $O_j$ is inserted into an area that is itself in shadow, then $O_j$ should be darkened before compositing. We discuss these shadowing effects in the next section. \subsection{People as Light Probes} \label{sec:scene-shadow} People appear brighter in direct sunlight and darker in shadow. Hence, we can potentially use people to {\em probe} lighting variation in different parts of a scene. Based on this cue, we compute a lighting map that enables automatically adjusting overall brightness of new objects as a function of position in the image. We do not attempt to recover an environment map to relight objects, or to cast complex/partial shadows on objects (areas for future work). Instead, we simply estimate a darkening/lightening factor to apply to each object depending on its location in the scene, approximating the effect of the object being in shadow or in open illumination. We call this factor, stored at each pixel, the lighting map $L(x,y)$. 
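The two update rules above translate directly into per-pixel array operations; a minimal numpy sketch (grayscale for brevity, color handled per channel; names are illustrative):

```python
import numpy as np

def build_occlusion_map(observed_objects, shape):
    """Iteratively apply the update rule: z(x,y) <- y_i wherever the refined
    mask M'_i covers the pixel and y_i exceeds the stored value.
    observed_objects: list of (mask, y_bottom) pairs; -1 marks pixels never
    observed to be covered."""
    z = np.full(shape, -1.0)
    for mask, y_bottom in observed_objects:
        z[mask & (y_bottom > z)] = y_bottom
    return z

def composite_object(comp, occ_map, obj_color, obj_mask, y_bottom):
    """Draw the new object only where its ground contact y_j is closer to
    the camera (smaller y) than the stored occlusion value."""
    out = comp.copy()
    visible = obj_mask & (y_bottom < occ_map)
    out[visible] = obj_color
    return out
```

Pixels never covered by any probe keep the initial value $-1$, so no inserted object is ever drawn over them, which matches treating unprobed regions as never-to-be-occluded.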
This lighting map is a {\em spatially varying} illumination map across the image, whereas prior work~\cite{georgoulis2017around,park2020seeing,hold2019deep,hold2017deep} generally solves for a single {\em directionally varying} illumination model for the entire scene. From the input video, we observe that people walking in well-lit areas tend to have higher pixel intensity than people in shadowed areas. We further assume there is no correlation between the color of people's clothing and where they appear in the image; e.g., people wearing red do not walk along different paths than those wearing blue. Given these conditions, we estimate the lighting map from statistics of overall changes in object colors as they move through the scene. Note that this lighting map is a combination of average illumination and reflection from the surface; it does not give absolute brightness of illumination, but gives a measure of relative illumination for different parts of the scene. \subsubsection{Algorithm} Starting with the detected objects $\{O_i\}$ and associated masks $\{M'_i\}$ described in Section~\ref{sec:geometry}, we compute the mean color $C_i$ per object across all pixels in its mask. The lighting map is then the average of the $C_i$ that cover a given pixel, i.e.: \begin{equation} L(x,y) = \frac{1}{|\{i \mid (x,y) \in M'_i\}|}\sum_{i \mid (x,y) \in M'_i} C_i \end{equation} When compositing a new object $O_j$ with mask $M_j$ into the background plate, we first compute the average lighting $L_j$ for the pixels covered by $M_j$: \begin{equation} L_j = \frac{1}{|\{(x,y) \in M_j\}|}\sum_{(x,y) \in M_j} L(x,y) \end{equation} and apply this color factor component-wise to all of the colors in $O_j$. As noted above, this lighting factor makes the most sense as a relative measure. 
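In array form, $L(x,y)$ is an average of mean object colors over the masks covering each pixel; a minimal numpy sketch (scalar intensities for brevity, RGB handled per channel; names are illustrative):

```python
import numpy as np

def lighting_map(observed_objects, shape):
    """L(x,y): average of the mean object colors C_i over all objects whose
    refined mask covered pixel (x,y); 0 where no object was ever observed."""
    total = np.zeros(shape)
    count = np.zeros(shape)
    for mask, mean_color in observed_objects:
        total[mask] += mean_color
        count[mask] += 1
    return np.where(count > 0, total / np.maximum(count, 1), 0.0)

def lighting_factor(L, obj_mask):
    """L_j: average lighting over the pixels an inserted object covers."""
    return L[obj_mask].mean()
```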
Thus, when compositing a new object into the scene in our application scenario, the user would first set the brightness of the object at a given point in the scene (with the lighting multiplied in), and can then move the object to different parts of the scene with plausible changes to the brightness then occurring automatically. \input{fig_text/network} \subsection{People as Shadow Probes} \label{sec:cast-shadow} Shadows are one of the most interesting and complex ways that moving objects interact with a scene. Predicting shadows is challenging, as their shapes and locations depend on the position of the sun in the sky, the weather, and the geometry of both the object casting the shadow and the scene receiving it. Furthermore, unlike other lighting effects, shadows are not additive, as a surface already in shadow does not darken further when a second shadow is cast on it from the same light source. We propose using observations of objects passing through the scene to recover these shadowing effects, using a deep network -- a pix2pix~\cite{isola2017image} GAN with improved losses~\cite{wang2018high} -- trained on the given scene to learn how objects cast shadows depending on their shapes and locations in the scene. Further, since the discriminator encourages generation of realistic images, the network also tends to improve jagged silhouettes. \subsubsection{Algorithm} A natural choice of generator would take as input a shadow-free, composite image $I_{\rm comp}$ and directly output an image with shadows. In our experience, such a network does not produce high-quality shadows, typically blurring them out and sometimes adding unwanted color patterns. Instead, we use the object masks of inserted objects as input, which are stronger indicators of cast shadow shapes. 
Inspired by \cite{liu2018intriguing}, we concatenate an image channel consisting of just the per-pixel $x$-coordinate, and another channel with just the per-pixel $y$-coordinate; we found that adding these channels was key to learning shadows that varied depending on the placement of the object, e.g., to ensure the shadow warped across surfaces or was correctly occluded when moving the object around. As in Figure~\ref{fig:network}, we feed this $x$-$y$ augmented object mask through a deep residual convolutional neural network to generate a scalar gain image $G$ and color bias image $B$, similar to the formulation in \cite{le2019shadow,le2018a+}. The final image is then $I_{\rm final} = G \cdot I_{\rm comp} + B$. We found that having the generator produce $I_{\rm final}$ directly resulted in repetitive pattern artifacts that were alleviated by indirectly generating the result through bias and gain images. For training, we take each input image $I$ and follow the procedure in Section~\ref{sec:geometry} to extract objects $\{O_i\}$ and masks $\{M'_i\}$, and then composite the objects directly back onto the local median image to create the shadow-free image $I_{\rm comp}$. The resulting $I_{\rm final}$, paired with the ground truth $I$, can then be used to supervise training of the generator and discriminator, following the method described in~\cite{wang2018high}. \subsection{People as Depth Probes} \label{sec:plane} The size of a person (or other object) in an image is inversely proportional to depth. Hence, the presence of people and their motion through a scene provides a strong depth cue. Using this cue, we can infer how composited people should be resized as a function of placement in the scene. We propose a method to estimate how the scale of an object should vary across an image without directly estimating scene depth or camera focal length, but based instead on the sizes of people at different positions in the scene. 
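The generator's input construction and the final gain/bias composition can be sketched as follows (numpy stand-ins; the learned network itself is not reproduced here, and normalizing the coordinate channels to $[0,1]$ is an assumption of this sketch):

```python
import numpy as np

def xy_augmented_mask(mask):
    """Stack the binary object mask with per-pixel x- and y-coordinate
    channels (normalized to [0, 1]) to form the generator input."""
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    return np.stack([mask.astype(float),
                     xs / max(w - 1, 1),
                     ys / max(h - 1, 1)])

def apply_gain_bias(comp, gain, bias):
    """Final composition: I_final = G * I_comp + B, applied per pixel."""
    return gain * comp + bias
```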
Our problem is related to \cite{bose2003ground}, who rectify a planar image by tracking moving objects, although they require constant-velocity assumptions, which we avoid. \cite{criminisi2000single} determines the height of a person using a set of parallel planes and a reference direction, which we do not require. We make two assumptions: (1) the ground (on which people walk) can be approximated by a single plane, and (2) all the people in the video are roughly the same height. While the second assumption is not strictly true, it facilitates scale estimation, essentially treating individual height differences among people as Gaussian noise, as in \cite{Hoiem2008}, and solving via least squares. \subsubsection{Algorithm} According to our first assumption, all ground plane points $(X,Y,Z)$ in world coordinates should satisfy the plane equation: \begin{equation} a X + b Y + c Z = 1 \label{eqn:world_plane} \end{equation} Under the second assumption, all people are roughly the same height $H$ in world coordinates. Under perspective projection, we have: \begin{equation} x = X \cdot \frac{f}{Z}, y = Y \cdot \frac{f}{Z}, h = H \cdot \frac{f}{Z} \end{equation} where $f$ is the focal length of the camera. Multiplying both sides of Equation~\ref{eqn:world_plane} by $H \cdot \frac{f}{Z}$, we arrive at a linear relation between pixel coordinates and height: \begin{equation} a' x + b' y + c' = h \label{eqn:camera_plane} \end{equation} where $a', b', c'$ are constants. Because people in the scene are grounded, Equation~\ref{eqn:camera_plane} suggests that any person's bottom middle point $(x_i, y_i)$ and her height $h_i$ follow this linear relationship. Given the input video, we use the same segmentation network as in Section~\ref{sec:geometry} to segment out all the people in the video. For each person in the video, we record her height $h_i$ and bottom middle point $(x_i, y_i)$ in camera coordinates. 
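The resulting linear model $h = a'x + b'y + c'$ can be fit by ordinary least squares over the collected detections; a minimal numpy sketch with synthetic data (names and sample values are illustrative):

```python
import numpy as np

def fit_height_plane(xs, ys, hs):
    """Least-squares fit of h = a'x + b'y + c' from bottom-middle points
    (x_i, y_i) and observed pixel heights h_i."""
    A = np.stack([xs, ys, np.ones_like(xs)], axis=1)
    params, *_ = np.linalg.lstsq(A, hs, rcond=None)
    return params  # (a', b', c')

def predict_height(params, x, y):
    """Predicted pixel height of a person standing at image point (x, y)."""
    a, b, c = params
    return a * x + b * y + c
```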
After collecting all the $(x_i, y_i)$ and $h_i$ from the image segmentation network, we use the least squares method to solve for $(a', b', c')$ in Equation~\ref{eqn:camera_plane}. When inserting a new object into the scene at $(x_j, y_j)$, we apply Equation~\ref{eqn:camera_plane} to estimate its height $h_j$. The inserted object will then be resized accordingly and translated to $(x_j, y_j)$. In our application, if the user requires a different height for an inserted object, then she can simply place the object and rescale as desired, and the system will then apply this rescaling factor on top of the height factor from Equation~\ref{eqn:camera_plane} when moving the object around the scene. \subsection{Implementation Details} We use Mask-RCNN \cite{he2017mask} as the instance segmentation network. Inspired by \cite{johnson2016perceptual}, our shadow network uses a deep residual generator. The generator has 5 residual blocks, followed by two different transposed convolution layers to output the bias and gain maps. The loss function is the same as in \cite{wang2018high}. We use ADAM to optimize the objectives, with an initial learning rate of 1e-4 that decays linearly after 25 epochs. More details can be found in the supplementary material. \section{Introduction} \label{sec:intro} The presence of people in an image reveals much about scene structure. Each pedestrian effectively acts as a {\em depth probe}, whose image height is inversely proportional to distance. Similarly, people act as {\em occlusion probes}, revealing which parts of the scene are in front of others, as they pass in front of or behind signs, trees, cars, fences, and other structures. They also act as {\em light probes}, revealing both how the scene casts light on them (shade vs. sun), as well as how they cast shadows on the scene. Taken together (and over many observations), these cues capture a great deal of information about the scene. 
This paper presents techniques for inferring depth, occlusion, and lighting/shadow information from image sequences of a scene, through analysis of people (and other objects such as cars). A key property of our approach is that it is completely {\em passive} -- unlike prior use of light probes \cite{debevec2008rendering}, depth probes \cite{brostow1999motion}, or shadow probes \cite{chuang2003shadow} which require actively placing and/or moving special objects in the scene; we recover all of this information purely from the images themselves. As an application, we focus on geometry- and lighting-aware image compositing, i.e., pasting new people or other objects into an image in a way that automatically accounts for depth, occlusions, lighting, and shadows. For example, when you drag a segmented image of a person onto the image, it automatically resizes her to be larger in the foreground and smaller in the background. Placing her on the stairs will correctly occlude the car behind, and the system will add a realistic shadow that bends over the stairs and onto the pavement below. Dragging her behind the tree close to a building automatically adds partial occlusion from the branches in front, and the part of her you see is darker due to the shadow cast by the building. All of these effects occur in real-time, as you move her to different locations in the image. Our technical contributions include (1)~an automatic method for estimating occlusion maps from objects (e.g., people and cars) moving through a scene, (2)~a network that learns from a video of a scene how to cast shadows of newly inserted objects into that scene, including warping and occlusion of shadows due to scene geometry, and (3)~a method for implicitly estimating a ground plane based on heights of people observed throughout the scene without camera calibration or explicit depth estimation, enabling depth-dependent scaling of newly inserted objects based on their locations. 
Combined with a technique for estimating approximate illumination -- modeled by just scaling whole-object brightness depending on placement in the image -- we demonstrate a novel system for compositing objects into the scene with plausible occlusions, object scaling, lighting, and cast shadows. We show that components of the system outperform alternatives such as single-view depth estimation for occlusions and standard GANs for shadow insertion. We note that the method does have limitations: it works on a single scene at a time for which stationary-camera video is available, assumes a single ground plane, does not handle complex re-shading of inserted objects or their reflections off of specular surfaces in the scene, and its shadow generation works only when inserting in-class objects that are observed in typical places in the scene (people on sidewalks and cars on streets, but not cars on sidewalks or, say, sharks placed anywhere). Despite these restrictions, we show that much can be learned about the geometry and lighting of a scene simply by observing the effects of everyday people and cars passing through, thus enabling new image compositing capabilities. \section{Conclusion} In this paper, we have introduced a fully automatic pipeline for inferring depth, occlusion, and lighting/shadow information from image sequences of a scene. The central contribution of this work is recognizing that so much information can be extracted just by using people (and other objects such as cars) as scene probes to passively scan the scene. We show that the inferred depth, occlusion ordering, lighting, and shadows are plausible, with the occlusion layering and shadow casting methods outperforming single-image depth estimation and traditional pix2pix shadow synthesis baselines. Further, we show results using a tool for image compositing based on our synthesis pipeline. 
As noted earlier, our method is not without limitations: it requires a single-scene video as input, assumes a ground plane, does not model advanced shading effects, and cannot composite arbitrary objects at arbitrary locations. Figure~\ref{fig:failure}, e.g., highlights two failure modes: objects placed in unusual locations or drawn from previously unseen categories do not receive plausible shadows, and reflections of objects off of reflective surfaces in the scene are not handled correctly. These limitations point to a number of fruitful areas for future work. \section{Results and Evaluation} In this section, we first introduce our collected datasets, and then evaluate our entire pipeline, including individual components. \input{fig_text/shadow_results} \input{fig_text/height_results} \subsection{Data} \label{sec:data} We collected 11 videos with an iPhone camera on a monopod. These videos cover a range of scenes including city streets, parks, plazas, and beaches, under lighting conditions from clear sky to cloudy day. The videos are $25$ minutes long on average, during which the ambient lighting changes little. We center-crop to $800 \times 800$ at $15$ fps in training. We use the first $95\%$ of the video for training and the last $5\%$ for test. \input{fig_text/geometry_results} \subsection{Occlusion Probing} \label{sec:eval_geometry} Following Section~\ref{sec:geometry}, we generated occlusion maps for each scene; two of them are illustrated in Figure~\ref{fig:occlusion_depth} in yellow-to-red pseudocolor. The quantization in colors corresponds to how objects moved through the scene; e.g., the two tones in the street correspond to the restricted placement of cars, which are generally centered in one of two lanes. The black regions correspond to pixels that were never observed to be covered by an object; these are treated as never-to-be-occluded during compositing. 
As a baseline, we also constructed depth maps using the state-of-the-art, depth-from-single-image MiDaS network~\cite{lasinger2019towards}. MiDaS produces visually pleasing depth maps, but misses details that are crucial for compositing, such as the street signs toward the back of the scene in the second row of Figure~\ref{fig:occlusion_depth}. Figure~\ref{fig:geometry_results} shows several (shadow-free) composites. For our method, we simply place an object (such as a person, scaled by the method in Section~\ref{sec:plane}) into the scene, and it is correctly occluded by foreground elements such as trees, benches, and signs. For the baseline method, the depth of the element must be determined somehow. Analogous to our plane estimation method for height prediction, we fit a plane to the scene points at the bottoms of person detections and then placed new elements at the depths found at the bottom-middle pixel of each inserted element. In a number of cases, elements inserted into the MiDaS depth map were not correctly occluded as shown in the figure, due to erroneous depth estimates and the difficulty of placing the element at a meaningful depth given the reconstruction. \subsection{Shadow Probing} \label{sec:eval_light} We trained our shadow estimation network (Section~\ref{sec:cast-shadow}) on each scene separately, i.e., one network per scene. On average, each scene had $17,000$ images for training with $900$ images held out for testing. Figure~\ref{fig:shadow_results} shows example results for shadow estimation using (1)~a baseline pix2pix-style method~\cite{wang2018high} that takes a shadow-free image and directly generates a shadowed image and (2)~our method that takes an $x$-$y$-mask image and produces bias and gain maps which are applied to the shadow-free image. Both networks had similar capacity (with $5$ residual blocks). 
In this case, we also had ground truth, as we could segment out a person from one of the test images and copy them into the median background image for processing. The conventional pix2pix network tends to produce ``blobbier'' shadows when compared to the more structured shadows produced by our method, which is generally more similar to ground truth. \subsection{Depth (Ground Plane) Probing} \label{sec:eval_plane} For each input video, we predict the plane parameters from the training images using the method described in Section~\ref{sec:plane}. When inserting a new object into the scene, we apply Equation~\ref{eqn:camera_plane} with regressed plane parameters to get its estimated height. We then resize it based on the estimated height, and copy-paste it onto the background frame. To numerically evaluate the accuracy of the plane estimation as height predictor, we use it to measure relative, rather than absolute, height variation across the image. This measure factors out errors due to, e.g., children not being of average adult height as the absolute model would predict. In particular, we take one image as reference and another as a test image with the same person at two different positions in the images. Suppose Equation~\ref{eqn:camera_plane} predicts height $h$ in the reference image, but the actual height of the object is observed to be $\hat{h}$. The prediction ratio is then $r = \hat{h}/h$. For the same person in the test image, we then predict the new height $h'$ again using Equation~\ref{eqn:camera_plane} and rescale the extracted person by $r \cdot h'$ before compositing again. We compared this rescaled height to the actual height of the person in the test image and found that on a small set of selected reference/test image pairs, the estimates were within $3\%$ of ground truth. Figure~\ref{fig:height_results} illustrates this accuracy qualitatively. 
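The ratio correction described above reduces to a one-line rescaling; as a sketch (the numeric values are illustrative):

```python
def relative_rescale(h_pred_ref, h_obs_ref, h_pred_test):
    """Prediction ratio r = h_hat / h from the reference image, then the
    rescaled height r * h' for the same person in the test image."""
    r = h_obs_ref / h_pred_ref
    return r * h_pred_test
```

For example, a person observed $10\%$ taller than the model predicts in the reference image is scaled up by the same $10\%$ at the new location.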
Note that without relative height prediction, i.e., instead using Equation~\ref{eqn:camera_plane} to predict absolute heights, the height prediction error was $13.28\%$, reasonable enough for inserting an adult of near-average height, though of course more objectionable when inserting, say, a young child. In our demo application, we allow the user to change the initial height of the inserted element. \input{fig_text/insert_results} \input{fig_text/alias} \input{fig_text/failure} \subsection{Inserting New Objects} \label{sec:inserting} We have developed an interactive image compositing application (see suppl. video) that leverages the elements of our system. With this tool, a user can take an object (e.g., a person or car) downloaded from the internet or segmented from another image and drag it to a desired location in one of our captured background plates. The system both resizes the object and adjusts its brightness depending on where it is placed, applies our occlusion map, and synthesizes any cast shadows. We do not fully re-light or re-orient objects, so we rely on the user to select objects with generally compatible lighting and orientation. We provide the user with the option of adjusting the height and overall brightness of the object after initially placing it, but afterward the height and brightness are updated automatically as the user moves the object around in the scene. Figure~\ref{fig:insert_results} shows several results for object insertion using our tool. Here we again compare to using the pix2pix variant just for the final shadow generation step. Our method produces crisper shadows whereas the pix2pix method sometimes produces no shadow at all, generalizing poorly on some of the newly inserted objects and sometimes injecting undesirable color patterns. 
We conducted human studies to quantify the realism of the synthesized shadows and found that our synthesized images were preferred by users $70.0\%$ of the time to baseline pix2pix, demonstrating a clear advantage for novel composites. Details of the study appear in the supplementary material. In Figure~\ref{fig:ui}, we demonstrate the effect of moving a person around in the scene. Note how the brightness and height change when moved from the lit front area to the back shadowed area, and how the person is occluded by the foreground bench with the shadow wrapped over the bench when bringing the person closer to the camera. We also show in Figure~\ref{fig:alias} how the shadow network reduces aliasing artifacts arising from the object's binary mask when initially inserted. \section{Related Work} \noindent {\bf Conditional Image Synthesis} \quad Deep generative models, including generative adversarial networks (GANs) \cite{Goodfellow2014} and variational autoencoders (VAEs) \cite{Kingma2014a}, can learn to synthesize images. Conditional GANs \cite{brock2018large,mescheder2018training,mirza2014conditional,miyato2018cgans,odena2017conditional} are used to synthesize images given category labels. \cite{park2019semantic,wang2018high,isola2017image} focus on converting segmentation masks to photo-realistic images. They offer users an interactive GUI to draw their own segmentation masks and output a realistic image based on the given segmentation masks. However, these GANs do not leverage scene-specific geometry and lighting information, derived from many images. Our work embeds the scene's geometry and lighting into the GAN, to generate more realistic compositions. \noindent {\bf Image Composition} \quad Lalonde \textit{et al.} \cite{lalonde2007photo} proposed a system for inserting new objects into existing photographs by querying a vast image-based object library. Several authors have explored the use of GANs to transform a foreground object to better match a background. 
ST-GAN \cite{lin2018st} learns a homography of a foreground object conditioned on the background image. Compositional GAN \cite{azadi2018compositional} additionally learns the correct occlusion for the foreground object. SF-GAN \cite{zhan2019spatial} warps and adjusts the color, brightness, and styles of the foreground objects and embeds them into background images harmoniously. However, a realistic composition should also consider the foreground object's effect on the background (including shadows). Some approaches aim to compose an object by rendering its appearance. \cite{hong2018learning} inserts an object into a scene based on a specified location and bounding box. \cite{lee2018context} learns the joint distribution of the location and shape of an object conditioned on the semantic label map. PS-GAN \cite{ouyang2018pedestrian} replaces a pedestrian's bounding box by random noise and infills with a new pedestrian based on the surrounding context. \cite{lee2019inserting} blends the object with the background image in the bounding box, and learns a mapping to synthesize realistic images using both real and fake pairs. These works all train on images without hard shadows, and focus on person rather than shadow synthesis. For example, they only synthesize an area around the person's bounding box (not including long shadows), and don't take into account shadow casting information from other images of the same scene. \noindent {\bf Shadow Matting} \quad Matting \cite{porter1984compositing} is an effective tool to handle shadows. \cite{chuang2003shadow} enables synthesizing correct cast shadows, by estimating a shadow displacement map obtained by {\em manually} waving a shadow-casting stick over every part of the scene. Given an object to be composited, they can then synthesize correct shadows based on the object shape and shadow displacement map. 
The related problem of shadow {\em removal} has also been explored by a number of authors, e.g., \cite{guo2012paired,zhang2015shadow,le2019shadow}. We present the first shadow matting (synthesis) method that is completely {\em passive}, i.e., does not require manually waving a stick, but instead learns from the movement of objects (people and cars) in the scene itself. \noindent {\bf Image Layering} \quad Our work was inspired in part by \cite{brostow1999motion}, who first proposed using the motion of people (and other objects) to infer scene occlusion relationships. As the technology in the 1990s was more limited, their approach required manual intervention and made a number of simplifying assumptions. Less related to our work, but also worth mentioning, is the use of layered representations for view synthesis, e.g., \cite{Shade1998,dhamo2019peeking,tulsiani2018layer,zhou2018stereo,srinivasan2019pushing}. Like \cite{brostow1999motion}, our approach infers occlusion order purely from the movement of objects in the scene, but is entirely automated and leverages modern techniques for object detection and tracking. \section*{Supplementary} \section*{A. Implementation Details} We use Mask-RCNN \cite{he2017mask} with a ResNet-152 backbone as the instance segmentation network. For occlusion probes in Section~\ref{sec:geometry}, we use a local median window of one second to compute the local background frame. The confidence threshold for Mask-RCNN is set to $0.75$ for both occlusion probes in Section~\ref{sec:geometry} and light probes in Section~\ref{sec:scene-shadow}. We segment out person, bicycle, car, motorcycle, bus, truck, backpack, umbrella, handbag, tie and suitcase for a complete set of moving objects. For depth probes in Section~\ref{sec:plane}, we only segment out person and use a confidence threshold of $0.9$. 
The high threshold usually gives a complete segmentation of a person's full body, which reduces the noise in estimating the ground plane parameters in Equation~\ref{eqn:camera_plane}. To generate the data for shadow probes in Section~\ref{sec:cast-shadow}, person, bicycle, car, motorcycle, bus, truck, backpack, umbrella, handbag, tie and suitcase are segmented out with a confidence threshold of $0.8$. The local median window is set to be $50$ frames for the shadow-free composite image $I_{\rm comp}$. Inspired by \cite{johnson2016perceptual}, our shadow network uses a deep residual generator. It has 5 residual blocks instead of the 9 in \cite{wang2018high}, because predicting the bias and gain maps is an easier task than synthesizing the whole frame. Two different transposed convolution layers follow the decoder to output the bias and gain maps. The loss function is the same as in \cite{wang2018high}, i.e., two multi-scale discriminators with LSGAN loss, a feature matching loss, and a VGG perceptual loss. The initial learning rate is set to 1e-4, and decays linearly after 25 epochs. It decays to $0$ in another 25 epochs. The batch size is $4$. We train our network on four Nvidia RTX 2080 Ti GPUs, and each iteration takes about $200$ms. The depth, occlusion, and lighting estimation take two hours to train for a 30-minute-long video, while the shadow network takes two days to converge on four Nvidia RTX 2080 Ti GPUs. As shown in the \href{https://youtu.be/bYJ_WdnsEbI}{supplementary video}, applying the depth, occlusion, and lighting estimates to newly inserted objects happens in real-time, while the shadow synthesis takes about 150ms on a single Nvidia RTX 2080 Ti GPU. \section*{B. Scenes} \input{fig_text_suppl/scene} We captured 11 video sequences of scenes in a variety of locations including urban settings, parks, and a beach. Figure~\ref{fig:scenes} shows the background images for those scenes. 
Each background image is either an original frame that had no people in it, or, for scenes that were more crowded, the first 1-second median image. \section*{C. Additional Experiments} \subsection*{C.1. Importance of $x$-$y$ Channels for Shadow Probing} \input{fig_text_suppl/mask} To study the importance of the $x$-$y$ channels in our shadow synthesis network (Figure~\ref{fig:network} in the paper), we created a new baseline without them. That is, we trained another pix2pix-style network~\cite{wang2018high} that takes only the one-channel mask image and produces bias and gain maps which are applied to the shadow-free image. This mask-only network has the same capacity (5 residual blocks) as our shadow network. Figure~\ref{fig:mask_results} shows example results for shadow estimation on the test set. Without the $x$-$y$ channels, the network fails to learn the geometric impact of the scene, leading to incorrect shadow warping and occlusion. In addition, the mask-only network tends to produce more repetitive patterns and color artifacts. Our additional $x$-$y$ input helps stabilize the training. \subsection*{C.2. Human Study} To quantify the realism of the synthesized shadows and compare our shadow synthesis network to the pix2pix baseline, we conducted two human studies on (1) the test set (with ground truth reference) and (2) composites with newly inserted objects. For the first study, we collected $75$ test images per scene and generated results with our network and the pix2pix baseline in the paper (shadow-free image $I_{\rm comp}$ as input, direct synthesis of the output image with shadow). Since these were test images, we also had ground truth. We used Amazon Mechanical Turk (AMT) for the study. We required workers to 1) be masters and 2) have a greater than $97\%$ approval rate. For each comparison, we showed two images to 3 human subjects, and asked them to choose the one that looked more realistic.
We explicitly told them to focus on the shadow areas, because the inserted object had almost the same appearance. We set the title of the task to be ``Which image looks more realistic? Focus on the shadows.''. The order of the image pairs was randomized and each pair was assigned to different workers by AMT. We took the majority vote over three votes on each image pair. By taking the majority vote, we reduce the effect of noisy labeling due to ``lazy'' workers who just click randomly. Overall, our synthesized images were preferred $58.4\%$ of the time to the baseline, which was preferred $41.6\%$ of the time (a difference of $16.8\%$). In addition, our results were on average preferred $41.9\%$ of the time to ground truth, whereas the baseline was preferred $34.8\%$ of the time to ground truth. A detailed score breakdown for each scene is shown in Table~\ref{tbl:human}. For the second study, we collected 60 image composites: for each scene, 4-7 image pairs -- our shadow composite vs baseline pix2pix (no ground truth available) -- each image with 1-6 inserted objects. We inserted each random object manually onto the background in locations that made sense (no cars on the sidewalk or people on the street except at crosswalks). As we had fewer images in the second study, we asked for more workers per comparison in an effort to increase accuracy. In particular, each image pair was shown to 5 human subjects (different workers from the first study), and we again took the majority vote for robustness. Our synthesized images were preferred on average $70.0\%$ of the time to the baseline, demonstrating a clear advantage for novel composites. Pix2pix struggles significantly more to generalize when inserting new objects that are less similar in appearance to the ones seen during training. A detailed score breakdown for each scene is shown in Table~\ref{tbl:human_insert}. \begin{table}[t!]
\centering \begin{tabular}{l|cccccc} & scene 1 & scene 2 & scene 3 & scene 4 & scene 5 & scene 6 \\ \hline Ours to baseline & 61.3 & 50.0 & 65.3 & 49.3 & 62.7 & 58.0 \\ GT to ours & 86.7 & 54.7 & 64.7 & 54.0 & 51.3 & 54.0 \\ GT to baseline & 95.3 & 58.7 & 81.3 & 58.0 & 57.3 & 58.7 \\ \hline & scene 7 & scene 8 & scene 9 & scene 10 & scene 11 & \\ \hline Ours to baseline & 58.7 & 58.7 & 51.3 & 68.0 & 58.7 & \\ GT to ours & 50.7 & 63.3 & 52.7 & 62.7 & 44.0 & \\ GT to baseline & 53.3 & 74.0 & 64.7 & 64.7 & 51.3 & \end{tabular} \caption{Human study results on each scene's test set.} \label{tbl:human} \end{table} \begin{table}[t!] \centering \begin{tabular}{l|cccccc} & scene 1 & scene 2 & scene 3 & scene 4 & scene 5 & scene 6 \\ \hline Ours to baseline & 50.0 & 75.0 & 100.0 & 42.9 & 70.0 & 80.0 \\ \hline & scene 7 & scene 8 & scene 9 & scene 10 & scene 11 & \\ \hline Ours to baseline & 70.0 & 75.0 & 75.0 & 70.0 & 78.6 & \end{tabular} \caption{Human study results on inserting new objects for each scene.} \label{tbl:human_insert} \end{table} \subsection*{C.3. Additional Results on Inserting New Objects} \input{fig_text_suppl/insert} In Figure~\ref{fig:insert_results_1} and Figure~\ref{fig:insert_results_2}, we show more qualitative results on inserting new objects, with comparison to the pix2pix baseline for the shadow compositing. Our method produces crisper shadows whereas the pix2pix method sometimes produces no shadow at all, generalizing poorly on some of the newly inserted objects and sometimes injecting undesirable color patterns. For the simpler case of inserting shadows on a cloudy day -- mostly a darkening under/near a person's feet or under a car -- pix2pix performs about as well as our method. Note that the comparison to pix2pix is just to understand the benefit of our shadow network; both our method and pix2pix benefit from all the other components of our approach (occlusion, lighting, and depth probing). \section*{D. 
Supplementary video} The \href{https://youtu.be/bYJ_WdnsEbI}{supplementary video} demonstrates our compositing tool and shows the importance of each of depth, occlusion, lighting, and shadow probing. The video also shows that occlusion probing currently does not work well with very thin structures like skinny (leafless) tree branches. This limitation arises because all the operations in our method are at the pixel level, while these thin structures are in some cases barely a pixel wide. Furthermore, thin structures like branches often wiggle in the wind, which can lead to errors in occlusion labeling with our method.
\section{Introduction} Image captioning aims to automatically describe the visual content of a given image with fluent and credible sentences. It is a typical multi-modal learning task, which connects Computer Vision (CV) and Natural Language Processing (NLP). Inspired by the success of deep learning methods in machine translation \cite{DBLP:conf/acl/PapineniRWZ02, DBLP:conf/emnlp/ChoMGBBSB14}, almost all image captioning models adopt the encoder-decoder framework with the visual attention mechanism. The encoder encodes input images into fixed-length vector features, and the decoder decodes image features into descriptions word by word \cite{DBLP:conf/cvpr/VinyalsTBE15, DBLP:conf/icml/XuBKCCSZB15, DBLP:conf/cvpr/00010BT0GZ18, huang2019attention, DBLP:conf/cvpr/PanYLM20}. Initially, researchers adopted a pre-trained Convolutional Neural Network (CNN) as the encoder to extract image grid-level features and a Recurrent Neural Network (RNN) as the decoder \cite{DBLP:conf/cvpr/VinyalsTBE15, DBLP:conf/icml/XuBKCCSZB15}. \cite{DBLP:conf/cvpr/00010BT0GZ18} first adopted Faster R-CNN to extract region-level features. Due to its overwhelming advantage, most subsequent works followed this pattern, and grid-level features extracted by CNNs were discarded. Nevertheless, there are still some inherent defects in region-level features and an object-detector encoder: 1) region-level features may not cover the entire image, which results in a lack of fine-grained information \cite{DBLP:conf/aaai/LuoJSCWHLJ21}; 2) extracting region features is highly time-consuming, and the object detector needs the extra Visual Genome \cite{DBLP:journals/ijcv/KrishnaZGJHKCKL17} dataset for pre-training, which makes it difficult to train image captioning models end-to-end from image pixels to descriptions, and also limits potential applications in practical scenarios \cite{DBLP:conf/cvpr/JiangMRLC20}.
An LSTM decoder \cite{DBLP:journals/neco/HochreiterS97} with a soft attention \cite{DBLP:conf/icml/XuBKCCSZB15} mechanism has remained the common and dominant approach in the past few years. However, the shortcomings of LSTM in training efficiency and expression ability also limit the effectiveness of the relevant models. Inspired by the success of the Multi-head Self-Attention (MSA) mechanism and the Transformer architecture in NLP tasks \cite{DBLP:conf/nips/VaswaniSPUJGKP17}, many researchers began to introduce MSA into the LSTM decoder \cite{huang2019attention,DBLP:conf/cvpr/PanYLM20} or directly adopt the Transformer architecture as the decoder \cite{DBLP:conf/cvpr/CorniaSBC20,DBLP:conf/cvpr/PanYLM20,DBLP:conf/aaai/LuoJSCWHLJ21,DBLP:conf/aaai/JiLSCLW0J21} of image captioning models. In particular, the Transformer architecture has gradually shown extraordinary potential in CV tasks \cite{DBLP:conf/iclr/DosovitskiyB0WZ21,liu2021Swin} and multi-modal tasks \cite{DBLP:conf/nips/LuBPL19, DBLP:conf/cvpr/ZhuY20a,DBLP:conf/icml/RadfordKHRGASAM21}, which provides a new choice for encoding images into vector features. Different from Faster R-CNN, the features extracted by a visual Transformer are grid-level features, which have higher computing efficiency and conveniently allow more effective and complex designs to be explored for image captioning. Considering the disadvantages of a pre-trained CNN or object detector in the encoder and the limitations of LSTM in the decoder, we build a pure Transformer-based image captioning model (PureT) to integrate this task into one stage without the pre-training process of an object detector, achieving end-to-end training. In the encoder, we adopt Swin-Transformer \cite{liu2021Swin} to extract grid features from given images as the initial vector features and compute the average pooling of the grid features as the initial image global feature.
Then, we construct a refining encoder similar to \cite{huang2019attention,DBLP:conf/cvpr/CorniaSBC20,DBLP:conf/aaai/JiLSCLW0J21} using Shifted Window MSA (SW-MSA) from Swin-Transformer to refine the initial image grid features and global feature. The refining encoder has a similar architecture to the Transformer encoder in machine translation \cite{DBLP:conf/nips/VaswaniSPUJGKP17} and can be regarded as an extension of the SwinTransformer encoder for the image captioning model. In the decoder, we directly adopt the Transformer decoder from machine translation \cite{DBLP:conf/nips/VaswaniSPUJGKP17} to generate captions. Furthermore, we pre-fuse the word embedding vector with the image global feature from the encoder before the MSA of the word embedding vector to increase the interaction of inter-modal (image-to-words) features. We validate our model on the MSCOCO \cite{DBLP:conf/eccv/LinMBHPRDZ14} offline ``Karpathy'' \cite{DBLP:journals/pami/KarpathyF17} test split and the official online test server. The results demonstrate that our PureT achieves new state-of-the-art performance in both the single-model and ensemble-of-4-models configurations: on the offline ``Karpathy'' test split, a single model and an ensemble of 4 models achieve 138.2\% and 140.8\% CIDEr scores respectively; on the official online test server, an ensemble of 4 models achieves 135.3\% (c5) and 138.0\% (c40) CIDEr. Our main contributions are summarized as follows: \begin{itemize} \item We construct a pure Transformer-based (PureT) model for image captioning, which integrates this task into one stage again without the pre-training process of an object detector, providing a new simple and solid baseline for image captioning. \item We add a pre-fusion process between the generated word embeddings and the image global feature, which aims to increase the interaction of inter-modal features and enhance the reasoning ability from image to captions.
\item We conduct extensive experiments on the MSCOCO dataset, which demonstrate the effectiveness of our proposed model, and achieve new state-of-the-art performance on both the ``Karpathy'' offline test split and the official online test server. \end{itemize} \section{Related Work} Existing works on image captioning can be divided into CNN-LSTM based models \cite{DBLP:conf/cvpr/VinyalsTBE15,DBLP:conf/icml/XuBKCCSZB15,DBLP:conf/cvpr/00010BT0GZ18,DBLP:conf/aaai/WangC019,huang2019attention} and CNN-Transformer based models \cite{DBLP:conf/nips/HerdadeKBS19,GuangLi2019,DBLP:conf/cvpr/PanYLM20,DBLP:conf/cvpr/CorniaSBC20,DBLP:conf/aaai/JiLSCLW0J21,DBLP:conf/aaai/LuoJSCWHLJ21}. Both adopt a pre-trained CNN or Faster R-CNN as the encoder to encode the image into grid- or region-level features, where the former models adopt a Long Short-Term Memory network (LSTM) \cite{DBLP:journals/neco/HochreiterS97} as the decoder and the latter models adopt the Transformer \cite{DBLP:conf/nips/VaswaniSPUJGKP17} as the decoder to generate the description word by word. Earlier works used a pre-trained CNN, e.g., VGG-16 \cite{DBLP:journals/corr/SimonyanZ14a} or ResNet-101 \cite{DBLP:conf/cvpr/HeZRS16}, as the encoder to encode the image into fixed-length grid-level features, and then LSTM with an attention mechanism was applied to them to generate captions \cite{DBLP:conf/icml/XuBKCCSZB15,DBLP:conf/cvpr/RennieMMRG17}. \cite{DBLP:conf/cvpr/00010BT0GZ18} first introduced Faster R-CNN \cite{DBLP:conf/nips/RenHGS15} into image captioning to extract region-level features more in line with human visual habits, which has become a typical pattern for extracting image features in subsequent works. All of the above models adopted LSTM as the decoder, which has shortcomings in training efficiency and expression ability. Recently, researchers began to explore the application of the Transformer in image captioning.
\cite{DBLP:conf/nips/HerdadeKBS19} proposed the Object Relation Transformer to introduce region spatial information. \cite{DBLP:conf/cvpr/PanYLM20} proposed the X-Linear attention block to capture the $2^{nd}$ order interactions between single- or multi-modal features, and integrated it into the Transformer encoder and decoder. \cite{DBLP:conf/cvpr/CorniaSBC20} designed a mesh-like connectivity in the decoder to exploit both low-level and high-level features from the encoder. \cite{DBLP:conf/aaai/LuoJSCWHLJ21} proposed a Dual-Level Collaborative Transformer (DLCT) to process both grid- and region-level features to realize their complementary advantages. Despite the outstanding performance of region-level features extracted by Faster R-CNN, the lack of fine-grained information at the region level and the time cost of Faster R-CNN pre-training are unavoidable problems. Furthermore, extracting region-level features is time-consuming, so most models are trained and evaluated directly on cached features instead of images, which makes it difficult to train image captioning models end-to-end from images to descriptions. \begin{figure*}[htb] \centering \includegraphics[width=15cm]{figures/overall-1.pdf} \caption{Overview of our proposed PureT model. We first extract image grid features $V_G$ using SwinTransformer. $v_g$ is calculated as the average pooling of $V_G$. Then $V_G$ and $v_g$ are refined into $V_G^N$ and $v_g^N$ through the Refining Encoder, composed of a stack of N blocks, and are fed into the Decoder to generate the description word by word.}\medskip \label{fig:overall} \end{figure*} \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{figures/sw-msa.pdf} \caption{Illustration of the regular window partitioning scheme and the shifted window partitioning scheme adopted in the refining encoder. The size of the input feature map is $H\times W = 12\times 12$.}\medskip \label{fig:sw-msa} \end{figure} \section{Model} The overall architecture of our PureT model is shown in Figure~\ref{fig:overall}.
We adopt the widely used encoder-decoder framework, in which the encoder consists of a SwinTransformer backbone and a stack of N refining encoder blocks, and the decoder consists of a stack of N decoder blocks. The encoder is in charge of extracting grid features from the input image and refining them by capturing the intra-relationship between them. The decoder uses the refined image grid features to generate the caption word by word by capturing the inter-relationship between words and image grid features. \subsection{Attention Mechanism} The attention mechanism can be abstractly summarized as follows: \begin{equation} \operatorname{Attention}(q, k, v) = f_{sim}(q, k)v \end{equation} where $f_{sim}(\cdot)$ is a function used to compute the similarity scores between queries ($q$) and keys ($k$). The output of the attention mechanism is the weighted sum of values ($v$) based on the similarity scores. In our model, Multi-head Self Attention (MSA) \cite{DBLP:conf/nips/VaswaniSPUJGKP17} and its variants Window MSA / Shifted Window MSA (W-MSA / SW-MSA), proposed by SwinTransformer \cite{liu2021Swin}, are used, where MSA is adopted in the decoder to model the intra-relationship of the word sequence and the inter-relationship between words and grid features, and W-MSA / SW-MSA are adopted in the encoder to model the intra-relationship of image grid features. The above three attention modules use $\operatorname{Softmax}(\cdot)$ as the similarity scoring function, which can be formulated as follows: \begin{equation} \operatorname{Attention}(q, k, v) = \operatorname{Softmax}\left(\frac{qk^\mathrm{T}}{\sqrt{d_k}}\right)v \end{equation} where $d_k$ is the dimension of $k$.
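A minimal numpy sketch of the scaled dot-product attention defined above (shapes are illustrative; the learned projection matrices are omitted):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Softmax(q k^T / sqrt(d_k)) v: similarity scores between queries
    # and keys, used as weights for a sum over the values.
    d_k = k.shape[-1]
    return softmax(q @ k.T / np.sqrt(d_k)) @ v

q = np.random.randn(5, 64)   # 5 query tokens, d_k = 64
k = np.random.randn(7, 64)   # 7 key/value tokens
v = np.random.randn(7, 64)
out = attention(q, k, v)     # one weighted sum of values per query
```

With identical (e.g. all-zero) scores, the softmax weights are uniform, so the output is simply the mean of the values.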
\subsubsection{MSA} MSA can be formulated as follows: \begin{equation} \begin{split} & \operatorname{MSA}(Q, K, V) = \operatorname{Concat}(head_1, \ldots, head_h) \\ & head_i = \operatorname{Attention}(Q^i, K^i, V^i), i = 1,2,\ldots,h \end{split} \end{equation} where $h$ is the number of heads. $Q^i, K^i$ and $V^i$ are the $i$-th slice of $Q, K$ and $V$ respectively, which can be formulated as follows: \begin{align} \bigstar = \operatorname{Concat}(\bigstar^1, \ldots, \bigstar^i, \ldots, \bigstar^h) \end{align} where $\bigstar \in \mathbb{R}^{L_\bigstar \times D_\bigstar}$ and $\bigstar^i \in \mathbb{R}^{L_\bigstar \times \frac{D_\bigstar}{h}}$ ($\bigstar$ refers to $Q, K$ and $V$), and $L_\bigstar$ and $D_\bigstar$ are the corresponding length and dimension. In the $i$-th head of MSA, each token of the query $Q^i$ calculates its similarity with all tokens of the key $K^i$, and performs a weighted sum over all tokens of the value $V^i$ to obtain the corresponding output. Therefore, MSA can be regarded as a global attention mechanism. \subsubsection{W-MSA and SW-MSA} To address the quadratic complexity caused by the global computation of MSA, SwinTransformer proposed W-MSA and SW-MSA to compute self-attention within local windows \cite{liu2021Swin}. In this paper, both W-MSA and SW-MSA are used in the encoder, where the inputs $Q, K$ and $V$ all come from image grid features and therefore have the same length $L = H\times W$ and dimension $D$.
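The head-splitting in MSA above can be sketched as follows; this numpy illustration omits the learned per-head projections and the output projection for brevity:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    return softmax(q @ k.T / np.sqrt(k.shape[-1])) @ v

def msa(Q, K, V, h):
    # Split the feature dimension into h slices (the Q^i, K^i, V^i above),
    # attend within each head, then concatenate the head outputs.
    heads = [attention(q, k, v)
             for q, k, v in zip(np.split(Q, h, axis=-1),
                                np.split(K, h, axis=-1),
                                np.split(V, h, axis=-1))]
    return np.concatenate(heads, axis=-1)

X = np.random.randn(10, 512)   # 10 tokens, D = 512
out = msa(X, X, X, h=8)        # each head attends over D/h = 64 dims
```

With a single head, this reduces exactly to plain attention over the full dimension.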
Compared with MSA, W-MSA and SW-MSA first partition the inputs $Q, K$ and $V$ into several windows, and then apply MSA separately in each window. Figure~\ref{fig:sw-msa} illustrates the regular window partitioning scheme and the shifted window partitioning scheme of W-MSA and SW-MSA respectively. Adding SW-MSA after W-MSA aims to remedy the lack of connections across windows in the W-MSA module and further improve the modeling ability. W-MSA and SW-MSA can be formulated as follows: \begin{align} &\operatorname{(S)W-MSA}(Q, K, V) = \operatorname{Merge}(window_1, \ldots, window_w) \notag\\ &window_i = \operatorname{MSA}(Q_W^{i}, K_W^{i}, V_W^{i}), i=1, 2, \ldots, w \end{align} where $w$ is the number of windows and $\operatorname{Merge}(\cdot)$ is the reverse operation of the regular/shifted window partitioning scheme. $Q_W^i, K_W^i$ and $V_W^i$ are the $i$-th window of $Q, K$ and $V$ respectively, which can be formulated as follows: \begin{align} \bigstar = &\operatorname{Merge}\left(\bigstar_W^1, \ldots, \bigstar_W^i, \ldots, \bigstar_W^w\right) \end{align} where $\bigstar \in \mathbb{R}^{L \times D}$ and $\bigstar_W^i \in \mathbb{R}^{\frac{L}{w} \times D}$ ($\bigstar$ refers to $Q, K$ and $V$). \subsection{Encoder} Different from most existing models, we first employ SwinTransformer \cite{liu2021Swin} instead of a pre-trained CNN or Faster R-CNN as the backbone encoder to extract a set of grid features $V_G = \{v_1, v_2, \ldots, v_m\}$ from the given input image as the initial visual features, where $v_i \in \mathbb{R}^D$, $D$ is the embedding dimension of each grid feature, and $m$ is the number of grid features ($m = 12\times 12$ in this paper).
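The regular window partitioning and its $\operatorname{Merge}$ inverse can be sketched as follows. This numpy illustration uses the paper's $12\times 12$ grid; the shifted scheme is emulated here by a simple roll of the feature map, whereas Swin's actual SW-MSA uses a cyclic shift together with attention masks:

```python
import numpy as np

def window_partition(x, ws):
    # Split an (H, W, D) grid into non-overlapping ws x ws windows,
    # each flattened to ws*ws tokens for per-window MSA.
    H, W, D = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, D)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws * ws, D)

def window_merge(wins, ws, H, W):
    # Inverse of window_partition (the Merge operation above).
    D = wins.shape[-1]
    x = wins.reshape(H // ws, W // ws, ws, ws, D)
    return x.transpose(0, 2, 1, 3, 4).reshape(H, W, D)

x = np.random.randn(12, 12, 512)   # the 12 x 12 grid used in this paper
wins = window_partition(x, ws=6)   # 4 windows of 6*6 = 36 tokens each
back = window_merge(wins, 6, 12, 12)
# The shifted scheme offsets the partition before windowing, e.g.:
shifted = window_partition(np.roll(x, shift=(-3, -3), axis=(0, 1)), ws=6)
```

Merging after partitioning recovers the original grid exactly, which is what lets W-MSA/SW-MSA slot into the encoder without changing feature shapes.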
After the grid features $V_G$ are extracted, we follow the standard Transformer encoder \cite{DBLP:conf/nips/VaswaniSPUJGKP17} to construct a refining encoder that enhances the grid features by capturing the intra-relationship between them. Furthermore, inspired by \cite{DBLP:conf/aaai/JiLSCLW0J21}, we calculate the mean pooling of the grid features $v_g = \frac{1}{m}\sum_{i=1}^mv_i$ as the initial global feature and introduce it into W-MSA and SW-MSA. Specifically, when applying MSA in each window, the global feature is added to the keys $k$ and values $v$ as an extra token. Meanwhile, we also refine the global feature by using it as an extra query $q$ token and applying MSA over all grid features. As shown in Figure~\ref{fig:overall}, the refining encoder is composed of $N$ blocks stacked in sequence ($N=3$ in this paper), and each block consists of a W-MSA or SW-MSA module with a feed-forward layer, in which W-MSA and SW-MSA are used alternately. The $l$-th block can be formulated as follows: \begin{align} \hat{V}_G^l = &\operatorname{LayerNorm}\left(V_G^{l-1} + \left.\operatorname{(S)W-MSA}\left(W_Q^lV_G^{l-1}, \right.\right.\right.
\notag\\ &\left.\left.W_K^l\left[V_G^{l-1}; v_g^{l-1}\right]_s, W_V^l\left[V_G^{l-1}; v_g^{l-1}\right]_s\right)\right) \\ \hat{v}_g^l = &\operatorname{LayerNorm}(v_g^{l-1} + \operatorname{MSA}\left(W_Q^lv_g^{l-1}, \notag \right.\\ &\left.\left.W_K^l[V_G^{l-1}; v_g^{l-1}]_s, W_V^l[V_G^{l-1}; v_g^{l-1}]_s\right)\right) \\ V_G^l = &\operatorname{LayerNorm}\left(\hat{V}_G^l + \operatorname{FeedForward}(\hat{V}_G^l)\right)\\ v_g^l = &\operatorname{LayerNorm}\left(\hat{v}_g^l + \operatorname{FeedForward}(\hat{v}_g^l)\right) \end{align} where $V_G^{l-1}$ and $v_g^{l-1}$ denote the output grid features and global feature of block $l-1$ respectively, which are used as the input of block $l$, with $V_G^0 = V_G$ and $v_g^0 = v_g$; $W_Q^l,W_K^l,W_V^l\in \mathbb{R}^{D\times D}$ are learnt parameter matrices; $[V_G^{l-1} ;v_g^{l-1}]_s\in \mathbb{R}^{(m+1)\times D}$ denotes the stack operation of the grid features and the global feature; and $\operatorname{FeedForward}(\cdot)$ consists of two linear layers with a $\operatorname{ReLU}$ activation function in between, as formulated below: \begin{align} \operatorname{FeedForward}\left(x\right) = W_2\operatorname{ReLU}\left(W_1x\right) \end{align} where $W_1\in\mathbb{R}^{(4D)\times D}$ and $W_2\in\mathbb{R}^{D\times(4D)}$ are the learnt parameter matrices of the two linear layers respectively. Note that the parameters of the refining process are shared between the grid features and the global feature. The refined grid features $V_G^N$ and refined global feature $v_g^N$ output by block $N$ are fed into the decoder as the visual content. \subsection{Decoder} The decoder aims to generate the output caption word by word conditioned on the refined global and grid features from the encoder. The interaction between modalities occurs in this part.
As shown in Figure~\ref{fig:overall}, the decoder is composed of $N$ blocks stacked in sequence ($N=3$ in this paper), where each block can be divided into four modules: 1) the Pre-Fusion Module, which contains the pre-fusion process between the previously generated words and the refined global feature, and can be regarded as the first inter-modal interaction between natural language and visual content; 2) the Language Masked MSA Module, which can be regarded as the intra-modal interaction within the generated words; 3) the Cross MSA Module, which contains an MSA module with a FeedForward layer, and can be regarded as the second inter-modal interaction between visual content and natural language; 4) the Word Generation Module, which contains a linear layer with a softmax function. \subsubsection{Pre-Fusion Module} Most recent Transformer-based models only use image region or grid features without a global feature, where the interaction between multi-modal features only occurs in the cross attention between the generated words and visual features before generating the next word. The lack of interaction with global contextual information limits the reasoning capability to a certain extent.
Therefore, we construct a pre-fusion module to fuse the refined global feature $v_g^N$ into the input of each block of the decoder, which can be regarded as the first multi-modal interaction to capture global visual context information and can be formulated as follows: \begin{align} X_{1:t-1}^{p, l} = \operatorname{Layer}&\operatorname{Norm}\left(X_{1:t-1}^{l-1} + \right.\notag\\ &\left.\operatorname{ReLU}\left(W_f\left[X_{1:t-1}^{l-1}; v_g^N\right]\right) \right) \end{align} where $X_{1:t-1}^{l-1}\in \mathbb{R}^{(t-1)\times D}$ denotes the output of block $l-1$ and is used as the input of block $l$ at the $t$-th timestep, $\left[X_{1:t-1}^{l-1}; v_g^N\right]\in \mathbb{R}^{(t-1)\times 2D}$ indicates concatenation, and $W_f\in \mathbb{R}^{D\times 2D}$ contains the learnt parameters of a linear layer; the output $X_{1:t-1}^{p,l}\in \mathbb{R}^{(t-1)\times D}$ is fed into the Language Masked MSA Module. Note that the initial input at the first block comes from the previously generated words: \begin{align} X_{1:t-1}^0=W_ex_{1:t-1} \end{align} where $x_{1:t-1}$ are the one-hot encodings of the words generated before the $t$-th timestep, and $W_e\in\mathbb{R}^{D\times|\Sigma|}$ is the word embedding matrix of the vocabulary $\Sigma$. \subsubsection{Language Masked MSA Module} This module aims to model the intra-modal relationship (words-to-words) within $X_{1:t-1}^{p,l}$, which can be formulated as follows: \begin{align} \tilde{X}_{t-1}^l = &\operatorname{LayerNorm}\left( X_{t-1}^{p,l} + \operatorname{MSA}\left(W_Q^{m,l}X_{t-1}^{p,l}, \right.\right.\notag\\ &\left.\left. W_K^{m,l}X_{1:t-1}^{p,l}, W_V^{m,l}X_{1:t-1}^{p,l} \right) \right) \end{align} where $W_Q^{m,l}, W_K^{m,l}, W_V^{m,l} \in \mathbb{R}^{D\times D}$ are learnt parameters, and $X_{t-1}^{p,l}$ indicates the embedding vector corresponding to the word generated at the $(t-1)$-th timestep, which means that each word is only allowed to attend to earlier generated words.
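A minimal numpy sketch of the pre-fusion step above (random weights stand in for the learnt $W_f$, and LayerNorm is implemented without learnable scale/shift for brevity):

```python
import numpy as np

def layer_norm(h, eps=1e-6):
    mu = h.mean(axis=-1, keepdims=True)
    sd = h.std(axis=-1, keepdims=True)
    return (h - mu) / (sd + eps)

def pre_fusion(X, v_g, W_f):
    # Concatenate each word embedding with the global feature -> (T, 2D),
    # project back to D through a ReLU linear layer, then residual + LayerNorm.
    T, _ = X.shape
    fused = np.concatenate([X, np.tile(v_g, (T, 1))], axis=-1)
    return layer_norm(X + np.maximum(0.0, fused @ W_f.T))

D = 512
X = np.random.randn(4, D)                # embeddings of 4 generated words
v_g = np.random.randn(D)                 # refined global feature v_g^N
W_f = np.random.randn(D, 2 * D) * 0.01   # stand-in for the learnt matrix
out = pre_fusion(X, v_g, W_f)
```

Each word embedding thus carries global visual context before any self-attention is applied.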
\begin{table*} \begin{center} \footnotesize \begin{tabular}{lcccccccccccc} \hline \multirow{2}{*}{Models} & \multicolumn{6}{c}{Single Model} & \multicolumn{6}{c}{Ensemble Model} \\ \cmidrule(r){2-7} \cmidrule(r){8-13} & \multicolumn{1}{c}{B-1} & \multicolumn{1}{c}{B-4} & \multicolumn{1}{c}{M} & \multicolumn{1}{c}{R} & \multicolumn{1}{c}{C} & \multicolumn{1}{c}{S} & \multicolumn{1}{c}{B-1} & \multicolumn{1}{c}{B-4} & \multicolumn{1}{c}{M} & \multicolumn{1}{c}{R} & \multicolumn{1}{c}{C} & \multicolumn{1}{c}{S} \\ \hline \multicolumn{1}{c}{} & \multicolumn{12}{c}{CNN-LSTM based models}\\ \hline SCST & - & 34.2 & 26.7 & 55.7 & 114.0 & - & - & 35.4 & 27.1 & 56.6 & 117.5 & -\\ RFNet & 79.1 & 36.5 & 27.7 & 57.3 & 121.9 & 21.2 & 80.4 & 37.9 & 28.3 & 58.3 & 125.7 & 21.7 \\ Up-Down & 79.8 & 36.3 & 27.7 & 56.9 & 120.1 & 21.4 & - & - & - & - & - & -\\ GCN-LSTM & 80.5 & 38.2 & 28.5 & 58.3 & 127.6 & 22.0 & 80.9 & 38.3 & 28.6 & 58.5 & 128.7 & 22.1\\ AoANet & 80.2 & 38.9 & 29.2 & 58.8 & 129.8 & 22.4 & 81.6 & 40.2 & 29.3 & 59.4 & 132.0 & 22.8\\ X-LAN & 80.8 & 39.5 & 29.5 & 59.2 & 132.0 & 23.4 & 81.6 & 40.3 & 29.8 & 59.6 & 133.7 & 23.6 \\ \hline \multicolumn{1}{c}{} & \multicolumn{12}{c}{CNN-Transformer based models}\\ \hline ORT & 80.5 &38.6 & 28.7 & 58.4 & 128.3 & 22.6 & - & - & - & - & - & -\\ X-Transformer & 80.9 & 39.7 & 29.5 & 59.1 & 132.8 & 23.4 & 81.7 & 40.7 & 29.9 & 59.7 & 135.3 & 23.8\\ $\mathcal{M}^2$ Transformer & 80.8 & 39.1 & 29.2 & 58.6 & 131.2 & 22.6 & 82.0 & 40.5 & 29.7 & 59.5 & 134.5 & 23.5 \\ RSTNet & 81.8 & 40.1 & 29.8 & 59.5 & 135.6 & 23.3 & - & - & - & - & - & - \\ GET & 81.5 & 39.5 & 29.3 & 58.9 & 131.6 & 22.8 & 82.1 & 40.6 & 29.8 & 59.6 & 135.1 & 23.8 \\ DLCT & 81.4 & 39.8 & 29.5 & 59.1 & 133.8 & 23.0 & 82.2 & 40.8 & 29.9 & 59.8 & 137.5 & 23.3 \\ \hline PureT & \textbf{82.1} & \textbf{40.9} & \textbf{30.2} & \textbf{60.1} & \textbf{138.2} & \textbf{24.2} & \textbf{83.4} & \textbf{42.1} & \textbf{30.4} & \textbf{60.8} & \textbf{141.0} & \textbf{24.3} \\ \hline 
\end{tabular} \caption{Offline evaluation results of our proposed model and other existing state-of-the-art models on the MSCOCO ``Karpathy'' test split, where B-$N$, M, R, C and S denote BLEU-$N$, METEOR, ROUGE-L, CIDEr and SPICE respectively.} \label{table:offline} \end{center} \end{table*} \subsubsection{Cross MSA Module} This module aims to model the inter-modal relationship (words-to-vision) between $\tilde{X}_{1:t-1}^l$ and $V_G^N$, which can be regarded as the second multi-modal interaction to capture local visual context information and can be formulated as follows: \begin{align} \hat{X}_{t-1}^l = &\operatorname{LayerNorm}\left( \tilde{X}_{t-1}^l + \operatorname{MSA}\left( W_Q^{c,l}\tilde{X}_{t-1}^l, \right.\right.\notag\\ &\left.\left.W_K^{c,l}V_G^N, W_V^{c,l}V_G^N\right) \right) \\ X_{t-1}^l = &\operatorname{LayerNorm}(\hat{X}_{t-1}^l + \operatorname{FeedForward}(\hat{X}_{t-1}^l) ) \end{align} where $W_Q^{c,l}, W_K^{c,l}, W_V^{c,l} \in \mathbb{R}^{D\times D}$ are learnt parameters, $\tilde{X}_{t-1}^l$ from the Language Masked MSA Module is fed into the MSA as the query, and the refined grid features $V_G^N$ from the last block of the encoder are fed into the MSA as keys and values. \subsubsection{Word Generation Module} Given the output $X_{1:t-1}^N$ of the last decoder block, the conditional distribution over the vocabulary $\Sigma$ is given by: \begin{align} p(x_t|x_{1:t-1}) = \operatorname{Softmax}(W_xX_{t-1}^N) \end{align} where $W_x\in\mathbb{R}^{|\Sigma|\times D}$ is a learnt parameter matrix. \subsection{Objective Functions} We first optimize our model by applying the cross entropy (XE) loss as the objective function: \begin{equation} L_{XE}(\theta)=-\sum_{t=1}^T\log(p_\theta(y_t^*|y_{1:t-1}^*)) \end{equation} where $y_{1:T}^*$ is the target ground truth sequence, and $\theta$ denotes the parameters of our model.
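The XE objective above reduces to the negative sum of the log-probabilities the model assigns to the ground-truth words; a minimal sketch with toy numbers:

```python
import numpy as np

def xe_loss(log_probs):
    # L_XE = - sum_t log p_theta(y_t* | y_{1:t-1}*): the negative
    # log-likelihood of the ground-truth caption under the model.
    return -np.sum(log_probs)

# Toy log-probabilities the model assigns to three ground-truth words
log_p = np.log(np.array([0.5, 0.25, 0.8]))
loss = xe_loss(log_p)
```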
Then, we adopt the self-critical sequence training (SCST) strategy \cite{DBLP:conf/cvpr/RennieMMRG17} to optimize the CIDEr \cite{DBLP:conf/cvpr/VedantamZP15} metric: \begin{equation} L_R(\theta)=-\operatorname{\textbf{E}}_{y_{1:T}\sim p_\theta}[r(y_{1:T})] \end{equation} where $r(\cdot)$ is the CIDEr score. The gradient of $L_R$ can be approximated as follows: \begin{equation} \nabla_\theta L_R(\theta)\approx -\left(r(y_{1:T}^s)-r(\hat{y}_{1:T})\right)\nabla_\theta \log p_\theta(y_{1:T}^s) \end{equation} where $y_{1:T}^s$ is a sampled caption and $r(\hat{y}_{1:T})$ is the score of the caption greedily decoded by the current model. \begin{table*} \begin{center} \footnotesize \begin{tabular}{lcccccccccccccc} \hline \multirow{2}{*}{Models} & \multicolumn{2}{c}{BLEU-1} & \multicolumn{2}{c}{BLEU-2} & \multicolumn{2}{c}{BLEU-3} & \multicolumn{2}{c}{BLEU-4} & \multicolumn{2}{c}{METEOR} & \multicolumn{2}{c}{ROUGE-L} & \multicolumn{2}{c}{CIDEr} \\ \cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7} \cmidrule(r){8-9} \cmidrule(r){10-11} \cmidrule(r){12-13} \cmidrule(r){14-15} & \multicolumn{1}{c}{c5} & \multicolumn{1}{c}{c40} & \multicolumn{1}{c}{c5} & \multicolumn{1}{c}{c40} & \multicolumn{1}{c}{c5} & \multicolumn{1}{c}{c40} & \multicolumn{1}{c}{c5} & \multicolumn{1}{c}{c40} & \multicolumn{1}{c}{c5} & \multicolumn{1}{c}{c40} & \multicolumn{1}{c}{c5} & \multicolumn{1}{c}{c40} & \multicolumn{1}{c}{c5} & \multicolumn{1}{c}{c40} \\ \hline SCST & 78.1 & 93.7 & 61.9 & 86.0 & 47.0 & 75.9 & 35.2 & 64.5 & 27.0 & 35.5 & 56.3 & 70.7 & 114.7 & 116.7 \\ GCN-LSTM & 80.8 & 95.2 & 65.5 & 89.3 & 50.8 & 80.3 & 38.7 & 69.7 & 28.5 & 37.6 & 58.5 & 73.4 & 125.3 & 126.5 \\ Up-Down & 80.2 & 95.2 & 64.1 & 88.8 & 49.1 & 79.4 & 36.9 & 68.5 & 27.6 & 36.7 & 57.1 & 72.4 & 117.9 & 120.5 \\ SGAE & 81.0 & 95.3 & 65.6 & 89.5 & 50.7 & 80.4 & 38.5 & 69.7 & 28.2 & 37.2 & 58.6 & 73.6 & 123.8 & 126.5 \\ AoANet & 81.0 & 95.0 & 65.8 & 89.6 & 51.4 & 81.3 & 39.4 & 71.2 & 29.1 & 38.5 & 58.9 & 74.5 & 126.9 & 129.6\\
X-Transformer & 81.9 & 95.7 & 66.9 & 90.5 & 52.4 & 82.5 & 40.3 & 72.4 & 29.6 & 39.2 & 59.5 & 75.0 & 131.1 & 133.5 \\ $\mathcal{M}^2$ Transformer & 81.6 & 96.0 & 66.4 & 90.8 & 51.8 & 82.7 & 39.7 & 72.8 & 29.4 & 39.0 & 59.2 & 74.8 & 129.3 & 132.1 \\ RSTNet & 82.1 & 96.4 & 67.0 & 91.3 & 52.2 & 83.0 & 40.0 & 73.1 & 29.6 & 39.1 & 59.5 & 74.6 & 131.9 & 134.0 \\ GET & 81.6 & 96.1 & 66.5 & 90.9 & 51.9 & 82.8 & 39.7 & 72.9 & 29.4 & 38.8 & 59.1 & 74.4 & 130.3 & 132.5 \\ DLCT & 82.4 & \textbf{96.6} & 67.4 & 91.7 & 52.8 & 83.8 & 40.6 & 74.0 & 29.8 & 39.6 & 59.8 & 75.3 & 133.3 & 135.4 \\ \hline PureT & \textbf{82.8} & 96.5 & \textbf{68.1} & \textbf{91.8} & \textbf{53.6} & \textbf{83.9} & \textbf{41.4} & \textbf{74.1} & \textbf{30.1} & \textbf{39.9} & \textbf{60.4} & \textbf{75.9} & \textbf{136.0} & \textbf{138.3} \\ \hline \end{tabular} \caption{Online evaluation results of our proposed model and other existing state-of-the-art models on MSCOCO.} \label{table:online} \end{center} \end{table*} \section{Experiments} \subsection{Dataset and Evaluation Metrics} We conduct experiments on the MSCOCO 2014 dataset \cite{DBLP:conf/eccv/LinMBHPRDZ14}, which contains 123287 images (82783 for training and 40504 for validation), each annotated with 5 reference captions. In this paper, we follow the ``Karpathy'' split \cite{DBLP:journals/pami/KarpathyF17} to redivide MSCOCO into 113287 images for training, 5000 images for validation and 5000 images for offline evaluation. Besides, MSCOCO also provides 40775 images for online testing. For the training process, we convert all training captions to lower case, drop words that occur fewer than 6 times, and collect the remaining 9487 words as our vocabulary $\Sigma$.
For fair evaluation, we adopt five widely used metrics to evaluate the quality of the generated captions: BLEU \cite{DBLP:conf/acl/PapineniRWZ02}, METEOR \cite{DBLP:conf/wmt/LavieA07}, ROUGE-L \cite{lin-2004-rouge}, CIDEr \cite{DBLP:conf/cvpr/VedantamZP15}, and SPICE \cite{DBLP:conf/eccv/AndersonFJG16}. \subsection{Experimental Settings} We set the model embedding size $D$ to 512, the number of Transformer heads to 8, and the number of blocks $N$ of both the refining encoder and the decoder to 3. For the training process, we first train our model under the XE loss $L_{XE}$ for 20 epochs with a batch size of 10 and 10,000 warmup steps; we then train it under $L_R$ for another 30 epochs with a fixed learning rate of $5\times 10^{-6}$. We adopt the Adam \cite{DBLP:journals/corr/KingmaB14} optimizer in both stages, and the beam size is set to 5 during validation and evaluation. \subsection{Comparisons with State-of-The-Art Models} \subsubsection{Offline Evaluation} Table~\ref{table:offline} reports the performances of some existing state-of-the-art models and our proposed model on the MSCOCO offline test split. The compared models include: SCST \cite{DBLP:conf/cvpr/RennieMMRG17}, RFNet \cite{DBLP:conf/eccv/JiangMJLZ18}, Up-Down \cite{DBLP:conf/cvpr/00010BT0GZ18}, GCN-LSTM \cite{DBLP:conf/eccv/YaoPLM18}, AoANet \cite{huang2019attention} and X-LAN \cite{DBLP:conf/cvpr/PanYLM20}; ORT \cite{DBLP:conf/nips/HerdadeKBS19}, X-Transformer \cite{DBLP:conf/cvpr/PanYLM20}, $\mathcal{M}^2$ Transformer \cite{DBLP:conf/cvpr/CorniaSBC20}, RSTNet \cite{DBLP:conf/cvpr/ZhangSLJZWHJ21}, GET \cite{DBLP:conf/aaai/JiLSCLW0J21} and DLCT \cite{DBLP:conf/aaai/LuoJSCWHLJ21}. We divide these models into CNN-LSTM based models and CNN-Transformer based models according to the different methods adopted in the decoder. For fair comparisons, we report the results of a single model and of an ensemble of 4 models after SCST training.
As shown in Table~\ref{table:offline}, both our single model and our ensemble of 4 models achieve the best performance on all metrics. In the single-model case, the CIDEr score of our model reaches 138.2\%, an advancement of 2.6\% and 4.4\% over the strong competitors RSTNet and DLCT respectively. Meanwhile, our model achieves improvements of over 0.6\% over RSTNet, and of over 1.0\% over DLCT, in terms of BLEU-4, ROUGE-L and SPICE. In the ensemble case, our model also achieves the best performance and surpasses all other models by more than 1.0\% on all metrics except METEOR. In particular, the CIDEr score of our ensemble model reaches 141.0\%, an advancement of 3.5\% and 5.9\% over DLCT and GET respectively. In general, the significant improvements on all metrics (especially CIDEr) demonstrate the advantage of our proposed model. In addition, compared with models that use region-level features or both region- and grid-level features, our model has a relatively more balanced computational cost because it avoids predicting the coordinates of object regions. Moreover, our model can be trained end-to-end, which allows us to explore it in more practical scenarios. \subsubsection{Online Evaluation} As shown in Table~\ref{table:online}, we also report the performance of our model with 5 reference captions (c5) and 40 reference captions (c40) on the official MSCOCO online test server. Compared with the other state-of-the-art models, our model achieves the best scores on all metrics, except for a BLEU-1 (c40) score 0.1\% lower than DLCT. Notably, the CIDEr (c5) and CIDEr (c40) scores of our model reach 136.0\% and 138.3\%, advancements of 2.7\% and 2.9\% with respect to the best previous performer, DLCT.
\begin{figure}[htb] \centering \includegraphics[width=8.5cm]{figures/captions_examples.pdf} \caption{Examples of captions generated by the standard Transformer, $\mathcal{M}^2$ Transformer and our PureT, together with ground-truths.}\medskip \label{fig:cap_example} \end{figure} \begin{figure*}[htb] \centering \includegraphics[width=17cm]{figures/attention_vis.pdf} \caption{Visualization of the attention heatmap on the image along the caption generation process. For each generated word, we show the image with different brightness to represent the differences in attention weights.}\medskip \label{fig:att_vis} \end{figure*} \subsection{Ablation Study} We conduct several ablation studies to quantify the influence of the different modules in our model. \subsubsection{Influence of W-MSA and SW-MSA} To quantify the influence of W-MSA and SW-MSA in our Refining Encoder, we ablate our model with different configurations of window size $ws$ and shift size $ss$, as shown in Table~\ref{table:ablation_1}. The number of refining encoder and decoder blocks is set to 3. Note that the input $V_G\in \mathbb{R}^{m\times D}$ of the Refining Encoder has a size of $m\operatorname{=12\times 12}$ in this paper. W-MSA and SW-MSA degenerate into MSA when $ws\operatorname{=12}$, and SW-MSA degenerates into W-MSA when $ss\operatorname{=0}$. It can be seen that the model with only MSA ($ws\operatorname{=12},ss\operatorname{=0}$) performs better than the model with only W-MSA ($ws\operatorname{=6},ss\operatorname{=0}$) because W-MSA lacks connections across windows. However, the model combining W-MSA and SW-MSA ($ws\operatorname{=6},ss\operatorname{=3}$) improves on both models above in all metrics.
\begin{table} \begin{center} \footnotesize \begin{tabular}{rrcccccc} \hline $ws$ & $ss$ & B-1 & B-4 & M & R & C & S \\ \hline 12 & 0 & 82.0 & 40.3 & 29.9 & 59.9 & 137.5 & 23.8 \\ 6 & 0 & 81.8 & 40.1 & 29.9 & 59.7 & 136.8 & 23.8 \\ 6 & 3 & \textbf{82.1} & \textbf{40.9} & \textbf{30.2} & \textbf{60.1} & \textbf{138.2} & \textbf{24.2} \\ \hline \end{tabular} \caption{Performance comparison of different configurations of window size $ws$ and shift size $ss$.} \label{table:ablation_1} \end{center} \end{table} \subsubsection{Influence of Pre-Fusion module} \begin{table} \begin{center} \footnotesize \begin{tabular}{lcccccc} \hline Models & B-1 & B-4 & M & R & C & S \\ \hline Transformer & 81.6 & 39.8 & 29.9 & 59.6 & 136.4 & 23.8 \\ \makecell[l]{Transformer \\\quad\quad + p-f.} & 82.0 & 40.3 & 29.9 & 59.9 & 137.5 & 23.8 \\ \hline PureT (w/o p-f.) & 81.8 & 40.3 & 30.0 & 59.9 & 137.9 & 24.0 \\ PureT & \textbf{82.1} & \textbf{40.9} & \textbf{30.2} & \textbf{60.1} & \textbf{138.2} & \textbf{24.2} \\ \hline \end{tabular} \caption{Performance comparison with / without Pre-Fusion for the standard Transformer and our proposed PureT.} \label{table:ablation_2} \end{center} \end{table} To demonstrate the effectiveness of the Pre-Fusion module in our Decoder, we remove the Pre-Fusion module from our PureT model and compare it with the full model, as shown in rows 4 and 5 of Table~\ref{table:ablation_2}. It can be seen that the Pre-Fusion module improves the performance on all metrics. Furthermore, we construct the standard Transformer (3 blocks of encoder/decoder) as the baseline model, which already reaches an excellent performance, as shown in row 1 of Table~\ref{table:ablation_2}. We then extend the baseline model by adding the Pre-Fusion module (equivalent to the model in row 1 of Table~\ref{table:ablation_1}), which also improves the performance on all metrics.
\subsubsection{Influence of the number of stacked blocks} We also conduct several experiments to evaluate the influence of the number of Refining Encoder and Decoder blocks. As shown in Table~\ref{table:ablation_3}, models with 2 or more blocks achieve a significant improvement (more than 2.0\%) in CIDEr score compared with the model with 1 block. Note that the model with 4 blocks has a significant advantage in BLEU scores over the other models; however, considering the increase in model parameters and the sufficiently excellent performance of the model with 3 blocks, we set the number of blocks $N$ to 3 as the final configuration. Remarkably, even the model with only 1 block performs better than earlier state-of-the-art works (e.g. RSTNet, GET and DLCT) in Table~\ref{table:offline}, which further indicates the effectiveness of our model. \begin{table} \begin{center} \footnotesize \begin{tabular}{ccccccc} \hline Layer & B-1 & B-4 & M & R & C & S \\ \hline 1 & 81.8 & 40.2 & 29.7 & 59.5 & 135.8 & 23.5 \\ 2 & 81.8 & 40.5 & 30.0 & 59.9 & \textbf{138.2} & 23.9 \\ 3 & 82.1 & 40.9 & \textbf{30.2} & \textbf{60.1} & \textbf{138.2} & \textbf{24.2} \\ 4 & \textbf{82.7} & \textbf{41.1} & 30.0 & \textbf{60.1} & \textbf{138.2} & 24.0 \\ \hline \end{tabular} \caption{Performance comparison of different numbers of Refining Encoder and Decoder blocks.} \label{table:ablation_3} \end{center} \end{table} \subsubsection{Influence of different backbone} \begin{table*} \setcounter{table}{5} \begin{center} \footnotesize \begin{tabular}{lllclcccccccc} \hline Baseline Models & Backbone & Feat. Type & Feat.
Size & N & B-1 & B-2 & B-3 & B-4 & M & R & C & S \\ \hline \multirow{3}{*}{$\mathcal{M}^2$ Transformer} & ResNet-101 & Region & (10$-$100) & 3$^\dagger$ & 80.8 & - & - & 39.1 & 29.2 & 58.6 & 131.2 & 22.6 \\ & ResNeXt-101 & Grid & $7\times 7$ & 3$^\ddagger$ & 80.8 & - & - & 38.9 & 29.1 & 58.5 & 131.7 & 22.6 \\ & SwinTransformer & Grid & $12\times 12$ & 3 & 81.8 & 66.8 & \textbf{52.6} & 40.5 & 29.6 & 59.9 & 135.4 & 23.3 \\ \hline \multirow{4}{*}{X-Transformer} & ResNet-101 & Region & (10$-$100) & 6$^\dagger$ & 80.9 & 65.8 & 51.5 & 39.7 & 29.5 & 59.1 & 132.8 & 23.4 \\ & ResNeXt-101 & Grid & $7\times 7$ & 6$^\ddagger$ & 81.0 & - & - & 39.7 & 29.4 & 58.9 & 132.5 & 23.1 \\ & SwinTransformer & Grid & $12\times 12$ & 6 & 81.4 & 66.3 & 52.0 & 39.9 & 29.5 & 59.5 & 133.7 & 23.4 \\ & SwinTransformer & Grid & $12\times 12$ & 3 & 81.9 & 66.7 & 52.3 & 40.1 & 29.6 & 59.6 & 134.8 & 23.4 \\ \hline \multirow{4}{*}{\makecell[l]{standard \\Transformer}} & ResNet-101 & Region & (10$-$100) & 3 & 80.0 & 64.9 & 50.5 & 38.7 & 29.0 & 58.6 & 130.1 & 22.9 \\ & ResNeXt-101 & Grid & $7\times 7$ & 3$^\ddagger$ & 81.2 & - & - & 39.0 & 29.2 & 58.9 & 131.7 & 22.6 \\ & ResNeXt-101 & Grid & $12\times 12$ & 3 & 80.8 & 65.8 & 51.4 & 39.4 & 29.4 & 59.2 & 132.8 & 23.2 \\ & SwinTransformer & Grid & $12\times 12$ & 3 & 81.6 & 66.5 & 52.0 & 39.8 & 29.9 & 59.6 & 136.4 & 23.8 \\ \hline \multirow{3}{*}{PureT} & ResNeXt-101 & Grid & $12\times 12$ & 3 & 80.7 & 65.9 & 51.7 & 39.9 & 29.2 & 59.1 & 131.8 & 23.0 \\ & ViT & Grid & $12\times 12$ & 3 & 81.6 & 66.6 & 52.3 & 40.3 & 29.7 & 59.5 & 135.2 & 23.6 \\ & SwinTransformer & Grid & $12\times 12$ & 3 & \textbf{82.1} & \textbf{67.3} & 52.0 & \textbf{40.9} & \textbf{30.2} & \textbf{60.1} & \textbf{138.2} & \textbf{24.2} \\ \hline \end{tabular} \caption{Performance comparison of different configurations of backbone models. ResNet-101 and ResNeXt-101 indicate Faster R-CNN in conjunction with them respectively.
Region features extracted by ResNet-101 have an adaptive size of 10 to 100. Grid features extracted by ResNeXt-101 can be obtained at a size of $12\times 12$ or $7\times 7$ by average pooling as needed. Grid features (SwinTransformer) are extracted at a size of $12\times 12$. $N$ denotes the number of encoder and decoder blocks; superscript $\dagger$ indicates that the results are from the respective official papers, $\ddagger$ indicates that the results are from \cite{DBLP:conf/aaai/LuoJSCWHLJ21}, and the other results come from our experiments.} \label{table:ablation_exp} \end{center} \end{table*} To quantify the influence of the different features extracted by different backbone models, we adopt different image captioning models as baseline models and ablate them with different configurations of backbone models, as shown in Table~\ref{table:ablation_exp}. The baseline models include: $\mathcal{M}^2$ Transformer \cite{DBLP:conf/cvpr/CorniaSBC20}, X-Transformer \cite{DBLP:conf/cvpr/PanYLM20} and the standard Transformer \cite{DBLP:conf/nips/VaswaniSPUJGKP17}. The backbone models include: Faster R-CNN \cite{DBLP:conf/nips/RenHGS15} in conjunction with ResNet-101, as adopted in \cite{DBLP:conf/cvpr/00010BT0GZ18}; Faster R-CNN in conjunction with ResNeXt-101, as adopted in \cite{DBLP:conf/cvpr/JiangMRLC20}; ViT \cite{DBLP:conf/iclr/DosovitskiyB0WZ21}; and SwinTransformer \cite{liu2021Swin}. As we can see, grid features extracted by SwinTransformer achieve a significant performance improvement compared with region features extracted by ResNet-101 and grid features extracted by ResNeXt-101 and ViT. For $\mathcal{M}^2$ Transformer and X-Transformer, the ResNet-101 and ResNeXt-101 backbones have similar performance, while the SwinTransformer backbone comprehensively improves the scores of all metrics, boosting the CIDEr score of $\mathcal{M}^2$ Transformer by more than 3.7\% in particular.
Note that in X-Transformer the model with $N=3$ performs better than the one with $N=6$, which indicates the superiority of SwinTransformer in image captioning and allows us to explore smaller and more efficient models and apply them in more practical scenarios. For the standard Transformer, the SwinTransformer backbone reaches an excellent performance, even better than $\mathcal{M}^2$ Transformer and X-Transformer in METEOR, CIDEr and SPICE. For our PureT, the SwinTransformer backbone also achieves a better performance than ResNeXt-101. In general, across our extensive experiments, we find that CNN backbone models (e.g. Faster R-CNN in conjunction with ResNet-101 or ResNeXt-101) are more suitable for decoders based on LSTM or on Transformers with non-standard MSA (e.g. X-Transformer), while the SwinTransformer backbone is more suitable for decoders based on Transformers with standard MSA (e.g. $\mathcal{M}^2$ Transformer, the standard Transformer and our PureT). Therefore, we intend to explore a lighter and simpler Transformer-based model in our future work. \subsubsection{Influence of different Refining Encoder} \begin{table} \begin{center} \footnotesize \begin{tabular}{ccccccc} \hline Ref. Enc. & B-1 & B-4 & M & R & C & S \\ \hline w/o & 81.5 & 39.5 & 29.3 & 59.2 & 134.3 & 23.0 \\ $\mathcal{M}^2$ & 81.9 & 40.2 & 29.6 & 59.7 & 135.9 & 23.7 \\ X & 81.7 & 40.0 & 29.7 & 59.5 & 135.5 & 23.5 \\ PureT & \textbf{82.1} & \textbf{40.9} & \textbf{30.2} & \textbf{60.1} & \textbf{138.2} & \textbf{24.2} \\ \hline \end{tabular} \caption{Performance comparison of different Refining Encoder configurations.
w/o indicates deleting the Refining Encoder; $\mathcal{M}^2$ and X indicate replacing our Refining Encoder with the encoders of $\mathcal{M}^2$ Transformer and X-Transformer respectively.} \label{table:ablation_exp_1} \end{center} \end{table} To further quantify the influence of the Refining Encoder, we ablate it with different configurations, as shown in Table~\ref{table:ablation_exp_1}. We delete the Refining Encoder to confirm whether it is a necessary module, and replace our proposed Refining Encoder with the encoders of $\mathcal{M}^2$ Transformer and X-Transformer to verify its advantages. As we can see, even deleting the Refining Encoder achieves a good performance, better than most existing state-of-the-art models in Table~\ref{table:offline}. However, our proposed Refining Encoder, as well as the other encoders, brings a significant performance gain over deleting it, which shows the importance of a refining encoder. Our proposed Refining Encoder brings the maximum gain and achieves the best performance among all variants, which demonstrates its effectiveness and advantages. \subsection{Visualization Analysis} Figure~\ref{fig:cap_example} presents some example image captions generated by $\mathcal{M}^2$ Transformer (official model), the standard Transformer and our PureT, where $\mathcal{M}^2$ Transformer adopts Faster R-CNN, while the standard Transformer and PureT adopt SwinTransformer as the encoder. Generally, our PureT is able to capture additional fine-grained information and generate more accurate and descriptive captions. To qualitatively evaluate the effect of our PureT, we visualize the attention heatmap on the image along the caption generation process in Figure~\ref{fig:att_vis}. It can be observed that our model attends to the correct areas when generating words.
When generating nominal words such as ``zebras'', ``rainbow'', ``field'' and ``sky'', the attention heatmap correctly shifts to the body areas of the corresponding objects. In addition, our model focuses on the areas near the zebras' heads when generating ``grazing'', which correctly captures the semantic information and confirms the advantages of our model. \section{Conclusion} In this paper, we propose a pure Transformer-based model, which adopts SwinTransformer as the backbone encoder and can easily be trained end-to-end from images to descriptions. Furthermore, we construct a refining encoder to refine both the image grid features and the global feature with mutual guidance between them, which realizes the complementary advantages of local and global attention. We also fuse the refined global feature with the previously generated words in the decoder to enhance the multi-modal interaction, which further improves the modeling capability. Experimental results on the MSCOCO dataset demonstrate that our proposed model achieves a new state-of-the-art performance. \section{Introduction} Image captioning aims to automatically describe the visual content of a given image with fluent and credible sentences. It is a typical multi-modal learning task, which connects Computer Vision (CV) and Natural Language Processing (NLP). Inspired by the success of deep learning methods in machine translation \cite{DBLP:conf/acl/PapineniRWZ02, DBLP:conf/emnlp/ChoMGBBSB14}, almost all image captioning models adopt the encoder-decoder framework with a visual attention mechanism. The encoder encodes input images into fixed-length vector features, and the decoder decodes the image features into descriptions word by word \cite{DBLP:conf/cvpr/VinyalsTBE15, DBLP:conf/icml/XuBKCCSZB15, DBLP:conf/cvpr/00010BT0GZ18, huang2019attention, DBLP:conf/cvpr/PanYLM20}.
Initially, researchers adopted a pre-trained Convolutional Neural Network (CNN) as the encoder to extract image grid-level features and a Recurrent Neural Network (RNN) as the decoder \cite{DBLP:conf/cvpr/VinyalsTBE15, DBLP:conf/icml/XuBKCCSZB15}. \cite{DBLP:conf/cvpr/00010BT0GZ18} first adopted Faster R-CNN to extract region-level features. Due to its overwhelming advantage, most subsequent works followed this pattern, and grid-level features extracted by CNNs were discarded. Nevertheless, there are still some inherent defects in region-level features and in using an object detector as the encoder: 1) region-level features may not cover the entire image, which results in a lack of fine-grained information \cite{DBLP:conf/aaai/LuoJSCWHLJ21}; 2) extracting region features is highly time-consuming, and the object detector needs the extra Visual Genome \cite{DBLP:journals/ijcv/KrishnaZGJHKCKL17} dataset for pre-training, which makes it difficult to train image captioning models end-to-end from image pixels to descriptions and also limits potential applications in practical scenarios \cite{DBLP:conf/cvpr/JiangMRLC20}. An LSTM \cite{DBLP:journals/neco/HochreiterS97} decoder with a soft attention \cite{DBLP:conf/icml/XuBKCCSZB15} mechanism has remained the common and dominant approach in the past few years. However, the shortcomings of LSTM in training efficiency and expressive ability also limit the effectiveness of the relevant models. Inspired by the success of the Multi-head Self-Attention (MSA) mechanism and the Transformer architecture in NLP tasks \cite{DBLP:conf/nips/VaswaniSPUJGKP17}, many researchers began to introduce MSA into LSTM decoders \cite{huang2019attention,DBLP:conf/cvpr/PanYLM20} or to directly adopt the Transformer architecture as the decoder \cite{DBLP:conf/cvpr/CorniaSBC20,DBLP:conf/cvpr/PanYLM20,DBLP:conf/aaai/LuoJSCWHLJ21,DBLP:conf/aaai/JiLSCLW0J21} of image captioning models.
In particular, the Transformer architecture has gradually shown extraordinary potential in CV tasks \cite{DBLP:conf/iclr/DosovitskiyB0WZ21,liu2021Swin} and multi-modal tasks \cite{DBLP:conf/nips/LuBPL19, DBLP:conf/cvpr/ZhuY20a,DBLP:conf/icml/RadfordKHRGASAM21}, which provides a new choice for encoding images into vector features. Different from Faster R-CNN, the features extracted by a visual transformer are grid-level features, which have a higher computational efficiency and conveniently allow exploring more effective and complex designs for image captioning. Considering the disadvantages of a pre-trained CNN or object detector in the encoder and the limitations of LSTM in the decoder, we build a pure Transformer-based image captioning model (PureT) to integrate this task into one stage, without the pre-training process of object detection, to achieve end-to-end training. In the Encoder, we adopt SwinTransformer \cite{liu2021Swin} to extract grid features from given images as the initial vector features, and compute the average pooling of the grid features as the initial image global feature. Then, we construct a refining encoder, similar to \cite{huang2019attention,DBLP:conf/cvpr/CorniaSBC20,DBLP:conf/aaai/JiLSCLW0J21}, with the Shifted Window MSA (SW-MSA) from SwinTransformer to refine the initial image grid features and global feature. The refining encoder has a similar architecture to the Transformer Encoder in machine translation \cite{DBLP:conf/nips/VaswaniSPUJGKP17} and can be regarded as an extension of the SwinTransformer encoder for the image captioning model. In the Decoder, we directly adopt the Transformer Decoder from machine translation \cite{DBLP:conf/nips/VaswaniSPUJGKP17} to generate captions. Furthermore, we pre-fuse the word embedding vectors with the image global feature from the Encoder before applying MSA to the word embedding vectors, in order to increase the interaction of inter-modal (image-to-words) features.
We validate our model on the MSCOCO \cite{DBLP:conf/eccv/LinMBHPRDZ14} offline ``Karpathy'' \cite{DBLP:journals/pami/KarpathyF17} test split and on the official online test server. The results demonstrate that our PureT achieves a new state-of-the-art performance in both the single-model and the 4-model-ensemble configurations: on the offline ``Karpathy'' test split, a single model and an ensemble of 4 models achieve 138.2\% and 140.8\% CIDEr scores respectively; on the official online test server, an ensemble of 4 models achieves 135.3\% (c5) and 138.0\% (c40) CIDEr. Our main contributions are summarized as follows: \begin{itemize} \item We construct a pure Transformer-based (PureT) model for image captioning, which integrates this task into one stage again, without the pre-training process of an object detector, and provides a new simple and solid baseline for image captioning. \item We add a pre-fusion process between the generated word embeddings and the image global feature, which aims to increase the interaction of inter-modal features and enhance the reasoning ability from images to captions. \item We conduct extensive experiments on the MSCOCO dataset, which demonstrate the effectiveness of our proposed model and achieve a new state-of-the-art performance on both the ``Karpathy'' offline test split and the official online test server. \end{itemize} \section{Related Work} Existing works on image captioning can be divided into CNN-LSTM based models \cite{DBLP:conf/cvpr/VinyalsTBE15,DBLP:conf/icml/XuBKCCSZB15,DBLP:conf/cvpr/00010BT0GZ18,DBLP:conf/aaai/WangC019,huang2019attention} and CNN-Transformer based models \cite{DBLP:conf/nips/HerdadeKBS19,GuangLi2019,DBLP:conf/cvpr/PanYLM20,DBLP:conf/cvpr/CorniaSBC20,DBLP:conf/aaai/JiLSCLW0J21,DBLP:conf/aaai/LuoJSCWHLJ21}.
Both adopt a pre-trained CNN or Faster R-CNN as the encoder to encode the image into grid- or region-level features, where the former models adopt a Long Short-Term Memory network (LSTM) \cite{DBLP:journals/neco/HochreiterS97} as the decoder and the latter adopt a Transformer \cite{DBLP:conf/nips/VaswaniSPUJGKP17} as the decoder to generate the description word by word. Earlier works used a pre-trained CNN, e.g., VGG-16 \cite{DBLP:journals/corr/SimonyanZ14a} or ResNet-101 \cite{DBLP:conf/cvpr/HeZRS16}, as the encoder to encode the image into fixed-length grid-level features, and then applied an LSTM with an attention mechanism to generate the captions \cite{DBLP:conf/icml/XuBKCCSZB15,DBLP:conf/cvpr/RennieMMRG17}. \cite{DBLP:conf/cvpr/00010BT0GZ18} first introduced Faster R-CNN \cite{DBLP:conf/nips/RenHGS15} into image captioning to extract region-level features more in line with human visual habits, which has become a typical pattern for extracting image features in subsequent works. All the above models adopt LSTM as the decoder, which has shortcomings in training efficiency and expressive ability. Recently, researchers began to explore the application of the Transformer in image captioning. \cite{DBLP:conf/nips/HerdadeKBS19} proposed the Object Relation Transformer to introduce region spatial information. \cite{DBLP:conf/cvpr/PanYLM20} proposed the X-Linear attention block to capture the $2^{nd}$ order interactions between single- or multi-modal features, and integrated it into the Transformer encoder and decoder. \cite{DBLP:conf/cvpr/CorniaSBC20} designed a mesh-like connectivity in the decoder to exploit both low-level and high-level features from the encoder. \cite{DBLP:conf/aaai/LuoJSCWHLJ21} proposed a Dual-Level Collaborative Transformer (DLCT) to process both grid- and region-level features to realize their complementary advantages.
Despite the outstanding performance of region-level features extracted by Faster R-CNN, the lack of fine-grained information at the region level and the time cost of Faster R-CNN pre-training are unavoidable problems. Furthermore, since extracting region-level features is time-consuming, most models are directly trained and evaluated on cached features instead of images, which makes it difficult to train image captioning models end-to-end from images to descriptions. \begin{figure*}[htb] \centering \includegraphics[width=15cm]{figures/overall-1.pdf} \caption{Overview of our proposed PureT model. We first extract image grid features $V_G$ using SwinTransformer. $v_g$ is calculated as the average pooling of $V_G$. Then $V_G$ and $v_g$ are refined into $V_G^N$ and $v_g^N$ through the Refining Encoder, composed of $N$ stacked blocks, and are fed into the Decoder to generate the description word by word.}\medskip \label{fig:overall} \end{figure*} \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{figures/sw-msa.pdf} \caption{Illustration of the regular window partitioning scheme and the shifted window partitioning scheme adopted in the refining encoder. The size of the input feature map is $H\times W = 12\times 12$.}\medskip \label{fig:sw-msa} \end{figure} \section{Model} The overall architecture of our PureT model is shown in Figure~\ref{fig:overall}. We adopt the widely used encoder-decoder framework, in which the encoder consists of a SwinTransformer backbone and a stack of $N$ refining encoder blocks, and the decoder consists of a stack of $N$ decoder blocks. The encoder is in charge of extracting grid features from the input image and refining them by capturing the intra-relationships among them. The decoder uses the refined image grid features to generate the caption word by word by capturing the inter-relationships between words and image grid features.
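This encode-refine-decode flow can be sketched as follows. Note that `backbone`, `refine_block` and `decoder_step` below are stand-in stubs using random features, introduced only to illustrate the data flow; they are not the released implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D, m, N = 512, 12 * 12, 3           # feature dim, number of grid features, blocks
BOS, EOS, VOCAB = 0, 1, 9487        # assumed special tokens and vocabulary size

def backbone(image):
    """Stand-in for the SwinTransformer backbone: image -> m grid features."""
    return rng.standard_normal((m, D))

def refine_block(V_G, v_g):
    """Stand-in for one refining-encoder block (identity here)."""
    return V_G, v_g

def decoder_step(V_G, v_g, words):
    """Stand-in for the Transformer decoder: returns next-word logits."""
    return rng.standard_normal(VOCAB)

image = None                        # a real pipeline would load pixels here
V_G = backbone(image)               # initial grid features
v_g = V_G.mean(axis=0)              # initial global feature (average pooling)
for _ in range(N):                  # N stacked refining blocks
    V_G, v_g = refine_block(V_G, v_g)

words, max_len = [BOS], 20
while len(words) < max_len:         # greedy word-by-word decoding
    next_word = int(np.argmax(decoder_step(V_G, v_g, words)))
    words.append(next_word)
    if next_word == EOS:
        break
```

In the actual model the decoder also attends to previously generated words; the loop above only mirrors the word-by-word generation described in the text.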
\subsection{Attention Mechanism} The attention mechanism can be abstractly summarized as follows: \begin{equation} \operatorname{Attention}(q, k, v) = f_{sim}(q, k)v \end{equation} where $f_{sim}(\cdot)$ is a function used to compute the similarity scores between queries ($q$) and keys ($k$). The output of the attention mechanism is the weighted sum over values ($v$) based on the similarity scores. In our model, Multi-head Self-Attention (MSA) \cite{DBLP:conf/nips/VaswaniSPUJGKP17} and its variants, the Window MSA / Shifted Window MSA (W-MSA / SW-MSA) modules proposed by SwinTransformer \cite{liu2021Swin}, are used: MSA is adopted in the decoder to model the intra-relationships of the word sequence and the inter-relationships between words and grid features, while W-MSA / SW-MSA are adopted in the encoder to model the intra-relationships of the image grid features. All three attention modules use $\operatorname{Softmax}(\cdot)$ as the similarity scoring function, which can be formulated as follows: \begin{equation} \operatorname{Attention}(q, k, v) = \operatorname{Softmax}\left(\frac{qk^\mathrm{T}}{\sqrt{d_k}}\right)v \end{equation} where $d_k$ is the dimension of $k$. \subsubsection{MSA} MSA can be formulated as follows: \begin{equation} \begin{split} & \operatorname{MSA}(Q, K, V) = \operatorname{Concat}(head_1, \ldots, head_h) \\ & head_i = \operatorname{Attention}(Q^i, K^i, V^i), i = 1,2,\ldots,h \end{split} \end{equation} where $h$ is the number of heads.
$Q^i, K^i$ and $V^i$ are the $i$-th slices of $Q, K$ and $V$ respectively, which can be formulated as follows: \begin{align} \bigstar = \operatorname{Concat}(\bigstar^1, \ldots, \bigstar^i, \ldots, \bigstar^h) \end{align} where $\bigstar \in \mathbb{R}^{L_\bigstar \times D_\bigstar}$ and $\bigstar^i \in \mathbb{R}^{L_\bigstar \times \frac{D_\bigstar}{h}}$ ($\bigstar$ refers to $Q, K$ and $V$), and $L_\bigstar$ and $D_\bigstar$ are the length and dimension respectively. In the $i$-th head of MSA, each token of the query $Q^i$ calculates its similarity with all tokens of the key $K^i$, and performs a weighted sum over all tokens of the value $V^i$ to obtain the corresponding output. Therefore, MSA can be regarded as a global attention mechanism. \subsubsection{W-MSA and SW-MSA} To address the quadratic complexity caused by the global computation of MSA, SwinTransformer proposed W-MSA and SW-MSA to compute self-attention within local windows \cite{liu2021Swin}. In this paper, both W-MSA and SW-MSA are used in the encoder, in which the inputs $Q, K$ and $V$ all come from the image grid features and therefore have the same length $L = H\times W$ and dimension $D$. Compared with MSA, W-MSA and SW-MSA first partition the inputs $Q, K$ and $V$ into several windows, and then apply MSA separately within each window. Figure~\ref{fig:sw-msa} illustrates the regular window partitioning scheme and the shifted window partitioning scheme of W-MSA and SW-MSA respectively. Adding SW-MSA after W-MSA addresses the lack of connections across windows in the W-MSA module, further improving the modeling ability.
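Before turning to the formulation of the windowed variants, the scaled dot-product attention and the multi-head splitting described above can be sketched in numpy. This is only an illustration of the formulas (shapes follow $L_\bigstar \times D_\bigstar$, with toy sizes), not the paper's implementation:

```python
import numpy as np

def attention(q, k, v):
    """Softmax(q k^T / sqrt(d_k)) v -- scaled dot-product attention."""
    d_k = k.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
    return weights @ v

def msa(Q, K, V, h):
    """Split Q, K, V into h slices along the feature dim, attend per head, concat."""
    heads = [attention(Qi, Ki, Vi)
             for Qi, Ki, Vi in zip(np.split(Q, h, axis=-1),
                                   np.split(K, h, axis=-1),
                                   np.split(V, h, axis=-1))]
    return np.concatenate(heads, axis=-1)

rng = np.random.default_rng(0)
L, D, h = 6, 16, 8                  # toy sequence length, feature dim, heads
X = rng.standard_normal((L, D))
out = msa(X, X, X, h)               # self-attention: Q = K = V = X
```

The learned projection matrices that produce $Q$, $K$ and $V$ in the real model are omitted here for brevity.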
W-MSA and SW-MSA can be formulated as follows: \begin{align} &\operatorname{(S)W-MSA}(Q, K, V) = \operatorname{Merge}(window_1, \ldots, window_w) \notag\\ &window_i = \operatorname{MSA}(Q_W^{i}, K_W^{i}, V_W^{i}), i=1, 2, \ldots, w \end{align} where $w$ is the number of windows and $\operatorname{Merge}(\cdot)$ is the inverse of the regular/shifted window partitioning scheme. $Q_W^i, K_W^i$ and $V_W^i$ are the $i$-th windows of $Q, K$ and $V$ respectively, which can be formulated as follows: \begin{align} \bigstar = &\operatorname{Merge}\left(\bigstar_W^1, \ldots, \bigstar_W^i, \ldots, \bigstar_W^w\right) \end{align} where $\bigstar \in \mathbb{R}^{L \times D}$ and $\bigstar_W^i \in \mathbb{R}^{\frac{L}{w} \times D}$ ($\bigstar$ refers to $Q, K$ and $V$). \subsection{Encoder} Different from most existing models, we employ SwinTransformer \cite{liu2021Swin} instead of a pre-trained CNN or Faster R-CNN as the backbone encoder to extract a set of grid features $V_G = \{v_1, v_2, \ldots, v_m\}$ from the given input image as the initial visual features, where $v_i \in \mathbb{R}^D$, $D$ is the embedding dimension of each grid feature, and $m$ is the number of grid features ($m = 12\times 12$ in this paper). After the grid features $V_G$ are extracted, we follow the standard Transformer encoder \cite{DBLP:conf/nips/VaswaniSPUJGKP17} to construct a refining encoder that enhances the grid features by capturing the intra-relationship between them. Furthermore, inspired by \cite{DBLP:conf/aaai/JiLSCLW0J21}, we calculate the mean pooling of the grid features $v_g = \frac{1}{m}\sum_{i=1}^mv_i$ as the initial global feature and introduce it into W-MSA and SW-MSA.
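The window partitioning behind $\operatorname{Merge}(\cdot)$ and its inverse can be sketched as follows (an illustrative NumPy sketch under simplifying assumptions of our own: the shift is implemented as a cyclic roll, and the attention masking that SwinTransformer applies across rolled window boundaries is omitted):

```python
import numpy as np

def window_partition(x, H, W, ws, shift=0):
    # x: (H*W, D) grid features, flattened row-major.
    # Returns (num_windows, ws*ws, D). A non-zero shift rolls the grid
    # before partitioning (the shifted-window scheme).
    g = x.reshape(H, W, -1)
    if shift:
        g = np.roll(g, (-shift, -shift), axis=(0, 1))
    g = g.reshape(H // ws, ws, W // ws, ws, -1).transpose(0, 2, 1, 3, 4)
    return g.reshape(-1, ws * ws, g.shape[-1])

def window_merge(windows, H, W, ws, shift=0):
    # Inverse of window_partition (the Merge operation).
    D = windows.shape[-1]
    g = windows.reshape(H // ws, W // ws, ws, ws, D).transpose(0, 2, 1, 3, 4)
    g = g.reshape(H, W, D)
    if shift:
        g = np.roll(g, (shift, shift), axis=(0, 1))
    return g.reshape(H * W, D)
```

On the $12\times 12$ grid with $ws=6$, this yields $w=4$ windows of $36$ tokens each, and merging exactly inverts the partitioning.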
Specifically, when applying MSA in each window, the global feature is added to the keys $k$ and values $v$ as an extra token. Meanwhile, we also refine the global feature by using it as an extra query $q$ token and applying MSA over all grid features. As shown in Figure~\ref{fig:overall}, the refining encoder is composed of $N$ blocks stacked in sequence ($N=3$ in this paper), and each block consists of a W-MSA or SW-MSA module with a feedforward layer, in which W-MSA and SW-MSA are used alternately. The $l$-th block can be formulated as follows: \begin{align} \hat{V}_G^l = &\operatorname{LayerNorm}\left(V_G^{l-1} + \left.\operatorname{(S)W-MSA}\left(W_Q^lV_G^{l-1}, \right.\right.\right. \notag\\ &\left.\left.W_K^l\left[V_G^{l-1}; v_g^{l-1}\right]_s, W_V^l\left[V_G^{l-1}; v_g^{l-1}\right]_s\right)\right) \\ \hat{v}_g^l = &\operatorname{LayerNorm}(v_g^{l-1} + \operatorname{MSA}\left(W_Q^lv_g^{l-1}, \notag \right.\\ &\left.\left.W_K^l[V_G^{l-1}; v_g^{l-1}]_s, W_V^l[V_G^{l-1}; v_g^{l-1}]_s\right)\right) \\ V_G^l = &\operatorname{LayerNorm}\left(\hat{V}_G^l + \operatorname{FeedForward}(\hat{V}_G^l)\right)\\ v_g^l = &\operatorname{LayerNorm}\left(\hat{v}_g^l + \operatorname{FeedForward}(\hat{v}_g^l)\right) \end{align} where $V_G^{l-1}$ and $v_g^{l-1}$ denote the output grid features and global feature of block $l-1$ respectively and are used as the input of block $l$, with $V_G^0 = V_G$ and $v_g^0 = v_g$; $W_Q^l,W_K^l,W_V^l\in \mathbb{R}^{D\times D}$ are learnt parameter matrices; $[V_G^{l-1} ;v_g^{l-1}]_s\in \mathbb{R}^{(m+1)\times D}$ denotes the stack operation on the grid features and the global feature; and $\operatorname{FeedForward}(\cdot)$ consists of two linear layers with a $\operatorname{ReLU}$ activation function in between, as formulated below: \begin{align} \operatorname{FeedForward}\left(x\right) = W_2\operatorname{ReLU}\left(W_1x\right) \end{align} where $W_1\in\mathbb{R}^{(4D)\times D}$ and $W_2\in\mathbb{R}^{D\times(4D)}$ are the learnt parameter matrices
of two linear layers respectively. Note that the parameters of the refining processes for the grid features and the global feature are shared. The refined grid features $V_G^N$ and refined global feature $v_g^N$ output by block $N$ are fed into the decoder as the visual content input. \subsection{Decoder} The decoder aims to generate the output caption word by word conditioned on the refined global and grid features from the encoder; the multi-modal interaction occurs in this part. As shown in Figure~\ref{fig:overall}, the decoder is composed of $N$ blocks stacked in sequence ($N=3$ in this paper), where each block can be divided into four modules: 1) Pre-Fusion Module, which fuses the previously generated words with the refined global feature and can be regarded as the first inter-modal interaction between natural language and visual content; 2) Language Masked MSA Module, which can be regarded as the intra-modal interaction within the generated words; 3) Cross MSA Module, which contains an MSA module with a FeedForward layer and can be regarded as the second inter-modal interaction between visual content and natural language; 4) Word Generation Module, which contains a linear layer with a softmax function. \subsubsection{Pre-Fusion Module} Most recent Transformer-based models only use image region or grid features without a global feature, so the interaction between multi-modal features only occurs in the cross attention between the generated words and visual features before generating the next word. The lack of interaction with global contextual information limits the reasoning capability to a certain extent.
Therefore, we construct a pre-fusion module to fuse the refined global feature $v_g^N$ into the input of each block of the decoder, which can be regarded as the first multi-modal interaction to capture global visual context information and can be formulated as follows: \begin{align} X_{1:t-1}^{p, l} = \operatorname{Layer}&\operatorname{Norm}\left(X_{1:t-1}^{l-1} + \right.\notag\\ &\left.\operatorname{ReLU}\left(W_f\left[X_{1:t-1}^{l-1}; v_g^N\right]\right) \right) \end{align} where $X_{1:t-1}^{l-1}\in \mathbb{R}^{(t-1)\times D}$ denotes the output of block $l-1$ and is used as the input of block $l$ at the $t$-th timestep, $\left[X_{1:t-1}^{l-1}; v_g^N\right]\in \mathbb{R}^{(t-1)\times 2D}$ indicates concatenation, and $W_f\in \mathbb{R}^{D\times 2D}$ are the learnt parameters of a linear layer; the output $X_{1:t-1}^{p,l}\in \mathbb{R}^{(t-1)\times D}$ is fed into the Language Masked MSA Module. Note that the initial input of the first block comes from the previously generated words: \begin{align} X_{1:t-1}^0=W_ex_{1:t-1} \end{align} where $x_{1:t-1}$ are the one-hot encodings of the words generated before the $t$-th timestep, and $W_e\in\mathbb{R}^{D\times|\Sigma|}$ is the word embedding matrix of the vocabulary $\Sigma$. \subsubsection{Language Masked MSA Module} This module aims to model the intra-modal relationship (words-to-words) within $X_{1:t-1}^{p,l}$, which can be formulated as follows: \begin{align} \tilde{X}_{t-1}^l = &\operatorname{LayerNorm}\left( X_{t-1}^{p,l} + \operatorname{MSA}\left(W_Q^{m,l}X_{t-1}^{p,l}, \right.\right.\notag\\ &\left.\left. W_K^{m,l}X_{1:t-1}^{p,l}, W_V^{m,l}X_{1:t-1}^{p,l} \right) \right) \end{align} where $W_Q^{m,l}, W_K^{m,l}, W_V^{m,l} \in \mathbb{R}^{D\times D}$ are learnt parameters, and $X_{t-1}^{p,l}$ indicates the embedding vector of the word generated at the $(t-1)$-th timestep, which means that each word is only allowed to attend to earlier generated words.
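A minimal sketch of the pre-fusion step and the causal constraint of the Language Masked MSA Module (an illustrative simplification of our own: LayerNorm without learnt affine parameters, hypothetical helper names, and the attention computation itself omitted):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize over the feature dimension (no learnt scale/bias here).
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def pre_fusion(X, v_g, W_f):
    # X: (t-1, D) word representations; v_g: (D,) refined global feature.
    # Concatenate v_g to every word vector, project 2D -> D with ReLU,
    # then add the residual and normalize.
    Z = np.concatenate([X, np.broadcast_to(v_g, X.shape)], axis=-1)
    return layer_norm(X + np.maximum(Z @ W_f.T, 0.0))

def causal_mask(t):
    # Position i may attend only to positions <= i (earlier words).
    return np.tril(np.ones((t, t), dtype=bool))
```

The causal mask encodes the constraint that each word only attends to previously generated words.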
\begin{table*} \begin{center} \footnotesize \begin{tabular}{lcccccccccccc} \hline \multirow{2}{*}{Models} & \multicolumn{6}{c}{Single Model} & \multicolumn{6}{c}{Ensemble Model} \\ \cmidrule(r){2-7} \cmidrule(r){8-13} & \multicolumn{1}{c}{B-1} & \multicolumn{1}{c}{B-4} & \multicolumn{1}{c}{M} & \multicolumn{1}{c}{R} & \multicolumn{1}{c}{C} & \multicolumn{1}{c}{S} & \multicolumn{1}{c}{B-1} & \multicolumn{1}{c}{B-4} & \multicolumn{1}{c}{M} & \multicolumn{1}{c}{R} & \multicolumn{1}{c}{C} & \multicolumn{1}{c}{S} \\ \hline \multicolumn{1}{c}{} & \multicolumn{12}{c}{CNN-LSTM based models}\\ \hline SCST & - & 34.2 & 26.7 & 55.7 & 114.0 & - & - & 35.4 & 27.1 & 56.6 & 117.5 & -\\ RFNet & 79.1 & 36.5 & 27.7 & 57.3 & 121.9 & 21.2 & 80.4 & 37.9 & 28.3 & 58.3 & 125.7 & 21.7 \\ Up-Down & 79.8 & 36.3 & 27.7 & 56.9 & 120.1 & 21.4 & - & - & - & - & - & -\\ GCN-LSTM & 80.5 & 38.2 & 28.5 & 58.3 & 127.6 & 22.0 & 80.9 & 38.3 & 28.6 & 58.5 & 128.7 & 22.1\\ AoANet & 80.2 & 38.9 & 29.2 & 58.8 & 129.8 & 22.4 & 81.6 & 40.2 & 29.3 & 59.4 & 132.0 & 22.8\\ X-LAN & 80.8 & 39.5 & 29.5 & 59.2 & 132.0 & 23.4 & 81.6 & 40.3 & 29.8 & 59.6 & 133.7 & 23.6 \\ \hline \multicolumn{1}{c}{} & \multicolumn{12}{c}{CNN-Transformer based models}\\ \hline ORT & 80.5 &38.6 & 28.7 & 58.4 & 128.3 & 22.6 & - & - & - & - & - & -\\ X-Transformer & 80.9 & 39.7 & 29.5 & 59.1 & 132.8 & 23.4 & 81.7 & 40.7 & 29.9 & 59.7 & 135.3 & 23.8\\ $\mathcal{M}^2$ Transformer & 80.8 & 39.1 & 29.2 & 58.6 & 131.2 & 22.6 & 82.0 & 40.5 & 29.7 & 59.5 & 134.5 & 23.5 \\ RSTNet & 81.8 & 40.1 & 29.8 & 59.5 & 135.6 & 23.3 & - & - & - & - & - & - \\ GET & 81.5 & 39.5 & 29.3 & 58.9 & 131.6 & 22.8 & 82.1 & 40.6 & 29.8 & 59.6 & 135.1 & 23.8 \\ DLCT & 81.4 & 39.8 & 29.5 & 59.1 & 133.8 & 23.0 & 82.2 & 40.8 & 29.9 & 59.8 & 137.5 & 23.3 \\ \hline PureT & \textbf{82.1} & \textbf{40.9} & \textbf{30.2} & \textbf{60.1} & \textbf{138.2} & \textbf{24.2} & \textbf{83.4} & \textbf{42.1} & \textbf{30.4} & \textbf{60.8} & \textbf{141.0} & \textbf{24.3} \\ \hline 
\end{tabular} \caption{Offline evaluation results of our proposed model and other existing state-of-the-art models on the MSCOCO ``Karpathy'' test split, where B-$N$, M, R, C and S denote BLEU-$N$, METEOR, ROUGE-L, CIDEr and SPICE respectively.} \label{table:offline} \end{center} \end{table*} \subsubsection{Cross MSA Module} This module aims to model the inter-modal relationship (words-to-vision) between $\tilde{X}_{1:t-1}^l$ and $V_G^N$, which can be regarded as the second multi-modal interaction to capture local visual context information and can be formulated as follows: \begin{align} \hat{X}_{t-1}^l = &\operatorname{LayerNorm}\left( \tilde{X}_{t-1}^l + \operatorname{MSA}\left( W_Q^{c,l}\tilde{X}_{t-1}^l, \right.\right.\notag\\ &\left.\left.W_K^{c,l}V_G^N, W_V^{c,l}V_G^N\right) \right) \\ X_{t-1}^l = &\operatorname{LayerNorm}(\hat{X}_{t-1}^l + \operatorname{FeedForward}(\hat{X}_{t-1}^l) ) \end{align} where $W_Q^{c,l}, W_K^{c,l}, W_V^{c,l} \in \mathbb{R}^{D\times D}$ are learnt parameters, $\tilde{X}_{t-1}^l$ from the Language Masked MSA Module is fed into MSA as the query, and the refined grid features $V_G^N$ from the last block of the encoder are fed into MSA as the keys and values. \subsubsection{Word Generation Module} Given the output $X_{1:t-1}^N$ of the last decoder block, the conditional distribution over the vocabulary $\Sigma$ is given by: \begin{align} p(x_t|x_{1:t-1}) = \operatorname{Softmax}(W_xX_{t-1}^N) \end{align} where $W_x\in\mathbb{R}^{|\Sigma|\times D}$ are learnt parameters. \subsection{Objective Functions} We first optimize our model by applying the cross entropy (XE) loss as the objective function: \begin{equation} L_{XE}(\theta)=-\sum_{t=1}^T\log(p_\theta(y_t^*|y_{1:t-1}^*)) \end{equation} where $y_{1:T}^*$ is the target ground truth sequence, and $\theta$ denotes the parameters of our model.
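The XE objective above can be sketched as follows (illustrative only; the per-step log-probabilities are assumed to be precomputed by the model):

```python
import numpy as np

def xe_loss(log_probs, target_ids):
    # log_probs: (T, |V|) per-step log-probabilities log p_theta(. | y*_{1:t-1});
    # target_ids: (T,) indices of the ground-truth words y*_t.
    # L_XE = - sum_t log p_theta(y*_t | y*_{1:t-1})
    return -log_probs[np.arange(len(target_ids)), target_ids].sum()
```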
Then, we adopt the self-critical sequence training (SCST) strategy \cite{DBLP:conf/cvpr/RennieMMRG17} to optimize the CIDEr \cite{DBLP:conf/cvpr/VedantamZP15} metric: \begin{equation} L_R(\theta)=-\operatorname{\textbf{E}}_{y_{1:T}\sim p_\theta}[r(y_{1:T})] \end{equation} where $r(\cdot)$ is the CIDEr score. The gradient of $L_R$ can be approximated as follows: \begin{equation} \nabla_\theta L_R(\theta)\approx -\left(r(y_{1:T}^s)-r(\hat{y}_{1:T})\right)\nabla_\theta \log p_\theta(y_{1:T}^s) \end{equation} where $y_{1:T}^s$ is a sampled caption and $r(\hat{y}_{1:T})$ is the score of the caption obtained by greedy decoding from the current model. \begin{table*} \begin{center} \footnotesize \begin{tabular}{lcccccccccccccc} \hline \multirow{2}{*}{Models} & \multicolumn{2}{c}{BLEU-1} & \multicolumn{2}{c}{BLEU-2} & \multicolumn{2}{c}{BLEU-3} & \multicolumn{2}{c}{BLEU-4} & \multicolumn{2}{c}{METEOR} & \multicolumn{2}{c}{ROUGE-L} & \multicolumn{2}{c}{CIDEr} \\ \cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7} \cmidrule(r){8-9} \cmidrule(r){10-11} \cmidrule(r){12-13} \cmidrule(r){14-15} & \multicolumn{1}{c}{c5} & \multicolumn{1}{c}{c40} & \multicolumn{1}{c}{c5} & \multicolumn{1}{c}{c40} & \multicolumn{1}{c}{c5} & \multicolumn{1}{c}{c40} & \multicolumn{1}{c}{c5} & \multicolumn{1}{c}{c40} & \multicolumn{1}{c}{c5} & \multicolumn{1}{c}{c40} & \multicolumn{1}{c}{c5} & \multicolumn{1}{c}{c40} & \multicolumn{1}{c}{c5} & \multicolumn{1}{c}{c40} \\ \hline SCST & 78.1 & 93.7 & 61.9 & 86.0 & 47.0 & 75.9 & 35.2 & 64.5 & 27.0 & 35.5 & 56.3 & 70.7 & 114.7 & 116.7 \\ GCN-LSTM & 80.8 & 95.2 & 65.5 & 89.3 & 50.8 & 80.3 & 38.7 & 69.7 & 28.5 & 37.6 & 58.5 & 73.4 & 125.3 & 126.5 \\ Up-Down & 80.2 & 95.2 & 64.1 & 88.8 & 49.1 & 79.4 & 36.9 & 68.5 & 27.6 & 36.7 & 57.1 & 72.4 & 117.9 & 120.5 \\ SGAE & 81.0 & 95.3 & 65.6 & 89.5 & 50.7 & 80.4 & 38.5 & 69.7 & 28.2 & 37.2 & 58.6 & 73.6 & 123.8 & 126.5 \\ AoANet & 81.0 & 95.0 & 65.8 & 89.6 & 51.4 & 81.3 & 39.4 & 71.2 & 29.1 & 38.5 & 58.9 & 74.5 & 126.9 & 129.6\\
X-Transformer & 81.9 & 95.7 & 66.9 & 90.5 & 52.4 & 82.5 & 40.3 & 72.4 & 29.6 & 39.2 & 59.5 & 75.0 & 131.1 & 133.5 \\ $\mathcal{M}^2$ Transformer & 81.6 & 96.0 & 66.4 & 90.8 & 51.8 & 82.7 & 39.7 & 72.8 & 29.4 & 39.0 & 59.2 & 74.8 & 129.3 & 132.1 \\ RSTNet & 82.1 & 96.4 & 67.0 & 91.3 & 52.2 & 83.0 & 40.0 & 73.1 & 29.6 & 39.1 & 59.5 & 74.6 & 131.9 & 134.0 \\ GET & 81.6 & 96.1 & 66.5 & 90.9 & 51.9 & 82.8 & 39.7 & 72.9 & 29.4 & 38.8 & 59.1 & 74.4 & 130.3 & 132.5 \\ DLCT & 82.4 & \textbf{96.6} & 67.4 & 91.7 & 52.8 & 83.8 & 40.6 & 74.0 & 29.8 & 39.6 & 59.8 & 75.3 & 133.3 & 135.4 \\ \hline PureT & \textbf{82.8} & 96.5 & \textbf{68.1} & \textbf{91.8} & \textbf{53.6} & \textbf{83.9} & \textbf{41.4} & \textbf{74.1} & \textbf{30.1} & \textbf{39.9} & \textbf{60.4} & \textbf{75.9} & \textbf{136.0} & \textbf{138.3} \\ \hline \end{tabular} \caption{Online evaluation results of our proposed model and other existing state-of-the-art models on MSCOCO.} \label{table:online} \end{center} \end{table*} \section{Experiments} \subsection{Dataset and Evaluation Metrics} We conduct experiments on the MSCOCO 2014 dataset \cite{DBLP:conf/eccv/LinMBHPRDZ14}, which contains 123287 images (82783 for training and 40504 for validation), each annotated with 5 reference captions. In this paper, we follow the ``Karpathy'' split \cite{DBLP:journals/pami/KarpathyF17} to redivide MSCOCO, with 113287 images for training, 5000 images for validation and 5000 images for offline evaluation. Besides, MSCOCO also provides 40775 images for online testing. For the training process, we convert all training captions to lower case and drop the words that occur fewer than 6 times, collecting the remaining 9487 words as our vocabulary $\Sigma$.
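The vocabulary construction described above can be sketched as follows (an illustrative sketch with a hypothetical helper name; the real preprocessing may involve tokenization details not shown here):

```python
from collections import Counter

def build_vocab(captions, min_count=6):
    # Lower-case all training captions and keep the words occurring at
    # least min_count times (words occurring fewer times are dropped).
    counts = Counter(w for c in captions for w in c.lower().split())
    return sorted(w for w, n in counts.items() if n >= min_count)
```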
For fair evaluation, we adopt five widely used metrics to evaluate the quality of generated captions: BLEU \cite{DBLP:conf/acl/PapineniRWZ02}, METEOR \cite{DBLP:conf/wmt/LavieA07}, ROUGE-L \cite{lin-2004-rouge}, CIDEr \cite{DBLP:conf/cvpr/VedantamZP15}, and SPICE \cite{DBLP:conf/eccv/AndersonFJG16}. \subsection{Experimental Settings} We set the model embedding size $D$ to 512, the number of transformer heads to 8, and the number of blocks $N$ for both the refining encoder and the decoder to 3. For the training process, we first train our model under the XE loss $L_{XE}$ for 20 epochs, setting the batch size to 10 and the warmup steps to 10,000; then we train our model under $L_R$ for another 30 epochs with a fixed learning rate of $5\times 10^{-6}$. We adopt the Adam \cite{DBLP:journals/corr/KingmaB14} optimizer in both stages, and the beam size is set to 5 in the validation and evaluation processes. \subsection{Comparisons with State-of-The-Art Models} \subsubsection{Offline Evaluation} Table~\ref{table:offline} reports the performance of some existing state-of-the-art models and our proposed model on the MSCOCO offline test split. The compared models include: SCST \cite{DBLP:conf/cvpr/RennieMMRG17}, RFNet \cite{DBLP:conf/eccv/JiangMJLZ18}, Up-Down \cite{DBLP:conf/cvpr/00010BT0GZ18}, GCN-LSTM \cite{DBLP:conf/eccv/YaoPLM18}, AoANet \cite{huang2019attention} and X-LAN \cite{DBLP:conf/cvpr/PanYLM20}; ORT \cite{DBLP:conf/nips/HerdadeKBS19}, X-Transformer \cite{DBLP:conf/cvpr/PanYLM20}, $\mathcal{M}^2$ Transformer \cite{DBLP:conf/cvpr/CorniaSBC20}, RSTNet \cite{DBLP:conf/cvpr/ZhangSLJZWHJ21}, GET \cite{DBLP:conf/aaai/JiLSCLW0J21} and DLCT \cite{DBLP:conf/aaai/LuoJSCWHLJ21}. We divide these models into CNN-LSTM based models and CNN-Transformer based models according to the different methods adopted in the decoder. For fair comparison, we report the results of a single model and an ensemble of 4 models after SCST training.
As shown in Table~\ref{table:offline}, both our single model and our ensemble of 4 models achieve the best performance on all metrics. In the case of the single model, the CIDEr score of our model reaches 138.2\%, an advancement of 2.6\% and 4.4\% over the strong competitors RSTNet and DLCT respectively. Meanwhile, our model achieves improvements of more than 0.6\% over RSTNet and more than 1.0\% over DLCT on the BLEU-4, ROUGE-L and SPICE metrics. In the case of the ensemble model, our model also achieves the best performance and surpasses all other models by more than 1.0\% on all metrics except METEOR. In particular, the CIDEr score of our ensemble model reaches 141.0\%, an advancement of 3.5\% and 5.9\% over DLCT and GET respectively. In general, the significant improvements on all metrics (especially CIDEr) demonstrate the advantage of our proposed model. In addition, compared with models that use region-level features or both region- and grid-level features, our model has a relatively more balanced computational cost because it avoids predicting object region coordinates. Moreover, our model can be trained end-to-end, which allows us to apply it in more practical scenarios. \subsubsection{Online Evaluation} As shown in Table~\ref{table:online}, we also report the performance with 5 reference captions (c5) and 40 reference captions (c40) of our model on the MSCOCO official online test server. Compared with the other state-of-the-art models, our model achieves the best scores on all metrics except BLEU-1 (c40), which is 0.1\% lower than DLCT. Notably, the CIDEr (c5) and CIDEr (c40) scores of our model reach 136.0\% and 138.3\%, advancements of 2.7\% and 2.9\% over the best competitor DLCT.
\begin{figure}[htb] \centering \includegraphics[width=8.5cm]{figures/captions_examples.pdf} \caption{Examples of captions generated by the standard Transformer, $\mathcal{M}^2$ Transformer and our PureT, together with the ground-truths.}\medskip \label{fig:cap_example} \end{figure} \begin{figure*}[htb] \centering \includegraphics[width=17cm]{figures/attention_vis.pdf} \caption{Visualization of the attention heatmap on the image along the caption generation process. For each generated word, we show the image with different brightness to represent the differences in attention weights.}\medskip \label{fig:att_vis} \end{figure*} \subsection{Ablation Study} We conduct several ablation studies to quantify the influence of different modules in our model. \subsubsection{Influence of W-MSA and SW-MSA} To quantify the influence of W-MSA and SW-MSA in our Refining Encoder, we ablate our model with different configurations of the window size $ws$ and shift size $ss$, as shown in Table~\ref{table:ablation_1}. The number of refining encoder and decoder blocks is set to 3. Note that the input $V_G\in \mathbb{R}^{m\times D}$ of the Refining Encoder has a size of $m = 12\times 12$ in this paper. W-MSA and SW-MSA degenerate into MSA when $ws=12$, and SW-MSA degenerates into W-MSA when $ss=0$. It can be seen that the model with only MSA ($ws=12, ss=0$) performs better than the model with only W-MSA ($ws=6, ss=0$) because W-MSA lacks connections across windows. However, the model combining W-MSA and SW-MSA ($ws=6, ss=3$) outperforms both of the above models on all metrics.
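The degenerate case ($ws=12$ on the $12\times 12$ grid reduces W-MSA to global MSA) and the complexity benefit of windowing can be checked with a small counting sketch (illustrative only, with hypothetical helper names):

```python
def num_windows(grid_h, grid_w, ws):
    # Number of non-overlapping ws x ws windows on a grid_h x grid_w grid.
    return (grid_h // ws) * (grid_w // ws)

def attention_pair_count(grid_h, grid_w, ws):
    # Each window costs (ws*ws)^2 pairwise scores, so windowed attention
    # replaces the global (H*W)^2 term with H*W * (ws*ws).
    return num_windows(grid_h, grid_w, ws) * (ws * ws) ** 2
```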
\begin{table} \begin{center} \footnotesize \begin{tabular}{rrcccccc} \hline $ws$ & $ss$ & B-1 & B-4 & M & R & C & S \\ \hline 12 & 0 & 82.0 & 40.3 & 29.9 & 59.9 & 137.5 & 23.8 \\ 6 & 0 & 81.8 & 40.1 & 29.9 & 59.7 & 136.8 & 23.8 \\ 6 & 3 & \textbf{82.1} & \textbf{40.9} & \textbf{30.2} & \textbf{60.1} & \textbf{138.2} & \textbf{24.2} \\ \hline \end{tabular} \caption{Performance comparison of different configurations of the window size $ws$ and shift size $ss$.} \label{table:ablation_1} \end{center} \end{table} \subsubsection{Influence of Pre-Fusion module} \begin{table} \begin{center} \footnotesize \begin{tabular}{lcccccc} \hline Models & B-1 & B-4 & M & R & C & S \\ \hline Transformer & 81.6 & 39.8 & 29.9 & 59.6 & 136.4 & 23.8 \\ \makecell[l]{Transformer \\\quad\quad + p-f.} & 82.0 & 40.3 & 29.9 & 59.9 & 137.5 & 23.8 \\ \hline PureT (w/o p-f.) & 81.8 & 40.3 & 30.0 & 59.9 & 137.9 & 24.0 \\ PureT & \textbf{82.1} & \textbf{40.9} & \textbf{30.2} & \textbf{60.1} & \textbf{138.2} & \textbf{24.2} \\ \hline \end{tabular} \caption{Performance comparison with / without Pre-Fusion for the standard Transformer and our proposed PureT.} \label{table:ablation_2} \end{center} \end{table} To demonstrate the effectiveness of the Pre-Fusion module in our Decoder, we remove the Pre-Fusion module from our PureT model and compare it with the full model, as shown in rows 4 and 5 of Table~\ref{table:ablation_2}. It can be seen that the Pre-Fusion module improves the performance on all metrics. Furthermore, we construct the standard Transformer (3 blocks of encoder/decoder) as the baseline model, which reaches an excellent performance as shown in row 1 of Table~\ref{table:ablation_2}. Then we extend the baseline model by adding the Pre-Fusion module (equivalent to the model in row 1 of Table~\ref{table:ablation_1}), which also performs better on all metrics.
\subsubsection{Influence of the number of stacked blocks} We also conduct several experiments to evaluate the influence of the number of Refining Encoder and Decoder blocks. As shown in Table~\ref{table:ablation_3}, models with 2 or more blocks achieve a significant improvement (more than 2.0\%) in CIDEr score compared to the model with 1 block. Note that the model with 4 blocks has a significant advantage in BLEU scores over the other models, but considering the increase in model parameters and the already excellent performance of the model with 3 blocks, we set the number of blocks $N$ to 3 as the final configuration. Remarkably, the model with only 1 block still performs better than earlier state-of-the-art works (e.g. RSTNet, GET and DLCT) in Table~\ref{table:offline}, which further indicates the effectiveness of our model. \begin{table} \begin{center} \footnotesize \begin{tabular}{ccccccc} \hline Layer & B-1 & B-4 & M & R & C & S \\ \hline 1 & 81.8 & 40.2 & 29.7 & 59.5 & 135.8 & 23.5 \\ 2 & 81.8 & 40.5 & 30.0 & 59.9 & \textbf{138.2} & 23.9 \\ 3 & 82.1 & 40.9 & \textbf{30.2} & \textbf{60.1} & \textbf{138.2} & \textbf{24.2} \\ 4 & \textbf{82.7} & \textbf{41.1} & 30.0 & \textbf{60.1} & \textbf{138.2} & 24.0 \\ \hline \end{tabular} \caption{Performance comparison of different numbers of Refining Encoder and Decoder blocks.} \label{table:ablation_3} \end{center} \end{table} \subsubsection{Influence of different backbones} \begin{table*} \setcounter{table}{5} \begin{center} \footnotesize \begin{tabular}{lllclcccccccc} \hline Baseline Models & Backbone & Feat. Type & Feat.
Size & N & B-1 & B-2 & B-3 & B-4 & M & R & C & S \\ \hline \multirow{3}{*}{$\mathcal{M}^2$ Transformer} & ResNet-101 & Region & (10$-$100) & 3$^\dagger$ & 80.8 & - & - & 39.1 & 29.2 & 58.6 & 131.2 & 22.6 \\ & ResNeXt-101 & Grid & $7\times 7$ & 3$^\ddagger$ & 80.8 & - & - & 38.9 & 29.1 & 58.5 & 131.7 & 22.6 \\ & SwinTransformer & Grid & $12\times 12$ & 3 & 81.8 & 66.8 & \textbf{52.6} & 40.5 & 29.6 & 59.9 & 135.4 & 23.3 \\ \hline \multirow{4}{*}{X-Transformer} & ResNet-101 & Region & (10$-$100) & 6$^\dagger$ & 80.9 & 65.8 & 51.5 & 39.7 & 29.5 & 59.1 & 132.8 & 23.4 \\ & ResNeXt-101 & Grid & $7\times 7$ & 6$^\ddagger$ & 81.0 & - & - & 39.7 & 29.4 & 58.9 & 132.5 & 23.1 \\ & SwinTransformer & Grid & $12\times 12$ & 6 & 81.4 & 66.3 & 52.0 & 39.9 & 29.5 & 59.5 & 133.7 & 23.4 \\ & SwinTransformer & Grid & $12\times 12$ & 3 & 81.9 & 66.7 & 52.3 & 40.1 & 29.6 & 59.6 & 134.8 & 23.4 \\ \hline \multirow{4}{*}{\makecell[l]{standard \\Transformer}} & ResNet-101 & Region & (10$-$100) & 3 & 80.0 & 64.9 & 50.5 & 38.7 & 29.0 & 58.6 & 130.1 & 22.9 \\ & ResNeXt-101 & Grid & $7\times 7$ & 3$^\ddagger$ & 81.2 & - & - & 39.0 & 29.2 & 58.9 & 131.7 & 22.6 \\ & ResNeXt-101 & Grid & $12\times 12$ & 3 & 80.8 & 65.8 & 51.4 & 39.4 & 29.4 & 59.2 & 132.8 & 23.2 \\ & SwinTransformer & Grid & $12\times 12$ & 3 & 81.6 & 66.5 & 52.0 & 39.8 & 29.9 & 59.6 & 136.4 & 23.8 \\ \hline \multirow{3}{*}{PureT} & ResNeXt-101 & Grid & $12\times 12$ & 3 & 80.7 & 65.9 & 51.7 & 39.9 & 29.2 & 59.1 & 131.8 & 23.0 \\ & ViT & Grid & $12\times 12$ & 3 & 81.6 & 66.6 & 52.3 & 40.3 & 29.7 & 59.5 & 135.2 & 23.6 \\ & SwinTransformer & Grid & $12\times 12$ & 3 & \textbf{82.1} & \textbf{67.3} & 52.0 & \textbf{40.9} & \textbf{30.2} & \textbf{60.1} & \textbf{138.2} & \textbf{24.2} \\ \hline \end{tabular} \caption{Performance comparison of different configurations of backbone models. ResNet-101 and ResNeXt-101 indicate Faster R-CNN in conjunction with them respectively.
Region features extracted by ResNet-101 have an adaptive size of 10 to 100. Grid features extracted by ResNeXt-101 can be extracted at the size of $12\times 12$ or $7\times 7$ by average pooling as needed. Grid features (SwinTransformer) are extracted at the size of $12\times 12$. $N$ denotes the number of encoder and decoder blocks, superscript $\dagger$ indicates that the results are from the respective official papers, $\ddagger$ indicates that the results are from \cite{DBLP:conf/aaai/LuoJSCWHLJ21}, and the other results come from our experiments.} \label{table:ablation_exp} \end{center} \end{table*} To quantify the influence of the features extracted by different backbone models, we adopt different image captioning models as baseline models and ablate them with different configurations of backbone models, as shown in Table~\ref{table:ablation_exp}. The baseline models include: $\mathcal{M}^2$ Transformer \cite{DBLP:conf/cvpr/CorniaSBC20}, X-Transformer \cite{DBLP:conf/cvpr/PanYLM20} and the standard Transformer \cite{DBLP:conf/nips/VaswaniSPUJGKP17}. The backbone models include: Faster R-CNN \cite{DBLP:conf/nips/RenHGS15} in conjunction with ResNet-101, which is adopted in \cite{DBLP:conf/cvpr/00010BT0GZ18}; Faster R-CNN in conjunction with ResNeXt-101, which is adopted in \cite{DBLP:conf/cvpr/JiangMRLC20}; ViT \cite{DBLP:conf/iclr/DosovitskiyB0WZ21}; and SwinTransformer \cite{liu2021Swin}. As we can see, grid features extracted by SwinTransformer achieve significant performance improvements compared with region features extracted by ResNet-101 and grid features extracted by ResNeXt-101 and ViT. For $\mathcal{M}^2$ Transformer and X-Transformer, the ResNet-101 and ResNeXt-101 backbones have similar performance. The SwinTransformer backbone comprehensively improves the scores on all metrics, boosting the CIDEr score of $\mathcal{M}^2$ Transformer by more than 3.7\% in particular.
Note that the model with $N=3$ performs better than the model with $N=6$ in X-Transformer, which indicates the superiority of SwinTransformer in image captioning and allows us to explore smaller and more efficient models and apply them in more practical scenarios. For the standard Transformer, the SwinTransformer backbone reaches an excellent performance and is even better than $\mathcal{M}^2$ Transformer and X-Transformer in the METEOR, CIDEr and SPICE scores. For our PureT, the SwinTransformer backbone also achieves a better performance than ResNeXt-101. In general, in our extensive experiments, we find that CNN backbones (e.g. Faster R-CNN in conjunction with ResNet-101 or ResNeXt-101) are more suitable for decoders based on LSTM or Transformers with non-standard MSA (e.g. X-Transformer), while the SwinTransformer backbone is more suitable for decoders based on Transformers with standard MSA (e.g. $\mathcal{M}^2$ Transformer, the standard Transformer and our PureT). Therefore, we intend to explore a lighter and simpler Transformer-based model in our future work. \subsubsection{Influence of different Refining Encoder} \begin{table} \begin{center} \footnotesize \begin{tabular}{ccccccc} \hline Ref. Enc. & B-1 & B-4 & M & R & C & S \\ \hline w/o & 81.5 & 39.5 & 29.3 & 59.2 & 134.3 & 23.0 \\ $\mathcal{M}^2$ & 81.9 & 40.2 & 29.6 & 59.7 & 135.9 & 23.7 \\ X & 81.7 & 40.0 & 29.7 & 59.5 & 135.5 & 23.5 \\ PureT & \textbf{82.1} & \textbf{40.9} & \textbf{30.2} & \textbf{60.1} & \textbf{138.2} & \textbf{24.2} \\ \hline \end{tabular} \caption{Performance comparison of different Refining Encoders.
w/o indicates deleting the Refining Encoder; $\mathcal{M}^2$ and X indicate replacing the Refining Encoder with the encoders of $\mathcal{M}^2$ Transformer and X-Transformer respectively.} \label{table:ablation_exp_1} \end{center} \end{table} To further quantify the influence of the Refining Encoder, we ablate it with different configurations, as shown in Table~\ref{table:ablation_exp_1}. We delete the Refining Encoder to examine whether it is a necessary module, and replace our proposed Refining Encoder with the encoders of $\mathcal{M}^2$ Transformer and X-Transformer to verify its advantages. As we can see, the model without the Refining Encoder still achieves good performance, better than most existing state-of-the-art models in Table~\ref{table:offline}. However, our proposed Refining Encoder and the other encoders all bring a significant performance gain over deleting the Refining Encoder, which indicates the importance of this module. Our proposed Refining Encoder brings the largest gain and achieves the best performance among them, which demonstrates its effectiveness and advantages. \subsection{Visualization Analysis} Figure~\ref{fig:cap_example} presents some examples of image captions generated by $\mathcal{M}^2$ Transformer (official model), the standard Transformer and our PureT. Note that $\mathcal{M}^2$ Transformer adopts Faster R-CNN as the encoder, while the standard Transformer and PureT adopt SwinTransformer. Generally, our PureT is able to capture more fine-grained information and generate more accurate and descriptive captions. To qualitatively evaluate the effect of our PureT, we visualize the attention heatmaps on the image along the caption generation process in Figure~\ref{fig:att_vis}. It can be observed that our model attends to the correct areas when generating words.
When generating nominal words, such as ``zebras'', ``rainbow'', ``field'' and ``sky'', the attention heatmap correctly shifts to the body areas of the corresponding objects. In addition, our model focuses on the areas near the zebras' heads when generating ``grazing'', which correctly captures the semantic information and confirms the advantages of our model. \section{Conclusion} In this paper, we propose a pure Transformer-based model, which adopts SwinTransformer as the backbone encoder and can easily be trained end-to-end from images to descriptions. Furthermore, we construct a refining encoder to refine both the image grid features and the global feature with mutual guidance between them, which realizes the complementary advantages of local and global attention. We also fuse the refined global feature with the previously generated words in the decoder to enhance the multi-modal interaction, which further improves the modeling capability. Experimental results on the MSCOCO dataset demonstrate that our proposed model achieves a new state-of-the-art performance.
\section{Introduction} \label{sec:introduction} \IEEEPARstart{R}{ecent} trends of digitalization motivate the research interest in data-driven control~\cite{hou2013model} because of the wide availability of data collected by sensors. Instead of resorting to a first-principle model, the collected data are used either to identify a model~\cite{ljung1999system} or to construct a controller directly. The former approach is compatible with most control theory and therefore results in successful applications~\cite{lanzetti2019recurrent,kocijan2016modelling}. Featuring a controller design without any intermediate stage, the latter data-driven scheme attracts more research interest and finds successful applications in linear systems~\cite{campi2002virtual} and iterative control~\cite{hjalmarsson2002iterative,bristow2006survey}. It is worth mentioning that in the community of reinforcement learning~\cite{sutton2018reinforcement}, the learning schemes can also be categorized as model-based methods and model-free methods. This work is developed on the basis of the Willems' fundamental lemma and the Koopman operator theory. In particular, the Willems' fundamental lemma characterizes the responses of deterministic linear time-invariant (LTI) systems with measured trajectories under reasonable assumptions of controllability and persistent excitation. Based on the data-driven prediction enabled by this lemma, predictive control schemes have been developed~\cite{coulson2019data,markovsky2007linear}. Beyond predictive control, the Willems' fundamental lemma has also been adopted in feedback controller design~\cite{berberich2020robust,de2019formulas}. Within the LTI framework, the Willems' fundamental lemma is further extended to incorporate measurement noise~\cite{9103015,yin2020maximum} and process noise~\cite{berberich2020robust}. At the same time, a significant collection of works has tried to extend the applications of the Willems' fundamental lemma to nonlinear systems.
In~\cite{berberich2020trajectory}, an extension to Hammerstein systems and Wiener systems is proposed based on a priori knowledge of the basis functions. \cite{bisoffi2020data,rueda2020data,guo2020data} attempt to apply the Willems' fundamental lemma to the class of polynomial systems. As pointed out in~\cite{rueda2020data}, the necessary and sufficient condition in the fundamental lemma is broken in this setting. A promising viewpoint regarding the quotient space is proposed in~\cite{lian2021nonlinear}, which clusters trajectories into equivalence classes and extends the Willems' fundamental lemma to the reproducing kernel Hilbert space. The functional space viewpoint in~\cite{lian2021nonlinear} motivates us to apply the Willems' fundamental lemma in the function space. In particular, the Koopman operator theory is used. The Koopman operator theory was first introduced in the study of forward-complete autonomous systems~\cite{Koopman255,Koopman315}; the Koopman operator is a linear composite operator even when the system is nonlinear. \cite{korda2018linear,surana2016linear} later introduced applications of the Koopman operator in controller and observer design, followed by a wide range of research ranging from model reduction~\cite{peitz2019koopman} to global optimal control~\cite{villanueva2020towards}. Even though most algorithms based on the Koopman operator have numerical implementations that are similar to those studied in~\cite{berberich2020trajectory,bisoffi2020data,guo2020data,rueda2020data}, the Koopman operator establishes a totally different theoretical framework. In particular, the Koopman operator corresponds to a Heisenberg picture which models the evolution of the observables, while the other aforementioned methods model the evolution of the state, corresponding to a Schrödinger picture~\cite{landau2013quantum}.
The Koopman operator theory can alleviate the theoretical issue in studying the lifting functions without resorting to the quotient space, and enables convergence analysis as has been done in~\cite{korda2018convergence}. The key component of a Koopman operator based method is the learning of the eigenfunctions, or of lifting functions lying within the subspace spanned by the eigenfunctions. A standard framework of extended dynamic mode decomposition (EDMD) spans the lifting functions with a dictionary of basis functions~\cite{williams2015data}, which suffers from the curse of dimensionality. To overcome this challenge, \cite{kawahara2016dynamic,klus2020eigendecompositions} apply kernel methods to learn the Koopman operator in a non-parametric way, which is still not scalable to large datasets. In \cite{takeishi2017learning,lusch2018deep}, the lifting functions are approximated by neural networks. However, these aforementioned methods mainly consider one-step forward prediction, either due to the formulation of the learning problem or due to numerical stability, which results in relatively inaccurate long-term predictions. A link between the Koopman operator and subspace identification is observed in~\cite{lian2019learning}, which enables a learning scheme with long-term prediction. However, this method still suffers from the lack of scalability and the numerical limitations of subspace identification~\cite{qin2006overview}. In this work, we propose to incorporate the learning of a Koopman operator into the framework of the Willems' fundamental lemma. Applications of the Koopman operator in the context of the Willems' fundamental lemma are mentioned in~\cite{coulson2019data,berberich2020trajectory} but have not been detailed. In this work, we show that by maximizing the linearity of a finite-order approximation, a Koopman operator can be learned based on the sensitivity analysis of a parametric programming problem.
The proposed learning scheme is capable of uncertainty quantification, where a new objective function is derived to account for the uncertainty. Meanwhile, we propose a control scheme that solves a bi-level optimization problem by a transformation into a single-level structure. The bi-level optimization formulation has been discussed in both~\cite{coulson2019data,dorfler2021bridging}, where a bi-level problem is relaxed to a multi-objective problem. The remainder of this paper is organized as follows: the preliminary knowledge is introduced in Section~\ref{sec: pre}, after which the training and the prediction based on the proposed scheme are elaborated in Section~\ref{sec: prediction}. We explain the proposed control framework in Section~\ref{sec: control} along with presenting the numerical simulation results of prediction and control in Section~\ref{sec: simulation}. \section*{Notations} $\norm{x}_p$ indicates the $\ell_p$ norm of vector $x$ and $\norm{x}_Q^2 := x^\top Q x$ is the weighted squared norm with $Q$ being positive semi-definite. $\norm{A}_F$ and $\norm{A}_*$ denote the Frobenius norm and the nuclear norm of the matrix $A$ respectively. $\mathcal{N}(\mu, \Sigma)$ is a Gaussian distribution with mean value $\mu$ and covariance matrix $\Sigma$. We use $\mathbb{Z}_{\geq 0}$ to denote the set of non-negative integers. $\textbf{w}:=\{w_k\}_{k=a}^{b}$ is a sequence of signals $\{w_a, \dots, w_b\}$ indexed by $k$. Specifically, boldface is used to denote a sequence while lightface denotes a single measurement, \textit{e.g.} $\textbf{w}$ and $w_k$. Meanwhile, the subscript $d$ is reserved to denote the data collected offline. The superscript $^*$ is used to denote the optimal solution of an optimization problem. $\otimes$ denotes the Kronecker product. \section{Preliminary}\label{sec: pre} In this section, the Willems' fundamental lemma and the Koopman operator theory will be introduced.
Then, the sensitivity analysis of a parametric optimization problem, which is the enabler of the learning, is discussed. \subsection{Willems' Fundamental Lemma} Given a sequence of measurements $\{w_k\}_{k=0}^{T-1}$, its Hankel matrix of depth $L$ is defined as \begin{equation}\label{eq:hankel} H_L(\textbf{w}) := \begin{bmatrix} w_0 & w_1 & \dots & w_{T-L}\\ w_1 & w_2 & \dots & w_{T-L+1}\\ \vdots & \vdots & \ddots & \vdots \\ w_{L-1} & w_{L} & \dots & w_{T-1} \end{bmatrix}\;. \end{equation} Regarding a Hankel matrix $H_L(\textbf{w})$, the signal sequence $\textbf{w}$ is persistently exciting of order $L$ if $H_L(\textbf{w})$ has full row rank. The Willems' fundamental lemma utilizes Hankel matrices to characterize the response of the following deterministic linear time-invariant (LTI) system, dubbed $\mathfrak{B}(A,B,C,D)$, \begin{equation}\label{eq: linear_dynamics} \begin{aligned} x_{k+1}&=Ax_k+Bu_k\\ y_k &= Cx_k+Du_k \end{aligned}\;, \end{equation} where $A\in\mathbb{R}^{n_x\times n_x}, B\in\mathbb{R}^{n_x\times n_u}, C\in\mathbb{R}^{n_y\times n_x}, D\in\mathbb{R}^{n_y\times n_u}$ parametrize the system dynamics and the order of this system is denoted by $O(\mathfrak{B}(A,B,C,D)):=n_x$.
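As a quick numerical illustration of these definitions (an illustrative sketch, not part of the development above; function names and data are ours), the following Python snippet builds a depth-$L$ Hankel matrix and checks persistent excitation via its row rank:

```python
import numpy as np

def hankel_matrix(w, L):
    """Depth-L Hankel matrix H_L(w) of a T-step, q-dimensional signal.

    w has shape (T,) or (T, q); the result has shape (L*q, T-L+1),
    each column stacking L consecutive measurements."""
    w = np.asarray(w, dtype=float)
    if w.ndim == 1:
        w = w[:, None]
    T, q = w.shape
    n_cols = T - L + 1
    H = np.empty((L * q, n_cols))
    for i in range(L):
        H[i * q:(i + 1) * q, :] = w[i:i + n_cols].T
    return H

def is_persistently_exciting(u, L):
    """u is persistently exciting of order L iff H_L(u) has full row rank."""
    H = hankel_matrix(u, L)
    return np.linalg.matrix_rank(H) == H.shape[0]

H = hankel_matrix(np.arange(6), 3)   # rows: [0..3], [1..4], [2..5]
assert H.shape == (3, 4)
assert not is_persistently_exciting(np.ones(10), 2)   # constant input fails
```

A constant input is not persistently exciting beyond order one, while a generic random input of sufficient length typically is.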
The \textbf{Willems' fundamental lemma} is stated as \begin{lemma}\label{lemma: fundamental lemma} (\cite[\textit{Theorem 1}]{willems2005note}, \cite[\textit{Lemma 2}]{de2019formulas}) Consider a controllable and observable system (\ref{eq: linear_dynamics}). If the input sequence $\textbf{u}_d=\{u_{d,k}\}_{k=0}^{T_d-1}$ is persistently exciting of order $O(\mathfrak{B}(A,B,C,D)) + L$, then \begin{enumerate} \item Any $L$-step input/output trajectory of system (\ref{eq: linear_dynamics}) can be expressed as \begin{equation*} \begin{bmatrix} H_L(\textbf{u}_d) \\ H_L(\textbf{y}_d) \end{bmatrix} g = \begin{bmatrix} u \\ y \end{bmatrix} \end{equation*} \item Any linear combination of the columns of the Hankel matrices, that is \begin{equation*} \begin{bmatrix} H_L(\textbf{u}_d) \\ H_L(\textbf{y}_d) \end{bmatrix} g \end{equation*} is an $L$-step input/output trajectory of (\ref{eq: linear_dynamics}) \end{enumerate} \end{lemma} This lemma enables data-driven simulation and control~\cite{markovsky2008data,coulson2019data}. To make an $N$-step prediction, the Hankel matrices composed of the offline data are partitioned as \begin{equation*} \begin{aligned} \begin{bmatrix} U_p \\ U_f \end{bmatrix} := H_{T_{ini} + N}(\textbf{u}_d)\;,\; \begin{bmatrix} Y_p \\ Y_f \end{bmatrix} := H_{T_{ini} + N}(\textbf{y}_d)\;,\; \end{aligned} \end{equation*} where the first $T_{ini}$ row blocks are used to construct $U_p\;,Y_p$ while the remaining row blocks are assigned to $U_f\;,\;Y_f$. In the remainder of this paper, $n_c$ is reserved to denote the number of columns in the Hankel matrix. In particular, $T_{ini}$ is chosen to ensure the uniqueness of the prediction, i.e.\ such that the observability matrix \begin{align*} \mathcal{O}_{T_{ini}}(A,C) := \begin{bmatrix} C^\top& (CA)^\top &\dots& (CA^{T_{ini}-1})^\top \end{bmatrix}^\top\; \end{align*} has rank $O(\mathfrak{B}(A,B,C,D))=n_x$~\cite{markovsky2008data}.
Without measurement noise, the $N$-step output prediction $\textbf{y}$ is defined by \begin{align}\label{eq:lin_pred} \begin{split} \textbf{y} &= Y_fg\\ \text{s.t.}\;\; \begin{bmatrix} U_p\\ Y_p \\ U_f \end{bmatrix} g&= \begin{bmatrix} \textbf{u}_{ini}\\ \textbf{y}_{ini} \\\textbf{u} \end{bmatrix}\;, \end{split} \end{align} where $\textbf{u}_{ini}$ and $\textbf{y}_{ini}$ are the $T_{ini}$-step previous measurements of the inputs and the outputs. Accordingly, $\textbf{y}$ is the $N$-step response driven by the input sequence $\textbf{u}$. Built on this prediction scheme, the data-enabled predictive control (DeePC)~\cite{coulson2019data} is \begin{equation}\label{eq: DeePC} \begin{aligned} \min_{g, \sigma_y, u, y} &(\sum_{k=0}^{N-1}\norm{y_k-r_{t+k}}_Q^2+\norm{u_k}_R^2)\\ & +\lambda_g\norm{g}_1+\lambda_y\norm{\sigma_y}_1\\ \text{s.t.} & \begin{bmatrix} U_p\\ Y_p \\ U_f \\ Y_f \end{bmatrix} g= \begin{bmatrix} \textbf{u}_{ini}\\ \textbf{y}_{ini} \\\textbf{u} \\ \textbf{y} \end{bmatrix} + \begin{bmatrix} 0 \\ \sigma_y \\0 \\0 \end{bmatrix}\\ & u_k \in \mathcal{U}, \forall k \in {0, \dotsc, N-1}\\ & y_k \in \mathcal{Y}, \forall k \in {0, \dotsc, N-1}\;, \end{aligned} \end{equation} where $Q$ and $R$ are the weights penalizing the outputs and inputs respectively, and $\sigma_y$ is introduced to deal with measurement noise. $r\in \mathbb{R}^{pN}$ is the reference trajectory and $\mathcal{U,Y}$ are the feasible sets of inputs and outputs. $\norm{g}_1$ and $\norm{\sigma_y}_1$ are regularization terms. $\lambda_y, \lambda_g \in \mathbb{R}_{>0}$ are regularization parameters. These regularization terms have been interpreted under the distributionally robust optimization framework~\cite{coulson2019regularized} and the maximum likelihood framework~\cite{yin2020maximum}. \textit{Remark}: When a long input-output sequence is not available, the Hankel matrix $H_L(\textbf{w})$ can be replaced by a mosaic Hankel matrix~\cite{van2020willems}.
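The noise-free prediction~\eqref{eq:lin_pred} can be reproduced numerically in a few lines. The sketch below is illustrative only: it uses a scalar LTI system with made-up parameters, and a plain least-squares solve in place of a regularized solver; all names are ours:

```python
import numpy as np

def hankel(w, L):
    # Depth-L Hankel matrix of a scalar sequence w.
    T = len(w)
    return np.array([w[i:i + T - L + 1] for i in range(L)])

# Hypothetical scalar LTI system: x+ = a*x + b*u, y = x (n_x = 1).
a, b = 0.9, 0.5
rng = np.random.default_rng(1)

# Offline data: one long trajectory under a persistently exciting input.
T = 60
u_d = rng.standard_normal(T)
y_d = np.zeros(T)
x = 0.0
for k in range(T):
    y_d[k] = x
    x = a * x + b * u_d[k]

# Partition the depth-(T_ini + N) Hankel matrices into past/future blocks.
T_ini, N = 2, 5
L = T_ini + N
U, Y = hankel(u_d, L), hankel(y_d, L)
U_p, U_f = U[:T_ini], U[T_ini:]
Y_p, Y_f = Y[:T_ini], Y[T_ini:]

# A fresh trajectory supplies u_ini, y_ini and the future inputs u.
u_new = rng.standard_normal(L)
y_new = np.zeros(L)
x = 1.3
for k in range(L):
    y_new[k] = x
    x = a * x + b * u_new[k]

# Solve the consistent system [U_p; Y_p; U_f] g = [u_ini; y_ini; u]
# by least squares, then predict y = Y_f g.
A_mat = np.vstack([U_p, Y_p, U_f])
rhs = np.concatenate([u_new[:T_ini], y_new[:T_ini], u_new[T_ini:]])
g = np.linalg.lstsq(A_mat, rhs, rcond=None)[0]
y_pred = Y_f @ g
assert np.allclose(y_pred, y_new[T_ini:], atol=1e-6)
```

Since the stacked system is consistent for a true trajectory, any feasible $g$ yields the unique $N$-step response, which the final assertion confirms.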
Given $M$ trajectories: \begin{equation*} \begin{aligned} &\textbf{w}=[\textbf{w}_1,\dotsc,\textbf{w}_M], \\ \text{where each trajectory } &\textbf{w}_i=(w_{i,1},\dotsc,w_{i,T_i}), w_{i,k}\in\mathbb{R}^q \end{aligned} \end{equation*} the mosaic Hankel matrix is defined as: \begin{equation*} H_L(\textbf{w}) = [H_L(\textbf{w}_1), \dots, H_L(\textbf{w}_M)] \end{equation*} \subsection{Koopman Operator}\label{sec:koopman} Given a discrete-time autonomous system \begin{align}\label{eq:nonlin_dyn} x_{k+1} = f(x_k)\;, \end{align} where $f$ models the nonlinear dynamics, the Koopman operator is the composite operator \begin{align*} \mathcal{K}\psi := \psi\circ f\;, \end{align*} where $\psi: \mathbb{R}^{n_x}\rightarrow \mathbb{R}$ is called an observable. Unlike a standard state space model, the Koopman operator models the evolution of a function driven by the system dynamics $f$, and its existence is guaranteed for forward-complete systems~\cite{bittracher2015pseudogenerators}. As the Koopman operator acts on a function space, $\mathcal{K}$ is in general infinite-dimensional, but critically it is linear even when the dynamics $f$ are nonlinear. An observable $\phi$ is an eigenfunction associated with the eigenvalue $\lambda\in\mathbb{C}$ if $\mathcal{K}\phi = \lambda\phi$. From this we can see that the eigenfunctions (or linear combinations of the eigenfunctions) evolve linearly along the trajectories of the nonlinear system~\eqref{eq:nonlin_dyn} \begin{align} \phi(x_{k+1})=\phi(f(x_k))=(\mathcal{K}\phi)(x_k)=\lambda\phi(x_k)\;. \end{align} Given a collection of eigenfunctions $\{\phi_i\}_{i=1}^{n_\phi}$, any observable lying within the span of these eigenfunctions can be decomposed into $\psi = \sum_i c_i(\psi)\phi_i$, where $c_i(\psi)$ is called the Koopman mode of $\psi$. Then, we have \begin{align*} \mathcal{K} \psi = \sum_i c_i(\psi)\lambda_i\phi_i\;, \end{align*} with $\lambda_i$ denoting the eigenvalue of $\phi_i$.
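To make the eigenfunction property concrete, consider the simple scalar dynamics $x_{k+1} = a x_k$ (our illustrative example, not part of the text above): the monomials $x^i$ are Koopman eigenfunctions with eigenvalues $a^i$, and any observable in their span evolves linearly under $\mathcal{K}$:

```python
import numpy as np

# For x+ = a*x, phi_i(x) = x**i satisfies phi_i(f(x)) = a**i * phi_i(x),
# i.e. phi_i is a Koopman eigenfunction with eigenvalue a**i.
a = 0.8
f = lambda x: a * x
x = 1.7
for i in (1, 2, 3):
    assert np.isclose(f(x) ** i, (a ** i) * x ** i)

# An observable in the span of the eigenfunctions, psi = 2*phi_1 + 3*phi_2,
# evolves as (K psi)(x) = 2*a*phi_1(x) + 3*a**2*phi_2(x)
# (Koopman modes c_1 = 2, c_2 = 3).
psi = lambda x: 2 * x + 3 * x ** 2
assert np.isclose(psi(f(x)), 2 * a * x + 3 * a ** 2 * x ** 2)
```

The dynamics here are linear for simplicity; the same check applies whenever exact eigenfunctions of a nonlinear map are known.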
In the sequel, the subscript $_u$ is used to denote the components corresponding to a system with control inputs. Given nonlinear dynamics with control input \begin{align*} x_{k+1} = f_u(x_k,u_k)\;, \end{align*} the Koopman operator can be defined in different ways~\cite{williams2016extending,proctor2018generalizing,korda2018linear}. In this work, we consider the framework in~\cite{korda2018linear}. More specifically, denote the infinite control sequence $\boldsymbol{u}:= \{u_k\}_{k=0}^{\infty}\in\mathit{l}(\mathcal{U})$, where $\mathit{l}(\mathcal{U})$ represents the space of all control sequences. The augmented state is \begin{align*} \chi = \begin{bmatrix} x\\\boldsymbol{u} \end{bmatrix}\;, \end{align*} upon which the system dynamics are augmented as $F: \mathbb{R}^{n_x}\times \mathit{l}(\mathcal{U})\rightarrow \mathbb{R}^{n_x}\times \mathit{l}(\mathcal{U})$ \begin{align}\label{eq:Koopman_control} F(\chi_k) = \begin{bmatrix} f_u(x_k,\boldsymbol{u}_k(0))\\\mathcal{S}\boldsymbol{u}_k \end{bmatrix}\;. \end{align} $\mathcal{S}$ is the left shift operator with $\mathcal{S\boldsymbol{u}}(i):= \boldsymbol{u}(i+1)$, and $\boldsymbol{u}(i)$ is the evaluation of the $i$-th element of $\boldsymbol{u}$. In this setup, $\boldsymbol{u}$ can be considered as a sequence of mappings from the index $i$ to the actual input $u_i$. It is worth pointing out that the dynamical system~\eqref{eq:Koopman_control} is infinite-dimensional but autonomous. Hence, the aforementioned definition of the Koopman operator can be applied directly, and the corresponding eigenfunctions are assumed to be spanned by the following dictionary of basis functions. \begin{align*} \{\phi_u(x,\boldsymbol{u})\}_{i=1}^{n_{\phi_u}+n_u} := \{\phi_{u,1}(x),\dots,\phi_{u,n_{\phi_u}}(x),\boldsymbol{u}(0)^\top\}\;.
\end{align*} If the evolution of this dictionary of basis functions is closed under the system dynamics, then we have \begin{align}\label{eq:lifted_space_state} \begin{split} z_{k+1} &= \mathcal{A}z_k+\mathcal{B} \boldsymbol{u}_k(0)\\ \boldsymbol{u}_{k+1}(0) &= \boldsymbol{u}_k(1)\;, \end{split} \end{align} where $z_k:=[\phi_{u,1}(x_k),\dots,\phi_{u,n_{\phi_u}}(x_k)]$ and $\mathcal{A,B}$ capture the Koopman operator. Similarly, any function within the span of these basis functions can be recovered by the Koopman modes as \begin{align} \psi_u(x,\boldsymbol{u}(0)) = c_u^\top \begin{bmatrix} z\\\boldsymbol{u}(0) \end{bmatrix}\;, \end{align} with $c_u:=[c_{u,1},c_{u,2}\dots,c_{u,n_{\phi_u}+n_u}]$ the vector of Koopman modes. In particular, we are interested in the Koopman modes of the identity functions evaluated on the system outputs. Assume that we have $n_y$ outputs; the evaluation of the $i$-th output is $I_{y,i}(y_k):= y_{k,i}$. With a slight abuse of notation, the Koopman mode decomposition of the output evaluations is \begin{align}\label{eq:lifted_sapce_output} y_k = \begin{bmatrix} I_{y,1}(y_k)\\I_{y,2}(y_k)\\\vdots\\I_{y,n_y}(y_k) \end{bmatrix}= \begin{bmatrix} c_{u,1}^\top\\c_{u,2}^\top\\\vdots\\c_{u,n_y}^\top \end{bmatrix}\begin{bmatrix} z\\\boldsymbol{u}(0) \end{bmatrix} := C_u\begin{bmatrix} z\\\boldsymbol{u}(0) \end{bmatrix}\;, \end{align} where $C_u$ stacks the Koopman modes of the output evaluations. \subsection{Differential Parametric Optimization}\label{sec:diff_opt} Sensitivity analysis investigates the smoothness of a parametric optimization problem, where the implicit function theorem~\cite{krantz2003introduction} is applied to the KKT system. This idea has been applied to deep learning~\cite{el2019implicit} and reinforcement learning~\cite{zanon2020safe}. Though the solution map is in general barely differentiable, the optimal value function is smoother than the solution map~\cite{fiacco2020mathematical}, which is the only tool used in this work.
In general, the continuity of a general convex optimization problem is guaranteed by uniform level boundedness~\cite[Theorem 1.17]{rockafellar2009variational}, while a general nonlinear parametric optimization problem admits a lower semicontinuous value function under the assumption of local compactness~\cite{bank1982non}. For the sake of clarity, we elaborate this derivative on a standard quadratic program (QP); please refer to~\cite{agrawal2019differentiating} for a general conic form. We use the subscript $_q$ to avoid confusion. Consider a parametric QP, $\mathcal{Q}(e_q):=e_q\rightarrow z_q^*$ with parameters $\{Q_q,\;q_q,\;H_q,\;h_q,\;E_q\}$ and $Q_q$ positive definite: \begin{equation} \begin{split} \label{eqn:quaratic_program} \underset{z_q}{\min}\ &\frac{1}{2}z_q^TQ_qz_q + q_q^Tz_q \\ \text{s.t.}\ & H_qz_q \leqslant h_q, E_qz_q=e_q \end{split} \end{equation} The KKT conditions for the QP are: \begin{equation} \begin{split} \label{eqn:KKT_conditon} Q_qz_q^* + q_q + H_q^T\lambda^* + E_q^T\nu^* &= 0 \\ \text{diag}\left( \lambda^* \right) (H_qz_q^*-h_q) &= 0 \\ E_qz_q^* - e_q & = 0 \end{split} \end{equation} where $z_q^*, \nu^*, \lambda^*$ are the optimal primal and dual variables and $\text{diag}(x)$ builds a diagonal matrix composed of $x$. Then the differentials of the KKT conditions can be computed as: \begin{equation} \label{eqn:KKT_diff} \begin{split} \left[\begin{array}{ccc} Q_q & H_q^T & E_q^T \\ \text{diag}(\lambda^*)H_q & \text{diag}(H_qz_q^*-h_q) & 0 \\ E_q & 0 & 0 \end{array} \right] \left[ \begin{array}{c} dz_q \\ d\lambda \\ d\nu \end{array}\right] \\ = -\left[ \begin{array}{c} dQ_qz_q^* + dq_q + dH_q^T\lambda^* + dE_q^T\nu^* \\ \text{diag}(\lambda^*)dH_qz_q^* - \text{diag}(\lambda^*)dh_q \\ dE_qz_q^* - de_q \end{array}\right] \end{split} \end{equation} The derivatives of $z_q^*$ with respect to the parameters ($Q_q,q_q,H_q,h_q,E_q,e_q$) are given by the solution to the linear system defined in Equation~\eqref{eqn:KKT_diff}.
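For the equality-constrained case (no active inequalities), the KKT conditions reduce to a single linear system, and the sensitivity $\partial z_q^*/\partial e_q$ can be verified against finite differences. The sketch below is a hypothetical toy instance (all matrices are ours, not from the text):

```python
import numpy as np

# Toy QP: min 0.5 z'Qz + q'z  s.t.  Ez = e  (inequalities inactive).
# KKT system:  [Q E'; E 0] [z; nu] = [-q; e].
Q = np.array([[3.0, 0.5], [0.5, 2.0]])
q = np.array([1.0, -1.0])
E = np.array([[1.0, 1.0]])
K = np.block([[Q, E.T], [E, np.zeros((1, 1))]])

def solve_qp(e):
    # Solve the KKT system and return the primal solution z*(e).
    return np.linalg.solve(K, np.concatenate([-q, e]))[:2]

# Differentiating the KKT conditions w.r.t. e gives K [dz; dnu] = [0; de].
dz_de = np.linalg.solve(K, np.array([0.0, 0.0, 1.0]))[:2]

# Finite-difference check of the sensitivity at e0.
e0, eps = np.array([1.0]), 1e-6
fd = (solve_qp(e0 + eps) - solve_qp(e0 - eps)) / (2 * eps)
assert np.allclose(dz_de, fd, atol=1e-5)
```

The same mechanism, with the full system in~\eqref{eqn:KKT_diff}, handles active inequality constraints and the remaining parameters.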
For example, the solution $dz_q$ of~\eqref{eqn:KKT_diff} gives $\frac{\partial{z_q^*}}{\partial{Q_q}}$ if we set $dQ_q=I$ and the differentials of the other parameters to 0. The gradient of the optimal value $L(z_q^*)$ with respect to $Q_q$ is calculated accordingly as $\frac{\partial{L(z_q^*)}}{\partial{z_q^*}}\frac{\partial{z_q^*}}{\partial{Q_q}}$. \section{Koopman based Data-driven Prediction}\label{sec: prediction} In this section, the fundamental lemma is first introduced into the Koopman operator theory, which enables a training scheme that minimizes the prediction error with respect to the training dataset. A stochastic prediction scheme is thereby introduced to show the compatibility with probabilistic models, such as Bayesian neural networks and Gaussian processes. \subsection{Koopman Operator with the Fundamental Lemma} As discussed in Section~\ref{sec: pre}, the key component of a Koopman operator is the eigenfunctions or the linear subspace containing the eigenfunctions. Therefore, the learning of a Koopman operator is equivalent to finding functions whose evaluations evolve like a linear system~\eqref{eq: linear_dynamics}. The following corollary is the enabler of the proposed learning scheme \begin{cor}\label{cor:lin} A dynamical system of order $n_x$ can be parametrized as a linear system~\eqref{eq: linear_dynamics} if and only if the fundamental lemma holds. \end{cor} \begin{proof} The necessity holds by Lemma~\ref{lemma: fundamental lemma} and the sufficiency holds by the definition of linear systems. \end{proof} As discussed in Section~\ref{sec:koopman}, the output evaluations are assumed to be spanned by the following basis functions \begin{align*} \{\phi_u(x,\boldsymbol{u})\}_{i=1}^{n_{\phi_u}+n_u} := \{\phi_{u,1}(x),\dots,\phi_{u,n_{\phi_u}}(x),\boldsymbol{u}(0)^\top\}\;.
\end{align*} Then, given a sequence of state evolutions $\textbf{x}_d$ with its corresponding input-output sequences $\textbf{u}_d,\textbf{y}_d$, whose inputs are persistently exciting of order $n_{\phi_u}-n_u+L$, Corollary~\ref{cor:lin} implies that $\{\phi_u(x,\boldsymbol{u})\}_{i=1}^{n_{\phi_u}+n_u}$ is the desired collection of basis functions if and only if, $\forall\; x\in\mathbb{R}^{n_x}$ and any output sequence $\textbf{y}$ driven by $\textbf{u}$, there exists $g\in \mathbb{R}^{n_c}$ such that \begin{align}\label{eq:condition_lifting} \begin{bmatrix} Z\\H_L(\textbf{u}_d)\\H_L(\textbf{y}_d) \end{bmatrix}g = \begin{bmatrix} z\\\textbf{u}\\\textbf{y} \end{bmatrix}\;, \end{align} where $z:=[\phi_{u,1}(x),\dots,\phi_{u,n_{\phi_u}}(x)]^\top$ and \begin{align*} Z:= \begin{bmatrix} \phi_{u,1}(x_0)&\phi_{u,1}(x_1)&\dots&\phi_{u,1}(x_{n_c})\\ \vdots &\ddots&\ddots&\vdots\\ \phi_{u,n_{\phi_u}}(x_0)& \phi_{u,n_{\phi_u}}(x_1)&\dots&\phi_{u,n_{\phi_u}}(x_{n_c})\\ \end{bmatrix} \end{align*} \subsection{Learning the Koopman Basis Functions}\label{sec:learn_koop} Following the previous discussion, the learning of a Koopman operator is converted into learning basis functions that maximize the satisfaction of the condition~\eqref{eq:condition_lifting}. In practice, the underlying state of the nonlinear system is not necessarily measured; we therefore make the following assumption \begin{assumption}\label{ass:meas} $x_k$ is measurable with respect to the previous $T_{ini}$-step input-output sequence $\{u_i,y_i\}_{i=k-T_{ini}+1}^k$. \end{assumption} This assumption implies that $x_k$ can be determined from $\{u_i,y_i\}_{i=k-T_{ini}+1}^k$ and therefore has a similar utilization as the matrices $U_p,Y_p$ in problems~\eqref{eq: DeePC} and~\eqref{eq:lin_pred}.
Assume that we have input-output data sequences $\textbf{u}_d:=\{u_{d,i}\}_{i=0}^{n_d}$ and $\textbf{y}_d:=\{y_{d,i}\}_{i=0}^{n_d}$ consisting of $n_d$ measurements; each of them is partitioned into two subsets, namely $\textbf{u}_{d,l}:=\{u_{d,i}\}_{i=0}^{n_{d,t}}$, $\textbf{y}_{d,l}:=\{y_{d,i}\}_{i=0}^{n_{d,t}}$, $\textbf{u}_{d,t}:=\{u_{d,i}\}_{i=n_{d,t}+1}^{n_d}$ and $\textbf{y}_{d,t}:=\{y_{d,i}\}_{i=n_{d,t}+1}^{n_d}$. $n_{d,t}=n_c+T_{ini}+L-1$ is the number of datapoints in the first two sets. The subsets with subscript $_{d,l}$ are used to build the Hankel matrices characterizing the Koopman operator, while the remaining two subsets are used to learn the basis functions. Regarding Assumption~\ref{ass:meas}, a differentiable learner is used to learn the basis functions, dubbed $\{\phi_{u,\theta}\}_{i=1}^{n_{\phi_u}}$, whose parameters are denoted by $\theta$. Neural networks~\cite{goodfellow2016deep} and Gaussian processes~\cite{rasmussen2003gaussian} are recommended learners that have strong representation power. In particular, inducing variables can be considered as trainable parameters for a Gaussian process; please refer to~\cite{titsias2009variational,titsias2010bayesian} for more details. Enforcing the condition~\eqref{eq:condition_lifting} for $L$-step sequences, the learning problem is formulated as follows: \begin{align}\label{eq:learn_koop} &\begin{split} \min_{\theta} &\sum\limits_{i=n_c}^{n_d} l(\textbf{y}_{d,i}- H_L(\textbf{y}_{d,l})g_i)\\ \text{s.t.}\;&\; \\ & g_i = \text{arg}\min_g P(\textbf{u}_{d,i},\textbf{y}_{d,i})\;, \end{split} \end{align} and \begin{align*} \begin{split} P(\textbf{u}_{d,i},\textbf{y}_{d,i}):& = \lambda_g\norm{g}_2^2 +\lambda_y\norm{Zg-z}_2^2\\ \text{s.t.}\; &\; H_L(\textbf{u}_{d,l})g = \textbf{u}_{d,i}\\ &\; z = \phi_{u,\theta}(\{u_k,y_k\}_{k=i}^{i+T_{ini}-1}) \end{split}\;.
\end{align*} In particular, $\textbf{u}_{d,i}:=[u_i,\dots,u_{i+L+T_{ini}-1}]$ and $\textbf{y}_{d,i}:=[y_i,\dots,y_{i+L+T_{ini}-1}], i\geq n_{d,t}+1$ are sequences of inputs and outputs of length $L+T_{ini}$. The matrix $Z$ is the evaluation of the basis functions \begin{align*} Z:= [\phi_{u,\theta}(\{u_i,y_i\}_{i=0}^{T_{ini}}),\dots,\phi_{u,\theta}(\{u_i,y_i\}_{i=n_c-1}^{T_{ini}+n_c-1})]\;. \end{align*} The constraint in the learning problem~\eqref{eq:learn_koop} is actually a prediction problem similar to~\eqref{eq:lin_pred} and, therefore, $l(\cdot)$ penalizes the prediction error. As one may notice, there are two relaxations in the learning problem~\eqref{eq:learn_koop} \begin{enumerate} \item To recover an output evaluation, an infinite set of basis functions may be required. This learning problem learns a finite-order approximation of this possibly infinite set. \item The condition~\eqref{eq:condition_lifting} is required to be satisfied for any state; however, the learning problem relaxes this condition to a set of sampled states. Therefore, the training sets $\textbf{u}_{d,t}$ and $\textbf{y}_{d,t}$ should be large enough to represent the condition~\eqref{eq:condition_lifting}. \end{enumerate} \begin{figure*}[!htb] \centering \includegraphics[width = 0.8\textwidth]{Figures/illustration.png} \caption{Illustration of the proposed framework for learning the lifting functions} \label{fig:illustration} \end{figure*} Figure~\ref{fig:illustration} shows the flow of the learning problem, where the dashed line indicates the direction of back-propagation. More specifically, the prediction problem is considered as a parametric optimization problem whose differentiation is discussed in Section~\ref{sec:diff_opt}. Finally, we end this subsection by further listing the benefits of the proposed scheme. \begin{itemize} \item Unlike general EDMD methods, which learn the matrices $\mathcal{A,B},C_u$ in~\eqref{eq:lifted_space_state} and~\eqref{eq:lifted_sapce_output}.
The proposed scheme avoids learning these parameters: the learning of $\mathcal{A,B},C_u$ is ill-conditioned because the solution is not unique. \item The learning problem optimizes a multi-step forward prediction; meanwhile, the utilization of the Willems' fundamental lemma guarantees good numerical stability in the training scheme, which is a key challenge in training fully-connected recurrent neural networks~\cite{bengio1994learning}. \item The proposed scheme is scalable and can be parallelized. \item Unlike other nonlinear extensions of the Willems' fundamental lemma, the proposed scheme does not require a nonlinear mapping of the future inputs and outputs in the prediction problem. \end{itemize} \subsection{Stochastic Prediction}\label{subsec: pred_stoch} As shown in Section~\ref{sec:learn_koop}, the prediction problem plays a key role in the proposed scheme. If the chosen learner is deterministic, then the predicted output sequence $\tilde{\textbf{y}}$ driven by inputs $\tilde{\textbf{u}}$ is calculated by \begin{align}\label{eq:pred_koop} \tilde{\textbf{y}} &= H_L(\textbf{y}_{d,l})\tilde{g}&\\ \tilde{g} &= \text{arg}\min_g&\lambda_g\norm{g}_2^2 +\lambda_y\norm{Zg-z}_2^2\nonumber\\ &&\text{s.t.}\;\; H_L(\textbf{u}_{d,l})g = \tilde{\textbf{u}}\nonumber\\ & &\; z = \phi_{u,\theta}(\tilde{\textbf{u}}_p,\tilde{\textbf{y}}_p)\nonumber\;, \end{align} which may result in overfitting. A probabilistic learner is one solution to avoid overfitting, such as the aforementioned Gaussian processes and Bayesian neural networks. The output of a probabilistic learner is a distribution rather than a deterministic point. In this section, we will show how the distributional output of a probabilistic learner can be used for prediction, which is also essential for the training of the Koopman operator. Two methods will be discussed: one is based on Monte-Carlo sampling, while the other generates predictions by bounding the Wasserstein distance.
\subsubsection{Monte-Carlo Prediction} Assuming the distribution of the probabilistic learner is $\mathbb{P}_{\phi_u}$, a Monte-Carlo method is applied to calculate the distribution of the prediction. In particular, the matrix $Z$ in problem~\eqref{eq:pred_koop} is sampled from $\mathbb{P}_{\phi_u}$, which gives a sample from the output distribution. By the Monte-Carlo method, the output distribution can be approximated by the sampled outputs. Meanwhile, the loss function in the learning problem~\eqref{eq:learn_koop} is modified to an expected cost. In conclusion, we have the following learning problem \begin{align*} &\begin{split} \min_{\theta} &\sum\limits_{i=n_c}^{n_d} \mathbb{E}\;l(\textbf{y}_{d,i}- H_L(\textbf{y}_{d,l})g_i)\\ \text{s.t.}\;&\; \\ & g_i \sim \text{arg}\min_g P(\textbf{u}_{d,i},\textbf{y}_{d,i})\;, \end{split} \end{align*} and \begin{align*} \begin{split} P(\textbf{u}_{d,i},\textbf{y}_{d,i}):& = \lambda_g\norm{g}_2^2 +\lambda_y\norm{Zg-z}_2^2\\ \text{s.t.}\; &\; H_L(\textbf{u}_{d,l})g = \textbf{u}_{d,i}\\ &\; z \sim \mathbb{P}_{\phi_{u,\theta}}(\{u_k,y_k\}_{k=i}^{i+T_{ini}-1}) \end{split}\;, \end{align*} where the $k$-th column of the matrix $Z$ follows the distribution $\mathbb{P}_{\phi_{u,\theta}}(\{u_i,y_i\}_{i=k-1}^{T_{ini}+k-2})$. The gradient of this learning problem is also approximated by a Monte-Carlo method. \subsubsection{Wasserstein Distance based Prediction} Intuitively, the regularization term $\norm{Zg-z}_2^2$ in the prediction problem~\eqref{eq:pred_koop} can be considered as the distance between the mean of $Zg$ and $z$. To formulate a more rigorous scheme based on a probabilistic learner, we propose to minimize the Wasserstein distance between $Zg$ and $z$. First of all, each entry of the probabilistic learner's output is approximated by a Gaussian distribution.
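A minimal Monte-Carlo sketch of this idea is given below. All quantities (the statistics of $Z$ and $z$, the Hankel blocks, the problem sizes) are synthetic stand-ins for the learned ones; the inner regularized problem is solved through its equality-constrained KKT system as in Section~\ref{sec:diff_opt}:

```python
import numpy as np

# Monte-Carlo propagation of lifted-state uncertainty through the
# regularized prediction problem. All data below are synthetic stand-ins.
rng = np.random.default_rng(0)
n_phi, n_c, L = 3, 10, 4
Z = rng.standard_normal((n_phi, n_c))       # stand-in for the matrix Z
H_u = rng.standard_normal((L, n_c))         # stand-in Hankel blocks
H_y = rng.standard_normal((L, n_c))
u = rng.standard_normal(L)                  # future input sequence
mu_z, sigma_z = rng.standard_normal(n_phi), 0.1 * np.ones(n_phi)
lam_g, lam_y = 1e-3, 1.0

samples = []
for _ in range(200):
    # Sample the lifted state z from the (Gaussian) learner output.
    z = mu_z + sigma_z * rng.standard_normal(n_phi)
    # min lam_g*||g||^2 + lam_y*||Zg - z||^2  s.t.  H_u g = u, via KKT:
    #   [Qm  H_u'; H_u 0] [g; nu] = [-qv; u].
    Qm = 2 * (lam_g * np.eye(n_c) + lam_y * Z.T @ Z)
    qv = -2 * lam_y * Z.T @ z
    K = np.block([[Qm, H_u.T], [H_u, np.zeros((L, L))]])
    g = np.linalg.solve(K, np.concatenate([-qv, u]))[:n_c]
    samples.append(H_y @ g)

samples = np.asarray(samples)
y_mean, y_std = samples.mean(axis=0), samples.std(axis=0)  # output statistics
```

The empirical mean and standard deviation of the sampled outputs approximate the predictive distribution.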
Specifically, the $i$-th column of the matrix $Z$ is approximated by $\mathcal{N}(\mu_{i,l},\Sigma_{i,l})$, whose covariance matrix is diagonal \begin{align*} \Sigma_{i,l} = \begin{bmatrix} \sigma_{i,1,l}^2&&\\ &\ddots&\\ &&\sigma_{i,n_{\phi_u},l}^2 \end{bmatrix}\;, \end{align*} and we denote the vector composed of the diagonal elements by $\boldsymbol{\sigma}_{i,l}^2$. Accordingly, the $i$-th element of the vector $z$ is approximated by $\mathcal{N}(\mu_i,\sigma^2_i)$ and we denote $z\sim\mathcal{N}(\mu_z,\Sigma_z)$. \begin{remark} \begin{itemize} \item It is noteworthy that if a Gaussian process is used to learn the basis functions, no approximation is required as the output is already Gaussian. \item To better satisfy the condition of a diagonal covariance matrix $\Sigma_{i,l}$, it is recommended to replace the Hankel matrices with Page matrices. In comparison with the definition in~\eqref{eq:hankel}, a depth-$L$ Page matrix of a sequence $\textbf{w}$ is defined as \begin{equation*} \mathfrak{P}_L(\textbf{w}) := \begin{bmatrix} w_0 & w_L & \dots & w_{(M-1)L}\\ w_1 & w_{L+1} & \dots & w_{(M-1)L+1}\\ \vdots & \vdots & \ddots & \vdots \\ w_{L-1} & w_{2L - 1} & \dots & w_{ML-1} \end{bmatrix}\;. \end{equation*} \end{itemize} \end{remark} Based on this approximation, $Zg$ turns out to be Gaussian as well, which is denoted by $Zg\sim \mathcal{N}(\mu_{Zg},\Sigma_{Zg})$ for compactness.
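Before the bound is stated, the following numerical sketch (illustrative values only, chosen by us) checks that, for Gaussians with diagonal covariances, replacing the exact squared $W_2$ distance by a nuclear-norm surrogate indeed yields an upper bound:

```python
import numpy as np

# Two Gaussians with diagonal covariances: exact squared W2 distance
# vs. the nuclear-norm upper bound (Powers-Stormer relaxation).
mu1, mu2 = np.array([0.0, 1.0]), np.array([0.5, 0.2])
s1, s2 = np.array([1.0, 0.5]), np.array([0.8, 0.9])   # diagonal variances

# Exact (commuting case): ||mu1-mu2||^2 + ||S1^(1/2) - S2^(1/2)||_F^2.
w2_sq = np.sum((mu1 - mu2) ** 2) + np.sum((np.sqrt(s1) - np.sqrt(s2)) ** 2)

# Bound: ||mu1-mu2||^2 + ||S1 - S2||_*; for diagonal matrices the
# nuclear norm is the sum of absolute differences of the diagonals.
bound = np.sum((mu1 - mu2) ** 2) + np.sum(np.abs(s1 - s2))
assert w2_sq <= bound
```

Elementwise, the check reduces to $(\sqrt{a}-\sqrt{b})^2 \leq |a-b|$, which holds for all non-negative $a,b$.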
To enable a prediction scheme, we conclude the following lemma \begin{lemma} The squared second Wasserstein distance between $Zg$ and $z$ is bounded by \begin{align}\label{eq:wass_bound} W_2^2(Zg,z)\leq\norm{\mu_{Zg} - \mu_{z}}_2^2 + \norm{\Sigma_{Zg} - \Sigma_{z}}_* \end{align} \end{lemma} \begin{proof} The squared second Wasserstein distance between two Gaussian distributions~\cite{villani2008optimal} is \begin{equation}\label{eq: wasserstein_distance_1} \begin{aligned} &W_2^2(\mathcal{N}(\mu_{Zg}, \Sigma_{Zg}), \mathcal{N}(\mu_z, \Sigma_z)) \\ = &\norm{\mu_{Zg} - \mu_z}_2^2 + \text{Tr}\left(\Sigma_{Zg} + \Sigma_z\right) \\ - &2\text{Tr}\left(\left(\Sigma_{Zg}^{\frac{1}{2}}\Sigma_z\Sigma_{Zg}^{\frac{1}{2}}\right)^{\frac{1}{2}}\right) \end{aligned}\;, \end{equation} where $\mu_{Zg}= \sum\limits_{i=1}^{n_c}\mu_{i,l}g_i$ with $g_i$ being the $i$-th entry of $g$. The covariance matrix of $Zg$ is calculated as follows \begin{align*} \Sigma_{Zg} = (g^\top\otimes I)\text{Cov}(\text{vec}(Z))(g\otimes I)\;, \end{align*} with \begin{align*} \text{Cov}(\text{vec}(Z)) = \begin{bmatrix} \Sigma_{1,l} & & \\ & \ddots & \\ & & \Sigma_{n_c,l} \end{bmatrix}\;, \end{align*} and therefore we conclude \begin{align}\label{eq: sigma_Zpg} \Sigma_{Zg} = \sum\limits_{i=1}^{n_c} g_i^2\Sigma_{i,l}\;. \end{align} Since both $\Sigma_{Zg}$ and $\Sigma_{z}$ are diagonal, they commute, i.e.\ $\Sigma_{Zg}\Sigma_{z} = \Sigma_{z}\Sigma_{Zg}$, so \eqref{eq: wasserstein_distance_1} can be reformulated as: \begin{equation}\label{eq: wasserstein_distance_2} \begin{aligned} &W_2^2(\mathcal{N}(\mu_{Zg}, \Sigma_{Zg}), \mathcal{N}(\mu_{z}, \Sigma_{z})) \\ = &\norm{\mu_{Zg} - \mu_{z}}_2^2 + \text{Tr}(\Sigma_{Zg} + \Sigma_{z} -2(\Sigma_{Zg}\Sigma_{z})^{1/2})\\ = &\norm{\mu_{Zg} - \mu_{z}}_2^2 + \text{Tr}((\Sigma_{Zg}^{1/2} - \Sigma_{z}^{1/2})^\top(\Sigma_{Zg}^{1/2} - \Sigma_{z}^{1/2}))\\ = &\norm{\mu_{Zg} - \mu_{z}}_2^2 + \norm{\Sigma_{Zg}^{1/2} - \Sigma_{z}^{1/2}}_F^2 \end{aligned} \end{equation} where $\norm{\cdot}_F$ denotes the Frobenius norm. This objective function has a clear interpretation.
The first term quantifies the distance between the means of these two Gaussian distributions, while the second measures the discrepancy between the covariance matrices. If the derived $\Sigma_{Zg}$ in~\eqref{eq: sigma_Zpg} is substituted into~\eqref{eq: wasserstein_distance_2}, the evaluation of the resulting metric is numerically ill-conditioned. The Frobenius norm is therefore further relaxed with the Powers-Størmer inequality~\cite{powers1970free}: \begin{equation} 2\text{Tr}(A^\alpha B^{1-\alpha}) \geq \text{Tr}(A + B - |A - B|)\;,\quad 0 \leq \alpha \leq 1\;, \end{equation} where $A$, $B$ are positive semidefinite and $|A|$ is the positive square root of the matrix $A^*A$. We then have: \begin{equation*} \begin{aligned} &W_2^2(\mathcal{N}(\mu_{Zg}, \Sigma_{Zg}), \mathcal{N}(\mu_{z}, \Sigma_{z})) \\ \leq &\norm{\mu_{Zg} - \mu_{z}}_2^2 + \text{Tr}(|\Sigma_{Zg} - \Sigma_{z}|)\\ = &\norm{\mu_{Zg} - \mu_{z}}_2^2 + \norm{\Sigma_{Zg} - \Sigma_{z}}_* \end{aligned}\;, \end{equation*} with $\norm{\cdot}_*$ denoting the nuclear norm. \end{proof} Based on this lemma, the prediction problem with a probabilistic learner is reformulated as \begin{align}\label{eq:pred_koop_wass} \tilde{\textbf{y}} &= H_L(\textbf{y}_{d,l})\tilde{g}&\\ \tilde{g} &= \text{arg}\min_g&\norm{\mu_{Zg} - \mu_{z}}_2^2 + \norm{\Sigma_{Zg} - \Sigma_{z}}_*\nonumber\\ &&\text{s.t.}\;\; H_L(\textbf{u}_{d,l})g = \tilde{\textbf{u}}\nonumber\\ & &\; z = \phi_{u,\theta}(\tilde{\textbf{u}}_p,\tilde{\textbf{y}}_p)\nonumber\;. \end{align} \begin{remark}\label{rmk:huber} The upper bound~\eqref{eq:wass_bound} is non-smooth, as the absolute value evaluation is ill-conditioned around $0$~\cite[Chapter 3]{beck2017first}.
The absolute value is smoothed by an approach similar to a Huber loss~\cite[Chapter 2]{rockafellar2009variational}, which is defined as: \begin{equation*} L_{\delta}(a) = \begin{cases} \frac{1}{2}a^2 & \text{for } |a|<\delta\\ \delta(|a|-\frac{1}{2}\delta) & \text{otherwise} \end{cases} \end{equation*} The evaluation of the $i$-th diagonal element of $|\Sigma_{Zg} - \Sigma_{z}|$ is then approximated by \begin{equation*} \begin{aligned} &(|\Sigma_{Zg} - \Sigma_{z}|)_{ii} \\ = &\begin{cases} \frac{1}{2}(\sigma_{Zg, i}^2 - \sigma_{z, i}^2)^2 & \quad |\sigma_{Zg, i}^2 - \sigma_{z, i}^2|<\delta\\ \delta(|\sigma_{Zg, i}^2 - \sigma_{z, i}^2|-\frac{1}{2}\delta) & \quad\text{otherwise} \end{cases} \end{aligned}\;. \end{equation*} \end{remark} \section{Koopman-based Data-driven Predictive Control}\label{sec: control} The DeePC framework (\ref{eq: DeePC}) can be extended to nonlinear systems by integrating the equality constraints with (\ref{eq:condition_lifting}). However, the DeePC formulation suffers from the problem that the prediction step interweaves with the control step. In other words, the algorithm may use a non-optimal prediction result for control. When the penalty factor $\lambda_y$ is not sufficiently large and the system is initialized away from the reference, the algorithm tends to compensate the difference with a relatively large $\sigma_y$, which will result in control failure. To tackle this problem, we propose a bi-level programming formulation~\cite{dempe2002foundations}, where the prediction step is independent of the control step.
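Before setting up the bi-level problem, the Wasserstein-based objective of the previous section can be sanity-checked numerically: for diagonal Gaussians, the nuclear-norm relaxation upper-bounds the exact squared 2-Wasserstein distance, and the Huber-type smoothing of Remark~\ref{rmk:huber} is straightforward to implement. The values below are illustrative, not taken from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

def huber(a, delta=0.1):
    """Smoothed absolute value (Huber-type), applied elementwise."""
    a = np.asarray(a, dtype=float)
    return np.where(np.abs(a) < delta,
                    0.5 * a**2,
                    delta * (np.abs(a) - 0.5 * delta))

# Two diagonal Gaussians with illustrative moments.
mu1, mu2 = rng.standard_normal(5), rng.standard_normal(5)
s1, s2 = rng.uniform(0.5, 2.0, 5), rng.uniform(0.5, 2.0, 5)  # std devs

# Exact squared 2-Wasserstein distance for commuting (diagonal) covariances.
w2_exact = np.sum((mu1 - mu2) ** 2) + np.sum((s1 - s2) ** 2)

# Powers-Stormer relaxation: nuclear norm of the covariance difference.
w2_bound = np.sum((mu1 - mu2) ** 2) + np.sum(np.abs(s1**2 - s2**2))
```

The bound holds since $|\sigma_1^2-\sigma_2^2| = (\sigma_1+\sigma_2)|\sigma_1-\sigma_2| \geq (\sigma_1-\sigma_2)^2$ for non-negative standard deviations.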
\begin{subequations}\label{eq: bilevel formulation} \begin{equation}\label{eq: bilevel_control} \begin{aligned} \min_{\mathbf{u}, \mathbf{y}, g} &(\sum_{k=0}^{N-1}\norm{y_k-r_{t+k}}_Q^2+\norm{u_k}_R^2)\\ \text{subject to } & u_k \in \mathcal{U}, \forall k \in {0, \dotsc, N-1}\\ & y_k \in \mathcal{Y}, \forall k \in {0, \dotsc, N-1}\\ &Y_fg = \mathbf{y}\\ \text{for some } &g \in \Phi(\mathbf{u})\\ \end{aligned} \end{equation} \begin{equation}\label{eq: bilevel_prediction} \begin{aligned} \Phi(\mathbf{u}) = \text{arg}\min_{g}& \lambda_g\norm{g}_2^2+\lambda_y\norm{ Zg - z}_2^2\\ \text{subject to} & \begin{bmatrix} U_p \\ U_f \end{bmatrix} g= \begin{bmatrix} \mathbf{u}_{ini} \\ \mathbf{u} \end{bmatrix} \\ \text{with parameter } & z = \phi_{u,\theta}(\textbf{u}_{ini},\textbf{y}_{ini}) \end{aligned} \end{equation} \end{subequations} The bi-level problem introduces a hierarchical structure where the upper level problem (\ref{eq: bilevel_control}) represents the control step and the lower level problem (\ref{eq: bilevel_prediction}) functions as the prediction step. Note that compared to DeePC (\ref{eq: DeePC}), in (\ref{eq: bilevel_prediction}) the squares of the two norms are used so that the objective function of the lower level problem remains smooth. A common approach to solving a bi-level problem is to transform it into a single-level problem. Applying optimality conditions and introducing an optimal value function are the two main categories of transformation approaches if the bi-level problem fulfills certain conditions \cite[Chapter 5]{dempe2002foundations}.
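As a hedged sketch of the optimality-condition route (illustrative stand-in data, not the paper's implementation): the lower level problem~(\ref{eq: bilevel_prediction}) is an equality-constrained quadratic program, so solving its KKT linear system yields an element of $\Phi(\mathbf{u})$, and the stationarity and feasibility residuals can be checked directly.

```python
import numpy as np

rng = np.random.default_rng(2)
lam_g, lam_y = 1e-2, 1.0

# Illustrative problem data (stand-ins for the Hankel blocks and lifted data).
n = 15
Z = rng.standard_normal((6, n))
z = rng.standard_normal(6)
Up = rng.standard_normal((2, n))
Uf = rng.standard_normal((4, n))
U = np.vstack([Up, Uf])
u = rng.standard_normal(U.shape[0])

# KKT system of the lower-level QP:
#   [H  U^T] [g ]   [2*lam_y*Z^T z]
#   [U   0 ] [mu] = [      u      ]
H = 2 * lam_g * np.eye(n) + 2 * lam_y * Z.T @ Z
m = U.shape[0]
KKT = np.block([[H, U.T], [U, np.zeros((m, m))]])
sol = np.linalg.solve(KKT, np.concatenate([2 * lam_y * Z.T @ z, u]))
g, mu = sol[:n], sol[n:]

# Stationarity residual: 2*lam_g*g + 2*lam_y*Z^T(Zg - z) + U^T mu should vanish.
stat = 2 * lam_g * g + 2 * lam_y * Z.T @ (Z @ g - z) + U.T @ mu
```

Embedding exactly these KKT equations as constraints of the upper level problem is what produces the single-level reformulation discussed next.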
Here we present the result of replacing the lower level problem (\ref{eq: bilevel_prediction}) with its KKT conditions: \begin{equation}\label{eq: single level} \begin{aligned} \min_{g, \mathbf{u}, \mathbf{y}, \mu_1, \mu_2} &(\sum_{k=0}^{N-1}\norm{y_k-r_{t+k}}_Q^2+\norm{u_k}_R^2)\\ \text{subject to} & \begin{bmatrix} U_p \\ U_f \\ Y_f \end{bmatrix} g= \begin{bmatrix} \mathbf{u}_{ini}\\ \mathbf{u} \\ \mathbf{y} \end{bmatrix} \\ 2\lambda_g g^\top + 2\lambda_y(&Zg - z)^\top Z + \mu_1U_p + \mu_2U_f = 0 \\ & u_k \in \mathcal{U}, \forall k \in {0, \dotsc, N-1}\\ & y_k \in \mathcal{Y}, \forall k \in {0, \dotsc, N-1} \end{aligned} \end{equation} This equivalent single-level problem is solvable by many optimization toolboxes. \begin{remark} Solving a bi-level problem is in general NP-hard. To the best of our knowledge, there is no valid approach to solve a general bi-level optimization problem whose lower level problem is non-convex. It is worth mentioning that the Wasserstein distance based prediction problem presented in Section~\ref{subsec: pred_stoch} is non-convex. We therefore leave the control formulation integrated with Wasserstein distance based prediction as future work. \end{remark} The algorithm is summarized as follows: \begin{algorithm}[htbp] \label{alg: Koopman based DeePC} \SetAlgoLined \For{$t=T_{ini},\dotsc$}{ Set $z = \phi_{u,\theta}(\textbf{u}_{ini},\textbf{y}_{ini})$ \; Solve (\ref{eq: single level}) to obtain an optimal input sequence $\boldsymbol{u}^*$ \; Set $u_t = \boldsymbol{u}^*(0)$ \; Apply $u_t$ to the system and measure $y_t$ } \caption{Koopman based DeePC} \end{algorithm} where $\boldsymbol{u}^*(0)$ denotes the first element of $\boldsymbol{u}^*$. \section{Simulation results}\label{sec: simulation} In this section, the prediction results on a Van der Pol oscillator based on Monte-Carlo prediction and Wasserstein distance based prediction are first illustrated. Then a numerical experiment of controlling a bilinear motor is presented.
We finally demonstrate the potential of the proposed scheme in large-scale problems with an example of controlling the nonlinear Korteweg-de Vries equation. The source code of the numerical examples can be accessed through \url{https://github.com/RencciW/DataDrivenControlCode}. \subsection{Stochastic prediction} We show the results of prediction of trajectories from a Van der Pol oscillator \begin{equation*} \begin{aligned} \dot{x} = \begin{bmatrix} x_2 \\ \mu(1-x_1^2)x_2-x_1+u \end{bmatrix} \end{aligned} \end{equation*} with $\mu = 1$. We train a 5-layer network (with layer widths 2, 12, 22, 12 and 12) on 1100 data points sampled from 100 random trajectories generated by the Van der Pol oscillator. A dropout layer is attached to each layer except the input layer. ReLU is chosen as the activation function for each hidden layer and Adam as the optimizer with learning rate $10^{-3}$. The dropout rate is set to $0.2$. The code is implemented with PyTorch \cite{paszke2017automatic}. From each of the 100 trajectories, we sample 3 trajectory fragments for the construction of the Hankel matrix. For a better comparison with the following results, in the test phase, we choose 3 trajectory fragments from each of 24 trajectories to formulate the Hankel matrix with $T_{ini}=1$ and $N=10$. The trained network is tested on 50 data points sampled from trajectories that are independent of the data used for training and Hankel matrix formulation. The same data are forwarded 120 times through the network to compute the mean value and standard deviation of the prediction. The results are shown below. The light blue color indicates two times the standard deviation.
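The trajectory-generation step can be sketched as follows; the fixed-step RK4 integrator, step size, horizon and excitation scheme are assumptions for illustration, since the paper does not specify them here.

```python
import numpy as np

def vdp(x, u, mu=1.0):
    """Van der Pol vector field with additive input on the second state."""
    return np.array([x[1], mu * (1 - x[0] ** 2) * x[1] - x[0] + u])

def rk4_step(x, u, h=0.05):
    """One classical Runge-Kutta step with zero-order-hold input."""
    k1 = vdp(x, u)
    k2 = vdp(x + 0.5 * h * k1, u)
    k3 = vdp(x + 0.5 * h * k2, u)
    k4 = vdp(x + h * k3, u)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(3)

def sample_trajectory(T=11):
    """One trajectory of T states from a random initial condition
    with uniformly random inputs (an assumed excitation scheme)."""
    x = rng.uniform(-1, 1, 2)
    traj = [x]
    for _ in range(T - 1):
        x = rk4_step(x, rng.uniform(-1, 1))
        traj.append(x)
    return np.array(traj)

data = np.stack([sample_trajectory() for _ in range(100)])  # 100 x 11 x 2
```

Fragments of such trajectories are then what populate the columns of the Hankel (or Page) matrices used above.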
\begin{figure}[htbp] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/original_hankel_x_1.png} \caption{Prediction result ($x_1$)} \end{subfigure} \vfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/original_hankel_x_2.png} \caption{Prediction result ($x_2$)} \end{subfigure} \caption{Prediction result using Koopman operator learned by MC dropout} \label{fig: original_hankel_x} \end{figure} \subsection{Prediction based on Wasserstein distance} We test the new loss function with the Van der Pol data. From each of 24 different trajectories, we choose one trajectory fragment to formulate the Page matrix. We first lift the data for the Page matrix construction with the dropout neural network trained in the last subsection. The mean value and the standard deviation of the lifted data are computed for estimation. The data are then passed to the prediction problem to obtain an optimizer $g^*$, and the future trajectory is predicted with $x=X_fg^*$, where $X_f$ is the Hankel matrix block for prediction. \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/nuclear_smooth_x_1.png} \caption{Prediction result $(x_1)$} \end{subfigure} \vfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/nuclear_smooth_x_2.png} \caption{Prediction result $(x_2)$} \end{subfigure} \caption{Prediction result using loss function derived from Wasserstein distance} \label{fig: nuclear_x} \end{figure} We compute the mean squared error (MSE) of the prediction based on the proposed Wasserstein loss and the original quadratic loss at the $9$-th time step.
The comparison is summarized in the following table: \begin{table}[!htb] \centering \begin{tabular}{l|c c} \hline & $x_1$ & $x_2$\\ \hline Wasserstein loss & 0.0817 & 0.0855\\ Original loss & 3.8707 & 1.3909\\ \hline \end{tabular} \caption{MSE comparison between prediction results computed with different loss functions} \label{tab: loss comparison} \end{table} It is clear that the Wasserstein loss outperforms the original quadratic loss when the uncertainty of the lifting function is considered. \subsection{Control with Koopman-based DeePC} \subsubsection{Control of a bilinear motor} We first compare our control algorithm with the Koopman operator-based MPC controller (K-MPC) proposed in \cite{korda2018linear} by controlling a bilinear model of a DC motor~\cite{DANIELBERHE1997203} \begin{equation*} \begin{aligned} \dot{x}_1 &= - (R_a/L_a)x_1 - (k_m/L_a)x_2u + u_a/L_a\\ \dot{x}_2 &=-(B/J)x_2 + (k_m/J)x_1u-\tau_1/J\\ y &= x_2 \end{aligned} \end{equation*} where $x_1$ is the rotor current, $x_2$ the angular velocity, the control input $u$ is the stator current and the output $y$ is the angular velocity. The parameters are $L_a=0.314, R_a=12.345, k_m=0.253, J = 0.00441, B = 0.00732, \tau_1=1.47, u_a=60$. The physical constraints on the control input are $u\in [-1, 1]$. We use 40 trajectories with time horizon $0.25s$ to construct a mosaic Hankel matrix. All trajectories are randomly initialized on the unit box $[-1, 1]^2$. The control input follows a uniform distribution over the interval $[-1, 1]$. We choose 40 thin plate spline radial basis functions, with centers selected randomly from a uniform distribution over $[-1, 1]^3$, as lifting functions. Since the system states are not directly measurable, we choose the number of delays $n_d=1$. We define $C =[1, 0, \dotsc, 0]$, $Q= Q_{N_p}=10$, $R=0.01$. The prediction horizon is $N = 10$, which corresponds to $0.1s$. Since the system is linear in the lifting space, we choose $T_{ini} = 1$.
The reference is designed as $r(t) = 0.5\cos(2\pi t/3)$. Constraints $y \in [-0.4, 0.4]$ are introduced on the output. We simulate for $3s$ and compare the result with the model-based method K-MPC proposed in \cite{korda2018linear}. \begin{figure}[htbp] \centerline{\includegraphics[width = 0.4\textwidth]{Figures/motor_input.jpg}} \caption{Feedback control input of a bilinear motor} \label{fig: motor_input} \end{figure} \begin{figure}[htbp] \centerline{\includegraphics[width = 0.4\textwidth]{Figures/motor.jpg}} \caption{Angular velocity of a bilinear motor} \label{fig: motor_output} \end{figure} As shown in Figures~\ref{fig: motor_input} and \ref{fig: motor_output}, the algorithm is capable of following the reference without violating constraints, although compared to K-MPC, the input computed by our method oscillates slightly when the input trajectory is non-smooth. \subsubsection{Control of the nonlinear Korteweg–de Vries equation} Our next simulation is to control the nonlinear Korteweg–de Vries (KdV) equation, which models the propagation of acoustic waves in a plasma or shallow-water waves \cite{miura1976korteweg}. The equation is given as: \begin{equation*} \frac{\partial y(t, x)}{\partial t} + y(t, x)\frac{\partial y(t, x)}{\partial x} + \frac{\partial ^3y(t, x)}{\partial x^3} = u(t, x) \end{equation*} where $y(t, x)$ is the unknown function, $u(t, x)$ the control input and $x\in[-\pi, \pi]$. The space is discretized into 128 points and the time step is $\Delta t = 0.02s$. The input is assumed to be of the form $u(t, x)= \sum_{i = 1}^3 u_i(t)v_i(x)$, where the $v_i$ are 3 spatial basis functions $v_i(x) = e^{-25(x-c_i)^2}$ with $c_1=-\pi/2$, $c_2=0$, $c_3=\pi/2$. The input is constrained to $[-1, 1]$. We initialize the system by convexly combining 3 fixed spatial profiles $y_0^1=e^{-(x-\pi/2)^2}$, $y_0^2=-\sin^2(x/2)$, $y_0^3=e^{-(x+\pi/2)^2}$.
We choose the state itself, the elementwise square of the state and the elementwise product of the state with its periodic shift as the lifting functions. $Q$ is the identity matrix and $R$ is the zero matrix. The prediction horizon is $N = 5$, which corresponds to $0.1s$. $T_{ini}$ remains equal to $1$. The Hankel matrix is formulated with 63 trajectories, each of which is simulated for $0.5s$. \begin{figure}[htbp] \centerline{\includegraphics[width = 0.38\textwidth]{Figures/KdV_input.jpg}} \caption{Feedback control input of KdV} \label{fig: kdV_input} \end{figure} \begin{figure}[htbp] \centerline{\includegraphics[width = 0.38\textwidth]{Figures/KdV_output.jpg}} \caption{Tracking result} \label{fig: KdV_output} \end{figure} Despite the large dimension, the algorithm is still capable of computing the optimal input in an acceptable time and tracking the reference. \section{Conclusion}\label{sec: conclusion} In this work, we extend a data-driven predictive control method to nonlinear systems. The underlying idea is to lift the system with the Koopman operator into an infinite-dimensional space where the system evolves linearly along the nonlinear system trajectories. An approximation of nonlinear lifting functions based on a purely data-driven framework is proposed, along with considerations of the uncertainty of the approximation, enabling a novel data-driven simulation scheme based on the Wasserstein distance. \section{Introduction} \label{sec:introduction} \IEEEPARstart{T}{he} recent trend of digitalization motivates the research interest in data-driven control~\cite{hou2013model} because of the wide availability of data collected by sensors. Instead of resorting to a first-principle model, the collected data are used either to identify a model~\cite{ljung1999system} or to construct a controller directly.
The former approach is compatible with most control theory and therefore results in successful applications~\cite{lanzetti2019recurrent,kocijan2016modelling}. Featuring a controller design without any intermediate stage, the latter data-driven scheme attracts more research interest and finds successful applications in linear systems~\cite{campi2002virtual} and iterative control~\cite{hjalmarsson2002iterative,bristow2006survey}. It is worth mentioning that in the community of reinforcement learning~\cite{sutton2018reinforcement}, the learning schemes can also be categorized as model-based methods and model-free methods. This work is developed on the basis of the Willems' fundamental lemma and the Koopman operator theory. In particular, the Willems' fundamental lemma characterizes the responses of deterministic linear time-invariant (LTI) systems with measured trajectories under reasonable assumptions of controllability and persistent excitation. Based on the data-driven prediction enabled by this lemma, predictive control schemes have been developed~\cite{coulson2019data,markovsky2007linear}. Beyond predictive control, the Willems' fundamental lemma has also been adopted in feedback controller design~\cite{berberich2020robust,de2019formulas}. Within the LTI framework, the Willems' fundamental lemma is further extended to incorporate measurement noise~\cite{9103015,yin2020maximum} and process noise~\cite{berberich2020robust}. At the same time, a significant collection of works has tried to extend the applications of the Willems' fundamental lemma to nonlinear systems. In~\cite{berberich2020trajectory}, an extension to Hammerstein systems and Wiener systems is proposed based on a priori knowledge of basis functions. \cite{bisoffi2020data,rueda2020data,guo2020data} attempt to apply the Willems' fundamental lemma to the class of polynomial systems. As pointed out in~\cite{rueda2020data}, the necessary and sufficient condition in the fundamental lemma no longer holds in this setting.
A promising viewpoint regarding the quotient space is proposed in~\cite{lian2021nonlinear}, which clusters trajectories into equivalence classes and extends the Willems' fundamental lemma to a reproducing kernel Hilbert space. The functional space viewpoint in~\cite{lian2021nonlinear} motivates us to apply the Willems' fundamental lemma in a function space. In particular, the Koopman operator theory is used. The Koopman operator was first introduced in the study of forward-complete autonomous systems~\cite{Koopman255,Koopman315}; it is a linear composition operator even when the system is nonlinear. \cite{korda2018linear,surana2016linear} later introduced applications of the Koopman operator in controller and observer design, followed by a wide range of research ranging from model reduction~\cite{peitz2019koopman} to global optimal control~\cite{villanueva2020towards}. Even though most algorithms based on the Koopman operator have numerical implementations that are similar to those studied in~\cite{berberich2020trajectory,bisoffi2020data,guo2020data,rueda2020data}, the Koopman operator establishes a totally different theoretical framework. In particular, the Koopman operator corresponds to a Heisenberg picture which models the evolution of observables, while the other aforementioned methods model the evolution of the state, corresponding to a Schrödinger picture~\cite{landau2013quantum}. The Koopman operator theory can alleviate the theoretical issue in studying the lifting function without resorting to the quotient space, and enables convergence analysis as has been done in~\cite{korda2018convergence}. The key component of a Koopman operator based method is the learning of the eigenfunctions, or of the lifting functions lying within the subspace spanned by the eigenfunctions.
A standard framework of extended dynamic mode decomposition (EDMD) spans the lifting functions with a dictionary of basis functions~\cite{williams2015data}, which suffers from the curse of dimensionality. To overcome this challenge, \cite{kawahara2016dynamic,klus2020eigendecompositions} apply kernel methods to learn the Koopman operator in a non-parametric way, which is still not scalable to large datasets. In \cite{takeishi2017learning,lusch2018deep}, the lifting functions are approximated by neural networks. However, these aforementioned methods mainly consider one-step forward prediction, either due to the formulation of the learning problem or due to numerical stability, which results in relatively inaccurate long-term predictions. A link between the Koopman operator and subspace identification is observed in~\cite{lian2019learning}, which enables a learning scheme with long-term prediction. However, this method still suffers from the lack of scalability and the numerical limitations of subspace identification~\cite{qin2006overview}. In this work, we propose to incorporate the learning of a Koopman operator into the framework of the Willems' fundamental lemma. The applications of the Koopman operator in the Willems' fundamental lemma are mentioned in~\cite{coulson2019data,berberich2020trajectory} but have not been detailed. In this work, we show that by maximizing the linearity of a finite-order approximation, a Koopman operator can be learned based on the sensitivity analysis of a parametric programming problem. The proposed learning scheme is capable of uncertainty quantification, where a new objective function is derived to account for the uncertainty. Meanwhile, we propose a control scheme that solves a bi-level optimization problem by a transformation into a single-level structure. The bi-level optimization formulation has been discussed in both~\cite{coulson2019data,dorfler2021bridging}, where a bi-level problem is relaxed to a multi-objective problem.
The remainder of this paper is organized as follows: the preliminary knowledge is introduced in Section~\ref{sec: pre}, after which the training and the prediction based on the proposed scheme are elaborated in Section~\ref{sec: prediction}. We explain the proposed control framework in Section~\ref{sec: control} along with presenting the numerical simulation results of prediction and control in Section~\ref{sec: simulation}. \section*{Notations} $\norm{x}_p$ indicates the $\ell_p$ norm of vector $x$ and $\norm{x}_Q := x^\top Q x$ is the weighted norm with $Q$ being positive semi-definite. $\norm{A}_F$ and $\norm{A}_*$ denote the Frobenius norm and the nuclear norm of the matrix $A$ respectively. $\mathcal{N}(\mu, \Sigma)$ is a Gaussian distribution with mean value $\mu$ and covariance matrix $\Sigma$. We use $\mathbb{Z}_{\geq 0}$ to represent the set of non-negative integers. $\textbf{w}:=\{w_k\}_{k=a}^{b}$ is a sequence of signals $\{w_a, \dots, w_b\}$ indexed by $k$. Specifically, boldface is used to denote a sequence while lightface denotes a measurement, \textit{e.g.} $\textbf{w}$ and $w_k$. Meanwhile, the subscript $d$ is reserved to denote the data collected offline. The superscript $^*$ is used to denote the optimal solution of an optimization problem. $\otimes$ denotes the Kronecker product. \section{Preliminary}\label{sec: pre} In this section, the Willems' fundamental lemma and the Koopman operator theory will be introduced. Then, the sensitivity analysis of a parametric optimization problem, which is the enabler of the learning, is discussed. \subsection{Willems' Fundamental Lemma} Given a sequence of measurements $\{w_k\}_{k=0}^{T-1}$, its Hankel matrix of depth $L$ is defined as \begin{equation}\label{eq:hankel} H_L(\textbf{w}) := \begin{bmatrix} w_0 & w_1 & \dots & w_{T-L}\\ w_1 & w_2 & \dots & w_{T-L+1}\\ \vdots & \vdots & \ddots & \vdots \\ w_{L-1} & w_{L} & \dots & w_{T-1} \end{bmatrix}\;.
\end{equation} Given a Hankel matrix $H_L(\textbf{w})$, the signal sequence $\textbf{w}$ is persistently exciting of order $L$ if $H_L(\textbf{w})$ is of full row rank. The Willems' fundamental lemma utilizes the Hankel matrices to characterize the response of the following deterministic linear time-invariant (LTI) system, dubbed $\mathfrak{B}(A,B,C,D)$, \begin{equation}\label{eq: linear_dynamics} \begin{aligned} x_{k+1}&=Ax_k+Bu_k\\ y_k &= Cx_k+Du_k \end{aligned}\;, \end{equation} where $A\in\mathbb{R}^{n_x\times n_x}, B\in\mathbb{R}^{n_x\times n_u}, C\in\mathbb{R}^{n_y\times n_x}, D\in\mathbb{R}^{n_y\times n_u}$ parametrize the system dynamics and the order of this system is denoted by $O(\mathfrak{B}(A,B,C,D)):=n_x$. The \textbf{Willems' fundamental lemma} is concluded as \begin{lemma}\label{lemma: fundamental lemma} (\cite[\textit{Theorem 1}]{willems2005note}, \cite[\textit{Lemma 2}]{de2019formulas}) Consider a controllable and observable system (\ref{eq: linear_dynamics}); if the input sequence $\textbf{u}_d=\{u_{d,k}\}_{k=0}^{T_d-1}$ is persistently exciting of order $O(\mathfrak{B}(A,B,C,D)) + L$, then \begin{enumerate} \item Any $L$-step input/output trajectory of system (\ref{eq: linear_dynamics}) can be expressed as \begin{equation*} \begin{bmatrix} H_L(\textbf{u}_d) \\ H_L(\textbf{y}_d) \end{bmatrix} g = \begin{bmatrix} u \\ y \end{bmatrix} \end{equation*} \item Any linear combination of the columns of the Hankel matrices, that is \begin{equation*} \begin{bmatrix} H_L(\textbf{u}_d) \\ H_L(\textbf{y}_d) \end{bmatrix} g \end{equation*} is an $L$-step input/output trajectory of (\ref{eq: linear_dynamics}). \end{enumerate} \end{lemma} This lemma enables data-driven simulation and control~\cite{markovsky2008data,coulson2019data}.
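A minimal sketch of the Hankel construction in~\eqref{eq:hankel} and the associated persistency-of-excitation rank test, written for a scalar signal (vector-valued signals stack block rows in the same way):

```python
import numpy as np

def hankel(w, L):
    """Depth-L Hankel matrix of a scalar sequence: overlapping windows
    of length L become the columns, as in the definition above."""
    w = np.asarray(w)
    T = len(w)
    return np.column_stack([w[j:j + L] for j in range(T - L + 1)])

def persistently_exciting(w, L):
    """The sequence is PE of order L iff H_L(w) has full row rank."""
    H = hankel(w, L)
    return np.linalg.matrix_rank(H) == H.shape[0]

rng = np.random.default_rng(4)
u = rng.standard_normal(30)   # a random input is PE with probability one
```

A constant input, by contrast, fails the rank test for any order above one, which is why random excitation is used when collecting offline data.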
To make an $N$-step prediction, the Hankel matrices composed of offline data are partitioned as \begin{equation*} \begin{aligned} \begin{bmatrix} U_p \\ U_f \end{bmatrix} := H_{T_{ini} + N}(\textbf{u}_d)\;,\; \begin{bmatrix} Y_p \\ Y_f \end{bmatrix} := H_{T_{ini} + N}(\textbf{y}_d)\;,\; \end{aligned} \end{equation*} where the first $T_{ini}$ row blocks are used to construct $U_p\;,Y_p$ while the remaining row blocks are assigned to $U_f\;,\;Y_f$. In the remainder of this paper, $n_c$ is reserved to denote the number of columns in the Hankel matrix. In particular, $T_{ini}$ is chosen to ensure the uniqueness of the prediction, such that the observability matrix \begin{align*} \mathcal{O}_{T_{ini}}(A,C) := \begin{bmatrix} C^\top & (CA)^\top & \dots & (CA^{T_{ini}-1})^\top \end{bmatrix}^\top\; \end{align*} is of rank $O(\mathfrak{B}(A,B,C,D))=n_x$~\cite{markovsky2008data}. Without measurement noise, the $N$-step output prediction $\textbf{y}$ is defined by \begin{align}\label{eq:lin_pred} \begin{split} \textbf{y} &= Y_fg\\ \text{s.t.}\;\; \begin{bmatrix} U_p\\ Y_p \\ U_f \end{bmatrix} g&= \begin{bmatrix} \textbf{u}_{ini}\\ \textbf{y}_{ini} \\\textbf{u} \end{bmatrix}\;, \end{split} \end{align} where $\textbf{u}_{ini}$ and $\textbf{y}_{ini}$ are the $T_{ini}$-step previous measurements of the inputs and the outputs. Accordingly, $\textbf{y}$ is the $N$-step response driven by the input sequence $\textbf{u}$.
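The prediction~\eqref{eq:lin_pred} can be illustrated end-to-end on a toy first-order LTI system (all numbers illustrative, not from the paper): in the noise-free case the stacked constraints are consistent, so a plain least-squares solve returns an exact $g$ and $Y_fg$ reproduces the true response.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy LTI system x+ = 0.9 x + u, y = x  (n_x = 1, so T_ini = 1 suffices).
a, Td, Tini, N = 0.9, 40, 1, 5
u_d = rng.standard_normal(Td)
x, y_d = 0.0, []
for uk in u_d:
    y_d.append(x)           # y_k = x_k
    x = a * x + uk
y_d = np.array(y_d)

L = Tini + N
def hankel(w):
    return np.column_stack([w[j:j + L] for j in range(Td - L + 1)])

Hu, Hy = hankel(u_d), hankel(y_d)
Up, Uf = Hu[:Tini], Hu[Tini:]
Yp, Yf = Hy[:Tini], Hy[Tini:]

# Online data: T_ini-step initial trajectory plus a new input sequence.
u_ini, y_ini = np.array([0.5]), np.array([-0.3])
u_new = rng.standard_normal(N)

g = np.linalg.lstsq(np.vstack([Up, Yp, Uf]),
                    np.concatenate([u_ini, y_ini, u_new]), rcond=None)[0]
y_pred = Yf @ g

# Ground truth by direct simulation from the state implied by (u_ini, y_ini).
x, y_true = a * y_ini[0] + u_ini[0], []
for uk in u_new:
    y_true.append(x)
    x = a * x + uk
y_true = np.array(y_true)
```

Any $g$ consistent with the constraints gives the same $Y_fg$ here, because $T_{ini}=1$ makes the implied initial state unique.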
Built on this prediction scheme, the data-enabled predictive control (DeePC)~\cite{coulson2019data} is \begin{equation}\label{eq: DeePC} \begin{aligned} \min_{g, \sigma_y, u, y} &(\sum_{k=0}^{N-1}\norm{y_k-r_{t+k}}_Q^2+\norm{u_k}_R^2)\\ & +\lambda_g\norm{g}_1+\lambda_y\norm{\sigma_y}_1\\ \text{s.t.} & \begin{bmatrix} U_p\\ Y_p \\ U_f \\ Y_f \end{bmatrix} g= \begin{bmatrix} \textbf{u}_{ini}\\ \textbf{y}_{ini} \\\textbf{u} \\ \textbf{y} \end{bmatrix} + \begin{bmatrix} 0 \\ \sigma_y \\0 \\0 \end{bmatrix}\\ & u_k \in \mathcal{U}, \forall k \in {0, \dotsc, N-1}\\ & y_k \in \mathcal{Y}, \forall k \in {0, \dotsc, N-1}\;, \end{aligned} \end{equation} where $Q$ and $R$ are the weights penalizing outputs and inputs respectively and $\sigma_y$ is introduced to deal with measurement noise. $r\in \mathbb{R}^{pN}$ is the reference trajectory and $\mathcal{U,Y}$ are the feasible sets of inputs and outputs. $\norm{g}_1$ and $\norm{\sigma_y}_1$ are regularization terms. $\lambda_y, \lambda_g \in \mathbb{R}_{>0}$ are regularization parameters. These regularization terms have been interpreted under a distributionally robust optimization framework~\cite{coulson2019regularized} and a maximum likelihood framework~\cite{yin2020maximum}. \textit{Remark}: When a long input-output sequence is not available, the Hankel matrix $H_L(\textbf{w})$ can be replaced by a mosaic Hankel matrix~\cite{van2020willems}.
Given $M$ trajectories: \begin{equation*} \begin{aligned} &\textbf{w}=[\textbf{w}_1,\dotsc,\textbf{w}_M], \\ \text{where each trajectory } &\textbf{w}_i=(w_{i,1},\dotsc,w_{i,T_i}), w_{i,k}\in\mathbb{R}^q\;, \end{aligned} \end{equation*} the mosaic Hankel matrix is defined as \begin{equation*} H_L(\textbf{w}) = [H_L(\textbf{w}_1), \dots, H_L(\textbf{w}_M)]\;. \end{equation*} \subsection{Koopman Operator}\label{sec:koopman} Given a discrete-time autonomous system \begin{align}\label{eq:nonlin_dyn} x_{k+1} = f(x_k)\;, \end{align} where $f$ models the nonlinear dynamics, the Koopman operator is the composition operator \begin{align*} \mathcal{K}\psi := \psi\circ f\;, \end{align*} where $\psi: \mathbb{R}^{n_x}\rightarrow \mathbb{R}$ is called an observable. Unlike a standard state-space model, the Koopman operator models the evolution of a function driven by the system dynamics $f$, and its existence is guaranteed for forward-complete systems~\cite{bittracher2015pseudogenerators}. As the Koopman operator is an operator on a function space, $\mathcal{K}$ is in general infinite-dimensional, but critically it is linear even when the dynamics $f$ are nonlinear. An observable $\phi$ is an eigenfunction associated with the eigenvalue $\lambda\in\mathbb{C}$ if $\mathcal{K}\phi = \lambda\phi$. From this we can see that the eigenfunctions (or linear combinations of the eigenfunctions) evolve linearly along the trajectories of the nonlinear system~\eqref{eq:nonlin_dyn} \begin{align} \phi(x_{k+1})=\phi(f(x_k))=(\mathcal{K}\phi)(x_k)=\lambda\phi(x_k)\;. \end{align} Given a collection of eigenfunctions $\{\phi_i\}_{i=1}^{n_\phi}$, any observable lying within the span of these eigenfunctions can be decomposed into $\psi = \sum_i c_i(\psi)\phi_i$, where the $c_i(\psi)$ are called the Koopman modes of $\psi$. Then, we have \begin{align*} \mathcal{K} \psi = \sum_i c_i(\psi)\lambda_i\phi_i\;, \end{align*} with $\lambda_i$ denoting the eigenvalue of $\phi_i$.
In the sequel, the subscript $_u$ is used to denote the components corresponding to a system with control inputs. Given nonlinear dynamics with control input \begin{align*} x_{k+1} = f_u(x_k,u_k)\;, \end{align*} the Koopman operator can be defined in different ways~\cite{williams2016extending,proctor2018generalizing,korda2018linear}. In this work, we consider the framework in~\cite{korda2018linear}. More specifically, denote the infinite control sequence $\boldsymbol{u}:= \{u_k\}_{k=0}^{\infty}\in\mathit{l}(\mathcal{U})$, where $\mathit{l}(\mathcal{U})$ represents the space of all control sequences. The augmented state is \begin{align*} \chi = \begin{bmatrix} x\\\boldsymbol{u} \end{bmatrix}\;, \end{align*} upon which the system dynamics are augmented as $F: \mathbb{R}^{n_x}\times \mathit{l}(\mathcal{U})\rightarrow \mathbb{R}^{n_x}\times \mathit{l}(\mathcal{U})$ \begin{align}\label{eq:Koopman_control} F(\chi_k) = \begin{bmatrix} f_u(x_k,\boldsymbol{u}_k(0))\\\mathcal{S}\boldsymbol{u}_k \end{bmatrix}\;. \end{align} $\mathcal{S}$ is the left shift operator with $\mathcal{S\boldsymbol{u}}(i):= \boldsymbol{u}(i+1)$, where $\boldsymbol{u}(i)$ is the evaluation of the $i$-th element of $\boldsymbol{u}$. In this setup, $\boldsymbol{u}$ can be considered as a sequence of mappings from index $i$ to the actual input $u_i$. It is worth pointing out that the dynamical system~\eqref{eq:Koopman_control} is infinite-dimensional but autonomous. Hence, the aforementioned definition of the Koopman operator can be applied directly, and the corresponding eigenfunctions are assumed to be spanned by the following dictionary of basis functions: \begin{align*} \{\phi_{u,i}(x,\boldsymbol{u})\}_{i=1}^{n_{\phi_u}+n_u} := \{\phi_{u,1}(x),\dots,\phi_{u,n_{\phi_u}}(x),\boldsymbol{u}(0)^\top\}\;.
\end{align*} If the evolution of this dictionary of basis functions is closed under the system dynamics, then we have \begin{align}\label{eq:lifted_space_state} \begin{split} z_{k+1} &= \mathcal{A}z_k+\mathcal{B} \boldsymbol{u}_k(0)\\ \boldsymbol{u}_{k+1}(0) &= \boldsymbol{u}_k(1)\;, \end{split} \end{align} where $z_k:=[\phi_{u,1}(x_k),\dots,\phi_{u,n_{\phi_u}}(x_k)]$ and $\mathcal{A,B}$ capture the Koopman operator. Similarly, any function within the span of these basis functions can be recovered by the Koopman modes as \begin{align} \psi_u(x,\boldsymbol{u}(0)) = c_u^\top \begin{bmatrix} z\\\boldsymbol{u}(0) \end{bmatrix}\;, \end{align} with $c_u:=[c_{u,1},c_{u,2},\dots,c_{u,n_{\phi_u}+n_u}]$ the vector of Koopman modes. In particular, we are interested in the Koopman modes of the identity functions evaluated on the system outputs. Assuming that we have $n_y$ outputs, the evaluation of the $i$-th output is $I_{y,i}(y_k):= y_{k,i}$. With a slight abuse of notation, the Koopman mode decomposition of the output evaluations is \begin{align}\label{eq:lifted_sapce_output} y_k = \begin{bmatrix} I_{y,1}(y_k)\\I_{y,2}(y_k)\\\vdots\\I_{y,n_y}(y_k) \end{bmatrix}= \begin{bmatrix} c_{u,1}^\top\\c_{u,2}^\top\\\vdots\\c_{u,n_y}^\top \end{bmatrix}\begin{bmatrix} z\\\boldsymbol{u}(0) \end{bmatrix} := C_u\begin{bmatrix} z\\\boldsymbol{u}(0) \end{bmatrix}\;, \end{align} where $C_u$ stacks the Koopman modes of the output evaluations. \subsection{Differential Parametric Optimization}\label{sec:diff_opt} Sensitivity analysis investigates the smoothness of a parametric optimization problem, where the implicit function theorem~\cite{krantz2003introduction} is applied to the KKT system. This idea has been applied to deep learning~\cite{el2019implicit} and reinforcement learning~\cite{zanon2020safe}. Though the solution map is in general not differentiable everywhere, the optimal value function is smoother than the solution map~\cite{fiacco2020mathematical}, and it is the only sensitivity tool used in this work.
In general, the continuity of a convex parametric optimization problem is guaranteed by uniform level boundedness~\cite[Theorem 1.17]{rockafellar2009variational}, while a general nonlinear parametric optimization problem admits a lower semicontinuous value function under the assumption of local compactness~\cite{bank1982non}. For the sake of clarity, we elaborate this derivative on a standard quadratic program (QP); please refer to~\cite{agrawal2019differentiating} for the general conic form. We use the subscript $_q$ to avoid confusion. Consider a parametric QP, $\mathcal{Q}: e_q\rightarrow z_q^*$, with parameters $\{Q_q,\;q_q,\;H_q,\;h_q,\;E_q\}$ and $Q_q$ positive definite: \begin{equation} \begin{split} \label{eqn:quaratic_program} \underset{z_q}{\min}\ &\frac{1}{2}z_q^TQ_qz_q + q_q^Tz_q \\ \text{s.t.}\ & H_qz_q \leqslant h_q, E_qz_q=e_q \end{split} \end{equation} The KKT conditions for the QP are: \begin{equation} \begin{split} \label{eqn:KKT_conditon} Q_qz_q^* + q_q + H_q^T\lambda^* + E_q^T\nu^* &= 0 \\ \text{diag}\left( \lambda^* \right) (H_qz_q^*-h_q) &= 0 \\ E_qz_q^* - e_q & = 0 \end{split} \end{equation} where $z_q^*, \nu^*, \lambda^*$ are the optimal primal and dual variables and $\text{diag}(x)$ builds a diagonal matrix from $x$. Then the differentials of the KKT conditions can be computed as: \begin{equation} \label{eqn:KKT_diff} \begin{split} \left[\begin{array}{ccc} Q_q & H_q^T & E_q^T \\ \text{diag}(\lambda^*)H_q & \text{diag}(H_qz_q^*-h_q) & 0 \\ E_q & 0 & 0 \end{array} \right] \left[ \begin{array}{c} dz_q \\ d\lambda \\ d\nu \end{array}\right] \\ = -\left[ \begin{array}{c} dQ_qz_q^* + dq_q + dH_q^T\lambda^* + dE_q^T\nu^* \\ \text{diag}(\lambda^*)dH_qz_q^* - \text{diag}(\lambda^*)dh_q \\ dE_qz_q^* - de_q \end{array}\right] \end{split} \end{equation} The derivatives of $z_q^*$ with respect to the parameters ($Q_q,q_q,H_q,h_q,E_q$) and the input $e_q$ are given by the solution of the linear system in Equation~\eqref{eqn:KKT_diff}.
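As a minimal numerical sketch (illustrative values; only equality constraints are kept, so the inequality rows of~\eqref{eqn:KKT_diff} are dropped), the differentiated KKT system can be assembled and checked against finite differences:

```python
import numpy as np

# Minimal sketch (illustrative values): differentiate the solution of an
# equality-constrained QP,  min 0.5 z'Qz + q'z  s.t.  Ez = e,  w.r.t. e
# by solving the differentiated KKT system; the inequality rows are
# dropped since no inequality constraint is present in this toy problem.
Q = np.array([[3.0, 0.5], [0.5, 2.0]])
q = np.array([1.0, -1.0])
E = np.array([[1.0, 1.0]])
e = np.array([0.5])

K = np.block([[Q, E.T], [E, np.zeros((1, 1))]])   # KKT matrix

def solve_qp(e):
    # primal part of the solution of [Q E'; E 0][z; nu] = [-q; e]
    return np.linalg.solve(K, np.concatenate([-q, e]))[:2]

# differentiated KKT system: same matrix, right-hand side [0; de]
de = np.array([1.0])
dz = np.linalg.solve(K, np.concatenate([np.zeros(2), de]))[:2]

# finite-difference check of dz*/de
eps = 1e-6
fd = (solve_qp(e + eps * de) - solve_qp(e)) / eps
assert np.allclose(dz, fd, atol=1e-5)
```

Since the solution map of this QP is affine in $e_q$, the finite-difference quotient matches the KKT-based derivative up to rounding error.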
For example, the solution $dz_q$ of~\eqref{eqn:KKT_diff} gives $\frac{\partial{z_q^*}}{\partial{Q_q}}$ if we set $dQ_q=I$ and the differentials of all other parameters to 0. The gradient of an optimal value $L(z_q^*)$ with respect to $Q_q$ is then calculated as $\frac{\partial{L(z_q^*)}}{\partial{z_q^*}}\frac{\partial{z_q^*}}{\partial{Q_q}}$. \section{Koopman based Data-driven Prediction}\label{sec: prediction} In this section, the fundamental lemma is first embedded into Koopman operator theory, which enables a training scheme that minimizes the prediction error on the training dataset. A stochastic prediction scheme is then introduced to show the compatibility with probabilistic models, such as Bayesian neural networks and Gaussian processes. \subsection{Koopman Operator with the Fundamental Lemma} As discussed in Section~\ref{sec: pre}, the key components of a Koopman operator are the eigenfunctions, or a linear subspace containing them. Therefore, learning a Koopman operator is equivalent to finding functions whose evaluations evolve like a linear system~\eqref{eq: linear_dynamics}. The following corollary enables the proposed learning scheme. \begin{cor}\label{cor:lin} A dynamical system of order $n_x$ can be parametrized as a linear system~\eqref{eq: linear_dynamics} if and only if the fundamental lemma holds. \end{cor} \begin{proof} Necessity holds by Lemma~\ref{lemma: fundamental lemma} and sufficiency holds by the definition of linear systems. \end{proof} As discussed in Section~\ref{sec:koopman}, the output evaluations are assumed to be spanned by the following basis functions \begin{align*} \{\phi_{u,i}(x,\boldsymbol{u})\}_{i=1}^{n_{\phi_u}+n_u} := \{\phi_{u,1}(x),\dots,\phi_{u,n_{\phi_u}}(x),\boldsymbol{u}(0)^\top\}\;.
\end{align*} Then, given a sequence of state evolutions $\textbf{x}_d $ with its corresponding input-output sequences $\textbf{u}_d,\textbf{y}_d$, whose input is persistently exciting of order $n_{\phi_u}-n_u+L$, Corollary~\ref{cor:lin} implies that $\{\phi_{u,i}(x,\boldsymbol{u})\}_{i=1}^{n_{\phi_u}+n_u}$ is the desired collection of basis functions if and only if, for all $x\in\mathbb{R}^{n_x}$ and every output sequence $\textbf{y}$ driven by $\textbf{u}$, there exists $g\in \mathbb{R}^{n_c}$ such that \begin{align}\label{eq:condition_lifting} \begin{bmatrix} Z\\H_L(\textbf{u}_d)\\H_L(\textbf{y}_d) \end{bmatrix}g = \begin{bmatrix} z\\\textbf{u}\\\textbf{y} \end{bmatrix}\;, \end{align} where $z:=[\phi_{u,1}(x),\dots,\phi_{u,n_{\phi_u}}(x)]^\top$ and \begin{align*} Z:= \begin{bmatrix} \phi_{u,1}(x_0)&\phi_{u,1}(x_1)&\dots&\phi_{u,1}(x_{n_c})\\ \vdots &\ddots&\ddots&\vdots\\ \phi_{u,n_{\phi_u}}(x_0)& \phi_{u,n_{\phi_u}}(x_1)&\dots&\phi_{u,n_{\phi_u}}(x_{n_c})\\ \end{bmatrix} \end{align*} \subsection{Learning the Koopman Basis Functions}\label{sec:learn_koop} Following the previous discussion, learning a Koopman operator is converted into learning basis functions that satisfy condition~\eqref{eq:condition_lifting}. In practice, the underlying state of the nonlinear system is not necessarily measured; we therefore make the following assumption. \begin{assumption}\label{ass:meas} $x_k$ is measurable with respect to the previous $T_{ini}$-step input-output sequence $\{u_i,y_i\}_{i=k-T_{ini}+1}^k$. \end{assumption} This assumption implies that $x_k$ can be determined from $\{u_i,y_i\}_{i=k-T_{ini}+1}^k$ and therefore plays a similar role to the matrices $U_p,Y_p$ in problems~\eqref{eq: DeePC} and~\eqref{eq:lin_pred}.
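For intuition, with the identity lifting $z=x$, condition~\eqref{eq:condition_lifting} reduces to the classical fundamental lemma. A small sketch (hypothetical linear system and random data, not from the cited works) verifying that a fresh trajectory lies in the image of the stacked Hankel matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch (hypothetical linear system, random data): with the identity
# lifting z = x, every L-step input-output trajectory lies in the image
# of the stacked Hankel matrix of one persistently exciting run.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])

def simulate(x0, u):
    """Return the output sequence y_0..y_{len(u)-1} from initial state x0."""
    x, ys = np.array(x0, dtype=float), []
    for uk in u:
        ys.append(C @ x)
        x = A @ x + B * uk
    return np.array(ys)

def hankel(w, L):
    """Depth-L Hankel matrix of a scalar sequence w."""
    return np.array([w[i:i + len(w) - L + 1] for i in range(L)])

L, T = 5, 40
u_d = rng.standard_normal(T)              # persistently exciting input
y_d = simulate([0.0, 0.0], u_d)
H = np.vstack([hankel(u_d, L), hankel(y_d, L)])

# a fresh trajectory (new initial state, new input) is in the image of H
u_new = rng.standard_normal(L)
y_new = simulate([1.0, -0.5], u_new)
w_new = np.concatenate([u_new, y_new])
g, *_ = np.linalg.lstsq(H, w_new, rcond=None)
assert np.linalg.norm(H @ g - w_new) < 1e-8
```

The learning problem below searches for basis functions under which the same image property holds for the lifted nonlinear system.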
Assume that we have input-output data $\textbf{u}_d:=\{u_{d,i}\}_{i=0}^{n_d}$ and $\textbf{y}_d:=\{y_{d,i}\}_{i=0}^{n_d}$ consisting of $n_d$ measurements each. Each sequence is partitioned into two subsets: $\textbf{u}_{d,l}:=\{u_{d,i}\}_{i=0}^{n_{d,t}}$, $\textbf{y}_{d,l}:=\{y_{d,i}\}_{i=0}^{n_{d,t}}$, $\textbf{u}_{d,t}:=\{u_{d,i}\}_{i=n_{d,t}+1}^{n_d}$ and $\textbf{y}_{d,t}:=\{y_{d,i}\}_{i=n_{d,t}+1}^{n_d}$, where $n_{d,t}=n_c+T_{ini}+L-1$ is the number of datapoints in the first two sets. The subsets with subscript $_{d,l}$ are used to build the Hankel matrices characterizing the Koopman operator, while the remaining two subsets are used to learn the basis functions. In view of Assumption~\ref{ass:meas}, a differentiable learner, dubbed $\{\phi_{u,\theta}\}_{i=1}^{n_{\phi_u}}$ with parameters $\theta$, is used to learn the basis functions. Neural networks~\cite{goodfellow2016deep} and Gaussian processes~\cite{rasmussen2003gaussian} are recommended learners with strong representation power. In particular, inducing variables can be considered as trainable parameters for a Gaussian process; please refer to~\cite{titsias2009variational,titsias2010bayesian} for more details. Enforcing condition~\eqref{eq:condition_lifting} for $L$-step sequences, the learning problem is formulated as follows: \begin{align}\label{eq:learn_koop} &\begin{split} \min_{\theta} &\sum\limits_{i=n_c}^{n_d} l(\textbf{y}_{d,i}- H_L(\textbf{y}_{d,l})g_i)\\ \text{s.t.}\;&\; \\ & g_i = \text{arg}\min_g P(\textbf{u}_{d,i},\textbf{y}_{d,i})\;, \end{split} \end{align} and \begin{align*} \begin{split} P(\textbf{u}_{d,i},\textbf{y}_{d,i}):& = \lambda_g\norm{g}_2^2 +\lambda_y\norm{Zg-z}_2^2\\ \text{s.t.}\; &\; H_L(\textbf{u}_{d,l})g = \textbf{u}_{d,i}\\ &\; z = \phi_{u,\theta}(\{u_k,y_k\}_{k=i}^{i+T_{ini}-1}) \end{split}\;.
\end{align*} In particular, $\textbf{u}_{d,i}:=[u_i,\dots,u_{i+L+T_{ini}-1}]$ and $\textbf{y}_{d,i}:=[y_i,\dots,y_{i+L+T_{ini}-1}], i\geq n_{d,t}+1$ are sequences of inputs and outputs of length $L+T_{ini}$. The matrix $Z$ is the evaluation of the basis functions \begin{align*} Z:= [\phi_{u,\theta}(\{u_i,y_i\}_{i=0}^{T_{ini}}),\dots,\phi_{u,\theta}(\{u_i,y_i\}_{i=n_c-1}^{T_{ini}+n_c-1})]\;. \end{align*} The constraint in the learning problem~\eqref{eq:learn_koop} is itself a prediction problem similar to~\eqref{eq:lin_pred}; therefore, $l(\cdot)$ penalizes the prediction error. As one may notice, there are two relaxations in the learning problem~\eqref{eq:learn_koop}: \begin{enumerate} \item To recover an output evaluation, an infinite set of basis functions may be required. The learning problem learns a finite-order approximation of this possibly infinite set. \item Condition~\eqref{eq:condition_lifting} is required to be satisfied for any state; the learning problem relaxes this condition to a set of sampled states. Therefore, the training sets $\textbf{u}_{d,t}$ and $\textbf{y}_{d,t}$ should be large enough to represent condition~\eqref{eq:condition_lifting}. \end{enumerate} \begin{figure*}[!htb] \centering \includegraphics[width = 0.8\textwidth]{Figures/illustration.png} \caption{Illustration of the lifting-function learning framework} \label{fig:illustration} \end{figure*} Figure~\ref{fig:illustration} shows the flow of the learning problem, where the dashed line indicates the direction of back-propagation. More specifically, the prediction problem is considered as a parametric optimization problem, whose differentiation is discussed in Section~\ref{sec:diff_opt}. We end this subsection by listing further benefits of the proposed scheme. \begin{itemize} \item Unlike general EDMD methods, which learn the matrices $\mathcal{A,B},C_u$ in~\eqref{eq:lifted_space_state} and~\eqref{eq:lifted_sapce_output},
the proposed scheme avoids learning these parameters explicitly. Learning $\mathcal{A,B},C_u$ is ill-conditioned because the solution is not unique. \item The learning problem optimizes a multi-step forward prediction, and meanwhile the use of Willems' fundamental lemma ensures good numerical stability during training, which is a key challenge in training fully-connected recurrent neural networks~\cite{bengio1994learning}. \item The proposed scheme is scalable and can be parallelized. \item Unlike other nonlinear extensions of Willems' fundamental lemma, the proposed scheme does not require a nonlinear mapping of future inputs and outputs in the prediction problem. \end{itemize} \subsection{Stochastic Prediction}\label{subsec: pred_stoch} As shown in Section~\ref{sec:learn_koop}, the prediction problem plays a key role in the proposed scheme. If the chosen learner is deterministic, then the predicted output sequence $\tilde{\textbf{y}}$ driven by inputs $\tilde{\textbf{u}}$ is calculated by \begin{align}\label{eq:pred_koop} \tilde{\textbf{y}} &= H_L(\textbf{y}_{d,l})\tilde{g}&\\ \tilde{g} &= \text{arg}\min_g&\lambda_g\norm{g}_2^2 +\lambda_y\norm{Zg-z}_2^2\nonumber\\ &&\text{s.t.}\;\; H_L(\textbf{u}_{d,l})g = \tilde{\textbf{u}}\nonumber\\ & &\; z = \phi_{u,\theta}(\tilde{\textbf{u}}_p,\tilde{\textbf{y}}_p)\nonumber\;, \end{align} which may result in overfitting. A probabilistic learner, such as the aforementioned Gaussian processes and Bayesian neural networks, is one solution to avoid overfitting. The output of a probabilistic learner is a distribution rather than a deterministic point. In this section, we show how these distributional outputs can be used for prediction, which is also essential for training the Koopman operator. Two methods will be discussed: one is based on Monte-Carlo sampling, while the other generates predictions by bounding the Wasserstein distance.
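The deterministic prediction step~\eqref{eq:pred_koop} is an equality-constrained regularized least-squares problem and admits a closed-form KKT solution. A sketch with random placeholder matrices standing in for the Hankel blocks and the lifted state (illustrative only, dimensions hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of the deterministic prediction step (random placeholder
# matrices stand in for the Hankel blocks and the lifted state):
#   g* = argmin lam_g ||g||^2 + lam_y ||Zg - z||^2  s.t.  Hu g = u_tilde,
# solved in closed form through its KKT system; then y_pred = Hy g*.
n_c, n_phi, n_uL, n_yL = 20, 6, 5, 5
Z = rng.standard_normal((n_phi, n_c))
z = rng.standard_normal(n_phi)
Hu = rng.standard_normal((n_uL, n_c))
Hy = rng.standard_normal((n_yL, n_c))
u_tilde = rng.standard_normal(n_uL)
lam_g, lam_y = 1e-3, 1.0

# stationarity: 2*lam_g*g + 2*lam_y*Z'(Zg - z) + Hu' mu = 0
M = 2 * (lam_g * np.eye(n_c) + lam_y * Z.T @ Z)
K = np.block([[M, Hu.T], [Hu, np.zeros((n_uL, n_uL))]])
rhs = np.concatenate([2 * lam_y * Z.T @ z, u_tilde])
g = np.linalg.solve(K, rhs)[:n_c]
y_pred = Hy @ g                       # predicted output sequence

assert np.allclose(Hu @ g, u_tilde)   # input constraint satisfied
```

Because the problem is a strongly convex QP, this solve is cheap and, per Section~\ref{sec:diff_opt}, differentiable with respect to $z$ and hence $\theta$.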
\subsubsection{Monte-Carlo Prediction} Assuming the output distribution of the probabilistic learner is $\mathbb{P}_{\phi_u}$, a Monte-Carlo method is applied to calculate the distribution of the prediction. In particular, the matrix $Z$ in problem~\eqref{eq:pred_koop} is sampled from $\mathbb{P}_{\phi_u}$, which gives a sample from the output distribution. By the Monte-Carlo method, the output distribution can be approximated by the sampled outputs. Meanwhile, the loss function in the learning problem~\eqref{eq:learn_koop} is modified to an expected cost. In conclusion, we have the following learning problem \begin{align*} &\begin{split} \min_{\theta} &\sum\limits_{i=n_c}^{n_d} \mathbb{E}\;l(\textbf{y}_{d,i}- H_L(\textbf{y}_{d,l})g_i)\\ \text{s.t.}\;&\; \\ & g_i \sim \text{arg}\min_g P(\textbf{u}_{d,i},\textbf{y}_{d,i})\;, \end{split} \end{align*} and \begin{align*} \begin{split} P(\textbf{u}_{d,i},\textbf{y}_{d,i}):& = \lambda_g\norm{g}_2^2 +\lambda_y\norm{Zg-z}_2^2\\ \text{s.t.}\; &\; H_L(\textbf{u}_{d,l})g = \textbf{u}_{d,i}\\ &\; z \sim \mathbb{P}_{\phi_{u,\theta}}(\{u_k,y_k\}_{k=i}^{i+T_{ini}-1}) \end{split}\;, \end{align*} where the $k$-th column of the matrix $Z$ follows the distribution $\mathbb{P}_{\phi_{u,\theta}}(\{u_i,y_i\}_{i=k-1}^{T_{ini}+k-2})$. The gradient of this learning problem is also approximated by a Monte-Carlo method. \subsubsection{Wasserstein Distance based Prediction} Intuitively, the regularization term $\norm{Zg-z}_2^2$ in the prediction problem~\eqref{eq:pred_koop} can be considered as the distance between the mean of $Zg$ and $z$. To formulate a more rigorous scheme based on a probabilistic learner, we propose to minimize the Wasserstein distance between $Zg$ and $z$. First, each entry of the probabilistic learner output is approximated by a Gaussian distribution.
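The Monte-Carlo step above can be sketched as follows (placeholder matrices; the learner's uncertainty is taken as i.i.d. Gaussian purely for illustration): each sample draws $Z$, solves the prediction problem, and the sampled outputs are aggregated into an empirical mean and standard deviation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte-Carlo prediction sketch (placeholder matrices; the learner's
# uncertainty is modeled as i.i.d. Gaussian for illustration only).
n_c, n_phi, n_uL, n_yL = 20, 6, 5, 5
Z_mean = rng.standard_normal((n_phi, n_c))
Z_std = 0.05                          # assumed uncertainty level
z = rng.standard_normal(n_phi)
Hu = rng.standard_normal((n_uL, n_c))
Hy = rng.standard_normal((n_yL, n_c))
u_tilde = rng.standard_normal(n_uL)
lam_g, lam_y = 1e-3, 1.0

def predict(Z):
    # closed-form KKT solve of the inner prediction problem
    M = 2 * (lam_g * np.eye(n_c) + lam_y * Z.T @ Z)
    K = np.block([[M, Hu.T], [Hu, np.zeros((n_uL, n_uL))]])
    rhs = np.concatenate([2 * lam_y * Z.T @ z, u_tilde])
    return Hy @ np.linalg.solve(K, rhs)[:n_c]

# each sample draws Z from the learner's distribution and predicts
samples = np.array([predict(Z_mean + Z_std * rng.standard_normal(Z_mean.shape))
                    for _ in range(200)])
y_mean, y_sd = samples.mean(axis=0), samples.std(axis=0)
assert y_mean.shape == (n_yL,) and np.all(y_sd >= 0.0)
```

The same sample loop is reused during training, where the expected loss and its gradient are approximated by the empirical averages over the drawn samples.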
Specifically, the $i$-th column of the matrix $Z$ is approximated by $\mathcal{N}(\mu_{i,l},\Sigma_{i,l})$, whose covariance matrix is diagonal \begin{align*} \Sigma_{i,l} = \begin{bmatrix} \sigma_{i,1,l}^2&&\\ &\ddots&\\ &&\sigma_{i,n_{\phi_u},l}^2 \end{bmatrix}\;, \end{align*} and we denote the vector composed of the diagonal elements by $\boldsymbol{\sigma}_{i,l}^2$. Accordingly, the $i$-th element of the vector $z$ is approximated by $\mathcal{N}(\mu_i,\sigma^2_i)$, and we denote $z\sim\mathcal{N}(\mu_z,\Sigma_z)$ with $\Sigma_z$ diagonal. \begin{remark} \begin{itemize} \item It is noteworthy that if a Gaussian process is used to learn the basis functions, no approximation is required as the output is already Gaussian. \item To better satisfy the condition of a diagonal covariance matrix $\Sigma_{i,l}$, it is recommended to replace the Hankel matrices with Page matrices. In comparison with the definition in~\eqref{eq:hankel}, a depth-$L$ Page matrix of a sequence $\textbf{w}$ is defined as \begin{equation*} \mathfrak{P}_L(\textbf{w}) := \begin{bmatrix} w_0 & w_L & \dots & w_{(M-1)L}\\ w_1 & w_{L+1} & \dots & w_{(M-1)L+1}\\ \vdots & \vdots & \ddots & \vdots \\ w_{L-1} & w_{2L - 1} & \dots & w_{ML-1} \end{bmatrix}\;. \end{equation*} \end{itemize} \end{remark} Based on this approximation, $Zg$ is also Gaussian, which is denoted by $Zg\sim \mathcal{N}(\mu_{Zg},\Sigma_{Zg})$ for compactness.
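The Page matrix in the remark above uses consecutive, non-overlapping windows of the sequence as its columns, in contrast to the overlapping windows of a Hankel matrix. A minimal construction:

```python
import numpy as np

# Depth-L Page matrix: consecutive, non-overlapping length-L windows of
# a sequence become its columns (contrast with the overlapping windows
# of a Hankel matrix); trailing samples not filling a window are dropped.
def page_matrix(w, L):
    w = np.asarray(w)
    M = len(w) // L                   # number of complete windows
    return w[:M * L].reshape(M, L).T

w = np.arange(6)                      # 0,1,2,3,4,5 with L = 3
P = page_matrix(w, 3)
assert P.tolist() == [[0, 3], [1, 4], [2, 5]]
```

Because the columns share no samples, independent measurement noise across samples stays independent across columns, which supports the diagonal-covariance approximation above.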
To enable a prediction scheme, we state the following lemma. \begin{lemma} The squared second Wasserstein distance between $Zg$ and $z$ is bounded by \begin{align}\label{eq:wass_bound} W_2^2(Zg,z)\leq\norm{\mu_{Zg} - \mu_{z}}_2^2 + \norm{\Sigma_{Zg} - \Sigma_{z}}_* \end{align} \end{lemma} \begin{proof} The second Wasserstein distance between two Gaussian distributions~\cite{villani2008optimal} is \begin{equation}\label{eq: wasserstein_distance_1} \begin{aligned} &W_2^2(\mathcal{N}(\mu_{Zg}, \Sigma_{Zg}), \mathcal{N}(\mu_z, \Sigma_z)) \\ = &\norm{\mu_{Zg} - \mu_z}_2^2 + \text{Tr}\left(\Sigma_{Zg} + \Sigma_z\right) \\ - &2\text{Tr}\left(\left(\Sigma_{Zg}^{\frac{1}{2}}\Sigma_z\Sigma_{Zg}^{\frac{1}{2}}\right)^{\frac{1}{2}}\right) \end{aligned}\;, \end{equation} where $\mu_{Zg}= \sum\limits_{i=1}^{n_c}\mu_{i,l}g_i$ with $g_i$ being the $i$-th entry of $g$. The covariance matrix of $Zg$ is calculated as follows \begin{align*} \Sigma_{Zg} = (g^\top\otimes I)\text{Cov}(\text{vec}(Z))(g\otimes I)\;, \end{align*} with \begin{align*} \text{Cov}(\text{vec}(Z)) = \begin{bmatrix} \Sigma_{1,l} & & \\ & \ddots & \\ & & \Sigma_{n_c,l} \end{bmatrix}\;, \end{align*} therefore, we conclude \begin{align}\label{eq: sigma_Zpg} \Sigma_{Zg} = \sum\limits_{i=1}^{n_c} g_i^2\Sigma_{i,l}\;. \end{align} Since $\Sigma_{Zg}$ and $\Sigma_{z}$ are both diagonal, $\Sigma_{Zg}\Sigma_{z} = \Sigma_{z}\Sigma_{Zg}$, and \eqref{eq: wasserstein_distance_1} can be reformulated as: \begin{equation}\label{eq: wasserstein_distance_2} \begin{aligned} &W_2^2(\mathcal{N}(\mu_{Zg}, \Sigma_{Zg}), \mathcal{N}(\mu_{z}, \Sigma_{z})) \\ = &\norm{\mu_{Zg} - \mu_{z}}_2^2 + \text{Tr}(\Sigma_{Zg} + \Sigma_{z} -2(\Sigma_{Zg}\Sigma_{z})^{1/2})\\ = &\norm{\mu_{Zg} - \mu_{z}}_2^2 + \text{Tr}((\Sigma_{Zg}^{1/2} - \Sigma_{z}^{1/2})^T(\Sigma_{Zg}^{1/2} - \Sigma_{z}^{1/2}))\\ = &\norm{\mu_{Zg} - \mu_{z}}_2^2 + \norm{\Sigma_{Zg}^{1/2} - \Sigma_{z}^{1/2}}_F^2 \end{aligned} \end{equation} where $\norm{\cdot}_F$ denotes the Frobenius norm. This objective function has a clear interpretation.
The first term quantifies the distance between the means of the two Gaussian distributions, while the second measures the discrepancy between the covariance matrices. If the derived $\Sigma_{Zg}$ in~\eqref{eq: sigma_Zpg} is substituted into~\eqref{eq: wasserstein_distance_2}, the evaluation of the resulting metric is numerically ill-conditioned. The Frobenius norm is therefore further relaxed with the Powers--Størmer inequality~\cite{powers1970free}: \begin{equation} 2\text{Tr}(A^\alpha B^{1-\alpha}) \geq \text{Tr}(A + B - |A - B|), 0 \leq \alpha \leq 1 \end{equation} where $A$, $B$ are positive semidefinite and $|A|$ is the positive square root of the matrix $A^*A$. We then have: \begin{equation*} \begin{aligned} &W_2^2(\mathcal{N}(\mu_{Zg}, \Sigma_{Zg}), \mathcal{N}(\mu_{z}, \Sigma_{z})) \\ \leq &\norm{\mu_{Zg} - \mu_{z}}_2^2 + \text{Tr}(|\Sigma_{Zg} - \Sigma_{z}|)\\ = &\norm{\mu_{Zg} - \mu_{z}}_2^2 + \norm{\Sigma_{Zg} - \Sigma_{z}}_* \end{aligned}\;, \end{equation*} with $\norm{\cdot}_*$ denoting the nuclear norm. \end{proof} Based on this lemma, the prediction problem with a probabilistic learner is reformulated as \begin{align}\label{eq:pred_koop_wass} \tilde{\textbf{y}} &= H_L(\textbf{y}_{d,l})\tilde{g}&\\ \tilde{g} &= \text{arg}\min_g&\norm{\mu_{Zg} - \mu_{z}}_2^2 + \norm{\Sigma_{Zg} - \Sigma_{z}}_*\nonumber\\ &&\text{s.t.}\;\; H_L(\textbf{u}_{d,l})g = \tilde{\textbf{u}}\nonumber\\ & &\; z = \phi_{u,\theta}(\tilde{\textbf{u}}_p,\tilde{\textbf{y}}_p)\nonumber\;, \end{align} \begin{remark}\label{rmk:huber} The upper bound~\eqref{eq:wass_bound} is non-smooth, as the evaluation of the absolute value is ill-conditioned around $0$~\cite[Chapter 3]{beck2017first}.
The absolute value is smoothed by an approach similar to the Huber loss~\cite[Chapter 2]{rockafellar2009variational}, which is defined as: \begin{equation*} L_{\delta}(a) = \begin{cases} \frac{1}{2}a^2 & \text{for } |a|<\delta\\ \delta(|a|-\frac{1}{2}\delta) & \text{otherwise} \end{cases} \end{equation*} The evaluation of the $i$-th diagonal element of $|\Sigma_{Zg} - \Sigma_{z}|$ is then reformulated as \begin{equation*} \begin{aligned} &(|\Sigma_{Zg} - \Sigma_{z}|)_{ii} \\ = &\begin{cases} \frac{1}{2}(\sigma_{Zg, i}^2 - \sigma_{z, i}^2)^2 & \quad |\sigma_{Zg, i}^2 - \sigma_{z, i}^2|<\delta\\ \delta(|\sigma_{Zg, i}^2 - \sigma_{z, i}^2|-\frac{1}{2}\delta) & \quad\text{otherwise} \end{cases} \end{aligned}\;. \end{equation*} \end{remark} \section{Koopman-based Data-driven Predictive Control}\label{sec: control} The DeePC framework~\eqref{eq: DeePC} can be extended to nonlinear systems by integrating condition~\eqref{eq:condition_lifting} into the equality constraints. However, the DeePC formulation suffers from the fact that the prediction step is interwoven with the control step. In other words, the algorithm may use a non-optimal prediction for control. When the penalty factor $\lambda_y$ is not sufficiently large and the system is initialized away from the reference, the algorithm tends to compensate the difference with a relatively large $\sigma_y$, which can result in control failure. To tackle this problem, we propose a bi-level programming formulation~\cite{dempe2002foundations}, where the prediction step is independent of the control step.
\begin{subequations}\label{eq: bilevel formulation} \begin{equation}\label{eq: bilevel_control} \begin{aligned} \min_{\mathbf{u}, \mathbf{y}, g} &(\sum_{k=0}^{N-1}\norm{y_k-r_{t+k}}_Q^2+\norm{u_k}_R^2)\\ \text{subject to } & u_k \in \mathcal{U}, \forall k \in {0, \dotsc, N-1}\\ & y_k \in \mathcal{Y}, \forall k \in {0, \dotsc, N-1}\\ &Y_fg = \mathbf{y}\\ \text{for some } &g \in \Phi(\mathbf{u})\\ \end{aligned} \end{equation} \begin{equation}\label{eq: bilevel_prediction} \begin{aligned} \Phi(\mathbf{u}) = \text{arg}\min_{g}& \lambda_g\norm{g}_2^2+\lambda_y\norm{ Zg - z}_2^2\\ \text{subject to} & \begin{bmatrix} U_p \\ U_f \end{bmatrix} g= \begin{bmatrix} \mathbf{u}_{ini} \\ \mathbf{u} \end{bmatrix} \\ \text{with parameter } & z = \phi_{u,\theta}(\textbf{u}_{ini},\textbf{y}_{ini}) \end{aligned} \end{equation} \end{subequations} The bi-level problem introduces a hierarchical structure, where the upper-level problem (\ref{eq: bilevel_control}) constitutes the control step and the lower-level problem (\ref{eq: bilevel_prediction}) functions as the prediction step. Note that, compared to DeePC (\ref{eq: DeePC}), in (\ref{eq: bilevel_prediction}) the squares of the two norms are used so that the objective function of the lower-level problem remains smooth. A common approach to solving a bi-level problem is to transform it into a single-level problem. Applying optimality conditions and introducing the optimal value function are the two main categories of transformation approaches, provided the bi-level problem fulfills certain conditions \cite[Chapter 5]{dempe2002foundations}.
Here we present the result of replacing the lower-level problem (\ref{eq: bilevel_prediction}) with its KKT conditions: \begin{equation}\label{eq: single level} \begin{aligned} \min_{g, \mathbf{u}, \mathbf{y}, \mu_1, \mu_2} &(\sum_{k=0}^{N-1}\norm{y_k-r_{t+k}}_Q^2+\norm{u_k}_R^2)\\ \text{subject to} & \begin{bmatrix} U_p \\ U_f \\ Y_f \end{bmatrix} g= \begin{bmatrix} \mathbf{u}_{ini}\\ \mathbf{u} \\ \mathbf{y} \end{bmatrix} \\ 2\lambda_gg^\top + 2\lambda_y(&Zg - z)^\top Z + \mu_1^\top U_p + \mu_2^\top U_f = 0 \\ & u_k \in \mathcal{U}, \forall k \in {0, \dotsc, N-1}\\ & y_k \in \mathcal{Y}, \forall k \in {0, \dotsc, N-1} \end{aligned} \end{equation} This equivalent single-level problem is solvable by many optimization toolboxes. \begin{remark} Solving a bi-level problem is in general NP-hard. To the best of our knowledge, there is no general approach to solve a bi-level optimization problem whose lower-level problem is non-convex. It is worth mentioning that the Wasserstein-distance-based prediction problem presented in Section~\ref{subsec: pred_stoch} is non-convex. We therefore leave the control formulation integrated with Wasserstein-distance-based prediction as future work. \end{remark} The algorithm is summarized as follows: \begin{algorithm}[htbp] \label{alg: Koopman based DeePC} \SetAlgoLined \For{$t=T_{ini},\dotsc$}{ Set $z = \phi_{u,\theta}(\textbf{u}_{ini},\textbf{y}_{ini})$ \; Solve (\ref{eq: single level}) to obtain an optimal input sequence $\boldsymbol{u}^*$ \; Set $u_t = \boldsymbol{u}^*(0)$ \; Apply $u_t$ to the system and measure $y_t$ } \caption{Koopman based DeePC} \end{algorithm} where $\boldsymbol{u}^*(0)$ denotes the first element of $\boldsymbol{u}^*$. \section{Simulation results}\label{sec: simulation} In this section, the prediction results on a Van der Pol oscillator based on Monte-Carlo prediction and Wasserstein-distance-based prediction are first illustrated. Then a numerical experiment of controlling a bilinear motor is presented.
We finally demonstrate the potential of our proposed scheme for large-scale problems with an example of controlling the nonlinear Korteweg-de Vries equation. The source code of the numerical examples can be accessed through \url{https://github.com/RencciW/DataDrivenControlCode}. \subsection{Stochastic prediction} We show prediction results for trajectories of a Van der Pol oscillator \begin{equation*} \begin{aligned} \dot{x} = \begin{bmatrix} x_2 \\ \mu(1-x_1^2)x_2-x_1+u \end{bmatrix} \end{aligned} \end{equation*} with $\mu = 1$. We train a 5-layer network (layer widths 2, 12, 22, 12 and 12) with 1100 data points sampled from 100 random trajectories generated by the Van der Pol oscillator. Each layer except the input layer is followed by a dropout layer. ReLU is chosen as the activation function for each hidden layer and Adam as the optimizer with learning rate $10^{-3}$. The dropout rate is set to $0.2$. The code is implemented with PyTorch \cite{paszke2017automatic}. From each of the 100 trajectories, we sample 3 trajectory fragments for the construction of the Hankel matrix. For a better comparison with the following results, in the test phase we choose 3 trajectory fragments from each of 24 trajectories to formulate the Hankel matrix with $T_{ini}=1$ and $N=10$. The trained network is tested on 50 data points sampled from trajectories that are independent of the data used for training and Hankel matrix formulation. The same data are forwarded 120 times through the network, and the mean value and standard deviation of the prediction are computed. The results are shown below; the light blue color indicates two times the standard deviation.
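The test-time procedure above can be sketched in plain NumPy (an illustrative stand-in for the PyTorch network; the weights here are random, not trained): dropout remains active at inference, and repeated stochastic forward passes yield an empirical predictive mean and standard deviation.

```python
import numpy as np

rng = np.random.default_rng(3)

# MC-dropout sketch (illustrative stand-in for the trained PyTorch
# network; weights are random): dropout stays active at test time, and
# repeated stochastic forward passes give a predictive mean and std.
W1 = rng.standard_normal((12, 2)) * 0.5
W2 = rng.standard_normal((2, 12)) * 0.5
p_drop = 0.2

def forward(x):
    h = np.maximum(W1 @ x, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) >= p_drop     # fresh dropout mask
    h = h * mask / (1.0 - p_drop)            # inverted dropout scaling
    return W2 @ h

x = np.array([0.3, -0.7])
preds = np.array([forward(x) for _ in range(120)])
mean, sd = preds.mean(axis=0), preds.std(axis=0)
assert mean.shape == (2,) and np.all(sd >= 0.0)
```

The 120 stochastic passes mirror the experiment's setting; the spread of `preds` is what the shaded band in the figures visualizes.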
\begin{figure}[htbp] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/original_hankel_x_1.png} \caption{Prediction result ($x_1$) } \end{subfigure} \vfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/original_hankel_x_2.png} \caption{Prediction result ($x_2$)} \end{subfigure} \caption{Prediction result using the Koopman operator learned by MC dropout} \label{fig: original_hankel_x} \end{figure} \subsection{Prediction based on Wasserstein distance} We test the new loss function with Van der Pol data. From each of 24 different trajectories, one trajectory fragment is chosen to formulate the Page matrix. We first lift the data for the Page matrix construction with the dropout neural network trained in the last subsection. The mean value and the standard deviation of the lifted data are computed for estimation. The data are then passed to the prediction problem to obtain an optimizer $g^*$, and the future trajectory is predicted by $x=X_fg^*$, where $X_f$ is the Hankel matrix block for prediction. \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/nuclear_smooth_x_1.png} \caption{Prediction result $(x_1)$} \end{subfigure} \vfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/nuclear_smooth_x_2.png} \caption{Prediction result $(x_2)$} \end{subfigure} \caption{Prediction result using the loss function derived from the Wasserstein distance} \label{fig: nuclear_x} \end{figure} We compute the mean squared error (MSE) of the prediction based on the proposed Wasserstein loss and the original quadratic loss at the $9$-th time step.
The comparison is summarized in the following table: \begin{table}[!htb] \centering \begin{tabular}{l|c c} \hline & $x_1$ & $x_2$\\ \hline Wasserstein loss & 0.0817 & 0.0855\\ Original loss & 3.8707 & 1.3909\\ \hline \end{tabular} \caption{MSE comparison between prediction results computed with different loss functions} \label{tab: loss comparison} \end{table} It is clear that the Wasserstein loss outperforms the original quadratic loss when the uncertainty of the lifting functions is considered. \subsection{Control with Koopman-based DeePC} \subsubsection{Control of a bilinear motor} We first compare the proposed control algorithm with the Koopman operator-based MPC controller (K-MPC) proposed in \cite{korda2018linear} by controlling a bilinear model of a DC motor~\cite{DANIELBERHE1997203} \begin{equation*} \begin{aligned} \dot{x}_1 &= - (R_a/L_a)x_1 - (k_m/L_a)x_2u + u_a/L_a\\ \dot{x}_2 &=-(B/J)x_2 + (k_m/J)x_1u-\tau_1/J\\ y &= x_2 \end{aligned} \end{equation*} where $x_1$ is the rotor current, $x_2$ the angular velocity, the control input $u$ is the stator current, and the output $y$ is the angular velocity. The parameters are $L_a=0.314, R_a=12.345, k_m=0.253, J = 0.00441, B = 0.00732, \tau_1=1.47, u_a=60$. The physical constraints on the control input are $u\in [-1, 1]$. We use 40 trajectories with time horizon $0.25s$ to construct a mosaic Hankel matrix. All trajectories are randomly initialized in the unit box $[-1, 1]^2$. The control input follows a uniform distribution over the interval $[-1, 1]$. We choose 40 thin-plate-spline radial basis functions as lifting functions, with centers selected uniformly at random over $[-1, 1]^3$. Since the system states are not directly measurable, we choose the number of delays $n_d=1$. We define $C =[1, 0, \dotsc, 0]$, $Q= Q_{N_p}=10$, $R=0.01$. The prediction horizon is $N = 10$, which corresponds to $0.1s$. Since the system is linear in the lifted space, we choose $T_{ini} = 1$.
The reference is designed as $r(t) = 0.5\cos(2\pi t/3)$. We impose the output constraint $y \in [-0.4, 0.4]$. We simulate for $3s$ and compare the result with the model-based method K-MPC proposed in \cite{korda2018linear}. \begin{figure}[htbp] \centerline{\includegraphics[width = 0.4\textwidth]{Figures/motor_input.jpg}} \caption{Feedback control input of a bilinear motor} \label{fig: motor_input} \end{figure} \begin{figure}[htbp] \centerline{\includegraphics[width = 0.4\textwidth]{Figures/motor.jpg}} \caption{Angular velocity of a bilinear motor} \label{fig: motor_output} \end{figure} As shown in Figures \ref{fig: motor_input} and \ref{fig: motor_output}, the algorithm is capable of following the reference without violating the constraints, although, compared to K-MPC, the input computed by our method oscillates slightly where the input trajectory is non-smooth. \subsubsection{Control of the nonlinear Korteweg–de Vries equation} Our next simulation controls the nonlinear Korteweg–de Vries (KdV) equation, which models the propagation of acoustic waves in a plasma or of shallow-water waves \cite{miura1976korteweg}. The equation is given as: \begin{equation*} \frac{\partial y(t, x)}{\partial t} + y(t, x)\frac{\partial y(t, x)}{\partial x} + \frac{\partial ^3y(t, x)}{\partial x^3} = u(t, x) \end{equation*} where $y(t, x)$ is the unknown function and $u(t, x)$ the control input, with $x\in[-\pi, \pi]$. The space is discretized into 128 points and the time step is $\Delta t = 0.02s$. The input is assumed to be of the form $u(t, x)= \sum_{i = 1}^3 u_i(t)v_i(x)$, where the $v_i(x) = e^{-25(x-c_i)^2}$ are three spatial basis functions with centers $c_1=-\pi/2$, $c_2=0$, $c_3=\pi/2$. The input is constrained to $[-1, 1]$. We initialize the system by convexly combining 3 fixed spatial profiles $y_0^1=e^{-(x-\pi/2)^2}$, $y_0^2=-\sin^2(x/2)$, $y_0^3=e^{-(x+\pi/2)^2}$.
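The input parameterization and the initial-condition construction can be sketched as follows; the grid layout and variable names are illustrative, and the Dirichlet draw is just one way to produce convex weights:

```python
import numpy as np

# spatial grid: 128 points on [-pi, pi)
x = np.linspace(-np.pi, np.pi, 128, endpoint=False)

# three spatial input basis functions v_i(x) = exp(-25 (x - c_i)^2)
centers = np.array([-np.pi / 2, 0.0, np.pi / 2])
V = np.exp(-25.0 * (x[None, :] - centers[:, None]) ** 2)  # (3, 128)

def input_profile(u_coeffs):
    """u(t, x) = sum_i u_i(t) v_i(x), coefficients clipped to [-1, 1]."""
    u_coeffs = np.clip(np.asarray(u_coeffs), -1.0, 1.0)
    return u_coeffs @ V                                   # (128,)

# random convex combination of the three fixed initial profiles
profiles = np.stack([
    np.exp(-(x - np.pi / 2) ** 2),
    -np.sin(x / 2) ** 2,
    np.exp(-(x + np.pi / 2) ** 2),
])
rng = np.random.default_rng(0)
w = rng.dirichlet(np.ones(3))     # nonnegative weights summing to 1
y0 = w @ profiles                 # initial condition y(0, x)
```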
We choose the state itself, the elementwise square of the state, and the elementwise product of the state with its periodic shift as the lifting functions. $Q$ is the identity matrix and $R$ is the zero matrix. The prediction horizon is $N = 5$, which corresponds to $0.1s$. $T_{ini}$ remains equal to $1$. The Hankel matrix is formulated from 63 trajectories, each of which is simulated for $0.5s$. \begin{figure}[htbp] \centerline{\includegraphics[width = 0.38\textwidth]{Figures/KdV_input.jpg}} \caption{Feedback control input of KdV} \label{fig: kdV_input} \end{figure} \begin{figure}[htbp] \centerline{\includegraphics[width = 0.38\textwidth]{Figures/KdV_output.jpg}} \caption{Tracking result} \label{fig: KdV_output} \end{figure} Despite the large dimension, the algorithm is still capable of computing the optimal input in an acceptable time and tracking the reference. \section{Conclusion}\label{sec: conclusion} In this work, we extend a data-driven predictive control method to nonlinear systems. The underlying idea is to lift the system with the Koopman operator into an infinite-dimensional space where it evolves linearly along the nonlinear system trajectories. We propose an approximation of nonlinear lifting functions within a purely data-driven framework, together with a treatment of the approximation uncertainty, which enables a novel data-driven simulation scheme based on the Wasserstein distance.
\section{Introduction} \label{sec:introduction} Optimal transport and the Wasserstein distance~\cite{Villani-09,peyre2020computational} have become popular tools in machine learning and data science. For example, optimal transport has been utilized in generative modeling tasks to generate realistic images~\cite{arjovsky2017wasserstein,tolstikhin2018wasserstein}, in domain adaptation applications to transfer knowledge from source to target domains~\cite{courty2017joint,bhushan2018deepjdot}, in clustering applications to capture the heterogeneity of data~\cite{ho2017multilevel}, and in other applications~\cite{le2021lamda, xu2021vocabulary, yang2020predicting}. Despite its appealing performance, the Wasserstein distance is known to suffer from high computational complexity, namely, its computational complexity is at the order of $\mathcal{O}(m^3 \log m)$~\cite{pele2009} when the probability measures have at most $m$ supports. In addition, the Wasserstein distance also suffers from the curse of dimensionality, namely, its sample complexity is at the order of $\mathcal{O}(n^{-1/d})$~\cite{Fournier_2015} where $n$ is the sample size. A popular line of work to improve the speed of computation and the sample complexity of the Wasserstein distance is to add an entropic regularization term to the Wasserstein distance~\cite{cuturi2013sinkhorn}. This variant is known as entropic regularized optimal transport (or, equivalently, entropic regularized Wasserstein). By using the entropic version, we can approximate the value of the Wasserstein distance with a computational complexity at the order of $\mathcal{O}(m^2)$~\cite{altschuler2017near, lin2019efficient, Lin-2019-Efficiency, Lin-2020-Revisiting} (up to some polynomial orders of approximation errors). Furthermore, the sample complexity of the entropic version has also been shown to be at the order of $\mathcal{O}(n^{-1/2})$~\cite{Mena_2019}, which indicates that it does not suffer from the curse of dimensionality.
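As an illustration of this entropic approach, the following is a minimal numpy sketch of plain Sinkhorn iterations (no log-domain stabilization; the toy point clouds and the function name are ours):

```python
import numpy as np

def sinkhorn(a, b, C, reg=1.0, n_iters=5000, tol=1e-12):
    """Entropic-regularized OT between histograms a and b with cost matrix C,
    via plain (unstabilized) Sinkhorn iterations; returns the transport plan."""
    K = np.exp(-C / reg)                          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
        if np.abs(v * (K.T @ u) - b).max() < tol:  # column-marginal error
            break
    return u[:, None] * K * v[None, :]            # transport plan

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 2))                       # toy point clouds (ours)
y = rng.normal(size=(6, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared-distance cost
a = np.full(5, 1 / 5)
b = np.full(6, 1 / 6)
plan = sinkhorn(a, b, C)
cost = (plan * C).sum()                           # transport part of the objective
```

For small regularization values, the Gibbs kernel underflows and a log-domain implementation is needed; the plain version above is only meant to convey the idea.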
\vspace{0.5 em} \noindent Another useful line of work to improve both the computational and sample complexities of the Wasserstein distance is based on the closed-form solution of optimal transport in one dimension. A notable distance along this direction is the sliced Wasserstein (SW) distance~\cite{bonneel2015sliced}. Due to its fast computational complexity of $\mathcal{O}(m \log_2 m)$ and its sample complexity of $\mathcal{O}(n^{-1/2})$, which is free of the curse of dimensionality, the sliced Wasserstein has been applied successfully in several applications, such as generative modeling~\cite{wu2019sliced,deshpande2018generative,kolouri2018sliced}, domain adaptation~\cite{lee2019sliced}, and clustering~\cite{kolouri2018slicedgmm}. The sliced Wasserstein is defined between two probability measures whose supports belong to a vector space, e.g., $\mathbb{R}^d$. As defined in~\cite{bonneel2015sliced}, the sliced Wasserstein is written as the expectation of the one-dimensional Wasserstein distance between the two projected measures over the uniform distribution on the unit sphere. Due to the intractability of the expectation, Monte Carlo samples from the uniform distribution over the unit sphere are used to approximate the sliced Wasserstein distance. The number of samples is often called the number of projections and is denoted by $L$. On the computational side, the computation of the sliced Wasserstein can be decomposed into two steps. In the first step, $L$ projecting directions are sampled and then stacked as a matrix (the projection matrix). After that, the projection matrix is multiplied by the two data matrices, resulting in two matrices that represent $L$ one-dimensional projected probability measures. In the second step, $L$ one-dimensional Wasserstein distances are computed between the pairs of projected measures with the same projecting direction. Finally, the average of those distances yields the value of the sliced Wasserstein.
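The two-step computation just described can be sketched in a few lines of numpy for equal-size empirical measures (the function name and toy data are ours):

```python
import numpy as np

def sliced_wasserstein(X, Y, L=100, p=2, seed=0):
    """Monte Carlo sliced Wasserstein-p between two empirical measures
    with the same number of supports (rows of X and Y)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    theta = rng.normal(size=(L, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # uniform on sphere
    Xp = X @ theta.T                     # (n, L) projected supports
    Yp = Y @ theta.T
    # 1D Wasserstein-p between equal-size empirical measures: sorted matching
    diffs = np.abs(np.sort(Xp, axis=0) - np.sort(Yp, axis=0)) ** p
    return diffs.mean() ** (1 / p)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
Y = rng.normal(size=(200, 10)) + 2.0     # same law, shifted
sw = sliced_wasserstein(X, Y)
```

Sorting implements the one-dimensional optimal coupling, which is what makes each projected distance cheap to evaluate.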
\vspace{0.5em} \noindent Despite being applied widely in tasks that deal with probability measures over images~\cite{wu2019sliced,deshpande2018generative}, the conventional formulation of the sliced Wasserstein is not well suited to the nature of images. In particular, an image is not a vector but a tensor. Therefore, a probability measure over images should be defined over the space of tensors instead of vectors. The conventional formulation leads to an extra step in using the sliced Wasserstein on the domain of images, namely vectorization: all images (supports of the two probability measures) are transformed into vectors by a deterministic one-to-one mapping, the "reshape" operator. This extra step does not preserve the spatial structure of the supports, which is crucial information for images. Furthermore, the vectorization step also poses certain challenges to designing efficient ways of projecting (slicing) samples to one dimension based on prior knowledge about the domain of the samples. Finally, prior empirical investigations indicate that several slices in the conventional sliced Wasserstein collapse the two probability measures to the Dirac delta at zero~\cite{deshpande2018generative,deshpande2019max,kolouri2019generalized}; these slices therefore do not contribute to the overall discrepancy. These works suggest that the space of projecting directions in the conventional sliced Wasserstein (the unit hyper-sphere) is potentially not optimal, at least for images. \vspace{0.5em} \noindent \textbf{Contribution.} To address these issues of the sliced Wasserstein over images, we propose to replace the conventional formulation of the sliced Wasserstein with a new formulation that is defined on the space of probability measures over tensors. Moreover, we also propose a novel slicing process that replaces the conventional matrix multiplication with convolution operators~\cite{fukushima1982neocognitron,goodfellow2016deep}.
In summary, our main contributions are two-fold: \begin{enumerate} \item We leverage the benefits of convolution operators on images, including their efficient parameter sharing and memory saving as well as their superior performance in several tasks on images~\cite{krizhevsky2012imagenet,he2016deep}, to introduce efficient slicing methods for the sliced Wasserstein, named \emph{convolution slicers}. With the convolution slicers, we derive a novel variant of the sliced Wasserstein, named \emph{convolution sliced Wasserstein} (CSW). We investigate the metricity of CSW, its sample and computational complexities, and its connection to other variants of the sliced Wasserstein. \item We then illustrate the favorable performance of CSW in comparing probability measures over images. In particular, we show that CSW provides an almost identical discrepancy between MNIST's digits compared to that of the SW while requiring much less slicing memory. Furthermore, we compare SW and CSW in training deep generative models on standard benchmark image datasets, including CIFAR10, CelebA, STL10, and CelebA-HQ. Considering the quality of the trained generative models, the training speed, and the training memory of CSW and SW, we observe that CSW has more favorable performance than the vanilla SW. \end{enumerate} \vspace{0.5 em} \noindent \textbf{Organization.} The remainder of the paper is organized as follows. We first provide background on the Wasserstein distance, the conventional slicing process in the sliced Wasserstein distance, and the convolution operator in Section~\ref{sec:background}. In Section~\ref{sec:csw}, we propose the convolution slicing and the convolution sliced Wasserstein, and analyze some of their theoretical properties. The discussion of related works is given in Section~\ref{sec:relatedworks}. Section~\ref{sec:experiments} contains the application of CSW to generative models, qualitative experimental results, and quantitative experimental results on standard benchmarks.
We conclude the paper in Section~\ref{sec:conclusion}. Finally, we defer the proofs of key results and extra materials to the Appendices. \vspace{0.5 em} \noindent \textbf{Notation.} For any $d \geq 2$, $\sphere{d}:=\{\theta \in \mathbb{R}^{d}\mid ||\theta||_2^2 =1\}$ denotes the unit hyper-sphere in $\mathbb{R}^d$ in $\mathcal{L}_2$ norm, and $\mathcal{U}(\sphere{d})$ is the uniform measure over $\sphere{d}$. Moreover, $\delta$ denotes the Dirac delta function. For $p\geq 1$, $\mathcal{P}_p(\mathbb{R}^d)$ is the set of all probability measures on $\mathbb{R}^d$ that have finite $p$-th moments. For $\mu,\nu \in \mathcal{P}_p(\mathbb{R}^d)$, $\Pi(\mu,\nu):=\{\pi \in \mathcal{P}_p(\mathbb{R}^d \times \mathbb{R}^d) \mid \int_{\mathbb{R}^d} \pi(x,y) dx = \nu, \int_{\mathbb{R}^d} \pi(x,y) dy = \mu \}$ is the set of transportation plans between $\mu$ and $\nu$. For $m\geq 1$, we denote by $\mu^{\otimes m}$ the product measure whose support consists of $m$ i.i.d. random variables following $\mu$. For a vector $X \in \mathbb{R}^{dm}$, $X:=(x_1,\ldots,x_m)$, $P_{X}$ denotes the empirical measure $\frac{1}{m} \sum_{i=1}^m \delta_{x_i}$. For any two sequences $a_{n}$ and $b_{n}$, the notation $a_{n} = \mathcal{O}(b_{n})$ means that $a_{n} \leq C b_{n}$ for all $n \geq 1$, where $C$ is some universal constant. \section{Background} \label{sec:background} In this section, we first review the definitions of the Wasserstein distance, the conventional slicing, and the sliced Wasserstein distance, and discuss their limitations. We then review the convolution and padding operators on images.
\vspace{0.5 em} \noindent \textbf{Sliced Wasserstein:} For any $p \geq 1$ and dimension $d' \geq 1$, we first define the Wasserstein-$p$ distance~\cite{Villani-09, peyre2019computational} between two probability measures $\mu \in \mathcal{P}_p(\mathbb{R}^{d'})$ and $\nu \in \mathcal{P}_p(\mathbb{R}^{d'})$, which is given by: \begin{align} \label{eq:Wasserstein} \text{W}_p(\mu,\nu) : = \Big{(} \inf_{\pi \in \Pi(\mu,\nu)} \int_{\mathbb{R}^{d'} \times \mathbb{R}^{d'}} \| x - y\|_p^{p} d \pi(x,y) \Big{)}^{\frac{1}{p}}. \end{align} When $d'=1$, the Wasserstein distance has a closed form, namely $W_p(\mu,\nu) = ( \int_0^1 |F_\mu^{-1}(z) - F_{\nu}^{-1}(z)|^{p} dz )^{1/p}$, where $F_{\mu}$ and $F_{\nu}$ are the cumulative distribution functions (CDFs) of $\mu$ and $\nu$, respectively. Given this closed-form property of the Wasserstein distance in one dimension, the sliced Wasserstein distance~\cite{bonneel2015sliced} between $\mu$ and $\nu$ was introduced; it admits the following formulation: \begin{align} \label{eq:SW} \text{SW}_p(\mu,\nu) : = \left(\int_{\mathbb{S}^{d'-1}} \text{W}_p^p (\theta \sharp \mu,\theta \sharp \nu) d\theta \right)^{\frac{1}{p}}, \end{align} where $\theta \sharp \mu$ is the push-forward probability measure of $\mu$ through the function $T_\theta: \mathbb{R}^{d'} \to \mathbb{R}$ with $T_\theta(x) = \theta^\top x$. For each $\theta \in \mathbb{S}^{d'- 1}$, $\text{W}_p^p (\theta \sharp \mu,\theta \sharp \nu)$ can be computed in $\mathcal{O}(m \log_2 m)$ time, where $m$ is the number of supports of $\mu$ and $\nu$. However, the integration over the unit sphere in the sliced Wasserstein distance is intractable to compute.
Therefore, a Monte Carlo scheme is employed to approximate the integration, namely, $\theta_1,\ldots,\theta_L \sim \mathcal{U}(\sphere{d'})$ are drawn uniformly from the unit sphere and the approximation of the sliced Wasserstein distance is given by: \begin{align} \label{eq:hatSW} \text{SW}_p (\mu,\nu) \approx \left(\frac{1}{L}\sum_{i=1}^L \text{W}_p^p (\theta_i \sharp \mu,\theta_i \sharp \nu) \right)^{\frac{1}{p}}. \end{align} In practice, the number of projections $L$ should be chosen sufficiently large compared to the dimension $d'$. This can be undesirable since the computational complexity of SW is linear in $L$. \vspace{0.5 em} \noindent \textbf{Sliced Wasserstein on Images:} Now, we focus on two probability measures over images: $\mu,\nu \in \mathcal{P}_p(\mathbb{R}^{c \times d \times d})$ for a number of channels $c \geq 1$ and dimension $d \geq 1$. In this case, the sliced Wasserstein between $\mu$ and $\nu$ is defined as: \begin{align} \label{eq:SWimage} \text{SW}_p(\mu,\nu) = \text{SW}_p(\mathcal{R}\sharp \mu,\mathcal{R}\sharp \nu), \end{align} where $\mathcal{R}: \mathbb{R}^{c \times d \times d }\to \mathbb{R}^{cd^2}$ is a deterministic one-to-one "reshape" mapping. \vspace{0.5 em} \noindent \textbf{The slicing process:} The slicing of the sliced Wasserstein distance on probability measures over images consists of two steps: vectorization and projection. For better understanding, we visualize an example of projecting a probability measure $\mu \in \mathcal{P}_p(\mathbb{R}^{c\times d \times d})$ that has $n$ supports in Figure~\ref{fig:sw}. In short, the supports of $\mu$ are transformed into vectors in $\mathbb{R}^{cd^2}$ and are stacked as a matrix of size $n \times cd^2$. A projection matrix of size $L\times cd^2$ is then sampled, each row of which is a random vector following the uniform measure over the unit hyper-sphere. Finally, the multiplication of those two matrices returns $L$ projected probability measures of $n$ supports in one dimension.
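Concretely, this vectorize-then-project pipeline amounts to two matrix operations; a small numpy sketch with illustrative sizes (all shapes below are chosen only for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, d, L = 8, 3, 16, 5                   # supports, channels, size, slices

images = rng.normal(size=(n, c, d, d))     # supports of the measure mu

# step 1: vectorization -- flatten each image into a cd^2 vector
flat = images.reshape(n, c * d * d)        # (n, 768)

# step 2: projection -- L random directions on the unit hyper-sphere
theta = rng.normal(size=(L, c * d * d))
theta /= np.linalg.norm(theta, axis=1, keepdims=True)

projected = flat @ theta.T                 # (n, L): L 1D projected measures
```

Note that the projection matrix alone stores $L \cdot cd^2$ floats, which is the slicing memory that the convolution slicers introduced later are designed to reduce.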
\vspace{0.5 em} \noindent \textbf{Limitation of the conventional slicing:} First of all, images contain spatial relations across channels and local information. Transforming images into vectors makes it challenging to exploit that information. Second, vectorization leads to the usage of projecting directions from the unit hyper-sphere, which can contain several directions that do not have good discriminative power. Finally, sampling projecting directions in high dimension is also time- and memory-consuming. As a consequence, avoiding the vectorization step can improve the efficiency of the whole process. \begin{figure*}[!t] \begin{center} \begin{tabular}{c} \widgraph{0.9\textwidth}{figures/SW.pdf} \end{tabular} \end{center} \vskip -0.2in \caption{ \footnotesize{The conventional slicing process of the sliced Wasserstein distance. The images $X_{1}, \ldots, X_{n} \in \mathbb{R}^{c \times d \times d}$ are first flattened into vectors in $\mathbb{R}^{cd^2}$ and then the Radon transform is applied to these vectors, yielding the sliced Wasserstein~(\ref{eq:SWimage}) on images. } } \label{fig:sw} \vskip -0.1in \end{figure*} \vspace{0.5 em} \noindent \textbf{Convolution operator:} We now define the convolution operator on tensors~\cite{fukushima1982neocognitron}, which will be used as an alternative way of projecting images to one dimension in the sliced Wasserstein. The definition of the convolution operator with stride and dilation is as follows. \begin{definition} (Convolution) \label{def:conv} Given the number of channels $c\geq 1$, the dimension $d\geq 1$, the stride size $s\geq 1$, the dilation size $b\geq 1$, and the kernel size $k\geq 1$, the convolution of a tensor $X \in \mathbb{R}^{c \times d \times d}$ with a kernel $K \in \mathbb{R}^{c \times k \times k}$ is \begin{align} X \stackrel{s,b}{*} K = Y, \quad Y \in \mathbb{R}^{1 \times d' \times d'}, \nonumber \end{align} where $d' = \frac{d-b(k-1)-1}{s}+1$.
For $i=1,\ldots,d'$ and $j=1,\ldots,d'$, $Y_{1,i,j}$ is defined as: \begin{align*} Y_{1,i,j} = \sum_{h=1}^{c} \sum_{i'=0}^{k-1} \sum_{j'=0}^{k-1} X_{h,s(i-1)+bi'+1,s(j-1)+bj'+1}\cdot K_{h,i'+1,j'+1}. \end{align*} \end{definition} \vspace{0.5em} \noindent From its definition, we can check that the computational complexity of the convolution operator is $\mathcal{O}\left( c\left(\frac{d-b(k-1)-1}{s}+1\right)^2 k^2 \right)$. \section{Convolution Sliced Wasserstein} \label{sec:csw} In this section, we will define a convolution slicer that maps a tensor to a scalar by convolution operators. Moreover, we discuss the convolution slicer and some of its specific forms including the convolution-base slicer, the convolution-stride slicer, the convolution-dilation slicer, and their non-linear extensions. After that, we derive the convolution sliced Wasserstein (CSW), a family of variants of sliced Wasserstein, that utilizes a convolution slicer as the projecting method. Finally, we discuss some theoretical properties of CSW, namely, its metricity, its computational complexity, its sample complexity, and its connection to other variants of sliced Wasserstein. \subsection{Convolution Slicer} \label{subsec:cslicing} We first start with the definition of the convolution slicer, which plays an important role in defining convolution sliced Wasserstein. \begin{definition} (Convolution Slicer) \label{def:cslicer} For $N\geq 1$, given a sequence of kernels $K^{(1)} \in \mathbb{R}^{c^{(1)}\times d^{(1)} \times d^{(1)}},\ldots,$\\$K^{(N)} \in \mathbb{R}^{c^{(N)}\times d^{(N)} \times d^{(N)}}$, a \emph{convolution slicer} $\mathcal{S}(\cdot|K^{(1)},\ldots,K^{(N)})$ on $\mathbb{R}^{c \times d \times d}$ is a composition of $N$ convolution functions with kernels $K^{(1)},\ldots, K^{(N)}$ (with stride or dilation if needed) such that: \begin{align*} \mathcal{S}(X|K^{(1)},\ldots, K^{(N)}) \in \mathbb{R} \quad \forall X \in \mathbb{R}^{c \times d \times d}. 
\end{align*} \end{definition} \vspace{0.5 em} \noindent As indicated in Definition~\ref{def:cslicer}, the idea of the convolution slicer is to progressively map a given data $X$ to a one-dimensional subspace through a sequence of convolution kernels, which capture spatial relations across channels as well as local information of the data. It is starkly different from the vectorization step in standard sliced Wasserstein on images~(\ref{eq:SWimage}). \vspace{0.5 em} \noindent Now, we will specify three particular types of convolution slicers based on using linear function on the convolution operator, named convolution-base, convolution-stride, and convolution-dilation slicers. We first start with the definition of the convolution-base slicer. \begin{definition} \label{def:linearslicer} (Convolution-base Slicer) Given $X \in \mathbb{R}^{c \times d \times d}$ ($d \geq 2$), \begin{enumerate} \item When $d$ is even, $N = [\log_2 d]$, sliced kernels are defined as $K^{(1)}\in \mathbb{R}^{c \times (2^{-1}d+1) \times (2^{-1}d+1)}$ and $K^{(h)}\in \mathbb{R}^{1 \times (2^{-h}d+1) \times (2^{-h}d+1)}$ for $h =2,\ldots,N-1$, and $K^{(N)}\in \mathbb{R}^{1 \times a \times a}$ where $a= \frac{d}{2^{N-1}}$. Then, the \emph{convolution-base slicer} $\mathcal{CS}\text{-b}(X|K^{(1)},\ldots,K^{(N)})$ is defined as: \begin{align} \mathcal{CS}\text{-b}(X|K^{(1)},\ldots,K^{(N)}) = X^{(N)}, \quad X^{(h)} = \begin{cases} X &h=0\\ X^{(h-1)} \conv{1,1}K^{(h)} & 1 \leq h \leq N, \end{cases} \end{align} \item When $d$ is odd, the \emph{convolution-base slicer} $\mathcal{CS}\text{-b}(X|K^{(1)},\ldots,K^{(N)})$ takes the form: \begin{align} \mathcal{CS}\text{-b}(X|K^{(1)},\ldots,K^{(N)}) = \mathcal{CS}\text{-b} (X \conv{1,1} K^{(1)}|K^{(2)},\ldots,K^{(N)}), \end{align} where $K^{(1)} \in \mathbb{R}^{c \times 2 \times 2}$ and $K^{(2)}, \ldots, K^{(N)}$ are the corresponding sliced kernels that are defined on the dimension $d-1$. 
\end{enumerate} \end{definition} \vspace{0.5em} \noindent The idea of the convolution-base slicer in Definition~\ref{def:linearslicer} is to reduce the width and the height of the image by half after each convolution operator. If the width and the height of the image are odd, the first convolution operator is to reduce the size of the image by one via convolution with kernels of size $2\times 2$, and then the same procedure as that of the even case is applied. We would like to remark that the conventional slicing of sliced Wasserstein in Section~\ref{sec:background} is equivalent to a convolution-base slicer $\mathcal{S}(\cdot|K^{(1)})$ where $K^{(1)} \in \mathbb{R}^{c\times d\times d}$ that satisfies the constraint $\sum_{h=1}^c \sum_{i=1}^d\sum_{j=1}^d K^{(1)2}_{h,i,j}=1$. \vspace{0.5 em} \noindent We now discuss the second variant of the convolution slicer, named convolution-stride slicer, where we further incorporate stride into the convolution operators. Its definition is as follows. \begin{definition} \label{def:csslicer} (Convolution-stride Slicer) Given $X \in \mathbb{R}^{c \times d \times d}$ ($d \geq 2$), \begin{enumerate} \item When $d$ is even, $N = [\log_2 d]$, sliced kernels are defined as $K^{(1)}\in \mathbb{R}^{c \times 2 \times 2}$ and $K^{(h)}\in \mathbb{R}^{1 \times 2 \times 2}$ for $h =2,\ldots,N-1$, and $K^{(N)}\in \mathbb{R}^{1 \times a \times a}$ where $a= \frac{d}{2^{N-1}}$. 
Then, the \emph{convolution-stride slicer} $\mathcal{CS}\text{-s}(X|K^{(1)},\ldots,K^{(N)})$ is defined as: \begin{align} \mathcal{CS}\text{-s}(X|K^{(1)},\ldots,K^{(N)}) = X^{(N)}, \quad X^{(h)} = \begin{cases} X &h=0\\ X^{(h-1)} \conv{2,1}K^{(h)} & 1 \leq h \leq N-1, \\ X^{(h-1)} \conv{1,1}K^{(h)} & h=N, \end{cases} \end{align} \item When $d$ is odd, the \emph{convolution-stride slicer} $\mathcal{CS}\text{-s}(X|K^{(1)},\ldots,K^{(N)})$ takes the form: \begin{align} \mathcal{CS}\text{-s}(X|K^{(1)},\ldots,K^{(N)}) = \mathcal{CS}\text{-s} (X \conv{1,1} K^{(1)}|K^{(2)},\ldots,K^{(N)}), \end{align} where $K^{(1)} \in \mathbb{R}^{c \times 2 \times 2}$ and $K^{(2)}, \ldots, K^{(N)}$ are the corresponding sliced kernels that are defined on the dimension $d-1$. \end{enumerate} \end{definition} \vspace{0.5em} \noindent Similar to the convolution-base slicer in Definition~\ref{def:linearslicer}, the convolution-stride slicer reduces the width and the height of the image by half after each convolution operator. We use the same procedure of reducing the height and the width of the image by one when the height and the width of the image are odd. The benefit of the convolution-stride slicer is that the size of its kernels does not depend on the width and the height of the images, unlike that of the convolution-base slicer. This difference improves both the computational and the projection memory complexities of the convolution-stride slicer over those of the convolution-base slicer (cf. Proposition~\ref{proposition:space_time_complexities}). \vspace{0.5em} \noindent Now, we discuss the next variant of convolution slicer, named convolution-dilation slicer, where we include dilation with appropriate size into the convolution operators.
\begin{definition} \label{def:csdslicer} (Convolution-dilation Slicer) Given $X \in \mathbb{R}^{c \times d \times d}$ ($d \geq 2$), \begin{enumerate} \item When $d$ is even, $N = [\log_2 d]$, sliced kernels are defined as $K^{(1)}\in \mathbb{R}^{c \times 2 \times 2}$ and $K^{(h)}\in \mathbb{R}^{1 \times 2 \times 2}$ for $h =2,\ldots,N-1$, and $K^{(N)}\in \mathbb{R}^{1 \times a \times a}$ where $a= \frac{d}{2^{N-1}}$. Then, the \emph{convolution-dilation slicer} $\mathcal{CS}\text{-d}(X|K^{(1)},\ldots,K^{(N)})$ is defined as: \begin{align} \mathcal{CS}\text{-d}(X|K^{(1)},\ldots,K^{(N)}) = X^{(N)}, \quad X^{(h)} = \begin{cases} X &h=0\\ X^{(h-1)} \conv{1,2}K^{(h)} & 1 \leq h \leq N-1, \\ X^{(h-1)} \conv{1,1}K^{(h)} & h=N, \end{cases} \end{align} \item When $d$ is odd, the \emph{convolution-dilation slicer} $\mathcal{CS}\text{-d}(X|K^{(1)},\ldots,K^{(N)})$ takes the form: \begin{align} \mathcal{CS}\text{-d}(X|K^{(1)},\ldots,K^{(N)}) = \mathcal{CS}\text{-d} (X \conv{1,1} K^{(1)}|K^{(2)},\ldots,K^{(N)}), \end{align} where $K^{(1)} \in \mathbb{R}^{c \times 2 \times 2}$ and $K^{(2)}, \ldots, K^{(N)}$ are the corresponding sliced kernels that are defined on the dimension $d-1$. \end{enumerate} \end{definition} \vspace{0.5em} \noindent As with the previous slicers, the convolution-dilation slicer also reduces the width and the height of the image by half after each convolution operator and it uses the same procedure for the odd dimension cases. The design of kernels' size of the convolution-dilation slicer is the same as that of the convolution-stride slicer. However, the convolution-dilation slicer has a bigger receptive field in each convolution operator which might be appealing when the information of the image is presented by a big block of pixels. 
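To make these definitions concrete, the following is a naive numpy sketch of the convolution in Definition~\ref{def:conv} and of a convolution-stride-style slicer. It is a simplified illustration (it assumes $d$ is a power of two and draws unit-norm kernels at random), not our benchmarked implementation:

```python
import numpy as np

def conv2d(X, K, s=1, b=1):
    """Naive convolution of Definition 1: X is (c, d, d), K is (c, k, k),
    stride s, dilation b; returns a (1, d', d') tensor."""
    c, d, _ = X.shape
    k = K.shape[1]
    dp = (d - b * (k - 1) - 1) // s + 1
    Y = np.zeros((1, dp, dp))
    for i in range(dp):
        for j in range(dp):
            patch = X[:, s * i : s * i + b * (k - 1) + 1 : b,
                         s * j : s * j + b * (k - 1) + 1 : b]
            Y[0, i, j] = (patch * K).sum()
    return Y

def random_kernel(c, k, rng):
    """Random kernel normalized to unit l2 norm."""
    K = rng.normal(size=(c, k, k))
    return K / np.linalg.norm(K)

def conv_stride_slicer(X, rng):
    """Halve the spatial size with 2x2 stride-2 kernels until a single
    scalar remains (simplified: assumes the spatial size is a power of two)."""
    while X.shape[1] > 1:
        k = 2
        s = 2 if X.shape[1] > 2 else 1    # last layer: stride-1 2x2 kernel
        X = conv2d(X, random_kernel(X.shape[0], k, rng), s=s)
    return float(X[0, 0, 0])

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 16, 16))
value = conv_stride_slicer(X, rng)        # one slice of X: a single scalar
```

Each layer only stores a $2\times 2$ (per-channel) kernel, which is what gives the stride slicer its small projection memory footprint.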
\vspace{0.5 em} \noindent \textbf{Computational and projection memory complexities of the convolution slicers:} We now establish the computational and projection memory complexities of the convolution-base, convolution-stride, and convolution-dilation slicers in the following proposition. We would like to recall that the projection memory complexity is the memory needed to store a slice (the convolution kernels). \begin{proposition} \label{proposition:space_time_complexities} (a) When $d$ is even, the computational and projection memory complexities of convolution-base slicer are respectively at the order of $\mathcal{O}(cd^4)$ and $\mathcal{O}(c d^2)$. When $d$ is odd, these complexities are at the order of $\mathcal{O}(cd^2 + d^4)$ and $\mathcal{O}(c + d^2)$. \noindent (b) The computational and projection memory complexities of convolution-stride slicer are respectively at the order of $\mathcal{O}(cd^2)$ and $\mathcal{O}(c + [\log_{2} d])$. \noindent (c) The computational and projection memory complexities of convolution-dilation slicer are respectively at the order of $\mathcal{O}(cd^2)$ and $\mathcal{O}(c + [\log_{2} d])$. \end{proposition} \noindent Proof of Proposition~\ref{proposition:space_time_complexities} is in Appendix~\ref{subsec:proof:proposition:space_time_complexities}. We recall that the computational complexity and the projection memory complexity of the conventional slicing in sliced Wasserstein are $\mathcal{O}(cd^2)$ and $\mathcal{O}(cd^2)$, respectively. We can observe that the convolution-base slicer has a worse computational complexity than the conventional slicing while having the same projection memory complexity. Since the size of its kernels does not depend on the size of the images, the convolution-stride slicer and the convolution-dilation slicer have the same computational complexity as the conventional slicing, $\mathcal{O}(cd^2)$.
However, their projection memory complexities are cheaper than conventional slicing, namely, $\mathcal{O}(c+ [\log_{2} d])$ compared to $\mathcal{O}(cd^2)$. \begin{figure*}[!t] \begin{center} \begin{tabular}{c} \widgraph{1\textwidth}{figures/CSW.pdf} \end{tabular} \end{center} \vskip -0.2in \caption{ \footnotesize{The convolution slicing process (using the convolution slicer). The images $X_{1}, \ldots, X_{n} \in \mathbb{R}^{c \times d \times d}$ are directly mapped to a scalar by a sequence of convolution functions which have kernels as random tensors. This slicing process leads to the convolution sliced Wasserstein~(\ref{eq:csw}) on images. } } \label{fig:csw} \vskip -0.1in \end{figure*} \vspace{0.5 em} \noindent \textbf{Non-linear convolution-base slicer: } The composition of convolution functions in the linear convolution slicer and its linear variants is still a linear function, which may not be effective when the data lie in a complex and highly non-linear low-dimensional subspace. A natural generalization of linear convolution slicers to enhance the ability of the slicers to capture the non-linearity of the data is to apply a non-linear activation function after convolution operators. This enables us to define a non-linear slicer in Definition~\ref{def:nonlinearslicer} in Appendix~\ref{sec:addmarterial}. The non-linear slicer can be seen as a defining function in generalized Radon Transform~\cite{radon20051} which was used previously in generalized sliced Wasserstein~\cite{kolouri2019generalized}. \subsection{Convolution Sliced Wasserstein} \label{subsec:csw} With the definition of convolution slicers on hand, we now state our general definition of convolution sliced Wasserstein. An illustration of the convolution sliced Wasserstein is given in Figure~\ref{fig:csw}. 
\begin{definition} \label{def:csw} For any $p \geq 1$, the \emph{convolution sliced Wasserstein} (CSW) of order $p >0$ between two given probability measures $\mu, \nu \in \mathcal{P}_p(\mathbb{R}^{c \times d \times d})$ is given by: \begin{align} \label{eq:csw} & \text{CSW}_p (\mu,\nu) : = \nonumber \\ & \left(\mathbb{E}_{K^{(1)}\sim \mathcal{U}(\mathcal{K}^{(1)}),\ldots, K^{(N)}\sim \mathcal{U}(\mathcal{K}^{(N)})} \left[W^p_p\left(\mathcal{S}(\cdot|K^{(1)}, \ldots, K^{(N)}) \sharp \mu, \mathcal{S}(\cdot|K^{(1)}, \ldots, K^{(N)})\sharp \nu\right)\right]\right)^{\frac{1}{p}}, \end{align} where $\mathcal{S}(\cdot|K^{(1)}, \ldots, K^{(N)})$ is a convolution slicer with $K^{(i)} \in \mathbb{R}^{c^{(i)}\times k^{(i)} \times k^{(i)}}$ for any $i \in [N]$ and $\mathcal{U}(\mathcal{K}^{(i)})$ is the uniform distribution with the realizations being in the set $\mathcal{K}^{(i)}$ which is defined as $\mathcal{K}^{(i)}:=\left\{K^{(i)} \in \mathbb{R}^{c^{(i)}\times k^{(i)} \times k^{(i)}}| \sum_{h=1}^{c^{(i)}}\sum_{i'=1}^{k^{(i)}}\sum_{j'=1}^{k^{(i)}}K^{(i)2}_{h,i',j'} =1\right\}$, namely, the set $\mathcal{K}^{(i)}$ consists of tensors $K^{(i)}$ whose squared $\ell_{2}$ norm is 1. \end{definition} \noindent When we specifically consider the convolution slicer as convolution-base slicer ($\mathcal{CS}\text{-b}$), convolution-stride slicer ($\mathcal{CS}\text{-s}$), and convolution-dilation slicer ($\mathcal{CS}\text{-d}$), we have the corresponding notions of convolution-base sliced Wasserstein (\text{CSW}-b), convolution-stride sliced Wasserstein (\text{CSW}-s), and convolution-dilation sliced Wasserstein (\text{CSW}-d). \vspace{0.5em} \noindent \textbf{Monte Carlo estimation and implementation:} Similar to the conventional sliced Wasserstein, the expectation with respect to kernels $K^{(1)}, \ldots, K^{(N)}$ uniformly drawn from the sets $\mathcal{K}^{(1)}, \ldots, \mathcal{K}^{(N)}$ in the convolution sliced Wasserstein is intractable to compute. 
Therefore, we also make use of the Monte Carlo method to approximate the expectation, which leads to the following approximation of the convolution sliced Wasserstein: \begin{align} \text{CSW}_p (\mu,\nu) \approx \frac{1}{L} \sum_{i = 1}^{L} W^p_p\left(\mathcal{S}(\cdot|K^{(1)}_i, \ldots, K^{(N)}_i) \sharp \mu, \mathcal{S}(\cdot|K^{(1)}_i, \ldots, K^{(N)}_i)\sharp \nu\right), \label{eq:Monte_Carlo_approx_CSW} \end{align} where $K^{(\ell)}_i$ are uniform samples from the sets $\mathcal{K}^{(\ell)}$ for any $\ell \in [N]$ and $i \in [L]$. Since each convolution slicer $\mathcal{S}(\cdot|K^{(1)}_i, \ldots, K^{(N)}_i)$ maps to one dimension, we can utilize the closed-form expression of the Wasserstein metric in one dimension to compute $W_p\left(\mathcal{S}(\cdot|K^{(1)}_i, \ldots, K^{(N)}_i) \sharp \mu, \mathcal{S}(\cdot|K^{(1)}_{i}, \ldots, K^{(N)}_{i})\sharp \nu\right)$ with a complexity of $\mathcal{O}(m \log_2 m)$ for each $i \in [L]$, where $m$ is the maximum number of supports of $\mu$ and $\nu$. Therefore, the total computational complexity of computing the Monte Carlo approximation~(\ref{eq:Monte_Carlo_approx_CSW}) is $\mathcal{O}(L m \log_2 m)$ when the probability measures $\mu$ and $\nu$ have at most $m$ supports. It is comparable to the computational complexity of sliced Wasserstein on images~(\ref{eq:SWimage}), where we directly vectorize the images and apply the Radon transform to these flattened images. Finally, for the implementation, we would like to remark that the $L$ convolution slicers in equation~(\ref{eq:Monte_Carlo_approx_CSW}) can be computed \textit{independently} and \textit{in parallel} using the group convolution implementation, which is supported in almost all libraries. \vspace{0.5 em} \noindent \textbf{Properties of convolution sliced Wasserstein:} We first have the following result for the metricity of the convolution sliced Wasserstein. 
\begin{theorem} \label{theorem:metricity_convolution_sliced} For any $p \geq 1$, the convolution sliced Wasserstein $\text{CSW}_p(.,.)$ is a pseudo-metric on the space of probability measures on $\mathbb{R}^{c \times d \times d}$, namely, it is symmetric, satisfies the triangle inequality, and $\text{CSW}_{p}(\mu, \nu) = 0 \notiff \mu = \nu$. \end{theorem} \noindent Proof of Theorem~\ref{theorem:metricity_convolution_sliced} is in Appendix~\ref{subsec:proof:theorem:metricity_convolution_sliced}. Our next result establishes the connection between the convolution sliced Wasserstein and the max-sliced Wasserstein and Wasserstein distances. \begin{proposition} \label{proposition:connection_sliced} For any $p \geq 1$, we find that \begin{align*} \text{CSW}_p (\mu,\nu) \leq \text{max-SW}_p(\mu,\nu) \leq W_{p}(\mu, \nu), \end{align*} where $\text{max-SW}_p(\mu,\nu) : = \max_{\theta \in \mathbb{R}^{cd^2}: \|\theta\| \leq 1} \text{W}_p (\theta \sharp \mu,\theta \sharp \nu)$ is the max-sliced Wasserstein metric of order $p$ between $\mu$ and $\nu$. \end{proposition} \vspace{0.5em} \noindent Proof of Proposition~\ref{proposition:connection_sliced} is in Appendix~\ref{subsec:proof:proposition:connection_sliced}. Given the bounds in Proposition~\ref{proposition:connection_sliced}, we demonstrate that the convolution sliced Wasserstein does not suffer from the curse of dimensionality for inference purposes, namely, the sample complexity for the empirical measure of i.i.d. samples to approximate the underlying distribution is of order $\mathcal{O}(n^{-1/2})$. \begin{proposition} \label{proposition:rate_convolution} Assume that $P$ is a probability measure supported on a compact set of $\mathbb{R}^{c \times d \times d}$. Let $X_{1}, X_{2}, \ldots, X_{n}$ be i.i.d. samples from $P$, and denote by $P_{n} = \frac{1}{n} \sum_{i = 1}^{n} \delta_{X_{i}}$ the empirical measure of these data. 
Then, for any $p \geq 1$, there exists a universal constant $C > 0$ such that \begin{align*} \mathbb{E} [\text{CSW}_p (P_{n},P)] \leq C \sqrt{\frac{(cd^2 + 1) \log n}{n}}, \end{align*} where the outer expectation is taken with respect to the data $X_{1}, X_{2}, \ldots, X_{n}$. \end{proposition} \noindent Proof of Proposition~\ref{proposition:rate_convolution} is in Appendix~\ref{subsec:proof:proposition:rate_convolution}. The result of Proposition~\ref{proposition:rate_convolution} indicates that the sample complexity of the convolution sliced Wasserstein is comparable to that of the sliced Wasserstein on images~(\ref{eq:SWimage}), which is of order $\mathcal{O}(n^{-1/2})$~\cite{Bobkov_2019}, and better than that of the Wasserstein metric, which is of order $\mathcal{O}(n^{-1/(2cd^2)})$~\cite{Fournier_2015}. \vspace{0.5 em} \noindent \textbf{Extension to non-linear convolution sliced Wasserstein:} In Appendix~\ref{sec:addmarterial}, we provide a non-linear version of the convolution sliced Wasserstein, named non-linear convolution sliced Wasserstein. The high-level idea of the non-linear version is to incorporate non-linear activation functions into the convolution-base, convolution-stride, and convolution-dilation slicers. The inclusion of non-linear activation functions enhances the ability of the slicers to capture the non-linearity of the data. By plugging these non-linear convolution slicers into the general definition of the convolution sliced Wasserstein in Definition~\ref{def:csw}, we obtain the non-linear variants of convolution sliced Wasserstein. Details of these variants are in Appendix~\ref{sec:addmarterial}. \section{Related Works} \label{sec:relatedworks} Sliced Wasserstein is used as a pooling mechanism for aggregating a set of features in~\cite{naderializadeh2021pooling}. Sliced Wasserstein gradient flows are investigated in~\cite{liutkus2019sliced,bonet2021sliced}. 
Variational inference based on sliced Wasserstein is carried out in~\cite{yi2021sliced}. Similarly, sliced Wasserstein is used for approximate Bayesian computation in~\cite{nadjahi2020approximate}. Statistical guarantees of training generative models with sliced Wasserstein are derived in~\cite{nadjahi2019asymptotic}. Other frameworks for generative modeling using sliced Wasserstein are sliced iterative normalizing flows~\cite{dai2021sliced} and run-sort-rerun for fine-tuning pre-trained models~\cite{lezama2021run}. Differentially private sliced Wasserstein is proposed in~\cite{rakotomamonjy2021differentially}. Approximating Wasserstein distance based on one-dimensional transportation plans from orthogonal projecting directions is introduced in~\cite{rowland2019orthogonal}. To reduce the projection complexity of sliced Wasserstein, a biased approximation based on the concentration of Gaussian projections is proposed in~\cite{nadjahi2021fast}. Augmenting probability measures to a higher-dimensional space for a better linear separation is used in augmented sliced Wasserstein~\cite{chen2022augmented}. Projected Robust Wasserstein (PRW) metrics that find the best orthogonal linear projecting operator onto a $k>1$ dimensional space, together with Riemannian optimization techniques for computing them, are proposed in~\cite{paty2019subspace, lin2020projection,huang2021riemannian}. Sliced Gromov Wasserstein, a fast sliced version of Gromov Wasserstein, is proposed in~\cite{titouan2019sliced}. The slicing technique has also been applied to approximating mutual information~\cite{goldfeld2021sliced}. We would like to recall that all the above works assume working with vector spaces and need to use vectorization when dealing with images. 
\section{Experiments} \label{sec:experiments} In this section, we focus on comparing the sliced Wasserstein (SW) (with the conventional slicing), the convolution-base sliced Wasserstein (CSW-b), the convolution sliced Wasserstein with stride (CSW-s), and the convolution sliced Wasserstein with dilation (CSW-d) on tasks that involve probability measures over images. In particular, we first show the values of the SW and the CSW variants between probability measures over digits of the MNIST dataset~\cite{lecun1998gradient}. We then compare SW and several variants of CSW in training generative models on image datasets such as CIFAR10 (32x32)~\cite{krizhevsky2009learning}, STL10 (96x96)~\cite{coates2011analysis}, CelebA (64x64), and CelebA-HQ (128x128)~\cite{liu2015faceattributes}. We recall that the number of projections in SW and CSW's variants is denoted as $L$. \subsection{Comparing Measures over MNIST's digits} The MNIST dataset contains 60000 images of size $28 \times 28$ of digits from 0 to 9. We compute SW between the two empirical probability measures over images of every two digits, e.g., 1 and 2, 1 and 3, and so on. To compare on the same digit, e.g., 1, we split the images of that digit into two disjoint sets and then compute the SW between the corresponding empirical probability measures. 
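As an illustration of how such pairwise discrepancies can be computed, the following minimal numpy sketch estimates the conventional SW between two empirical measures over images via Monte Carlo projections and the sorting-based closed form of the one-dimensional Wasserstein distance. Random data stands in for MNIST digits, equal sample sizes are assumed for simplicity, and the function names (`w_pp_1d`, `sliced_wasserstein`) are illustrative, not from the paper's code.

```python
import numpy as np

def w_pp_1d(u, v, p=2):
    """Closed-form W_p^p between two 1-D empirical measures with the same
    number of support points: sort both samples and average |differences|^p.
    (The general quantile-based formula would handle unequal sizes.)"""
    u, v = np.sort(u), np.sort(v)
    return np.mean(np.abs(u - v) ** p)

def sliced_wasserstein(X, Y, L=100, p=2, seed=0):
    """Monte Carlo estimate of SW_p between empirical measures over images.

    X, Y: arrays of shape (n, d, d). Images are vectorized and projected onto
    L random unit directions (the conventional slicing on flattened images)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1] * X.shape[2]
    Xf, Yf = X.reshape(len(X), d), Y.reshape(len(Y), d)
    total = 0.0
    for _ in range(L):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)          # uniform direction on the unit sphere
        total += w_pp_1d(Xf @ theta, Yf @ theta, p)
    return (total / L) ** (1 / p)
```

Replacing the linear projection `Xf @ theta` with a convolution slicer applied to each image yields the Monte Carlo CSW estimator of equation~(\ref{eq:Monte_Carlo_approx_CSW}); the per-projection cost is dominated by the $\mathcal{O}(m \log_2 m)$ sort.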
\begin{table}[!h] \centering \caption{\footnotesize{Values of SW and CSW variants between probability measures over digits images on MNIST with $L=100$}} \scalebox{0.65}{ \begin{tabular}{cc|c|c|c|c|c|c|c|c|c|c} \toprule &&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{1}&\multicolumn{1}{c|}{2}&\multicolumn{1}{c|}{3}&\multicolumn{1}{c|}{4}&\multicolumn{1}{c|}{5}&\multicolumn{1}{c|}{6}&\multicolumn{1}{c|}{7}&\multicolumn{1}{c|}{8}&\multicolumn{1}{c}{9} \\ \midrule \multirow{4}{*}{0} &SW&0.58$\pm$0.01&23.19$\pm$0.88&15.81$\pm$0.88&15.31$\pm$0.83&17.25$\pm$0.57&12.45$\pm$0.91&16.44$\pm$0.8&17.71$\pm$0.71&15.8$\pm$1.12&18.14$\pm$0.94 \\ &CSW-b&0.83$\pm$0.03&32.33$\pm$3.02&24.86$\pm$2.11&25.73$\pm$2.43&24.71$\pm$2.55&18.6$\pm$1.76&21.86$\pm$1.71&25.6$\pm$1.72&27.24$\pm$2.36&24.93$\pm$0.92 \\ &CSW-s&0.59$\pm$0.04&24.13$\pm$2.36&16.95$\pm$1.21&15.21$\pm$2.02&19.2$\pm$1.33&13.33$\pm$1.85&18.0$\pm$1.57&18.04$\pm$2.21&15.51$\pm$2.21&17.99$\pm$2.64 \\ &CSW-d&0.59$\pm$0.01&22.65$\pm$1.47&16.15$\pm$1.28&16.79$\pm$0.79&17.91$\pm$0.65&12.6$\pm$1.28&17.81$\pm$1.28&18.53$\pm$1.54&14.85$\pm$1.76&16.93$\pm$0.97 \\ \midrule \multirow{4}{*}{1} &SW&22.36$\pm$0.92&0.45$\pm$0.0&16.48$\pm$1.24&16.26$\pm$0.48&16.58$\pm$0.79&15.53$\pm$0.37&16.95$\pm$1.04&15.71$\pm$0.8&14.59$\pm$0.45&15.82$\pm$0.67 \\ &CSW-b&34.71$\pm$1.82&0.65$\pm$0.02&24.19$\pm$2.05&25.62$\pm$1.61&27.75$\pm$1.6&23.7$\pm$1.92&28.07$\pm$0.58&27.05$\pm$2.75&23.84$\pm$1.37&25.44$\pm$0.93 \\ &CSW-s&22.59$\pm$3.07&0.45$\pm$0.03&16.04$\pm$1.25&17.2$\pm$0.8&16.25$\pm$1.13&15.7$\pm$1.3&17.37$\pm$1.37&15.87$\pm$0.76&15.85$\pm$0.96&17.08$\pm$0.96 \\ &CSW-d&23.48$\pm$1.47&0.46$\pm$0.01&16.41$\pm$0.73&16.39$\pm$0.74&16.93$\pm$0.99&15.01$\pm$0.74&16.85$\pm$1.02&16.48$\pm$0.99&15.22$\pm$0.78&15.76$\pm$0.8 \\ \midrule \multirow{4}{*}{2} &SW&16.03$\pm$0.84&16.4$\pm$0.29&0.62$\pm$0.02&12.9$\pm$0.53&12.98$\pm$1.39&12.83$\pm$0.39&11.11$\pm$0.31&16.41$\pm$0.54&11.35$\pm$0.79&14.61$\pm$0.75 \\ 
&CSW-b&24.7$\pm$0.84&24.57$\pm$1.05&0.89$\pm$0.05&19.56$\pm$1.07&19.09$\pm$0.48&20.65$\pm$1.91&17.95$\pm$0.94&20.9$\pm$1.96&16.98$\pm$1.21&18.81$\pm$0.66 \\ &CSW-s&16.38$\pm$1.76&16.3$\pm$0.87&0.64$\pm$0.03&11.92$\pm$0.89&14.81$\pm$2.17&11.42$\pm$1.09&11.3$\pm$0.85&15.27$\pm$1.29&10.58$\pm$1.38&14.84$\pm$2.31 \\ &CSW-d&16.22$\pm$0.98&17.09$\pm$0.93&0.6$\pm$0.01&13.22$\pm$0.37&13.81$\pm$0.73&11.92$\pm$0.5&12.13$\pm$1.0&16.3$\pm$0.93&11.82$\pm$1.26&15.26$\pm$1.45 \\ \midrule \multirow{4}{*}{3} &SW&15.89$\pm$0.82&15.7$\pm$0.63&12.6$\pm$0.96&0.57$\pm$0.01&15.04$\pm$0.93&8.89$\pm$0.57&14.96$\pm$1.34&14.8$\pm$0.46&9.85$\pm$0.62&13.52$\pm$0.77 \\ &CSW-b&26.62$\pm$1.65&25.43$\pm$3.13&18.57$\pm$1.66&0.87$\pm$0.05&22.38$\pm$2.45&14.11$\pm$1.52&23.83$\pm$2.36&24.15$\pm$1.44&17.0$\pm$1.84&19.68$\pm$1.21 \\ &CSW-s&16.71$\pm$1.88&16.25$\pm$1.41&12.31$\pm$1.55&0.6$\pm$0.01&13.7$\pm$0.91&8.97$\pm$1.41&15.69$\pm$1.04&14.94$\pm$1.41&10.91$\pm$0.63&14.07$\pm$1.26 \\ &CSW-d&15.23$\pm$1.83&16.37$\pm$1.05&13.19$\pm$0.79&0.58$\pm$0.02&15.0$\pm$0.91&9.21$\pm$0.61&16.14$\pm$0.32&15.64$\pm$1.24&11.1$\pm$0.76&13.93$\pm$0.6 \\ \midrule \multirow{4}{*}{4} &SW&17.02$\pm$1.0&16.82$\pm$0.86&12.61$\pm$0.55&14.75$\pm$0.99&0.58$\pm$0.01&11.39$\pm$0.44&12.07$\pm$0.51&10.51$\pm$0.56&12.43$\pm$0.78&6.78$\pm$0.47 \\ &CSW-b&26.86$\pm$2.04&26.44$\pm$1.75&18.91$\pm$2.74&22.08$\pm$1.47&0.83$\pm$0.03&18.51$\pm$1.15&18.49$\pm$1.35&18.95$\pm$1.67&17.29$\pm$2.19&10.54$\pm$0.69 \\ &CSW-s&16.2$\pm$2.1&15.65$\pm$1.16&13.94$\pm$1.92&15.23$\pm$1.32&0.58$\pm$0.03&11.29$\pm$2.18&12.33$\pm$1.05&11.07$\pm$0.9&12.39$\pm$1.71&7.84$\pm$0.79 \\ &CSW-d&17.34$\pm$1.77&17.28$\pm$1.27&13.08$\pm$1.54&15.3$\pm$0.67&0.57$\pm$0.01&12.0$\pm$0.52&13.23$\pm$0.44&11.98$\pm$0.71&11.39$\pm$0.75&7.26$\pm$0.51 \\ \midrule \multirow{4}{*}{5} &SW&11.77$\pm$0.36&14.55$\pm$0.93&12.64$\pm$0.47&8.7$\pm$0.71&10.68$\pm$1.3&0.64$\pm$0.01&11.83$\pm$0.83&12.54$\pm$0.2&8.99$\pm$0.78&10.4$\pm$0.75 \\ 
&CSW-b&20.55$\pm$1.98&25.31$\pm$2.14&19.68$\pm$0.92&13.55$\pm$1.5&18.43$\pm$1.22&0.91$\pm$0.02&16.55$\pm$1.0&17.45$\pm$0.8&14.4$\pm$1.07&15.85$\pm$1.21 \\ &CSW-s&13.04$\pm$0.61&15.15$\pm$1.18&12.65$\pm$0.94&8.27$\pm$1.01&11.83$\pm$0.85&0.62$\pm$0.01&12.04$\pm$1.0&12.36$\pm$1.48&8.64$\pm$0.55&10.8$\pm$0.97 \\ &CSW-d&11.79$\pm$1.28&15.31$\pm$1.15&13.54$\pm$1.22&8.82$\pm$1.07&12.33$\pm$0.75&0.62$\pm$0.04&12.45$\pm$0.79&13.02$\pm$0.81&9.18$\pm$0.54&10.73$\pm$0.85 \\ \midrule \multirow{4}{*}{6} &SW&15.97$\pm$0.87&16.84$\pm$1.4&11.52$\pm$0.53&15.56$\pm$0.66&12.09$\pm$0.63&11.98$\pm$0.82&0.65$\pm$0.01&16.69$\pm$1.63&12.52$\pm$0.42&13.84$\pm$0.93 \\ &CSW-b&25.66$\pm$2.37&26.39$\pm$0.68&15.93$\pm$0.91&22.98$\pm$3.47&18.8$\pm$1.9&17.0$\pm$1.66&0.91$\pm$0.02&23.31$\pm$2.45&17.62$\pm$0.99&18.73$\pm$0.84 \\ &CSW-s&17.84$\pm$1.85&17.61$\pm$1.92&11.49$\pm$0.42&14.07$\pm$1.43&12.25$\pm$1.23&11.74$\pm$0.35&0.66$\pm$0.01&15.71$\pm$1.03&13.33$\pm$0.68&12.55$\pm$1.4 \\ &CSW-d&16.95$\pm$1.45&17.15$\pm$1.12&11.47$\pm$0.79&15.71$\pm$1.24&11.91$\pm$0.37&12.63$\pm$0.94&0.67$\pm$0.02&16.36$\pm$1.29&13.15$\pm$1.0&14.35$\pm$0.92 \\ \midrule \multirow{4}{*}{7} &SW&17.55$\pm$1.35&16.65$\pm$0.79&15.3$\pm$0.83&15.47$\pm$0.73&11.39$\pm$0.77&12.4$\pm$0.54&16.04$\pm$1.19&0.61$\pm$0.01&13.66$\pm$1.12&8.16$\pm$0.06 \\ &CSW-b&27.36$\pm$2.07&28.35$\pm$1.32&22.24$\pm$1.59&23.56$\pm$1.2&18.46$\pm$2.75&19.32$\pm$1.68&25.38$\pm$1.94&0.94$\pm$0.04&22.63$\pm$1.67&14.71$\pm$0.52 \\ &CSW-s&16.74$\pm$2.14&15.81$\pm$1.23&17.72$\pm$2.26&14.75$\pm$0.83&13.6$\pm$2.24&13.45$\pm$1.94&15.37$\pm$1.44&0.64$\pm$0.05&12.92$\pm$0.77&8.95$\pm$1.3 \\ &CSW-d&18.21$\pm$1.44&16.31$\pm$1.55&16.3$\pm$1.05&14.97$\pm$0.76&11.45$\pm$0.35&12.82$\pm$1.54&16.9$\pm$0.95&0.69$\pm$0.04&13.3$\pm$0.59&8.72$\pm$0.48 \\ \midrule \multirow{4}{*}{8} &SW&16.16$\pm$1.14&15.09$\pm$0.96&11.02$\pm$0.54&10.02$\pm$0.79&11.45$\pm$0.69&8.46$\pm$0.75&13.41$\pm$0.29&14.33$\pm$1.27&0.65$\pm$0.02&10.62$\pm$0.35 \\ 
&CSW-b&26.49$\pm$2.12&21.76$\pm$0.63&15.73$\pm$1.07&17.16$\pm$1.58&18.25$\pm$1.36&14.5$\pm$0.94&18.87$\pm$1.68&21.36$\pm$1.76&0.97$\pm$0.04&15.85$\pm$0.81 \\ &CSW-s&17.19$\pm$1.17&14.26$\pm$1.07&11.01$\pm$0.79&10.32$\pm$1.02&11.86$\pm$1.4&8.75$\pm$0.63&13.23$\pm$0.96&13.72$\pm$1.3&0.66$\pm$0.04&10.65$\pm$1.02 \\ &CSW-d&15.42$\pm$1.31&15.41$\pm$0.87&11.06$\pm$0.43&10.56$\pm$0.44&12.51$\pm$1.74&8.98$\pm$0.61&13.87$\pm$1.29&14.77$\pm$0.67&0.65$\pm$0.03&11.09$\pm$1.06 \\ \midrule \multirow{4}{*}{9} &SW&17.94$\pm$1.19&15.68$\pm$0.64&13.83$\pm$1.05&12.72$\pm$0.48&7.37$\pm$0.66&10.62$\pm$0.92&13.54$\pm$0.48&8.24$\pm$0.31&10.66$\pm$0.38&0.59$\pm$0.02 \\ &CSW-b&26.67$\pm$3.65&26.0$\pm$1.95&20.52$\pm$1.24&19.68$\pm$1.14&10.39$\pm$0.42&16.36$\pm$2.03&19.24$\pm$0.99&14.95$\pm$1.29&15.71$\pm$1.44&0.84$\pm$0.04 \\ &CSW-s&16.73$\pm$1.84&16.04$\pm$1.28&14.31$\pm$1.66&13.22$\pm$1.43&7.42$\pm$0.45&10.32$\pm$0.65&13.74$\pm$2.08&8.64$\pm$0.8&10.52$\pm$1.33&0.6$\pm$0.02 \\ &CSW-d&17.58$\pm$1.17&15.43$\pm$1.09&13.98$\pm$0.51&13.55$\pm$1.43&7.18$\pm$0.37&10.89$\pm$0.58&13.94$\pm$1.11&8.58$\pm$0.42&11.68$\pm$0.62&0.6$\pm$0.02 \\ \bottomrule \end{tabular} } \label{tab:MNIST_L100} \end{table} \vspace{0.5em} \noindent \textbf{Meaningful measures of discrepancy:} We approximate the SW and the CSW's variants by a finite number of projections, namely, $L=1$, $L=10$, and $L=100$. We show the mean of the approximated values for $L=100$ over 5 different runs and the corresponding standard deviation in Table~\ref{tab:MNIST_L100}. According to the table, we observe that SW and CSW's variants preserve the discrepancy between digits well. In particular, the discrepancies between probability measures of the same digit are relatively small compared to the discrepancies between probability measures of different digits. Moreover, we see that the values of CSW-s and CSW-d are close to the values of SW on the same pairs of digits. 
We also show similar tables for $L=1$ and $L=10$ in Tables~\ref{tab:MNIST_L1}-\ref{tab:MNIST_L10} in Appendix~\ref{sec:add_experiments}. From these tables, we observe that the number of projections can affect the stability of both SW and CSW's variants. \begin{table}[!t] \centering \caption{\footnotesize{Summary of FID and IS scores of methods on CIFAR10 (32x32), CelebA (64x64), STL10 (96x96), and CelebA-HQ (128x128).}} \scalebox{0.9}{ \begin{tabular}{l|cc|c|cc|c} \toprule \multirow{2}{*}{Method}& \multicolumn{2}{c|}{CIFAR10 (32x32)}&\multicolumn{1}{c|}{CelebA (64x64)}&\multicolumn{2}{c|}{STL10 (96x96)}&\multicolumn{1}{c}{CelebA-HQ (128x128)}\\ \cmidrule{2-7} & FID ($\downarrow$) &IS ($\uparrow$)& FID ($\downarrow$) & FID ($\downarrow$) &IS ($\uparrow$)& FID ($\downarrow$) \\ \midrule SW (L=1)&87.97&3.59&128.81&170.96&3.68&\textbf{275.44}\\ CSW-b (L=1)&84.38&4.28&85.83&173.33&\textbf{3.89}&315.91 \\ CSW-s (L=1)&80.10&4.31&\textbf{66.52}&\textbf{168.93}&3.75 &303.57\\ CSW-d (L=1)&\textbf{63.94}&\textbf{4.89} &89.37&212.61&2.48&321.06\\ \midrule SW (L=100)&53.67&5.74&20.08&100.35&8.14&51.80\\ CSW-b (L=100)&49.78&5.78 &18.96&\textbf{91.75}&8.11 &53.05\\ CSW-s (L=100)&\textbf{43.88}&\textbf{6.13} & \textbf{13.76}&97.08&\textbf{8.20}&\textbf{32.94}\\ CSW-d (L=100)&47.16&5.90&14.96&102.58&7.53&41.01 \\ \midrule SW (L=1000)&43.11&6.09&14.92 &84.78&9.06& 28.19\\ CSW-b (L=1000)&43.17&6.07&14.75&86.98&9.11 &29.69\\ CSW-s (L=1000)&\textbf{35.40}&\textbf{6.64}&\textbf{12.55} &\textbf{77.24}&9.31&\textbf{22.25}\\ CSW-d (L=1000)&41.34&6.33 &13.24&83.36&\textbf{9.42} & 25.93\\ \bottomrule \end{tabular} } \label{tab:summary} \end{table} \begin{figure*}[!t] \begin{center} \begin{tabular}{ccc} \widgraph{0.3\textwidth}{CIFAR/fid_cifar.pdf} & \widgraph{0.3\textwidth}{CIFAR/is_cifar.pdf} & \widgraph{0.3\textwidth}{CelebA/fid_celeba.pdf} \\ \widgraph{0.3\textwidth}{STL/fid_stl.pdf} & \widgraph{0.3\textwidth}{STL/is_stl.pdf} & \widgraph{0.3\textwidth}{CelebAHQ/fid_celebahq.pdf} 
\end{tabular} \end{center} \vskip -0.2in \caption{ \footnotesize{FID scores and IS scores over epochs of different training losses on datasets. We observe that CSW's variants usually help the generative models converge faster. } } \label{fig:iterCIFAR} \vskip -0.1in \end{figure*} \vspace{0.5em} \noindent \textbf{Projection memory for slicers:} For SW, the conventional slicing requires $L\cdot 784$ float variables for $L$ projecting directions of $28\cdot 28$ dimension. On the other hand, CSW-b only needs $L \cdot 338$ float variables since each projecting direction is represented as three kernels $K^{(1)} \in \mathbb{R}^{15\times 15}$, $K^{(2)} \in \mathbb{R}^{8\times 8}$, and $K^{(3)} \in \mathbb{R}^{7\times 7}$. More importantly, CSW-s and CSW-d require only $L\cdot 57$ float variables since they are represented by three kernels $K^{(1)} \in \mathbb{R}^{2\times 2}$, $K^{(2)} \in \mathbb{R}^{2\times 2}$, and $K^{(3)} \in \mathbb{R}^{7\times 7}$. From this experiment, we can see that using the whole unit-hypersphere as the space of projecting directions can be sub-optimal when dealing with images. \subsection{Generative Models} We follow the framework of the sliced Wasserstein generator in~\cite{deshpande2018generative}. In particular, we parameterize the model distribution $p_\phi (x) \in \mathcal{P}(\mathbb{R}^{c \times d \times d})$ as $p_\phi (x) = G_\phi \sharp \epsilon$, where $\epsilon$ is the 128-dimensional standard multivariate Gaussian and $G_\phi$ is a neural network with the ResNet architecture~\cite{he2016deep}. Since the ground-truth metric between images is unknown, we need a discriminator as a form of ground-metric learning. We denote the discriminator as a function $T_{\beta_2} \circ T_{\beta_1}$ where $T_{\beta_1}:\mathbb{R}^{c \times d \times d } \to \mathbb{R}^{c' \times d' \times d'}$ and $T_{\beta_2}: \mathbb{R}^{c' \times d' \times d'} \to \mathbb{R}$. 
In greater detail, $T_{\beta_1}$ maps the original images to their corresponding feature maps and $T_{\beta_2}$ maps these feature maps to their corresponding discriminative scores. Let the data distribution be $\mu$; our training objectives are: \begin{align*} &\min_{\beta_1,\beta_2} \left(\mathbb{E}_{x \sim \mu} [\min (0,-1+ T_{\beta_2}(T_{\beta_1}(x)))] + \mathbb{E}_{z \sim \epsilon} [\min(0, -1-T_{\beta_2}(T_{\beta_1} (G_\phi(z))))] \right), \\ &\min_{\phi} \mathbb{E}_{X \sim \mu^{\otimes m}, Y \sim \epsilon^{\otimes m}} \mathcal{D}(T_{\beta_1} \sharp P_X, T_{\beta_1}\sharp G_\phi \sharp P_Y), \end{align*} where $m \geq 1$ is the mini-batch size and $\mathcal{D}(\cdot,\cdot)$ is the SW or one of the CSW variants. \begin{table}[!t] \centering \caption{\footnotesize{Computational time and memory of methods (reported in iterations per second and megabytes (MB)).}} \scalebox{0.8}{ \begin{tabular}{l|cc|cc|cc|cc} \toprule \multirow{2}{*}{Method}& \multicolumn{2}{c|}{CIFAR10 (32x32)}&\multicolumn{2}{c|}{CelebA (64x64)}&\multicolumn{2}{c|}{STL10 (96x96)}&\multicolumn{2}{c}{CelebA-HQ (128x128)}\\ \cmidrule{2-9} & Iters/s ($\uparrow$) &Mem ($\downarrow$)& Iters/s ($\uparrow$) &Mem ($\downarrow$)& Iters/s ($\uparrow$) &Mem ($\downarrow$)& Iters/s ($\uparrow$) &Mem ($\downarrow$)\\ \midrule SW (L=1)& 18.98&2071& 6.21& 8003 & 9.59& 4596 & 10.35& 4109\\ SW (L=100)& 18.53&2080&6.16& 8015 & 9.47&4601 &10.22 & 4117\\ SW (L=1000)& 18.15&2169&6.10 & 8102 &9.13 & 4647 &10.17 &4202\\ \midrule CSW-b (L=1)& 18.43&2070 & 6.21& 8003& 9.56& 4596 & 10.33 & 4109\\ CSW-b (L=100)& 18.35&2077& 6.15& 8009 &9.40 & 4598 &10.19 & 4110\\ CSW-b (L=1000)& 18.06&2117 & 6.10&8049 &9.07 &4613 & 10.12& 4134\\ \midrule CSW-s (d) (L=1)& 18.69&2070 &6.21&8003 &9.56 & 4596 &10.33 & 4109\\ CSW-s (d) (L=100)& 18.50&2073&6.16 & 8005 &9.41 & 4597& 10.20&4109\\ CSW-s (d) (L=1000)& 18.10&2098 &6.10 &8029 &9.10 & 4603& 10.12 &4114\\ \bottomrule \end{tabular} } \label{tab:timeandmem} \end{table} 
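As a quick sanity check of the per-projection memory counts quoted earlier for $28 \times 28$ MNIST images (784 floats per direction for conventional slicing versus 338 for CSW-b and 57 for CSW-s/CSW-d), the kernel parameter counts and the spatial-size reductions can be verified directly. This is plain arithmetic following the kernel sizes in the text, not library code:

```python
# Per-projection float counts for 28x28 single-channel images.
sw_floats = 28 * 28                    # one full projecting direction (vectorized image)
cswb_floats = 15**2 + 8**2 + 7**2      # CSW-b kernels: 15x15, 8x8, 7x7 (stride 1)
csws_floats = 2**2 + 2**2 + 7**2       # CSW-s / CSW-d kernels: 2x2, 2x2, 7x7

# Spatial sizes along the CSW-b chain: 'valid' convolutions with stride 1
# shrink the side length from d to d - k + 1 at each step: 28 -> 14 -> 7 -> 1.
sizes_b = [28]
for k in (15, 8, 7):
    sizes_b.append(sizes_b[-1] - k + 1)

print(sw_floats, cswb_floats, csws_floats, sizes_b)  # 784 338 57 [28, 14, 7, 1]
```

The same accounting explains the $L\cdot 784$ versus $L\cdot 338$ versus $L\cdot 57$ totals reported for $L$ projecting directions.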
\begin{figure*}[!h] \begin{center} \begin{tabular}{ccc} \widgraph{0.26\textwidth}{CIFAR/sw_1.png} & \widgraph{0.26\textwidth}{CIFAR/sw_100.png} & \widgraph{0.26\textwidth}{CIFAR/sw_1000.png} \\ SW ($L=1$) & SW ($L=100$) & SW ($L=1000$) \\ \widgraph{0.26\textwidth}{CIFAR/csws_1.png} & \widgraph{0.26\textwidth}{CIFAR/csws_100.png} & \widgraph{0.26\textwidth}{CIFAR/csws_1000.png} \\ CSW-s ($L=1$) & CSW-s ($L=100$) & CSW-s ($L=1000$) \end{tabular} \end{center} \vskip -0.2in \caption{ \footnotesize{Randomly generated images of SW and CSW-s on CIFAR10. } } \label{fig:cifar} \vskip -0.1in \end{figure*} \vspace{0.5em} \noindent We train the above model on standard benchmarks such as CIFAR10 (32x32)~\cite{krizhevsky2009learning}, STL10 (96x96)~\cite{coates2011analysis}, CelebA (64x64), and CelebA-HQ (128x128)~\cite{liu2015faceattributes}. To compare models, we use the FID score~\cite{heusel2017gans} and the Inception score (IS)~\cite{salimans2016improved}. The detailed settings about architectures, hyperparameters, and evaluation of FID and IS are given in Appendix~\ref{sec:settings}. We first show the FID scores and IS scores of generative models trained by SW and CSW's variants with the number of projections $L \in \{1,100,1000\}$ in Table~\ref{tab:summary}. In the table, we report the performance of models at the last training epoch. We do not report the IS scores on CelebA and CelebA-HQ since the IS scores are not suitable for face images. We then demonstrate the FID scores and IS scores across training epochs in Figure~\ref{fig:iterCIFAR} for investigating the convergence of generative models trained by SW and CSW's variants. After that, we report the training time and training memory of SW and CSW variants in Table~\ref{tab:timeandmem}. Finally, we show some randomly generated images from the models trained with SW and CSW-s in Figures~\ref{fig:cifar}-\ref{fig:celebahq}. Additional experimental results with CSW-b and CSW-d are given in Appendix~\ref{sec:add_experiments}. 
\vspace{0.5em} \noindent \textbf{Summary of FID scores and IS scores:} According to Table~\ref{tab:summary}, on CIFAR10, CSW-d gives the lowest FID score and the highest IS score when $L=1$, while CSW-s gives the lowest FID scores when $L=100$ and $L=1000$. Compared to CSW-s, CSW-d and CSW-b yield higher FID scores and lower IS scores. However, CSW-d and CSW-b are still better than SW. On CelebA, CSW-s performs the best in all settings. On STL10, CSW's variants are also better than the vanilla SW; however, it is hard to single out the best variant. On CelebA-HQ, SW gives the lowest FID score when $L=1$. In contrast, when $L=100$ and $L=1000$, CSW-s is the best choice for training the generative model. Since the FID scores for $L=1$ are very high on CelebA-HQ and STL10, these scores are not very meaningful for comparing SW and CSW's variants. For all models, increasing $L$ leads to better generative quality. Overall, we observe that CSW's variants enhance the performance of the generative models well. \begin{figure*}[!t] \begin{center} \begin{tabular}{ccc} \widgraph{0.26\textwidth}{CelebA/sw_1.png} & \widgraph{0.26\textwidth}{CelebA/sw_100.png} & \widgraph{0.26\textwidth}{CelebA/sw_1000.png} \\ SW ($L=1$) & SW ($L=100$) & SW ($L=1000$) \\ \widgraph{0.26\textwidth}{CelebA/csws_1.png} & \widgraph{0.26\textwidth}{CelebA/csws_100.png} & \widgraph{0.26\textwidth}{CelebA/csws_1000.png} \\ CSW-s ($L=1$) & CSW-s ($L=100$) & CSW-s ($L=1000$) \end{tabular} \end{center} \vskip -0.2in \caption{ \footnotesize{Randomly generated images of SW and CSW-s on CelebA. } } \label{fig:celeba} \vskip -0.1in \end{figure*} \vspace{0.5em} \noindent \textbf{FID scores and IS scores across epochs:} From Figure~\ref{fig:iterCIFAR}, we observe that CSW's variants help the generative models converge faster than SW when $L=100$ and $L=1000$. When the number of projections increases from $100$ to $1000$, the generative models from both SW and CSW's variants become better. 
Overall, CSW-s is the best option for training generative models among CSW's variants since its FID curves are the lowest and its IS curves are the highest. \vspace{0.5em} \noindent \textbf{Training time and training memory:} We report in Table~\ref{tab:timeandmem} the training speed in iterations per second and the training memory in megabytes (MB). We would like to recall that the time complexity and the projection memory complexity of CSW-s and CSW-d are the same. Therefore, we measure the training time and the training memory of CSW-s as the result for both CSW-s and CSW-d. We can see that increasing the number of projections $L$ costs more memory and also slows down the training speed. However, the memory of CSW-b grows more slowly with $L$ than that of SW. For CSW-s and CSW-d, the memory savings are even greater. As an example, $L=1000$ in CSW-s and CSW-d costs less memory than SW with $L=100$ while the performance is better (see Table~\ref{tab:summary}). In terms of training time, CSW-s and CSW-d are comparable to SW, and they can be computed faster than CSW-b. We refer the readers to Section~\ref{sec:csw} for a detailed discussion about the computational time and projection memory complexity of CSW's variants. \begin{figure*}[!t] \begin{center} \begin{tabular}{ccc} \widgraph{0.26\textwidth}{STL/sw_1.png} & \widgraph{0.26\textwidth}{STL/sw_100.png} & \widgraph{0.26\textwidth}{STL/sw_1000.png} \\ SW ($L=1$) & SW ($L=100$) & SW ($L=1000$) \\ \widgraph{0.26\textwidth}{STL/csws_1.png} & \widgraph{0.26\textwidth}{STL/csws_100.png} & \widgraph{0.26\textwidth}{STL/csws_1000.png} \\ CSW-s ($L=1$) & CSW-s ($L=100$) & CSW-s ($L=1000$) \end{tabular} \end{center} \vskip -0.2in \caption{ \footnotesize{Randomly generated images of SW and CSW-s on STL10. 
} } \label{fig:stl} \vskip -0.1in \end{figure*} \begin{figure*}[!h] \begin{center} \begin{tabular}{ccc} \widgraph{0.3\textwidth}{CelebAHQ/sw_1.png} & \widgraph{0.3\textwidth}{CelebAHQ/sw_100.png} & \widgraph{0.3\textwidth}{CelebAHQ/sw_1000.png} \\ SW ($L=1$) & SW ($L=100$) & SW ($L=1000$) \\ \widgraph{0.3\textwidth}{CelebAHQ/csws_1.png} & \widgraph{0.3\textwidth}{CelebAHQ/csws_100.png} & \widgraph{0.3\textwidth}{CelebAHQ/csws_1000.png} \\ CSW-s ($L=1$) & CSW-s ($L=100$) & CSW-s ($L=1000$) \end{tabular} \end{center} \vskip -0.2in \caption{ \footnotesize{Randomly generated images of SW and CSW-s on CelebA-HQ. } } \label{fig:celebahq} \vskip -0.1in \end{figure*} \vspace{0.5em} \noindent \textbf{Generated images:} We show randomly generated images on CIFAR10, CelebA, STL10, and CelebA-HQ in Figures~\ref{fig:cifar}-\ref{fig:celebahq} as a qualitative comparison between SW and CSW-s. From the figures, we can see that the generated images of CSW-s are more realistic than those of SW. The difference is visually clear when the number of projections $L$ is small, e.g., $L=1$ and $L=100$. When $L=1000$, we can still see that CSW-s is better than SW by looking at the sharpness of the generated images. Also, we can visually observe the improvement of both SW and CSW-s when increasing the number of projections. In summary, the qualitative results are consistent with the quantitative results (FID scores and IS scores) in Table~\ref{tab:summary}. Generated images of CSW-b and CSW-d are shown in Figures~\ref{fig:cifar_appendix}-\ref{fig:celebahq_appendix}. \vspace{0.5em} \noindent \textbf{Non-linear convolution sliced Wasserstein:} We also compare non-linear extensions of SW and CSW variants in training generative models on CIFAR10 in Appendix~\ref{sec:add_experiments}. For details of non-linear extensions, we refer to Appendix~\ref{sec:addmarterial}. 
From the experiments, we observe that convolution can also improve the performance of sliced Wasserstein in non-linear projecting cases. Compared to the linear versions, the non-linear versions can enhance the quality of the generative model or yield comparable results. \section{Conclusion} \label{sec:conclusion} We have addressed the issue of the conventional slicing process of sliced Wasserstein when working with probability measures over images. In particular, sliced Wasserstein is defined on probability measures over vectors, which leads to the step of vectorization for images. As a result, the conventional slicing process cannot exploit the spatial structure of data for designing the space of projecting directions and projecting operators. To address the issue, we propose a new slicing process by using the convolution operator, which has been shown to be efficient on images. Moreover, we investigate the computational complexity and projection memory complexity of the new slicing technique. We show that convolution slicing is comparable to conventional slicing in terms of computational complexity while being better in terms of projection memory complexity. By utilizing the new slicing technique, we derive a novel family of sliced Wasserstein variants, named convolution sliced Wasserstein. We investigate the properties of the convolution sliced Wasserstein including its metricity, its computational and sample complexities, and its connection to other variants of sliced Wasserstein in the literature. Finally, we carry out extensive experiments in comparing digit images and training generative models on standard benchmark datasets to demonstrate the favorable performance of the convolution sliced Wasserstein.
We show that the effective interactions used in this theory have {\it exactly} the form described above: one balances a coherent ``photon tunneling'' interaction between the two cavities against a corresponding dissipative version of this interaction. We also demonstrate that cascaded quantum systems theory is not simply an effective theory for describing nonreciprocal transmission: it also serves as a recipe for constructing nonreciprocal devices, one that can be generalized to amplifying interactions. As we discuss, the needed dissipative interactions can be obtained by simply coupling to intermediate damped cavity modes; one does not need to start with an explicitly nonreciprocal reservoir (as assumed in the derivations of Refs.~\cite{Carmichael1993, Gardiner1993}). The remainder of this paper is organized as follows. In Sec.~\ref{Sec:Two:DirectionalityApplications}, we introduce our basic approach of balancing coherent and dissipative interactions, showing how this can be used to generate both nonreciprocal photon transmission and amplification. In Sec.~\ref{Sec.:Three:Implementations}, we provide further details on each of these schemes, discussing simple 3-mode implementations, as well as issues of bandwidth, impedance matching, and added noise. We pay particular attention to our scheme for a nonreciprocal cavity-based phase-sensitive amplifier. In addition to being nonreciprocal and quantum-limited, we show that this system can also be constructed so that there is no fundamental gain-bandwidth limitation on its performance, and so that it is perfectly impedance matched at both its input and output (i.e.,~there are no unwanted reflections at either port of the amplifier).
\section{Directionality from dissipative interactions}\label{Sec:Two:DirectionalityApplications} Throughout this work, we consider a generic situation where we have a pair of cavity modes (annihilation operators $\hat{d}_1,\hat{d}_2$), each coupled to input/output waveguides; our goal is to engineer a nonreciprocal interaction between them, thus enabling either nonreciprocal transmission or amplification of signals incident on the two modes. Our approach is sketched in Fig.~\ref{fig:SketchCascaded}(b): we allow both cavities to interact with one another in two distinct ways. The first is via a direct, coherent interaction described by an interaction Hamiltonian $ \hat{\mathcal{H}}_{\rm coh}$. While our approach can make a general factorizable cavity-cavity interaction directional, we focus here on simple bilinear interactions. The coherent interaction will thus be described by a quadratic Hamiltonian, having the general form ($\hbar=1$) \begin{equation} \hat{\mathcal{H}}_{\rm coh} = J \hat{d}_1^\dagger \hat{d}_2 + \lambda \hat{d}_1^\dagger \hat{d}_2^\dagger + h.c. , \label{Eq:Hint} \end{equation} where $J$ and $\lambda$ are in general complex. We always work in a rotating frame where the two cavities are effectively resonant, and where $ \hat{\mathcal{H}}_{\rm coh}$ is time-independent. Each of the two interactions in this Hamiltonian could be realized in many ways; for example, one could start with three modes and a generic three-wave mixing Hamiltonian, and then displace one of the modes with a coherent tone. The driven modes act as pumps; by a suitable choice of frequencies (i.e.,~at the difference and the sum of the resonance frequencies of cavities 1 and 2), one realizes the above Hamiltonian, with the amplitudes and phases of the couplings $J, \lambda$ being controlled by the pump-mode amplitudes.
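This pump-based construction can be sketched explicitly (a schematic illustration: the bare couplings $g_\pm$ and pump amplitudes $\bar{c}_\pm$ are introduced here for concreteness and are not defined in the main text). Starting from generic three-wave mixing terms involving two auxiliary pump modes $\hat{c}_\pm$,
\begin{equation}
\hat{\mathcal{H}}_{\rm 3wm} = g_-\, \hat{c}_-^{\phantom{\dagger}} \hat{d}_1^\dagger \hat{d}_2^{\phantom{\dagger}} + g_+\, \hat{c}_+^{\phantom{\dagger}} \hat{d}_1^\dagger \hat{d}_2^\dagger + h.c. ,
\end{equation}
one drives $\hat{c}_-$ at the difference frequency $\omega_1 - \omega_2$ and $\hat{c}_+$ at the sum frequency $\omega_1 + \omega_2$. Replacing the strongly driven pumps by classical amplitudes, $\hat{c}_\pm \rightarrow \bar{c}_\pm e^{-i(\omega_1 \pm \omega_2)t}$, and moving to the rotating frame of the two cavities, all explicit time dependence cancels and one recovers Eq.~(\ref{Eq:Hint}) with $J = g_- \bar{c}_-$ and $\lambda = g_+ \bar{c}_+$; the pump amplitudes and phases thus directly set $J$ and $\lambda$.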
Such an approach has been exploited recently in superconducting circuits, using the Josephson parametric converter (JPC) geometry \cite{Bergeal2010a,Bergeal2010,Abdo2011,Abdo2013}, as well as in quantum optomechanics \cite{Aspelmeyer2014}. The second required interaction involves controllably coupling both cavities to the same dissipative reservoir (Fig.~\ref{fig:SketchCascaded}(a)). Eliminating this reservoir will generate an effective dissipative interaction between the cavities (i.e.,~one that cannot be described by some direct Hamiltonian coupling). The simplest setting is where this reservoir is effectively Markovian, and hence can be described using dissipators in a Lindblad master equation for the reduced density matrix $\hat{\rho}$ of the two cavity modes. As we are focusing here on a bilinear coherent interaction, the needed interactions between the engineered reservoir and the two cavities will also be linear. We are thus left with the general master equation \begin{align}\label{Eq.:GeneralMaster} \frac{d}{dt} \hat \rho =& - i \left[ \hat{\mathcal{H}}_{\rm coh} , \hat \rho \right] + \Gamma \mathcal L [\hat z ] \hat{\rho} + \sum_{j = 1,2} \kappa_j \mathcal L [\hat d_j ] \hat{\rho}, \end{align} where \begin{align} \hat{z} =& \sum_{j=1,2} \left( u_j \hat{d}_j + v_j \hat{d}_j^{\dagger} \right), \label{Eq:zDefn} \end{align} and the standard dissipative superoperator $\mathcal{L}[\hat{o}]$ is defined as \begin{align} \mathcal L [\hat o ] \hat{\rho} =& \hat o \hat \rho \hat o^{\dag} - \frac{1}{2} \hat o^{\dag} \hat o \hat \rho - \frac{1}{2} \hat \rho \hat o^{\dag} \hat o. \end{align} The first term in Eq.~(\ref{Eq.:GeneralMaster}) describes the coherent interaction between the two cavities, the second the interaction with the engineered reservoir at rate $\Gamma$ (including the induced dissipative cavity-cavity interactions), and the last the coupling of the cavities to their input-output ports at rate $\kappa_j $ . 
Note that an asymmetry in the couplings does not change the basic physics in which we are interested; thus, for simplicity, we take $\kappa_1 = \kappa_2 \equiv \kappa$ in what follows. The coefficients $u_j$ and $v_j$ characterize the individual coupling of each cavity to the engineered bath. As we see in what follows, the engineered reservoir need not be anything too exotic: it can simply be another (damped) cavity mode, or a (non-directional) transmission line. Also note that one does not need to be in the strict Markovian limit, though it makes it simpler to understand the physics. We discuss corrections to the Markovian limit in Sec.~\ref{Sec.:Three:Implementations}. With these ingredients in place, obtaining directionality involves first constructing $ \hat{\mathcal{H}}_{\rm coh}$ so that it gives the desired behavior (amplification or transmission), and then precisely balancing it with the corresponding dissipative interaction (i.e.,~choice of $\Gamma, u_j$ and $v_j$). To illustrate this, we can derive the equations of motion for the expectation values of the mode operators. Starting from the Lindblad master equation in Eq.~(\ref{Eq.:GeneralMaster}) we obtain \begin{align} \frac{d}{dt} \ev{\hat d_1} =& - \frac{\Gamma_1 + \kappa }{2} \ev{ \hat d_1} \nonumber \\ & - i \left[ J \ \ + i \mu \ \frac{\Gamma}{2} \right] \ev{ \hat d_2 } - i \left[ \lambda + i \nu \frac{\Gamma}{2} \right] \ev{ \hat d_2^{\dag} } , \nonumber \\ \frac{d}{dt} \ev{\hat d_2} =& - \frac{\Gamma_2 + \kappa}{2} \ev{ \hat d_2} \nonumber \\ & - i \left[J^{\ast} + i \mu^{\ast} \frac{ \Gamma}{2} \right] \ev{\hat d_1 } - i \left[ \lambda - i \nu \frac{ \Gamma}{2} \right] \ev{ \hat d_1^{\dag}} , \end{align} with $\Gamma_n = \Gamma ( |u_n|^2 -|v_n|^2 )$, $(n = 1,2)$, describing the local damping induced by the engineered reservoir, and the definitions $\mu = v_1 v_2^{\ast} - u_2 u_1^{\ast}$ and $ \nu = v_1 u_2^{\ast} - v_2 u_1^{\ast} $.
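These coupled equations can be sanity-checked numerically: for arbitrary bath couplings $u_j, v_j$, choosing $J = -i\mu\Gamma/2$ and $\lambda = -i\nu\Gamma/2$ eliminates every cavity-2 coefficient from the cavity-1 equations while the reverse coupling stays finite. A minimal numpy sketch (not part of the original text; the random couplings and rates are arbitrary illustration values):

```python
import numpy as np

rng = np.random.default_rng(7)
Gamma, kappa = 1.3, 0.7                            # illustrative rates
u = rng.normal(size=2) + 1j * rng.normal(size=2)   # bath couplings u_1, u_2
v = rng.normal(size=2) + 1j * rng.normal(size=2)   # bath couplings v_1, v_2

mu = v[0] * np.conj(v[1]) - u[1] * np.conj(u[0])
nu = v[0] * np.conj(u[1]) - v[1] * np.conj(u[0])
J, lam = -1j * mu * Gamma / 2, -1j * nu * Gamma / 2  # balancing condition

Gam = Gamma * (np.abs(u) ** 2 - np.abs(v) ** 2)    # local induced damping Gamma_n

# Drift matrix A for the vector (<d1>, <d1^+>, <d2>, <d2^+>);
# rows 1 and 3 are the complex conjugates of rows 0 and 2.
A = np.zeros((4, 4), dtype=complex)
A[0, 0] = A[1, 1] = -(Gam[0] + kappa) / 2
A[2, 2] = A[3, 3] = -(Gam[1] + kappa) / 2
A[0, 2] = -1j * (J + 1j * mu * Gamma / 2)          # <d2>   -> <d1>
A[0, 3] = -1j * (lam + 1j * nu * Gamma / 2)        # <d2^+> -> <d1>
A[1, 2], A[1, 3] = np.conj(A[0, 3]), np.conj(A[0, 2])
A[2, 0] = -1j * (np.conj(J) + 1j * np.conj(mu) * Gamma / 2)  # <d1>   -> <d2>
A[2, 1] = -1j * (lam - 1j * nu * Gamma / 2)                  # <d1^+> -> <d2>
A[3, 0], A[3, 1] = np.conj(A[2, 1]), np.conj(A[2, 0])

# Cavity 1 decouples completely from cavity 2, but not vice versa.
print(np.allclose(A[:2, 2:], 0), np.linalg.norm(A[2:, :2]) > 1e-6)
```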
The engineered reservoir mediates a non-local damping force on each mode, and thus couples the two modes in a manner similar to the coherent interaction. Crucially, due to the difference in the coupling coefficients, we can decouple cavity $1$ from cavity $2$ by setting \begin{align} J \ \ \overset{!}{=} - i \mu \ \frac{\Gamma}{2}, \hspace{0.5cm} \lambda \overset{!}{=} - i \nu \frac{\Gamma}{2}. \end{align} In this case we obtain a unidirectional interaction, where cavity $2$ is driven by cavity $1$ but not vice versa. Moreover, it is straightforward to show that this decoupling works for all operators: the evolution of any cavity-1 operator is independent of cavity 2, while cavity-2 operator expectations are influenced by cavity 1 (cf.~Appendix \ref{AppendixA}). In what follows, we show how this general recipe of balancing coherent and dissipative interactions can be used to construct an isolator and nonreciprocal quantum-limited amplifiers (both phase-preserving and phase-sensitive). The basic recipe here will in fact allow {\it any} factorizable cavity-cavity interaction to become directional, including nonlinear interactions (see Appendix \ref{AppendixA}). It thus represents a powerful approach for constructing a wide variety of nonreciprocal behaviors. \subsection{Unidirectional photon hopping: dissipative isolator}\label{subsec:IsolatorIntro} We first discuss how our basic recipe can be used to obtain directional transmission between ports 1 and 2. We want an effective interaction between the two cavities which only allows photons to tunnel from cavity 1 to 2 (and not vice-versa). This is precisely the kind of behavior described by standard cascaded quantum systems theory \cite{Carmichael1993,Gardiner1993,Gardiner2004}. We show here how this fits into our general framework where directionality results from balancing coherent and dissipative interactions.
We also show how it can be simply realized using an auxiliary cavity or a reciprocal transmission line, and thus does not require an explicitly directional reservoir. To obtain nonreciprocal tunneling between the cavities, we first need to identify coherent and dissipative versions of a tunneling interaction. The coherent version is simple: choosing $\lambda = 0$ in Eq.~(\ref{Eq:Hint}), we obtain a standard hopping (or beam-splitter) Hamiltonian, \begin{align}\label{Eq.:HoppHam} \hat{\mathcal{H}}_{\rm coh} \rightarrow & J \hat d_1^{\dag} \hat d_2 + h.c. \equiv \hat{\mathcal{H}}_{\rm hop} . \end{align} For the dissipative version of a hopping interaction, we need a zero-temperature engineered reservoir that is able to absorb quanta from either cavity; crucially, there must be coherence between absorbing a photon from cavity 1 and absorbing one from cavity 2. The jump operator $\hat{z}$ in our master equation Eq.~(\ref{Eq.:GeneralMaster}) thus needs to take the form \begin{align}\label{Eq.:zhop} \hat{z} \rightarrow& \hat d_1 + e^{i \varphi} \hat d_2 \equiv \hat{z}_{\rm hop}. \end{align} The general master equation of Eq.~(\ref{Eq.:GeneralMaster}) thus reduces to \begin{align}\label{Eq.:BSmasterequation} \frac{d}{dt} \hat \rho =& - i \left[ \hat{\mathcal{H}}_{\rm hop} , \hat \rho \right] + \Gamma \mathcal L [\hat d_1 + e^{i \varphi} \hat d_2 ] \hat{\rho} + \kappa \sum_{j = 1,2} \mathcal L [\hat d_j ] \hat{\rho}. \end{align} The second term describes the dissipative hopping interaction: the engineered reservoir can absorb a photon from either cavity 1 or cavity 2, and there is coherence between these possibilities (relative phase $\varphi$). The rate for this process is $\Gamma$. Note that via a gauge transformation, the phase $\varphi$ can be shifted into the phase of $J$. We thus set $\varphi = 0$ in what follows, but keep $J$ complex. Before discussing how to engineer such a non-local dissipator, let us discuss the consequences.
Using Eq.~(\ref{Eq.:BSmasterequation}), the equations of motion for mode expectation values are \begin{align}\label{Eq.:EoMexpIso} \frac{d}{dt} \ev{\hat d_1} =& - \frac{\kappa + \Gamma}{2} \ev{\hat d_1} - \left[\frac{\Gamma}{2} + i J \right] \ev{\hat d_2}, \nonumber \\ \frac{d}{dt} \ev{\hat d_2} =& - \frac{\kappa + \Gamma}{2} \ev{\hat d_2} - \left[\frac{\Gamma}{2} + i J^{\ast} \right] \ev{\hat d_1}. \end{align} Note that the engineered non-local dissipation in Eq.~(\ref{Eq.:BSmasterequation}) couples the two cavity lowering operators in an analogous manner to the coherent tunneling interaction $J$. On a heuristic level, this is because the engineered reservoir gives rise to non-local damping: the damping force on cavity $1$ depends on the amplitude of cavity $2$ (and vice-versa). If we only have the coherent hopping interaction (i.e.,~$\Gamma = 0$), or only have the dissipative interaction (i.e.,~$J=0$), the coupling between the cavities would be reciprocal. Note however that the coherent coupling involves $J$ in the first line of Eq.~(\ref{Eq.:EoMexpIso}), and $J^\ast$ in the second line. The possibility thus emerges to have the two coupling terms {\it cancel} in one of the two equations. By setting, e.g., \begin{equation} J \overset{!}{=} i \frac{\Gamma}{2} , \label{Eq:DirectionalHoppingCond} \end{equation} we obtain a unidirectional interaction: cavity 2 is driven by cavity 1, but not vice-versa (see Fig.~\ref{Fig:Isolator}(c)). \begin{figure} \centering\includegraphics[width=0.45\textwidth]{Img_2.pdf} \caption{ (a) Realization of the engineered reservoir via an auxiliary cavity mode that is damped at a rate $\kappa^{\prime}$. For strong damping $\kappa^{\prime} \gg \kappa$ this setup corresponds to a Markovian reservoir. (b) Implementation based on a transmission line, which supports propagation of photons in both directions. 
(c) Scattering matrix elements for the dissipative isolator setup at zero frequency, as a function of the coherent hopping $J$; the phase of $J$ is fixed so that $\arg(J/\Gamma) = \pi/2$. When $J$ is tuned as per Eq.~(\ref{Eq:DirectionalHoppingCond}), the system only allows directional transmission between cavities $1$ and $2$. We have fixed the dissipative coupling strength $\Gamma$ to be equal to the cavity damping rate $\kappa$, and have taken the Markovian limit for the engineered reservoir ($\kappa' \gg \kappa$). (d) Scattering matrix elements as a function of frequency, when the directionality condition of Eq.~(\ref{Eq:DirectionalHoppingCond}) is fulfilled. In the Markovian limit, directionality holds over all frequencies. } \label{Fig:Isolator} \end{figure} With this tuning of $J$, our master equation Eq.~(\ref{Eq.:BSmasterequation}) takes the standard form used in cascaded quantum systems theory \cite{Gardiner2004}: \begin{align}\label{Eq.:BSmasterequationCascaded} \frac{d}{dt} \hat \rho = \left(\Gamma + \kappa \right) \sum_{n = 1,2} \mathcal L \left[ \hat d_n \right] \hat{\rho} - \Gamma \left\{ \left[ \hat d_2^{\dag}, \hat d_1 \hat \rho\right] - \left[\hat \rho \hat d_1^{\dag} ,\hat d_2\right] \right\}. \end{align} We are most interested in the evolution of the extra-cavity fields, i.e.,~signals entering and leaving the two cavities via the coupling waveguides. Treating our engineered dissipative reservoir using a Markovian oscillator bath (which is equivalent to the above Lindblad description), one can use standard input-output theory to calculate the relation between the input fields incident on the two cavities, $\hat d_{n, \rm in}$, and the output fields leaving the cavities, $\hat d_{n, \rm out}$ (see Sec.~\ref{Sec.:AuxMode} for details). 
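In the Markovian limit, this scattering calculation amounts to inverting a $2\times 2$ drift matrix. A minimal numpy sketch (an illustration, not part of the original text, using the Langevin convention $\hat d_{n,\rm out} = \hat d_{n,\rm in} + \sqrt{\kappa}\,\hat d_n$) verifies both the on-resonance isolator behavior and the all-frequency directionality:

```python
import numpy as np

kappa = 1.0
Gamma = kappa                 # impedance-matched choice, Gamma = kappa
J = 1j * Gamma / 2            # directionality condition, J = i Gamma / 2

# Drift matrix of the Markovian two-mode model (equations of motion above)
A = np.array([[-(kappa + Gamma) / 2, -(Gamma / 2 + 1j * J)],
              [-(Gamma / 2 + 1j * np.conj(J)), -(kappa + Gamma) / 2]])

def smatrix(w):
    """s[w] = 1 - kappa (-i w - A)^{-1}, from d_out = d_in + sqrt(kappa) d."""
    return np.eye(2) - kappa * np.linalg.inv(-1j * w * np.eye(2) - A)

print(np.round(smatrix(0.0), 12))   # ideal isolator on resonance: [[0, 0], [1, 0]]
# reverse transmission s_12 vanishes at every frequency, not just on resonance
print(max(abs(smatrix(w)[0, 1]) for w in np.linspace(-5, 5, 11)))
```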
Using the input-output boundary condition $\hat d_{n,\rm out} = \hat d_{n, \rm in} + \sqrt{\kappa} \hat d_n $ \cite{Gardiner1985,Clerk2010}, and letting $\mathbf{D}_{\rm in/out}[\omega] =\left( \hat d_{1, \rm in/out }[\omega] , \hat d_{2, \rm in/out }[\omega] \right)^T$, scattering between the cavity in/out fields is described by a $2 \times 2$ scattering matrix $\mathbf{s}[\omega]$: \begin{align} \mathbf{D}_{\rm out}[\omega] = \mathbf{s}[\omega] \, \mathbf{D}_{\rm in}[\omega] + \vec{\hat{\xi}}[\omega]. \label{Eq:sDefn} \end{align} Here, $\vec{\hat{\xi}}[\omega]$ describes (operator-valued) noise incident on the cavities from the engineered reservoir, and the zero frequency (i.e.,~on-resonance) scattering matrix is \begin{align}\label{Eq.:SMatrixIso} \mathbf{s}[0] = \left( \begin{array}{cc} \frac{ \Gamma - \kappa }{\kappa + \Gamma} & 0 \\[2mm] \frac{4 \kappa \Gamma}{(\kappa + \Gamma)^2} & \frac{ \Gamma - \kappa }{\kappa + \Gamma} \end{array} \right). \end{align} As expected, there is transmission from port $1$ to port $2$, but not vice versa. Note that $\mathbf{s}$ is in general not unitary, and hence the noise $\vec{\hat{\xi}}$ must be non-vanishing in order to preserve canonical commutators of the output fields; we discuss this noise in more detail in Sec.~\ref{Sec.:AuxMode}, showing that it can indeed have the minimal amount required by quantum mechanics. We also show that the vanishing of $\mathbf{s}_{12}$ can be made to extend up to frequencies comparable to the relaxation rate of the engineered reservoir (i.e.,~much larger than $\kappa$), see Fig.~\ref{Fig:Isolator}(d). Equation~(\ref{Eq.:SMatrixIso}) is still not the ideal scattering matrix of an isolator \cite{Jalas2013}, as the incident signal on cavity $1$ could be partially reflected. To suppress such reflections, we simply impedance match the system, i.e.,~tune $\Gamma = \kappa$. We then obtain the ideal isolator scattering matrix (on resonance) \begin{align} \mathbf{s}[0] = \left( \begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array} \right).
\end{align} On a physical level, interference causes signals incident on cavity $2$ to be perfectly dumped into the dissipative reservoir. Interference also ensures that signals incident on cavity $1$ never end up in this reservoir, but instead emerge from cavity $2$. We still have not specified {\it how} one obtains the required non-local dissipator. The original works on cascaded quantum systems assumed an inherently nonreciprocal, unidirectional reservoir (i.e.,~a chiral transmission line), and then derived the effective master equation of Eq.~(\ref{Eq.:BSmasterequationCascaded}). However, the above dynamics can be obtained {\it without} needing an explicitly directional reservoir. One simple choice would be a one-dimensional transmission line, cf.~Fig.~\ref{Fig:Isolator}(b), supporting both right-moving and left-moving modes, which couples to cavity $j$ at position $x_j$: \begin{equation} \label{Eq:SysBathWG} \hat{\mathcal{H}}_{\rm SB} = -\sqrt{\frac{\Gamma v_{\rm G}}{2}} \sum_{j=1,2} \left( \hat{d}_j^\dagger \left[ \hat{c}_R(x_j) + \hat{c}_L(x_j) \right] + h.c. \right), \end{equation} where $\hat{c}_{L,R}(x)$ denote the left- and right-moving fields in the waveguide, and $v_{\rm G}$ is the waveguide velocity. For a suitable choice of $|x_1 - x_2|$, one again obtains the master equation of Eq.~(\ref{Eq.:GeneralMaster}) with jump operator $\hat{z}$ as per Eq.~(\ref{Eq.:zhop}). Further details are provided in Appendix \ref{AppendixB}. Another simple implementation involves taking a damped auxiliary mode as the engineered reservoir (annihilation operator $\hat{c}$), see Fig.~\ref{Fig:Isolator}(a), which interacts with the two principal modes via the Hamiltonian \begin{equation} \label{Eq:SysBath} \hat{\mathcal{H}}_{\rm SB} = J' \hat{c}^\dagger \left( \hat{d}_1 + \hat{d}_2 \right) + h.c.
\end{equation} Such a quadratic interaction can be realized in a tunable fashion by starting with a three-wave mixing Hamiltonian and pumping one of the modes at an appropriate frequency; this is the same strategy used to implement the coherent direct interaction in Eq.~(\ref{Eq:Hint}) (see discussion following that equation). As we show in Sec.~\ref{Sec.:AuxMode}, if the damping of the auxiliary mode $\kappa'$ is sufficiently large, it can be adiabatically eliminated, yielding the scattering matrix given above. For this particular realization, our isolator reduces to a three-mode system with an asymmetric choice of damping rates. Furthermore, the required phase of $J$ in the directionality matching condition of Eq.~(\ref{Eq:DirectionalHoppingCond}) corresponds to having the three-mode system pierced by an effective magnetic flux of a quarter flux quantum. We stress that in many physical implementations, the couplings $J$ and $J'$ are tunable simply by controlling the amplitudes and phases of the relevant pump modes. Thus, the directional interaction we finally obtain is not the result of having used an explicitly directional reservoir, but rather it results from the control of relative phases in a driven system. Note that this three-mode realization of our dissipative isolator was also discussed by Ranzani et al. \cite{Ranzani2014a}. It is also interesting that this three-mode implementation directly yields the scattering matrix of an ideal circulator (see Sec.~\ref{Sec.:AuxMode}); it is closely analogous to previous proposals for non-magnetic circulators \cite{Koch2010, Habraken2012, Ranzani2014a, Sliwa2015}. \subsection{Directional phase-preserving quantum amplifier}\label{Sec.:NDPA} We next use our general recipe to construct a nonreciprocal, phase-preserving amplifier, a topic that is of considerable interest to the superconducting qubit community \cite{Abdo2013b, Abdo2014,Ranzani2014a}.
We again consider a two-mode system as sketched in Fig.~\ref{fig:SketchCascaded}(b). Our goal is a dynamics that leads to signals incident on cavity 1 emerging amplified from cavity 2, while at the same time, signals (and noise) incident on cavity 2 are prevented from emerging from cavity 1. Our basic recipe is the same as the previous subsection: engineer both coherent and dissipative versions of the desired interaction, and then balance them to obtain directionality. The coherent interaction needed corresponds to a non-degenerate parametric amplifier (NDPA), as obtained by setting $J=0$ in Eq.~(\ref{Eq:Hint}): \begin{align}\label{Eq.:HamCohPA} \hat{\mathcal{H}}_{\rm coh} \rightarrow& \lambda \hat d_1^{\dag} \hat d_2^{\dag} + \lambda^{\ast} \hat d_1 \hat d_2 \equiv \hat{\mathcal{H}}_{\rm PA}. \end{align} This textbook interaction results in the amplification of an input signal (or noise) incident on either cavity, in both transmission and reflection (see, e.g.,~Refs.~\onlinecite{Clerk2010, Walls2008}). We next need to add the dissipative version of this NDPA interaction, as mediated by an appropriately chosen dissipative reservoir. This kind of dissipative amplification was recently introduced in our previous work, Ref.~\cite{Metelmann2014}. The dissipative reservoir now needs to be able to absorb photons from one cavity and to emit photons to the other, with coherence between these possibilities. The jump operator $\hat{z}$ associated with the reservoir (cf.~Eq.~(\ref{Eq.:GeneralMaster})) thus needs to take the general form \begin{align} \label{Eq:zPA} \hat{z} \rightarrow& \sqrt{2} \left( \cos \theta \hat d_1 + e^{i \varphi} \sin \theta \hat d_2^\dagger \right) \equiv \hat{z}_{\rm PA}, \end{align} where the angle $\theta$ parametrizes the asymmetry between the two kinds of processes. The relative phase $\varphi$ can again be gauged away into the phase of $\lambda$; we thus set it to zero in what follows. 
With this choice of coherent Hamiltonian and dissipator, the two-cavity system is again described by the master equation Eq.~(\ref{Eq.:GeneralMaster}), with $\Gamma$ parametrizing the strength of the coupling to the engineered reservoir, and hence of the dissipative amplifier interaction. To see clearly that the dissipation here leads to amplification, we consider the equations of motion for the means of the lowering operators. One obtains \begin{align} \frac{d}{dt} \ev{\hat d_1} =& - \frac{ \kappa + 2 \Gamma \cos^2 \theta }{2} \ev{\hat d_1} - \left[\frac{\Gamma}{2} \sin 2 \theta + i \lambda \right] \ev{\hat d_2^{\dag}}, \nonumber \\ \frac{d}{dt} \ev{\hat d_2^{\dag}} =& - \frac{\kappa - 2\Gamma \sin^2 \theta }{2} \ev{\hat d_2^{\dag}} + \left[\frac{\Gamma}{2} \sin 2 \theta + i \lambda^{\ast} \right] \ev{\hat d_1}. \end{align} \begin{figure} \centering\includegraphics[width=0.48\textwidth]{Img_3.pdf} \caption{ s-matrix elements of the directional, phase-preserving amplifier, as a function of scaled frequency; the cooperativity $\mathcal{C} \equiv \Gamma / \kappa = 0.95$, where $\Gamma$ is the dissipative interaction strength, and $\kappa$ is the damping rate of cavities $1$ and $2$. (a) Auxiliary mode damping $\kappa' = \kappa$, indicating a strong deviation from the Markovian limit; while perfect isolation exists at $\omega = 0$, it is rapidly lost for non-zero frequencies. (b) Auxiliary mode damping $\kappa' = 100 \kappa$, closer to the Markovian limit. The directionality is much better at finite frequencies, while the gain is unchanged.} \label{Fig:NDPA} \end{figure} The crucial terms behind the amplification are the last terms in each line, which cause $\hat{d}_1$ to be driven by $\hat{d}_2^\dagger$ and vice-versa. Again, both the coherent interaction and the dissipative interaction give rise to such terms; each interaction thus facilitates amplification that can be quantum limited, but that is not directional \cite{Metelmann2014}.
To obtain a unidirectional interaction, we again simply tune the amplitude and phase of the coherent interaction with respect to the dissipative interaction, so as to cancel the coupling term in the first equation, i.e., \begin{equation} \lambda \overset{!}{=} i \frac{\Gamma}{2} \sin 2 \theta . \label{Eq:DirectionalNDPACond} \end{equation} To see that this choice gives the desired behavior of the output fields, we model the dissipative bath as a Markov reservoir, and calculate the scattering matrix for the system using input-output theory. Letting $\mathbf{D}_{\rm in/out}[\omega] =\left( \hat d_{1,\rm in/out }^{\phantom{\dag}}[\omega] , \hat d_{2,\rm in/out }^{\dag}[\omega] \right)^T$, the input-output relations take the form of Eq.~(\ref{Eq:sDefn}). $\vec{\hat{\xi}}$ again describes noise incident from the engineered reservoir, while the $2 \times 2$ scattering matrix $\mathbf{s}$ takes the following explicitly nonreciprocal form at zero frequency \begin{align} \label{Eq:NDPASMatrix} \mathbf{s}[0] = \left( \begin{array}{cc} \frac{ 2 \mathcal C \cos^2 \theta - 1 }{ 2 \mathcal C \cos^2 \theta + 1 } & 0 \\[2mm] \frac{4 \mathcal C \sin 2 \theta } { \left[ 2 \mathcal C \cos^2 \theta + 1 \right] \left[ 2 \mathcal C \sin^2 \theta - 1\right] } & \frac{ 2 \mathcal C \sin^2 \theta + 1}{2 \mathcal C \sin^2 \theta - 1} \end{array} \right). \end{align} Here, the cooperativity is given as $\mathcal C = \frac{\Gamma}{\kappa} $. If we further tune $\theta$ so that \begin{equation} \cos^2 \theta \overset{!}{=} 1/(2 \mathcal C ), \label{Eq:NDPAImpedanceMatchCond} \end{equation} (possible as long as $ \mathcal C > 1/2$), we cancel all reflections of input signals incident on cavity $1$. With this tuning, the scattering matrix becomes \begin{align} \label{Eq:NDPASMatrixIM} \mathbf{s}[0] = \left( \begin{array}{cc} 0 & 0 \\ \sqrt{ \mathcal G} & \sqrt{ \mathcal G + 1} \end{array} \right), \end{align} with $ \mathcal G = \frac{ 2 \mathcal C - 1 }{(\mathcal C - 1)^2 }$. 
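These zero-frequency expressions are easy to verify numerically. A numpy sketch (an illustration built from the drift matrix implied by the equations of motion above; the value $\mathcal{C}=0.8$ is arbitrary, and the signs of the matrix elements are fixed only up to phase conventions):

```python
import numpy as np

kappa = 1.0
C = 0.8                                      # cooperativity Gamma/kappa, 1/2 < C < 1
Gamma = C * kappa
theta = np.arccos(np.sqrt(1 / (2 * C)))      # impedance-matching condition
lam = 1j * (Gamma / 2) * np.sin(2 * theta)   # directionality condition

# Drift matrix for (<d1>, <d2^+>), read off from the equations of motion
A = np.array([
    [-(kappa + 2 * Gamma * np.cos(theta) ** 2) / 2,
     -(Gamma / 2 * np.sin(2 * theta) + 1j * lam)],
    [+(Gamma / 2 * np.sin(2 * theta) + 1j * np.conj(lam)),
     -(kappa - 2 * Gamma * np.sin(theta) ** 2) / 2],
])
s0 = np.eye(2) - kappa * np.linalg.inv(-A)   # zero-frequency scattering matrix

G = (2 * C - 1) / (C - 1) ** 2               # photon-number gain
print(abs(s0[0, 0]), abs(s0[0, 1]))          # both vanish: no reflection, no reverse gain
print(abs(s0[1, 0]), np.sqrt(G))             # |s21| = sqrt(G)
print(abs(s0[1, 1]), np.sqrt(G + 1))         # |s22| = sqrt(G + 1)
```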
As claimed, we have a scattering matrix describing nonreciprocal, phase-preserving amplification, with a gain that diverges as $\mathcal{C}$ approaches 1. Signals incident on cavity $1$ are never reflected, and emerge from cavity $2$ with an amplitude gain $s_{21} = \sqrt{\mathcal G}$, whereas signals incident on cavity $2$ do not emerge at the output from cavity $1$. The system exhibits a standard parametric instability when $\mathcal{C} > 1$ (analogous to the instability in a standard, coherent NDPA). The frequency dependence of the scattering coefficients is discussed in Sec.~\ref{Sec.:NDPAProp}. Strikingly, the directionality property $s_{12}[\omega] = 0$ holds for all frequencies for which the Markovian bath approximation is valid. The system is limited by a standard gain-bandwidth constraint (in contrast to the purely dissipative amplification process, which has no such constraint \cite{Metelmann2014}). We also discuss the added noise of the amplifier in Sec.~\ref{Sec.:NDPAProp}, showing that it is quantum limited in the large-gain limit as long as there is no thermal noise incident on cavity $2$; surprisingly, the engineered reservoir need not be at zero temperature. While there are many ways to realize the engineered reservoir used in this scheme, the simplest choice is a damped third auxiliary mode, see Sec.~\ref{Sec.:NDPAProp}. With this particular choice, our scheme reduces to the 3-cavity amplifier discussed by Ranzani and Aumentado in Ref.~\onlinecite{Ranzani2014a}. Our analysis thus generalizes this scheme, and provides insight into the underlying mechanism. It also shows the crucial importance of having the auxiliary mode damping $\kappa'$ be much larger than that of the principal modes; in this Markovian limit, one has directionality over the full amplification bandwidth (see Fig.~\ref{Fig:NDPA}).
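Both statements, all-frequency directionality together with a finite gain bandwidth, can be illustrated in the Markovian two-mode model (a numpy sketch under the same conventions as before; not part of the original text):

```python
import numpy as np

kappa = 1.0
C = 0.95                                     # close to the instability at C = 1: large gain
Gamma = C * kappa
theta = np.arccos(np.sqrt(1 / (2 * C)))      # impedance matching
s2 = np.sin(2 * theta)

# Drift matrix for (<d1>, <d2^+>) with the directionality condition imposed:
# the upper-right element vanishes identically, hence at every frequency.
A = np.array([[-(kappa + 2 * Gamma * np.cos(theta) ** 2) / 2, 0.0],
              [Gamma * s2, -(kappa - 2 * Gamma * np.sin(theta) ** 2) / 2]])

def smatrix(w):
    return np.eye(2) - kappa * np.linalg.inv(-1j * w * np.eye(2) - A)

ws = np.linspace(-3, 3, 121)
gains = np.array([abs(smatrix(w)[1, 0]) for w in ws])
reverse = max(abs(smatrix(w)[0, 1]) for w in ws)
print(reverse)                    # reverse transmission vanishes at all frequencies
print(gains.max(), gains[0])      # large gain on resonance, rolled off at the band edge
```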
\subsection{Directional phase-sensitive amplifier}\label{Sec.:DPA} As a third application of our recipe for nonreciprocity, we construct a phase-sensitive amplifier. Phase-sensitive amplifiers only measure a single quadrature of an incident signal; as a result, quantum mechanics allows them to amplify without any added noise \cite{Caves1982,Clerk2010}. Our general approach allows one to construct a nonreciprocal and noiseless version of such an amplifier, again using the two-cavity-plus-reservoir setup in Fig.~\ref{fig:SketchCascaded}(b). The resulting amplifier has another striking advantage over a standard paramp: it does not suffer from any fundamental gain-bandwidth limitation, a point we discuss more fully in Sec.~\ref{Sec.:DPAMode}. As before, the first step is to construct a coherent interaction that gives the desired amplification. The standard choice would be a degenerate parametric amplifier (DPA) Hamiltonian involving just a single mode, of the form $ \hat{\mathcal{H}}_{\rm int} = \lambda \hat{d} \hat{d} + \lambda^* \hat{d}^\dagger \hat{d}^\dagger$. In contrast, to be able to implement our recipe for directionality, we want an interaction that couples {\it two} modes. \begin{figure}[t] \centering\includegraphics[width=0.5\textwidth]{Img_4.pdf} \caption{Schematic illustrating the directional phase-sensitive amplifier. The coherent QND Hamiltonian of Eq.~(\ref{Eq:HamCohAmp}) causes the cavity-1 $P$ quadrature to drive the cavity-2 $P$ quadrature, and the cavity-2 $X$ quadrature to drive the cavity-1 $X$ quadrature (blue arrows); there is gain associated with each of these drivings, cf.~Eqs.~(\ref{Eq.:EoMexpectDPA}). The engineered reservoir (jump operator described by Eq.~(\ref{Eq:zQND})) also mediates the same drivings (green and magenta arrows), again with gain. By balancing these interactions, one can cancel the $X_2 \rightarrow X_1$ driving, resulting in directional amplification.
} \label{Fig:DPASchematic} \end{figure} Surprisingly, there is a simple coherent two-mode interaction which does the job and which yields ideal amplification properties (zero added noise, no gain-bandwidth limitation). One needs to use the kind of quantum non-demolition (QND) interaction discussed extensively in the context of Gaussian cluster-state generation \cite{Zhang2006,Menicucci2006,Weedbrook2012}. Suppose we want to amplify the $P$ quadrature of cavity 1, i.e.,~the operator $\hat{P}_1 = -i(\hat{d}_1^{\phantom{\dag}} - \hat{d}_1^\dagger) / \sqrt{2}$. We then use an interaction Hamiltonian that commutes with this operator, but that takes information in the $P_1$ quadrature and dumps it into a cavity $2$ quadrature (i.e.,~$P_2$). The required coherent Hamiltonian is obtained by setting $J = \lambda = i \lambda_{\rm QND}/2 \ (\lambda_{\rm QND} \in \mathbb{R})$ in Eq.~(\ref{Eq:Hint}), i.e.,~ \begin{align}\label{Eq:HamCohAmp} \hat{\mathcal{H}}_{\rm coh} \rightarrow& \lambda_{\rm QND} \ \hat P_1 \hat X_2 \equiv \hat{\mathcal{H}}_{\rm QND}, \end{align} with $\hat X_2 = \left( \hat d_2^{\phantom{\dag}} + \hat d_2^{\dag} \right)/\sqrt{2}$. It is straightforward to see from the Heisenberg equations of motion that $ \hat{\mathcal{H}}_{\rm QND}$ causes $P_2$ to be driven by $P_1$, and hence $P_2$ will contain information on $P_1$ (see Fig.~\ref{Fig:DPASchematic}). The same holds for the extra-cavity fields: the $P$ quadrature of a signal incident on cavity $1$ will emerge in the $P$ quadrature of a signal leaving cavity $2$. Note that $\hat P_1$ and $\hat X_2$ are QND variables: they commute with the Hamiltonian in Eq.~(\ref{Eq:HamCohAmp}), and are undisturbed by the amplification process. It follows that there is no possibility of feedback in this system, and hence the system is stable irrespective of the value of $\lambda_{\rm QND}$. By increasing $\lambda_{\rm QND}$, one can thus achieve increasing amounts of phase-sensitive gain. 
Furthermore, as the amplification mechanism here does not involve coming close to an instability, the amplification bandwidth is always $\sim \kappa$, irrespective of the gain. Following our recipe for directionality, we next need to construct the dissipative counterpart to the coherent interaction in Eq.~(\ref{Eq:HamCohAmp}). We need the jump operator $\hat{z}$ characterizing the engineered reservoir to also preserve the QND structure of the coherent Hamiltonian. Taking $u_1 = u_2 = v_2 = \sqrt{2}$ and $v_1 = -\sqrt{2}$ in Eq.~(\ref{Eq:zDefn}) yields \begin{align} \label{Eq:zQND} \hat{z} \rightarrow& \hat{X}_2 + i \hat{P}_1 \equiv \hat{z}_{\rm QND}. \end{align} This dissipative interaction is the counterpart of the coherent interaction in Eq.~(\ref{Eq:HamCohAmp}): with this choice of $\hat{z}$, the dissipative terms in Eq.~(\ref{Eq.:GeneralMaster}) alone lead to amplification of the $P$ quadrature of signals incident on cavity 1. The heuristic interpretation of this dissipative amplification is similar to that presented in Ref.~\onlinecite{Metelmann2014} for the phase-preserving case: the engineered reservoir ``measures" the QND quadrature $\hat{P}_1$, and then dumps this information into the non-QND quadrature $\hat{P}_2$ (see Fig.~\ref{Fig:DPASchematic}, as well as Sec.~\ref{Sec.:DPAMode}). With these choices for $ \hat{\mathcal{H}}_{\rm coh}$ and $\hat{z}$ in Eq.~(\ref{Eq.:GeneralMaster}), we have both coherent and dissipative phase-sensitive amplifying interactions between the cavities. 
Using this master equation, we find that the equations of motion for the quadrature means have the expected form: \begin{align}\label{Eq.:EoMexpectDPA} \frac{d}{dt} \ev{\hat P_1} =& - \frac{\kappa}{2} \ev{\hat P_1} , \nonumber \\ \frac{d}{dt} \ev{\hat X_2} =& - \frac{\kappa}{2} \ev{\hat X_2} , \nonumber \\ \frac{d}{dt}\ev{ \hat X_1} =& - \frac{\kappa}{2} \ev{\hat X_1} + \left[ \lambda_{\rm QND} - \Gamma \right] \ev{\hat X_2} , \nonumber \\ \frac{d}{dt} \ev{\hat P_2} =& - \frac{\kappa}{2} \ev{\hat P_2} - \left[ \lambda_{\rm QND} + \Gamma \right] \ev{\hat P_1}. \end{align} $P_1$ and $X_2$ are QND variables and thus undisturbed by either interaction. In contrast, both interactions cause $P_2$ to become an amplified copy of $P_1$. We can now finally apply the last step of our general recipe: balance the dissipative and coherent interactions to break reciprocity. This simply involves setting \begin{equation} \Gamma \overset{!}{=} \lambda_{\rm QND}, \label{Eq:DPADirectionalCond} \end{equation} which ensures that cavity $1$ is insensitive to the state of cavity $2$. \begin{figure} \centering\includegraphics[width=0.5\textwidth]{Img_5.pdf} \caption{Gain $\mathcal{G}_{\phi}[\omega]$ and reverse gain $\bar{\mathcal{G}}_{\phi}[\omega]$ of the directional phase-sensitive amplifier, plotted as a function of signal frequency $\omega$ and coherent coupling strength $\lambda_{\rm QND}$ (cf.~Eq.~(\ref{Eq:HamCohAmp})), assuming that the dissipative coupling $\Gamma$ always satisfies the matching condition $\Gamma = \lambda_{\rm QND}$ (cf.~Eq.~(\ref{Eq:DPADirectionalCond})). $\mathcal{G}_{\phi}[\omega]$ describes the amplification in transmission of signals incident on the cavity-1 $P$ quadrature, while $\bar{\mathcal{G}}_{\phi}[\omega]$ describes the amplification in transmission of signals incident on the cavity-2 $X$ quadrature. We have taken the engineered reservoir to be an auxiliary cavity mode with damping rate $\kappa^{\prime}/\kappa = 100$ (see Sec.~\ref{Sec.:DPAMode}).
In this limit, deviations from the Markovian-reservoir approximation are small.} \label{Fig:DPA3D} \end{figure} Finally, we are as usual interested in the behavior of the output fields from the cavity. Treating the engineered reservoir as a Markovian bath and using input-output theory, we can again calculate the scattering matrix of the system. Writing this matrix in a quadrature representation, we find that on resonance (i.e.,~at zero frequency) $\mathbf{Z}_{\rm out} = \mathbf{s} \ \mathbf{Z}_{\rm in} + \vec{\hat{\xi}}$ with \begin{align}\label{Eq.DirAmpOutputShort} \mathbf{s}[0] = \left( \begin{array}{cccc} -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & \sqrt{\mathcal G_\phi} & 0 & -1 \end{array} \right), \hspace{0.3cm} \mathbf{Z} = \left( \begin{array}{c} \hat X_{1} \\ \hat P_{1} \\ \hat X_{2} \\ \hat P_{2} \end{array} \right). \end{align} Here, the zero-frequency amplitude gain is given by $\sqrt{\mathcal G_\phi} = \frac{8\lambda_{\rm QND}}{\kappa}$. The input-output relations in Eq.~(\ref{Eq.DirAmpOutputShort}) describe an ideal directional degenerate amplifier: the $P$ quadrature of signals incident on cavity $1$ emerges with gain from cavity $2$, whereas signals or noise incident on cavity $2$ never emerge from cavity $1$. Note further that there is no unwanted amplification in reflection of incident signals and noise. The amplifier also has several other remarkable properties: it is quantum limited (i.e., no added noise in the large gain limit), and does not suffer from any fundamental gain-bandwidth limitation. The directionality is also maintained over a large range of frequencies (see Fig.~\ref{Fig:DPA3D}). These properties (along with the possibility of eliminating unwanted reflections) are discussed in more detail in Sec.~\ref{Sec.:DPAMode}.
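A quick way to confirm this zero-frequency scattering matrix is to build the quadrature drift matrix directly from the mean equations of motion and evaluate $\mathbf{s}[0]$. This is a sketch under our own conventions: we assume the standard input-output relation $a_{\rm out} = a_{\rm in} + \sqrt{\kappa}\,a$ with equal damping $\kappa$ for all quadratures:

```python
import numpy as np

# Drift matrix for Z = (X1, P1, X2, P2) from the quadrature equations of
# motion, with the directionality condition Gamma = lambda_QND imposed.
# For d a/dt = A a - sqrt(kappa) a_in and a_out = a_in + sqrt(kappa) a,
# the zero-frequency scattering matrix is s[0] = 1 + kappa * A^{-1}.
kappa, lam = 1.0, 5.0          # lam stands for lambda_QND (illustrative values)
Gamma = lam                    # directionality condition

A = np.array([[-kappa/2, 0.0, lam - Gamma, 0.0],
              [0.0, -kappa/2, 0.0, 0.0],
              [0.0, 0.0, -kappa/2, 0.0],
              [0.0, -(lam + Gamma), 0.0, -kappa/2]])

s0 = np.eye(4) + kappa * np.linalg.inv(A)

expected = -np.eye(4)
expected[3, 1] = 8 * lam / kappa       # sqrt(G_phi) = 8 lambda_QND / kappa
assert np.allclose(s0, expected)
```

The only nonzero off-diagonal element is the phase-sensitive transmission gain from $P_1$ to $P_2$, as claimed.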
\section{Noise, bandwidth and three-cavity implementation}\label{Sec.:Three:Implementations} \subsection{Dissipative isolator: additional details}\label{Sec.:AuxMode} \subsubsection{Auxiliary-cavity implementation of the engineered reservoir}\label{subsubsec:AuxMode} To demystify the engineered reservoirs used in our schemes, we provide more details here on the simplest possible realization: a damped auxiliary cavity mode. In the limit where the damping rate $\kappa'$ of this auxiliary mode is large, this model describes a general Markovian reservoir. We stress that this setup is just one of many ways to implement the necessary dissipative dynamics. In Appendix \ref{AppendixB}, we explicitly show how coupling two cavities to a (non-directional) one-dimensional transmission line or waveguide also generates the needed dissipative dynamics. Consider the dissipative isolator described by the master equation in Eq.~(\ref{Eq.:BSmasterequation}), and take the engineered reservoir to be an auxiliary mode with lowering operator $\hat{c}$ which is damped at rate $\kappa'$ by a coupling to a Markovian reservoir. $1/\kappa'$ will act as the correlation time of our engineered reservoir. As discussed in Sec.~\ref{subsec:IsolatorIntro}, we need this auxiliary mode (i.e.,~the engineered reservoir) to interact with the principal modes via the interaction Hamiltonian in Eq.~(\ref{Eq:SysBath}). The simplest limit is where $\kappa'$ is much larger than all other frequency scales; in this limit, the $\hat{c}$ mode will itself act as a Markovian reservoir for the system modes $\hat{d}_1, \hat{d}_2$. One could then recover the master equation of Eq.~(\ref{Eq.:BSmasterequation}) using standard adiabatic elimination techniques \cite{Gardiner2004}. Alternatively, one can eliminate the auxiliary mode within a Heisenberg-Langevin formalism, using the coherent Hamiltonian $ \hat{\mathcal{H}} = \hat{\mathcal{H}}_{\rm hop} + \hat{\mathcal{H}}_{\rm SB}$.
Solving the equation of motion for $\hat{c}$ in the large-damping (adiabatic) limit yields \begin{align} \hat c =& - \frac{2}{\sqrt{\kappa^{\prime}}} \hat c_{ \rm in} - i \frac{2 J^{\prime}}{\kappa^{\prime}} \left(\hat d_1 + \hat d_2 \right) , \end{align} where all operators are evaluated at the same time, and $\hat c_{ \rm in}$ describes thermal and vacuum fluctuations stemming from the mode's internal dissipation. Substituting this equation into the equations of motion for the principal cavity operators $\hat{d}_j$ then yields: \begin{align} \frac{d}{dt} \hat d_1 =& - \frac{\kappa + \Gamma}{2} \hat d_1 - \sqrt{\kappa} \hat d_{1,\rm in} + i \sqrt{\Gamma} \hat c_{ \rm in} - \left[ \frac{\Gamma}{2} + i J \right] \hat d_2 , \nonumber \\ \frac{d}{dt} \hat d_2 =& - \frac{\kappa + \Gamma}{2} \hat d_2 - \sqrt{\kappa} \hat d_{2,\rm in} + i\sqrt{\Gamma} \hat c_{ \rm in} - \left[ \frac{\Gamma}{2} + i J^{\ast}\right] \hat d_1 , \label{Eqs:AuxCavEOM} \end{align} where we take $J^{\prime} \in \mathbb{R}$ without loss of generality, and define $\Gamma \equiv 4 J^{\prime 2} / \kappa^{\prime} $. Taking average values, we recover the master-equation result of Eq.~(\ref{Eq.:EoMexpIso}). Using the Heisenberg-Langevin approach, we can now calculate the {\it full} scattering matrix for the system, which includes the scattering of noise incident from the engineered reservoir. Letting $\mathbf{Y}[\omega] = \left(\hat d_{1}[\omega],\hat d_{2}[\omega],\hat c[\omega]\right)^T$, the full scattering relations take the form $\mathbf{Y}_{\rm out}[\omega] = \tilde{\mathbf{s}}[\omega] \ \mathbf{Y}_{\rm in}[\omega]$. Consider first the Markovian limit, where $\kappa' \gg \omega, \kappa, \Gamma$.
Assuming that the system has been tuned to satisfy both the directionality condition $J = i \frac{\Gamma}{2}$ (cf.~Eq.~(\ref{Eq:DirectionalHoppingCond})) and the impedance matching condition $\kappa = \Gamma$, the full scattering matrix in this limit is \begin{equation} \tilde{\mathbf{s}}[\omega] = \left( \mkern-5mu \begin{tikzpicture}[baseline=-.65ex] \matrix[ matrix of math nodes, column sep=0.5ex, ] (m) { \frac{ - i \frac{ \omega}{\kappa} } {1 - i \frac{\omega}{\kappa} } & 0 & \frac{ i } {1 - i \frac{ \omega}{\kappa} } \\ \frac{1 } { \left(1 - i \frac{ \omega}{\kappa} \right)^2} & \frac{ - i \frac{ \omega}{\kappa} } {1 - i \frac{ \omega}{\kappa}} & \frac{ \frac{ \omega}{\kappa} } {\left(1 - i \frac{ \omega}{\kappa}\right)^2} \\ \frac{ \frac{ \omega}{\kappa}} {\left(1 - i \frac{ \omega}{\kappa} \right)^2} & \frac{ i } {1 - i \frac{ \omega}{\kappa}} & \frac{\left( \frac{ \omega^2}{\kappa^2} \right) } {\left( 1 - i \frac{ \omega}{\kappa} \right)^2} \\ }; \draw[dashed] ([xshift=2.5ex]m-1-2.north east) -- ([xshift=0.5ex]m-2-2.south east); \draw[dashed] ( m-2-1.south west) -- ([yshift=-0.65ex]m-2-2.south east); \end{tikzpicture} \mkern-5mu \right) + \mathcal O\left[\frac{1}{\kappa^{\prime}} \right]. \end{equation} The upper left $2\times 2$ matrix is the scattering matrix $\mathbf{s}$ for the reduced, two-mode system, cf.~Eq.(\ref{Eq.:SMatrixIso}). The elements $\tilde{s}_{13}$ and $\tilde{s}_{23}$ describe the scattering of noise from the engineered reservoir to the main cavity modes. This then explicitly yields the noise operator in Eq.~(\ref{Eq:sDefn}) as $\vec{\hat{\xi}} = [ \tilde{s}_{13}, \tilde{s}_{23} ]^T \hat{c}_{\rm in}$. We see that directionality holds for all frequencies in this Markovian limit, i.e., $\tilde{s}_{12}[\omega] = 0$. In contrast, the impedance matching (which ensures no reflections at the input of cavity $1$) only holds for $\omega \ll \kappa$. 
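This frequency dependence can be reproduced directly from the three-mode Langevin equations, without any adiabatic elimination. The following numerical sketch is ours (parameter values and the convention $\tilde{\mathbf{s}}[\omega] = \mathbb{1} + \sqrt{K}(M+i\omega)^{-1}\sqrt{K}$ are assumptions, chosen to be consistent with the equations above):

```python
import numpy as np

# Full three-mode Langevin treatment of the isolator: modes (d1, d2, c) with
# H = J d1^dag d2 + J' c^dag (d1 + d2) + h.c., dampings (kappa, kappa, kappa').
kappa, kappa_p = 1.0, 1.0e4           # kappa' >> kappa: near-Markovian reservoir
Jp = np.sqrt(kappa * kappa_p) / 2     # makes Gamma = 4 Jp^2 / kappa' = kappa
Gamma = 4 * Jp**2 / kappa_p           # impedance matching: kappa = Gamma
J = 1j * Gamma / 2                    # directionality condition

M = np.array([[-kappa/2,        -1j*J,     -1j*Jp],
              [-1j*np.conj(J),  -kappa/2,  -1j*Jp],
              [-1j*Jp,          -1j*Jp,    -kappa_p/2]])
rtK = np.diag(np.sqrt([kappa, kappa, kappa_p]))

def smatrix(w):
    return np.eye(3) + rtK @ np.linalg.inv(M + 1j*w*np.eye(3)) @ rtK

# Compare with the Markovian-limit matrix quoted above, at w = 0.3 kappa.
w = 0.3 * kappa
d = 1 - 1j*w/kappa
expected = np.array([[-1j*(w/kappa)/d, 0,               1j/d],
                     [1/d**2,          -1j*(w/kappa)/d, (w/kappa)/d**2],
                     [(w/kappa)/d**2,  1j/d,            (w/kappa)**2/d**2]])
assert np.allclose(smatrix(w), expected, atol=1e-3)
assert abs(smatrix(w)[0, 1]) < 1e-4   # isolation holds up to O(w/kappa')
```

The agreement degrades only through the expected $\mathcal O[1/\kappa']$ corrections.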
Finally, note that at zero frequency, the full scattering matrix becomes: \begin{equation} \label{Eq.:MatrixIsoAuxMode} \tilde{\mathbf{s}}[0] = \left( \mkern-5mu \begin{tikzpicture}[baseline=-.65ex] \matrix[ matrix of math nodes, column sep=1ex, ] (m) { 0 & 0 & i\\ 1 & 0 & 0\\ 0 & i & 0\\ }; \draw[dashed] ([xshift=0.5ex]m-1-2.north east) -- ([xshift=0.5ex]m-2-2.south east); \draw[dashed] (m-2-1.south west) -- (m-2-2.south east); \end{tikzpicture} \mkern-5mu \right) . \end{equation} In this ideal case, signals incident on cavity $2$ are perfectly transmitted to the engineered reservoir, while the input field on the reservoir (i.e.,~the $\hat{c}$ mode) is perfectly transmitted to mode $1$. If the engineered reservoir is at zero temperature, we see that the output from cavity 1 is simply vacuum noise. Amusingly, the above unitary scattering matrix is that of a perfect circulator: the effective magnetic field associated with the phase of $J$ breaks the degeneracy of right and left circulating eigenmodes of the coherent three-mode hopping Hamiltonian. In the case of symmetric decay rates, i.e., $\kappa^{\prime} = \kappa$, this kind of circulator has been discussed in the context of superconducting circuit setups \cite{Koch2010, Ranzani2014a} and just recently experimentally demonstrated by Sliwa and co-workers \cite{Sliwa2015}. An analogous circulator for phonons has been discussed in the context of optomechanics \cite{Habraken2012}. \subsubsection{Non-Markovian corrections}\label{Subsec:IsolatorNonMarkov} We can also consider deviations from the Markovian limit, where the internal damping rate of the engineered reservoir $\kappa'$ is not arbitrarily large. The scattering matrix follows simply from solving the full (linear) Langevin equations without any adiabatic assumption. We quote only the results for the forward and reverse transmission probabilities, again assuming that the directionality and impedance matching conditions have been met. 
We find \begin{align} |\tilde{s}_{21}[\omega]|^2 =& \frac{ \left(1 + \frac{ \omega^2}{\kappa^{\prime 2}} \right)} { \left[\frac{\omega^2}{\kappa^{\prime 2}} \left(1+ \frac{4 \omega^4}{\kappa^4}\right) -\frac{4\omega^4}{\kappa^3\kappa^{\prime }} +\left(1+\frac{\omega ^2}{\kappa^2}\right)^2 \right]}, \nonumber \\ |\tilde{s}_{12}[\omega]|^2 =& \frac{ \frac{ \omega^2}{\kappa^{\prime 2}} } { \left[\frac{\omega^2}{\kappa^{\prime 2}} \left(1+ \frac{4 \omega^4}{\kappa^4}\right) - \frac{4\omega^4}{\kappa^3\kappa^{\prime }} +\left(1+\frac{\omega ^2}{\kappa^2}\right)^2 \right]}. \end{align} One clearly sees that the directionality only holds for frequencies that are small compared to the inverse correlation time $1/\kappa'$ of the reservoir: for small $\omega$, $|\tilde{s}_{12}[\omega]|^2 \propto \omega^2 / \kappa'^2$. For non-zero $\omega / \kappa'$, the engineered reservoir gives rise to both dissipative and coherent interactions. The extra induced coherent interaction ruins the directionality matching condition of Eq.~(\ref{Eq:DirectionalHoppingCond}), leading to a lack of perfect isolation. \subsection{Directional phase-preserving amplifier: additional details}\label{Sec.:NDPAProp} \subsubsection{Bandwidth and non-Markovian effects} We return now to the setup presented for directional amplification in Sec.~\ref{Sec.:NDPA}. As in the previous section, we will investigate the frequency-dependent behavior of the system using an auxiliary damped cavity mode $\hat{c}$ to represent the engineered reservoir. With this choice, the system is analogous to that studied in Ref.~\onlinecite{Ranzani2014a}, which was recently implemented in a superconducting circuit experiment \cite{Sliwa2015}. We emphasize the importance of having a large damping rate $\kappa'$ of the auxiliary mode, thus complementing the discussion in Ref.~\onlinecite{Ranzani2014a}.
For phase-preserving amplification, the coherent interaction between the principal modes $\hat{d}_1,\hat{d}_2$ has the NDPA form of Eq.~(\ref{Eq.:HamCohPA}). To obtain the correct dissipative interaction, the coupling $ \hat{\mathcal{H}}_{\rm SB}$ between the principal cavity modes and the auxiliary mode should have the form \begin{equation} \label{Eq:SysBathPA} \hat{\mathcal{H}}_{\rm SB} = \sqrt{2} \lambda' \hat{c}^\dagger \left( \cos \theta \, \hat{d}_1 + \sin \theta \, \hat{d}_2^\dagger \right) + h.c. \end{equation} Taking the large $\kappa'$ limit and using standard adiabatic elimination techniques, one recovers the master equation described by Eqs.~(\ref{Eq.:GeneralMaster}) and (\ref{Eq:zPA}), with $\Gamma = 4 \lambda'^2 / \kappa'$. One can again solve the full Heisenberg-Langevin equations to obtain the full $3 \times 3$ scattering matrix for the system at all frequencies. If we tune the couplings to satisfy the directionality condition of Eq.~(\ref{Eq:DirectionalNDPACond}) and the impedance matching condition of Eq.~(\ref{Eq:NDPAImpedanceMatchCond}), the ``forward photon number gain'' of the amplifier takes the form \begin{align} \mathcal G[\omega] \equiv \left| s_{21}[\omega] \right|^2 =& \frac{ \left(2 \mathcal C -1\right) } {\left[\frac{ \omega^2}{\kappa^2}+1\right] \left[\left( \mathcal C -1\right)^2 + \frac{ \omega^2}{\kappa^2}\right]} +\mathcal O \left[ \frac{\omega}{\kappa^{\prime}}\right]. \end{align} The corresponding reverse photon number gain (which we ideally want to vanish) is given by \begin{align} \bar{\mathcal G}[\omega] \equiv \left| s_{12}[\omega] \right|^2 = \mathcal G[\omega] \frac{\omega^2}{\kappa^{\prime 2}} +\mathcal O \left[ \frac{\omega^3}{\kappa^{\prime 3}}\right] . \label{Eq:NDPAReverseGain} \end{align} Consider first the limit where the engineered reservoir is effectively Markovian, $\omega / \kappa' \rightarrow 0$.
The reverse gain always vanishes, while the zero frequency forward gain $\mathcal G[0]$ is controlled by $\mathcal C$, and diverges as $\mathcal C \rightarrow 1$; the system is unstable for larger $\mathcal C$. In the large gain limit, $\mathcal G[\omega]$ is a Lorentzian as a function of frequency, with a bandwidth $\Delta \omega = 2\kappa (1 - \mathcal C)$ that decreases as one increases the gain. The amplifier has a finite gain-bandwidth limitation just like a standard cavity-based NDPA (i.e., the product $\sqrt{\mathcal G[0]} \Delta \omega$ is fixed) \cite{Ranzani2014a}. Note that the dissipative parametric interaction on its own suffers from no such limitation \cite{Metelmann2014}, but is of course not directional. Directionality is thus obtained by introducing a coherent NDPA interaction, with the price that this interaction naturally leads to a conventional gain-bandwidth limit. Turning to the non-Markovian effects, we see from Eq.~(\ref{Eq:NDPAReverseGain}) that for finite $\omega / \kappa'$, the reverse gain is non-zero, implying that directionality is lost; this is also depicted in Fig.~\ref{Fig:NDPA}. The loss of directionality here is analogous to what happens in the directional isolator, and occurs for the same basic physical reason: for finite $\omega / \kappa'$, the engineered reservoir also induces a coherent interaction between the two modes, and hence the perfect matching of coherent and dissipative interactions needed for directionality is lost. \subsubsection{Added noise and quantum-limited behavior} In addition to directionality, for many applications it is crucial that our amplifier reaches the fundamental quantum limit on its added noise. This limit corresponds to adding noise equivalent to half a quantum at the input, $\bar n_{ \rm add} \geq 1/2$ \cite{Caves1982}.
The added noise follows directly from the full scattering matrix $\tilde{s}$, and will have contributions both from noise incident on cavity 2 that is reflected, and noise emerging from the engineered reservoir (i.e.,~the auxiliary $\hat{c}$ mode). Assuming that the impedance matching and directionality conditions have been fulfilled, and letting $\bar{n}_{d_2}^T$ and $\bar{n}_c^T$ represent the thermal occupancies (respectively) of these two noise sources, we find: \begin{align} \bar n_{\rm add}[0] = \left(\frac{1}{2} + \bar n_{d_2}^T \right) \left[1 +\frac{1}{\mathcal G [0]} \right]. \end{align} Thus, in the large gain limit, the added noise is quantum limited as long as there is no thermal noise incident upon cavity 2 (i.e., $\bar n_{d_2}^T=0$) \cite{Ranzani2014a}. Remarkably, thermal noise in the engineered reservoir does not prevent one from reaching the quantum limit; similar behavior is found in a purely dissipative (non-directional) phase-preserving amplifier \cite{Metelmann2014}. While not relevant to the quantum limit, from a practical standpoint one also wants the noise leaving cavity $1$ to be small (so as not to damage the signal source). Using our scattering matrix, it is straightforward to calculate the noise of the output field from cavity 1. Characterizing this noise by an effective thermal occupancy $\bar{n}^T_{1,\rm out}$, we find at zero frequency: \begin{align} \bar{n}^T_{1,\rm out} = \bar n_c^T. \end{align} Thus, while thermal noise in the engineered reservoir does not spoil quantum-limited performance, it does show up in the output of cavity $1$. \subsection{Directional phase-sensitive amplifier: additional details}\label{Sec.:DPAMode} \subsubsection{Full scattering matrix and impedance matching} We now turn attention to our scheme of Sec.~\ref{Sec.:DPA} for directional and noiseless single-quadrature amplification.
As discussed in that section, we need to combine the coherent QND interaction of Eq.~(\ref{Eq:HamCohAmp}) (QND variables $X_2$ and $P_1$) with the corresponding dissipative interaction; this dissipative interaction requires the jump operator $\hat{z} = \hat{X}_2 + i \hat{P}_1$, as given in Eq.~(\ref{Eq:zQND}). To generate the required dissipation, we again take the engineered reservoir to be a damped auxiliary mode $\hat{c}$ (damping rate $\kappa'$). Writing this operator in terms of quadratures as $\hat{c} =( \hat{U} + i \hat{V}) / \sqrt{2}$, the required system-bath interaction has the form \begin{align}\label{Eq:HamDissAmp} \hat{\mathcal{H}}_{\rm SB} = \Lambda \left[ \hat P_1 \hat V + \hat X_2 \hat U \right]. \end{align} This interaction preserves the QND structure in the coherent interaction, as it also commutes with $\hat{X}_2$ and $\hat{P}_1$. One can again confirm that in the Markovian limit of a large $\kappa'$, one recovers the master equation description, with the dissipative rate $\Gamma$ in Eq.~(\ref{Eq.:GeneralMaster}) being given by $ \Gamma = \frac{2 \Lambda^2}{\kappa^{\prime}}$. The dissipative interaction on its own generates phase-sensitive amplification that can be quantum limited. Heuristically, this can be understood as arising from a kind of transduction mediated by the reservoir. From Eq.~(\ref{Eq:HamDissAmp}) information in the $P_1$ quadrature of cavity 1 drives the auxiliary mode $U$ quadrature. The $U$ quadrature in turn drives the cavity-2 $P_2$ quadrature, effectively letting $P_1$ drive $P_2$. As $P_1$ and $X_2$ are QND variables, increasing $\Lambda$ simply increases the gain associated with this process, with no possibility of instability. An analogous argument of course shows that one obtains reverse gain: signals incident on $X_2$ will emerge amplified in $X_1$. 
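The two-way nature of this purely dissipative gain is easy to exhibit numerically. The sketch below (our own illustration, using the quadrature ordering $(X_1,P_1,X_2,P_2,U,V)$ and standard input-output conventions) switches off the coherent interaction entirely and shows equal forward and reverse gain:

```python
import numpy as np

# Purely dissipative interaction H_SB = Lambda*(P1*V + X2*U), with the
# auxiliary quadratures (U, V) damped at kappa'.  Drift matrix A for
# W = (X1, P1, X2, P2, U, V); s[0] = 1 + sqrt(K) A^{-1} sqrt(K).
kappa, kappa_p, Lam = 1.0, 50.0, 5.0
Gamma = 2 * Lam**2 / kappa_p          # effective dissipative rate

A = np.zeros((6, 6))
np.fill_diagonal(A, [-kappa/2]*4 + [-kappa_p/2]*2)
A[0, 5] = Lam        # dX1/dt = ... + Lam * V
A[3, 4] = -Lam       # dP2/dt = ... - Lam * U
A[4, 1] = Lam        # dU/dt  = ... + Lam * P1
A[5, 2] = -Lam       # dV/dt  = ... - Lam * X2

rtK = np.diag(np.sqrt([kappa]*4 + [kappa_p]*2))
s0 = np.eye(6) + rtK @ np.linalg.inv(A) @ rtK

# Forward gain (P1 -> P2) equals reverse gain (X2 -> X1):
assert abs(s0[3, 1] - 4*Gamma/kappa) < 1e-9
assert abs(s0[0, 2] - 4*Gamma/kappa) < 1e-9
```

Both transmission gains equal $4\Gamma/\kappa$ at zero frequency, illustrating the reservoir-mediated transduction in both directions.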
Thus, the dissipative amplification here is not directional; directionality is only obtained when this process is matched against its coherent counterpart, as described in Sec.~\ref{Sec.:DPA}. The only small non-ideality left in the directional phase-sensitive amplifier of Sec.~\ref{Sec.:DPA} is the presence of reflections at the input (cf.~Eq.~(\ref{Eq.DirAmpOutputShort})). Even though there is no gain associated with these, one would ideally want them to be exactly zero to protect the signal source. We now show that this can be easily accomplished by modifying both the coherent and dissipative interactions used in the scheme, so as to deviate slightly from the QND structure discussed above. This modification also allows one to cancel reflections of signals and noise at the amplifier output. To impedance match, we first modify the system-bath Hamiltonian in Eq.~(\ref{Eq:HamDissAmp}) to take the more general form \begin{align}\label{Eq.:DPAdissIM} \hat{\mathcal{H}}_{\rm SB} \equiv& \hspace{0.4cm} \sqrt{2} \Lambda_U \ \hat U \left( \sin \theta \hat X_1 + \cos \theta \hat X_2 \right) \nonumber \\ & + \sqrt{2} \Lambda_V \ \hat V \left( \cos \theta \hat P_1 + \sin \theta \hat P_2 \right). \end{align} For $\theta = 0$ and $\Lambda_U = \Lambda_V = \Lambda /\sqrt{2}$ we recover Eq.~(\ref{Eq:HamDissAmp}). By allowing $\Lambda_U \neq \Lambda_V$, we modify the relative strength of the two QND interactions. By letting $\theta$ deviate slightly from zero, we break the QND structure of Eq.~(\ref{Eq:HamDissAmp}); this will allow us to cancel the unwanted reflections from both cavities.
In what follows, it will be useful to parametrize the system-bath couplings in terms of a cooperativity $\bar{\mathcal{C}}$ and asymmetry parameter $\alpha$: \begin{align} \bar{\mathcal{C}} & = 4 \Lambda_U \Lambda_V / (\kappa \kappa'), \\ \alpha &= (\Lambda_V / \Lambda_U)^2. \label{Eq:AlphaDefn} \end{align} The general structure of Eq.~(\ref{Eq.:DPAdissIM}) implies that once the auxiliary mode is eliminated, the cavity $X_j$ quadratures will drive one another in a non-directional way; the same goes for the cavity $P_j$ quadratures. This is depicted schematically in Fig.~\ref{Fig:DPADetails}(a). To obtain directionality, we need to cancel the ability of the cavity-$2$ quadratures to drive the corresponding cavity-$1$ quadratures. We do this in the usual manner: we balance the dissipative quadrature-quadrature interactions generated by Eq.~(\ref{Eq.:DPAdissIM}) by the coherent versions of these interactions. This will require a coherent Hamiltonian of the form: \begin{align}\label{Eq.:DPAcohIM} \hat{\mathcal{H}}_{\rm coh} \equiv& \lambda_1 \hat P_1 \hat X_2 + \lambda_2 \hat P_2 \hat X_1 . \end{align} The second term here is new compared to Eq.~(\ref{Eq:HamCohAmp}), and breaks its QND structure. As usual, we balance the above coherent interactions against their dissipative counterparts (as generated by Eq.~(\ref{Eq.:DPAdissIM})) so that the cavity-1 quadratures are not driven by the cavity-$2$ quadratures. Working through the equations of motion, and focusing on the Markovian limit, we obtain the directionality conditions \begin{align} \label{Eqs:DPALambdaTuning} \lambda_1 = \kappa \bar{\mathcal C} \cos^2 \theta, \hspace{0.5cm} \lambda_2 = - \kappa \bar{\mathcal C} \sin^2 \theta. \end{align} Using a standard Heisenberg-Langevin analysis, we find the full scattering matrix of the system; tuning the coherent interactions as per Eqs.~(\ref{Eqs:DPALambdaTuning}), the scattering will indeed be directional.
Insisting further that there are no reflections of signals and noise incident on either cavity (i.e.,~impedance matching) leads to an additional condition on the angle $\theta$: \begin{align} \sin 2 \theta = 1/ \bar{\mathcal C} . \end{align} Note that for a large cooperativity $\bar{\mathcal{C}}$, the angle $\theta$ is very close to zero, implying that one is very close to our original scheme where $\hat P_1$ and $\hat X_2$ are QND variables. Even after satisfying the above conditions, the cooperativity $\bar{\mathcal{C}}$ and the asymmetry parameter $\alpha$ remain unspecified; they control the final form of the impedance-matched, directional scattering matrix. Using the above conditions, the full scattering matrix of the system (describing both the principal modes and the auxiliary mode $\hat{c}$) takes a simple form. Introducing the vector \begin{align} \mathbf{W} = \left( \begin{array}{cccccc} \hat X_{1} & \hat P_{1} & \hat X_{2} & \hat P_{2} & \hat U & \hat V \end{array} \right)^T, \end{align} the scattering relations at each frequency then take the form $\mathbf{W}_{\rm out} = \tilde{\mathbf{s}} \ \mathbf{W}_{\rm in}$.
At zero frequency, the scattering matrix is \begin{equation} \label{Eq.DirAmpOutput} \tilde{\mathbf{s}} = \left( \mkern-5mu \begin{tikzpicture}[baseline=-.65ex] \matrix[ matrix of math nodes, column sep=-0.5ex, ] (m) { 0 & 0 & 0 & 0 & 0 & - \left[\alpha \mathcal G_{\phi} \right]^{\frac{1}{4}} \\[-1.0ex] 0 & 0 & 0 & 0 & \frac{1 }{ \left[\alpha \mathcal G_{\phi}\right]^{\frac{1}{4}}} & 0 \\[-1.0ex] \frac{1}{\sqrt{\mathcal G_{\phi}}} & 0 & 0 & 0 & 0 & 0 \\[-1.0ex] 0 & \sqrt{\mathcal G_{\phi}} & 0 & 0 & 0 & 0 \\[-1.0ex] 0 & 0 & 0 & - \left[ \frac{\alpha }{\mathcal G_{\phi} } \right]^{\frac{1}{4}} & 0 & 0 \\[-1.0ex] 0 & 0 & \left[ \frac{\mathcal G_{\phi} }{\alpha } \right]^{\frac{1}{4}} & 0 & 0 & 0 \\[-1.0ex] }; \draw[dashed] ([xshift=0.5ex]m-1-4.north east) -- ([xshift=0.5ex]m-4-4.south east); \draw[dashed] ([yshift=-0.5ex]m-4-1.south west) -- ([yshift=-0.6ex]m-4-4.south east); \end{tikzpicture} \mkern-5mu \right) . \end{equation} \begin{figure*} \centering\includegraphics[width=1.0\textwidth]{Img_6.pdf} \caption{Properties of the directional phase-sensitive amplifier. (a) Sketch of couplings and drivings used to impedance match the amplifier. (b) Reverse gain $\bar{\mathcal{G}}_{\phi}[\omega]$ for $\omega$ set to half of the amplification bandwidth $\Delta \omega$, for various choices of the auxiliary-mode damping rate $\kappa'$. On resonance we always have perfect directionality: $\bar{\mathcal{G}}_{\phi}[0]=0$. (c) Amplification bandwidth $\Delta \omega$ as a function of zero-frequency forward gain $\mathcal{G}_{\phi}$ for various values of $\kappa'$. The amplifier does not suffer from a standard gain-bandwidth constraint. \label{Fig:DPADetails} } \end{figure*} $\mathcal{G}_\phi$ describes the zero-frequency phase-sensitive photon number gain of our amplifier, and it is given by \begin{equation} \sqrt{ \mathcal{G}_\phi} = \bar{\mathcal{C}} \left( 1 + \sqrt{1 - \frac{1}{\bar{\mathcal{C}}^2} } \right).
\end{equation} The upper $4 \times 4$ block describes an ideal, directional phase-sensitive amplifier. As for the full $6 \times 6$ scattering matrix, it describes a kind of ``squeezing circulator'', where the input on port $j$ emerges from port $j+1$ after having undergone a squeezing transformation. Crucially, the squeezing parameters or gains for each of these transformations are not all equal. While a squeezing circulator may have interesting applications, if the goal is amplification, it represents a potential hazard. As indicated by Eq.~(\ref{Eq.DirAmpOutput}), incident noise from the auxiliary mode will emerge from the output of cavity $1$, having undergone a squeezing transformation with gain $\sqrt{\alpha \mathcal{G}_{\phi}}$. To protect the signal source at the amplifier input, we do not want to amplify any fluctuations emerging from the auxiliary mode. Hence, the ideal choice is to null this effective gain by tuning the asymmetry $\alpha$ to satisfy \begin{equation} \alpha = 1/\mathcal{G}_\phi. \label{Eq:alphaOpt} \end{equation} Tuning the asymmetry of the couplings to the auxiliary cavity in this manner ensures that one can have large, directional gain, without unduly large amounts of noise emerging from the amplifier input port. The presented phase-sensitive amplifier has several highly desirable properties: it is quantum limited, directional and has no gain-bandwidth limitation. However, an experimental implementation in a superconducting circuit setting will also face some technical challenges. Most notably, a straightforward implementation requires 6 pump tones to be applied with excellent control over their amplitudes and relative phases. While demanding, experiments with analogous levels of complexity and multiple pumps have recently been performed in circuit QED architectures, see e.g.,~Refs.~\onlinecite{Sliwa2015,Shankar2013}. \subsubsection{Frequency dependence} The full scattering matrix can also be easily calculated at non-zero frequencies.
The relevant forward gain of the amplifier describes signals incident in the $P$ quadrature of cavity 1 emerging in the $P$ quadrature of the output from cavity 2. Assuming that we chose parameters to impedance match (as described), and that we further tune the asymmetry parameter $\alpha$ to minimize noise as per Eq.~(\ref{Eq:alphaOpt}), the forward photon number gain is given by: \begin{align} \mathcal G_{\phi} [\omega] \equiv \left| \tilde{s}_{42} \right|^2 =& \frac{\mathcal G_{\phi} \left(1 +\frac{\omega^2}{\kappa^{\prime 2}}\right)} {\left(1+\frac{\omega ^2}{\kappa ^2}\right)^2 + \frac{\omega ^2}{\kappa^{\prime 2}} \left(1+ \frac{4 \omega ^4}{\kappa ^4}\right) - \frac{ 4\omega ^4}{\kappa ^3 \kappa^{\prime }} }. \end{align} As already discussed, the zero-frequency gain $\mathcal{G}_\phi$ can be made arbitrarily large by simply increasing the various couplings (i.e.,~$\bar{\mathcal{C}}$); the linear system never exhibits any instability. In the Markovian limit $\kappa' \gg \omega$, the frequency dependence of the gain is extremely simple: it is simply a Lorentzian squared, with a bandwidth $\Delta \omega \sim \kappa$ which is {\it independent} of the zero-frequency gain. Thus, this amplifier is not constrained by any fundamental gain-bandwidth limitation. Including non-Markovian effects (i.e.,~finite $\omega / \kappa^{\prime}$), the frequency dependence is slightly more complex, but the ultimate bandwidth is still set by $\kappa$, irrespective of the size of the zero-frequency gain. While deviations from the Markovian limit do not degrade amplification, they impact the directionality of the amplifier. In the ideal Markovian limit, signals incident on cavity 2 in either quadrature never emerge from cavity 1. For finite $\omega / \kappa'$, this is no longer true: now, the reverse-gain scattering matrix element $\tilde{s}_{13}$ becomes non-zero, implying that incident signals on $\hat X_2$ can emerge from $\hat X_1$. 
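As a sanity check of the forward-gain expression above, a short numerical sketch (the parameter values are illustrative) confirms the two claims made in the text: the gain at $\omega=0$ equals $\mathcal{G}_\phi$, and in the Markovian limit $\kappa' \gg \kappa$ the frequency dependence collapses onto a Lorentzian squared whose shape does not depend on the zero-frequency gain.

```python
def forward_gain(w, G0, kappa, kappa_p):
    """Forward photon-number gain G_phi[w] of the impedance-matched amplifier."""
    num = G0 * (1.0 + w**2 / kappa_p**2)
    den = ((1.0 + w**2 / kappa**2)**2
           + (w**2 / kappa_p**2) * (1.0 + 4.0 * w**4 / kappa**4)
           - 4.0 * w**4 / (kappa**3 * kappa_p))
    return num / den

# Illustrative parameters deep in the Markovian limit kappa' >> kappa.
G0, kappa, kappa_p = 100.0, 1.0, 1.0e4
```

Since the denominator is independent of $\mathcal{G}_\phi$, the normalized profile $\mathcal{G}_\phi[\omega]/\mathcal{G}_\phi[0]$ is the same for any gain, so the bandwidth is set by $\kappa$ alone.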
For the reverse gain, we find: \begin{align} \bar{ \mathcal G}_{\phi} [\omega] \equiv \left|\tilde{s}_{13} \right|^2 =& \frac{\mathcal G_{\phi} \left( \frac{\omega^2}{\kappa^{\prime 2}}\right)} {\left(1+\frac{\omega ^2}{\kappa ^2}\right)^2 + \frac{\omega ^2}{\kappa^{\prime 2}} \left(1+ \frac{4 \omega ^4}{\kappa ^4}\right) - \frac{ 4\omega ^4}{\kappa ^3 \kappa^{\prime }} } . \end{align} As expected, the optimal situation is clearly in the Markovian limit where $\kappa^{\prime} \gg \kappa$. In this limit, one has purely directional amplification over the full bandwidth $\kappa$ of the principal cavity modes. \subsubsection{Added noise for finite frequency} An ideal phase-sensitive amplifier can amplify a single quadrature without any added noise \cite{Caves1982}. From the scattering relations of Eq.~(\ref{Eq.DirAmpOutput}) it immediately follows that our scheme reaches this quantum limit on resonance. For completeness, we also present the added noise at finite frequency, again focusing on the impedance-matched version of the amplifier. We calculate the added noise of our amplifier in the standard manner \cite{Clerk2010}, by calculating the noise in the cavity-2 output $P$ quadrature (symmetrized spectral density $\bar S_{P_2}[\omega]$), and then referring this back to the input. Expressing this added noise as an effective number of thermal quanta, and focusing on the Markovian limit $\kappa^{\prime} \gg \kappa $, we find \begin{align} \bar n_{P_2, \rm add}[\omega] =& \frac{\omega ^2}{\kappa ^2 } \left( \frac{ \bar n_c^T + \frac{1}{2}}{ \sqrt{ \mathcal{G}_{\phi} \alpha}} + \left[ 1 + \frac{\omega ^2}{\kappa ^2 } \right] \frac{\left(\bar n_2^T+\frac{1}{2}\right) }{\mathcal G_{\phi}} \right) + \mathcal O \left[ \frac{\omega}{\kappa^{\prime }}\right]. \end{align} Note that we have left the asymmetry parameter $\alpha$ (cf.~Eq.~(\ref{Eq:AlphaDefn})) unspecified here. The added noise always vanishes at $\omega=0$, irrespective of the gain.
Furthermore, for a fixed value of $\alpha$ the added noise vanishes at all frequencies in the large gain limit, implying that one is quantum limited at all frequencies. If however one tunes $\alpha = 1 / \mathcal{G}_\phi$ to minimize the noise hitting the input port, then the added noise is non-zero at finite frequencies even in the large gain limit. \section{Conclusion} We have presented an extremely general yet simple method for achieving directional behavior in coupled photonic systems, based on matching a given (reciprocal) coherent interaction with the corresponding dissipative version of the interaction. We demonstrated how this principle could be used to construct both isolators and directional, quantum-limited amplifiers. In particular, our approach allows the construction of a directional phase-sensitive amplifier that is not limited by a standard gain-bandwidth constraint. The recipe we present is not tied to a particular realization, and could be implemented in photonic systems, microwave superconducting circuits, and optomechanical systems. Finally, while our focus here has been on bilinear interactions between two principal cavity modes, a similar approach of balancing coherent and dissipative interactions could be used to make nonlinear interactions directional, and could be used in more complex cavity lattice structures. Understanding how this form of reciprocity breaking leads to useful functionalities and possibly new photonic states in such systems will be the subject of future work. \acknowledgments{ We thank Joe Aumentado, Michel Devoret, Archana Kamal and Leonardo Ranzani for useful conversations. This work was supported by the DARPA ORCHID program through a grant from AFOSR. }
\section{Introduction} In the modern theory of critical phenomena, many analytical and numerical methods have been developed that allow one to describe, both qualitatively and quantitatively, the critical behavior of various systems. Of course, each of these methods is associated with certain difficulties, both purely technical and related to its justification and domain of applicability. As a rule, these methods yield acceptable quantitative estimates at least for systems in the universality class of the $O(N)$ model. From the point of view of renormalization group (RG) approaches, the simplicity of the $O(N)$ model lies in the uniqueness of the coupling constant, i.e., there is only one non-trivial fixed point, which is IR attractive in $2<d<4$. For models with two or more coupling constants, the situation is more complicated: a stable fixed point may be absent, and then one observes a fluctuation-induced first-order transition. The properties of the RG flow and possible fixed points are widely discussed for multiple-coupling scalar theories such as general $N$-vector models \cite{Brezin74,Michel84,Osborn18,Rychkov19,Codello20}. The situation remains controversial even for one of the simplest generalizations of the $O(N)$ model, namely for the $O(N)\otimes O(M)$ model with two coupling constants. The $O(N)\otimes O(M)$ model arose almost half a century ago in the context of studying transitions in spin systems with non-collinear ordering (such as helimagnets)\cite{Bak76,Garel76,Brazovskii76} and in superfluid helium-3 \cite{Jones76}. (See \cite{Delamotte04} for a review.)
To date, this model has been considered in the framework of various approaches: the $4-\varepsilon$ expansion \cite{Kawamura88, Kawamura90, Sokolov95, Pelissetto01, Calabrese04, Kompaniets20}, the $1/N$ expansion \cite{Pelissetto01, Gracey02, Gracey02-2}, perturbative RG \cite{Sokolov94, Loison00, Pelissetto01-2, Pelissetto01-3, Sokolov02,Calabrese03,Parruccini03,Pelissetto04,Delamotte10}, the pseudo-$\varepsilon$ expansion \cite{Calabrese04,Holovatch04}, the $2+\epsilon$ expansion \cite{Azaria90,Azaria93,David96,Pelissetto01} (see also \cite{Hikami81} for the $N=M$ and $N=M+1$ cases), non-perturbative (functional) RG \cite{Zumbach93,Zumbach94,Zumbach94-2, Delamotte00,Delamotte03,Delamotte04,Delamotte16} (NPRG), and the conformal bootstrap (CB) program \cite{Nakayama14,Nakayama15,Henriksson20}. Unexpectedly, the simplest and most-studied case $M=2$ turns out to be the most controversial. (The case $M=1$ is the usual $O(N)$ model.) Moreover, this controversy relates to the most physically significant cases $N=2$ and $N=3$. Apart from these cases, discrepancies in the predictions of different approaches are reduced to quantitative estimates of the critical exponents and of the value $N_c^+(M,d)$ such that for $M\leq N<N_c^+(M,d)$ a stable fixed point is absent but appears for $N>N_c^+(M,d)$. Estimates of $N_c^+(M,3)$ obtained by various theoretical methods for $2\leq M\leq4$ are shown in Table \ref{Table1}. However, the perturbative (fixed-dimension) RG computations performed at six loops within the zero-momentum massive scheme for $M=2$ predict an additional critical value $N_{c2}<N_c^+(2,3)$ below which a stable fixed point reappears and exists for $N=2,\,3$ and 4 \cite{Sokolov02,Calabrese03,Pelissetto04}. (In this approach, $N_c^+(2,3)\approx6.4$ and $N_{c2}\approx5.7$.)
Numerical analysis of the RG-flow geometry, based on the resummation of the 6-loop approximation for the $\beta$-functions, suggests that this new fixed point is of the focus type, with a complex-valued correction-to-scaling exponent $\omega$. In contrast to the perturbative RG, the $4-\varepsilon$ and pseudo-$\varepsilon$ expansions, as well as the non-perturbative RG, do not predict the appearance of such a fixed point. If the perturbative RG were the only method giving results contrary to the other methods, then one could appeal to its unreliability. In fact, this method is less rigorously justified than the $4-\varepsilon$ and $1/N$ expansions, if only because of the absence of a formal small expansion parameter, and its results are highly sensitive to the resummation parameters. Nevertheless, this approach gives acceptable quantitative estimates of the critical behavior of the $O(N)$ model, and the series obtained from the more reliable $4-\varepsilon$ and $1/N$ expansions are also only asymptotic, with rather poor convergence properties for physically interesting values of $\varepsilon$ and $N$. In addition, the conformal bootstrap program \cite{Nakayama14,Nakayama15,Henriksson20} also predicts the existence of a non-trivial fixed point below $N_c^+(2,3)$, with critical exponents in good agreement with the fixed-dimension perturbative results. The conformal bootstrap determines exact bounds on the scaling dimensions of operators. These exclusion bounds may have kinks, which are expected to correspond to the actual exponents of the critical point. The main advantage of this method is that it is not based on series expansions and does not have convergence problems, contrary to RG approaches.
The disadvantages of the method include the fact that it postulates scale invariance, which is absent at a first-order transition, and although even mild kinks can be interpreted as the scaling dimensions of the conformal field theory corresponding to the critical point, the presence of kinks alone cannot serve as evidence that a transition is continuous. In addition, the conformal bootstrap predicts an ordinary fixed point instead of a focus-type one, so the situation for the $O(2)\otimes O(2)$ and $O(3)\otimes O(2)$ symmetry classes remains unclear. \begin{table}[t] \caption{\label{Table1} Numerical estimates of $N_c^+(M,3)$ for $M=2,\,3,\,4$ obtained by different approaches within various orders of perturbation theory and resummation procedures. PB --- Pad\'e--Borel; PBL --- Pad\'e--Borel--Leroy; DSIS --- direct summation of the inverse series; CB --- conformal--Borel; P --- Pad\'e; LPA --- local potential approximation; LPA' is LPA with a momentum-dependent anomalous dimension.} \begin{tabular}{llll} \hline \hline Method & $M=2$ & $M=3$ & $M=4$ \\ \hline $4-\varepsilon$, $\mathcal{O}(\varepsilon^3)$, PB \cite{Sokolov95} & $3.39$ & & \\ $4-\varepsilon$, $\mathcal{O}(\varepsilon^3)$ \cite{Pelissetto01} & $5.3(2)$ & $9.1(9)$ & $12.1(1)$ \\ $4-\varepsilon$, $\mathcal{O}(\varepsilon^4)$, PB \cite{Kompaniets20} & $4.6(2.1)$ & $7.7(1.4)$ & $10.3(1.6)$ \\ $4-\varepsilon$, $\mathcal{O}(\varepsilon^5)$, PBL \cite{Calabrese04} & $5.47(7)$ & $\sim9$ & \\ $4-\varepsilon$, $\mathcal{O}(\varepsilon^5)$, DSIS \cite{Calabrese04} & $6.1(2)$ & $9.6(4)$ & $12.7(7)$ \\ $4-\varepsilon$, $\mathcal{O}(\varepsilon^5)$, PB \cite{Kompaniets20} & $5.3(7)$ & $8.4(1.1)$ & $11.2(1.3)$ \\ $4-\varepsilon$, $\mathcal{O}(\varepsilon^6)$, PB \cite{Kompaniets20} & $5.8(8)$ & $9.3(5)$ & $12.3(6)$ \\ $4-\varepsilon$, $\mathcal{O}(\varepsilon^6)$, CB \cite{Kompaniets20} & $6.0(6)$ & $9.3(4)$ & $12.4(3)$ \\ $4-\varepsilon$, $\mathcal{O}(\varepsilon^6)$, DSIS & $5.9(2)$ & $9.2(4)$ & $12.2(5)$ \\ PRG,
$\mathcal{O}(g^4)$, PB \cite{Sokolov94} & $3.91(1)$ & & \\ PRG, $\mathcal{O}(g^7)$, PB \cite{Calabrese03} & $6.4(4)$ & $11.1(6)$ & $14.7(8)$ \\ Pseudo-$\varepsilon$, $\mathcal{O}(\tau^6)$, P \cite{Holovatch04} & $6.23(21)$ & & \\ Pseudo-$\varepsilon$, $\mathcal{O}(\tau^6)$, P \cite{Calabrese04} & $6.22(12)$ & $9.9(3)$ & $13.2(6)$ \\ $1/N$, $\mathcal{O}(1/N)$ & $3.8$ & $5.2$ & $6.6$ \\ $1/N$, $\mathcal{O}(1/N^2)$ \cite{Pelissetto01} & $5.3$ & $7.3$ & $9.2$ \\ NPRG, LPA \cite{Zumbach93} & $4.7$ & & \\ NPRG, LPA' \cite{Delamotte16} & $5.24(2)$ & & \\ CB \cite{Nakayama14} & & $\sim7$ & \\ This work & $5.5(4)$ & $6.5(4)$ & $7.5(4)$ \\ \hline \hline \end{tabular} \end{table} Monte Carlo (MC) simulations of lattice models from these symmetry classes do not bring complete clarity to the problem. The main difficulty here is typical for models with several coupling constants, like the $O(N)\otimes O(M)$ model: lattice systems can undergo a first-order phase transition even if a stable fixed point exists on the RG diagram, when the initial values of the coupling constants lie outside the attraction region of this point. Many different models have been considered using various MC algorithms (see \cite{Loison04,Delamotte04} for a review). The most famous of them is an antiferromagnet on a stacked-triangular lattice (STA). In early works for $N=2$ and $N=3$, both first-order and continuous phase transitions were observed, depending on the model. Moreover, some models with second-order behavior demonstrate universality, and the tendency towards continuous and universal behavior is more pronounced for $N=3$. In fact, these finite-size lattice results can be explained in terms of RG even if a stable fixed point is absent, provided the corresponding RG trajectory passes through a region characterized by a very slow evolution of RG parameters. Such a region may arise, e.g., if a fixed point has complex-valued coordinates with a small imaginary part.
Moreover, if this region is small and attracts trajectories starting from a wide set of initial values of RG parameters, then almost universal behavior is observed. It is this picture (including the tendency mentioned above) that is observed in the non-perturbative RG approach \cite{Zumbach93,Zumbach94,Zumbach94-2, Delamotte00, Delamotte03}. Further numerical studies have confirmed the first-order transition scenario for the $N=2$ case (including STA\cite{Itakura03,Peles04,Diep08-1}, helimagnets\cite{Sorokin14}, the lattice version of the $O(2)\otimes O(2)$ model\cite{Itakura03,Okubo10}, the lattice version of the $O(2)\otimes O(2)$ sigma model\cite{Kunz93,Loison98}) as well as for the $N=3$ case (including STA\cite{Diep08}, helimagnets\cite{Sorokin14}, the lattice version of the $O(3)\otimes O(2)$ model\cite{Okubo10}, the lattice version of the $O(3)\otimes O(2)$ sigma model\cite{Loison99,Itakura03}). However, the recent study \cite{Kawamura19} of the $N=3$ STA on very large lattices finds a continuous transition. The authors of ref.~\cite{Kawamura19} do not observe any double-peak structure in the energy distribution, in contrast to the results of ref.~\cite{Diep08}, and they find an indication of a focus-type fixed point, namely a complex-valued correction-to-scaling exponent. The authors note that the RG flow around the focus-like fixed point may temporarily leave the potential stability region, which appears in finite-lattice studies as a first-order transition whose signs disappear in the thermodynamic limit (a so-called pseudo-first-order transition). Apparently, further research is required to explain the inconsistencies between the results of refs.~\cite{Kawamura19} and \cite{Diep08}, obtained using different MC algorithms. Note that the perturbative RG predicts the focus-type fixed point for $N=3$ as well as for $N=2$, but for the latter case MC simulations observe a first-order transition.
In addition, the $N=2$ case of the $O(N)\otimes O(2)$ model has the same order parameter space $G/H=\mathbb{Z}_2\otimes SO(2)$ as the $N=2$ case of the Ising-$O(N)$ model (with three coupling constants), where a first-order transition is found \cite{Sorokin18,Sorokin19-2,Sorokin19-3}. Also note that the correction-to-scaling exponent can be complex-valued for a complex-valued fixed point, which can be observed as pseudo-scaling behavior at a weak first-order transition. We have one more argument in favor of the scenario with a first-order transition, at least for the $N=3$ case. There are topological excitations of a special type, namely so-called $\mathbb{Z}_2$-vortices, in the spectrum of the $O(3)\otimes O(2)$ model. We know that in two dimensions $\mathbb{Z}_2$-vortices can crucially change the critical behavior: in particular, in the $O(3)\otimes O(3)$ model one observes a finite-temperature first-order transition instead of an Ising-like continuous one \cite{Sorokin17,Sorokin19-1}. In $2+\epsilon$ dimensions, where a transition occurs at low temperature, $\mathbb{Z}_2$-vortices are bound into topologically neutral configurations, so the transition is of second order, in the universality class of the $O(4)$ model \cite{Azaria90,Azaria93}. One expects that the critical behavior changes at some finite $\epsilon<1$. So, we cannot exclude that at $\epsilon=1$ the transition becomes first order. Although the presence of topological defects of any type does not guarantee changes in the critical behavior, one can note that they are absent in the $O(N)\otimes O(2)$ model for $N\geq6$, which is close to the value $N_c^+(2,3)$. This coincidence is not reproduced for $M>2$, at least in RG approaches. However, the consistency of different RG methods in estimating the value $N_c^+(M,3)$ also deteriorates with increasing $M$. So, one should use a method without series expansions.
In this work we consider the $O(N)\otimes O(M)$ model, namely the lattice version of the $O(N)\otimes O(M)$ sigma model for $M=2,\,3,\,4$ and $N=M,\ldots,8$ using Monte Carlo simulations. Our results do not confirm the expectation of RG approaches that the first-order character of a transition becomes more pronounced with increasing $M$. \section{Model and methods} The $O(N)\otimes O(M)$ model is described by the Ginzburg--Landau functional\cite{Kawamura90} \begin{eqnarray} F=\int d^dx\left(\sum_i\Bigl((\partial_\mu\mathbf{\phi}_i)^2+r\mathbf{\phi}_i^2\Bigr)+\right.\nonumber\\ \left. +u\Bigl(\sum_i\mathbf{\phi}_i^2\Bigr)^2+2v\sum_{i,j}\Bigl((\mathbf{\phi}_i \mathbf{\phi}_j)^2-\mathbf{\phi}_i^2\mathbf{\phi}_j^2\Bigr)\right), \label{GLW-model} \end{eqnarray} where $\phi_i$ is an $N$-component vector field, $i,\,j=1,\ldots,M$. The region of potential stability with a non-collinear ground state in the broken-symmetry phase $r<0$ is \begin{equation} u>0,\quad v>0,\quad \frac{M}{M-1}u-v>0, \end{equation} and the ground state is \begin{equation} \phi_i^2=\frac{-r}{2(Mu-(M-1)v)},\quad \phi_i\perp\phi_j. \end{equation} The order parameter $\Phi=(\phi_1,\ldots,\phi_M)$ is an $N\times M$ matrix. In the disordered phase, it is invariant under the global $O(N)_L\otimes O(M)_R$ symmetry group acting on the matrix from the left and right, respectively. In the ordered phase, the symmetry group is broken down to the $O(N-M)_L\otimes O(M)_{\mathrm{diag}}$ subgroup. So, the order parameter space $G/H$ is a Stiefel manifold \begin{equation} \frac{O(N)_L\otimes O(M)_R}{O(N-M)_L\otimes O(M)_{\mathrm{diag}}}\approx \frac{O(N)}{O(N-M)}\equiv V_{N,M}. \end{equation} If we take the limits $u\to\infty$, $v\to\infty$ keeping $|\phi|=1$ and $u/v=\mathrm{const}$, we obtain the $O(N)\otimes O(M)$ sigma model.
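As a quick sanity check of the stated ground state, the sketch below evaluates the uniform-field potential of the functional numerically. It assumes that the double sum in the quartic $v$-term runs over pairs $i<j$ (the convention consistent with the stated minimum); the parameter values are purely illustrative and lie inside the stability region.

```python
import numpy as np

def potential(phi, r, u, v):
    """Uniform-field potential of the functional; phi is an (M, N) array
    holding the M vector fields (the v-sum is assumed to run over i < j)."""
    sq = (phi ** 2).sum(axis=1)                    # the M values phi_i^2
    V = r * sq.sum() + u * sq.sum() ** 2
    M = phi.shape[0]
    for i in range(M):
        for j in range(i + 1, M):
            V += 2.0 * v * ((phi[i] @ phi[j]) ** 2 - sq[i] * sq[j])
    return V

r, u, v, M, N = -1.0, 1.0, 0.3, 2, 3               # illustrative, stable parameters
s2 = -r / (2.0 * (M * u - (M - 1) * v))            # stated ground-state magnitude
phi0 = np.zeros((M, N))
for i in range(M):
    phi0[i, i] = np.sqrt(s2)                       # mutually orthogonal, equal length
```

Small random perturbations of `phi0` never lower the potential, confirming that the orthogonal configuration with the stated magnitude is a minimum.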
In this work, we consider this model on a lattice with the Hamiltonian \begin{equation} H=-J\sum_{\mathbf{x},\mu}\mathrm{tr}\,\Phi_\mathbf{x}^T\Phi_{\mathbf{x}+\mathbf{e}_\mu},\quad \mu=1,\ldots,3, \label{lattice-model} \end{equation} where $\mathbf{e}_\mu$ is a unit vector of a simple cubic lattice, $J>0$. Below, for brevity, we denote the $O(N)\otimes O(M)$ sigma model on a lattice as the $V_{N,M}$ model. We investigate the $V_{N,M}$ model by Monte Carlo simulations using the Wolff cluster algorithm \cite{Wollf89}. We consider the cases $M=2,\,3,\,4$ and $N=M,\ldots,8$; thus we reproduce all known numerical results for the three-dimensional $V_{2,2}$\cite{Kunz93,Loison98}, $V_{3,2}$\cite{Diep94,Loison99,Itakura03}, $V_{3,3}$\cite{Diep94,Loison00-2}, $V_{4,3}$\cite{Loison00-2}, and $V_{4,4}$\cite{Loison00-2} models. In addition, we consider the simplest case $M=1$. We use periodic boundary conditions and lattices with sizes $L=15,\,20,\,25,\,30,\,40,\,50,\,60,\,80$ for $M=1$, $L=15,\ldots,60$ for $M=2$ and $N>3$, and $L=15,\ldots,50$ for $M=3,\,4$ and $N>5$. In each simulation, $5\cdot10^5$ MC steps are made for thermalization, and $5\cdot10^6$ steps for the calculation of averages. A field configuration $\Phi_\mathbf{x}$ is defined by generalized Euler angles\cite{Hoffman72}. For the uniform distribution of a random direction on a hypersphere, it is necessary to define the following functions \begin{equation} f_n(\theta)=\int \sin^n\theta\,d\theta,\quad n=2,\ldots,6, \end{equation} and the inverse functions \begin{equation} \theta=f_n^{-1}(r),\quad r\in[0,1],\quad \theta\in[0,\pi], \end{equation} where $r$ is a random number. For the inverse functions, we use tables of values of size $6.4\cdot10^5$ and linear interpolation. The order parameter is simply defined as \begin{equation} \mathbf{m}=\frac1{L^3}\sum_\mathbf{x} \phi_1(\mathbf{x}),\quad m=\sqrt{\mathbf{m}^2}.
\end{equation} The estimation of the transition temperature is performed using the Binder cumulant crossing method \cite{Binder81} \begin{equation} U=1-\frac{\langle m^4 \rangle}{3\langle m^2 \rangle^2}. \end{equation} The critical exponent $\nu$ is estimated using the following cumulants \cite{Ferrenberg91}: \begin{equation} V_n=\frac{\partial}{\partial(1/T)}\ln \langle m^n\rangle=L^3\left(\frac{\left<m^n E\right>}{\left<m^n\right>}-\langle E\rangle\right), \label{Vn} \end{equation} \begin{equation} \max\left(V_n\right)\sim L^{\frac1\nu}. \end{equation} Other exponents are estimated as follows: \begin{equation} \left.m \right|_{T=T_c}\sim L^{-\frac\beta\nu},\quad \left.\chi\right|_{T=T_c}\sim L^{\frac\gamma\nu}, \end{equation} where $\chi$ is the susceptibility \begin{equation} \chi=\frac{L^3}{T}\left<m^2\right>,\quad T\geq T_c. \end{equation} Since we independently determine the exponents $\nu$, $\beta/\nu$, and $\gamma/\nu$ from our simulations, we can more accurately estimate the Fisher exponent $\eta$ using both scaling relations \begin{equation} \eta=2-\frac\gamma\nu=2\frac{\beta}\nu-1. \end{equation} This is particularly useful in the case of a weak first-order transition, where we do not observe a double-peak structure in the energy distribution. From the unitarity bound for the anomalous dimensions of the field $\Phi$, we have \begin{equation} \eta\geq0,\quad \frac{\beta}\nu\geq\frac12,\quad \frac\gamma\nu\leq2. \end{equation} Otherwise, we are dealing with a first-order transition.
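The estimators above translate directly into code; a minimal sketch follows (the time-series arrays `m` and `E` and the lattice size `L` are illustrative placeholders for per-configuration MC data):

```python
import numpy as np

def binder_cumulant(m):
    """U = 1 - <m^4> / (3 <m^2>^2), used for the cumulant-crossing method."""
    m2 = np.mean(m ** 2)
    return 1.0 - np.mean(m ** 4) / (3.0 * m2 ** 2)

def v_cumulant(m, E, L, n):
    """V_n = d ln<m^n> / d(1/T) = L^3 (<m^n E>/<m^n> - <E>)."""
    mn = m ** n
    return L ** 3 * (np.mean(mn * E) / np.mean(mn) - np.mean(E))
```

At $T_c$ the curves $U(T)$ for different $L$ cross, while the maxima of $V_n$ scale as $L^{1/\nu}$. In the fully ordered limit (constant $m$) one has $U=2/3$ and $V_n=0$.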
\begin{table}[t] \caption{\label{Table2}Homotopy groups\cite{Stiefel35,Whitehead45} of Stiefel manifolds $\pi_k(V_{N,M})$.} \begin{tabular}{ccccc} \hline \hline $G/H=V_{N,M}$ & $\pi_0(G/H)$ & $\pi_1(G/H)$ & $\pi_2(G/H)$ & $\pi_3(G/H)$\\ \hline $V_{1,1}$ & $\mathbb{Z}_2$ & 0 & 0 & 0 \\ $V_{2,2}$ & $\mathbb{Z}_2$ & $\mathbb{Z}$ & 0 & 0 \\ $V_{3,3}$ & $\mathbb{Z}_2$ & $\mathbb{Z}_2$ & 0 & $\mathbb{Z}$ \\ $V_{4,4}$ & $\mathbb{Z}_2$ & $\mathbb{Z}_2$ & 0 & $\mathbb{Z}+\mathbb{Z}$ \\ $V_{N,N},\, N\geq5$ & $\mathbb{Z}_2$ & $\mathbb{Z}_2$ & 0 & $\mathbb{Z}$ \\ \hline $V_{2,1}$ & 0 & $\mathbb{Z}$ & 0 & 0 \\ $V_{3,2}$ & 0 & $\mathbb{Z}_2$ & 0 & $\mathbb{Z}$ \\ $V_{4,3}$ & 0 & $\mathbb{Z}_2$ & 0 & $\mathbb{Z}+\mathbb{Z}$ \\ $V_{N,N-1},\, N\geq5$ & 0 & $\mathbb{Z}_2$ & 0 & $\mathbb{Z}$ \\ \hline $V_{3,1}$ & 0 & 0 & $\mathbb{Z}$ & $\mathbb{Z}$ \\ $V_{4,2}$ & 0 & 0 & $\mathbb{Z}$ & $\mathbb{Z}+\mathbb{Z}$ \\ $V_{N,N-2},\,N\geq5$ & 0 & 0 & $\mathbb{Z}$ & $\mathbb{Z}$ \\ \hline $V_{4,1}$ & 0 & 0 & 0 & $\mathbb{Z}$ \\ $V_{N,N-3},\,N\geq5$ & 0 & 0 & 0 & $\mathbb{Z}_2$ \\ \hline $N\geq M+4\geq5$ & 0 & 0 & 0 & 0 \\ \hline \hline \end{tabular} \end{table} Since we are going to discuss the presence of topological excitations of any type, it is useful to know some topological properties of the order parameter space $G/H=V_{N,M}$. In general, a $q$-dimensional topological configuration is topologically protected in $d$-dimensions if the homotopy group $\pi_{d-q-1}(G/H)$ is non-trivial. So in three dimensions, a 2-dimensional configuration is a domain wall. Domain walls appear if the order parameter space has the form $G/H=G_d\otimes C$, where $G_d$ is a discrete group, and $C$ is a connected homogeneous space. 1-dimensional topological configurations are vortex tubes. Besides topological defects of these types, skyrmion-like configurations may be present if $\pi_d(G/H)$ is non-trivial. Necessary information about the topology of Stiefel manifolds is shown in Table \ref{Table2}. 
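Table \ref{Table2}, together with the protection criterion just stated (a $q$-dimensional defect in $d$ dimensions requires a non-trivial $\pi_{d-q-1}(G/H)$), can be encoded as a simple lookup. The sketch below transcribes the table and lists the protected objects in $d=3$; encoding the groups as strings, with "0" for the trivial group, is our own convention.

```python
# pi_k(V_{N,M}) for k = 0..3, transcribed row by row from Table II.
def stiefel_pi(N, M, k):
    D = N - M
    if D >= 4:
        return "0"
    if D == 0:
        pi = ["Z2",
              {1: "0", 2: "Z"}.get(N, "Z2"),
              "0",
              {1: "0", 2: "0", 3: "Z", 4: "Z+Z"}.get(N, "Z")]
    elif D == 1:
        pi = ["0",
              "Z" if N == 2 else "Z2",
              "0",
              {2: "0", 3: "Z", 4: "Z+Z"}.get(N, "Z")]
    elif D == 2:
        pi = ["0", "0", "Z", "Z+Z" if N == 4 else "Z"]
    else:  # D == 3
        pi = ["0", "0", "0", "Z" if N == 4 else "Z2"]
    return pi[k]

# In d = 3 a q-dimensional defect is protected iff pi_{2-q}(G/H) is non-trivial;
# skyrmion-like textures additionally require a non-trivial pi_3.
def protected_objects_3d(N, M):
    names = {2: "domain walls", 1: "vortex tubes", 0: "point defects"}
    out = [names[2 - k] for k in (0, 1, 2) if stiefel_pi(N, M, k) != "0"]
    if stiefel_pi(N, M, 3) != "0":
        out.append("skyrmion-like textures")
    return out
```

For example, `protected_objects_3d(3, 2)` returns vortex tubes and skyrmion-like textures, while any $N\geq M+4$ returns an empty list.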
One can note that topological defects of any type are absent for $N\geq M+4$. As we have discussed above, the presence of topological defects does not guarantee changes in the critical behavior, but we can formulate a criterion for when topological configurations make a significant contribution. For lattice models, one can define the (total) density of topological defects using the local definition of defects. This quantity contains two terms corresponding to free and bound defects: $\rho_\mathrm{total}=\rho_\mathrm{free}+\rho_\mathrm{pairs}$ for point-like defects, and $\rho_\mathrm{total}=\rho_\mathrm{infinite}+\rho_\mathrm{closed}$ for extended ones. In the ordered phase $\rho_\mathrm{free}$ and $\rho_\mathrm{infinite}$ tend to zero, while in the disordered phase these quantities have finite values, renormalized by critical fluctuations. Without fluctuations, the defect density has a jump at the transition point, and we deal with a first-order transition. In the case of strong fluctuations, the situation is more delicate. Point-like defects bound in pairs or closed extended defects have zero topological charge, so they are indistinguishable from ordinary non-topological excitations, but they can screen the topological charge of free defects, making $\rho_\mathrm{free}$ or $\rho_\mathrm{infinite}$ finite in the ordered phase. So, the singularity of the topological defect density becomes softer or disappears entirely. Since the internal energy is proportional to the total defect density\cite{Sorokin19}, a significant contribution of topological defects to the critical behavior means that the specific heat (as the derivative of the internal energy with respect to temperature) has a singularity \begin{equation} C\sim(T-T_c)^{-\alpha},\quad \alpha>0. \end{equation} In particular, monopole-like configurations in the $O(3)$ model with $\alpha<0$ are not relevant to the critical behavior, as discussed in refs.~\cite{Holm94,Antunes02}.
\section{Results} \subsection{$M=1$} \begin{table}[t] \caption{\label{TableTc}Critical temperature for the $V_{N,M}$ model.} \begin{tabular}{lllll} \hline \hline & $M=1$ & $M=2$ & $M=3$ & $M=4$ \\ \hline $N=1$ & $4.51150(4)$ & & & \\ $N=2$ & $2.20163(5)$ & $2.444(1)$ & & \\ $N=3$ & $1.44295(5)$ & $1.53119(6)$ & $1.670(1)$ & \\ $N=4$ & $1.06855(5)$ & $1.11768(5)$ & $1.174(1)$ & $1.206(1)$ \\ $N=5$ & $0.84640(5)$ & $0.87840(7)$ & $0.91245(7)$ & $0.91936(8)$ \\ $N=6$ & $0.69998(5)$ & $0.72254(7)$ & $0.74602(8)$ & $0.75010(8)$ \\ $N=7$ & $0.59621(5)$ & $0.61286(8)$ & $0.63105(8)$ & $0.63296(8)$ \\ $N=8$ & $0.51902(5)$ & $0.53232(8)$ & $0.54481(8)$ & $0.54501(8)$ \\ \hline \hline \end{tabular} \end{table} \begin{table}[b] \caption{\label{TableM1}Critical exponents for the case $M=1$.} \begin{tabular}{llll} \hline \hline $N$ & $\nu$ & $\beta$ & $\gamma$\\ \hline $1$ & $0.630(5)$ & $0.327(2)$ & $1.236(10)$ \\ $2$ & $0.672(5)$ & $0.348(3)$ & $1.320(10)$ \\ $3$ & $0.712(6)$ & $0.370(4)$ & $1.396(12)$ \\ $4$ & $0.750(7)$ & $0.388(4)$ & $1.474(14)$ \\ $5$ & $0.760(7)$ & $0.392(4)$ & $1.496(14)$ \\ $6$ & $0.784(7)$ & $0.406(4)$ & $1.541(14)$ \\ $7$ & $0.830(8)$ & $0.433(5)$ & $1.624(16)$ \\ $8$ & $0.850(8)$ & $0.436(5)$ & $1.678(16)$ \\ \hline \hline \end{tabular} \end{table} We consider the case $M=1$ for two reasons. First, it allows us to test our modeling technique, which is especially important for large $N$. For most values of $N$, the critical temperatures and exponents are known more accurately than in this work; we just fill in some gaps. Second, we use the case $M=1$ to fit the critical temperature as a function of $N$ and $M$. Our results for the estimation of the critical temperature are shown in Table \ref{TableTc}, and the critical exponents in the case $M=1$ are shown in Table \ref{TableM1}. The simplest fitting of the inverse critical temperature is \begin{equation} \frac{J}{T_c}\equiv K_c\approx 0.2440835N-0.0335268.
\end{equation} A more general fit using the results for $M>1$ is as follows: \begin{equation} K_c\approx K_1 N+K_0, \end{equation} where \begin{eqnarray} K_1=0.247208-0.004056M+0.001239M^2,\nonumber\\ K_0=0.020598-0.055813M+0.001772M^2.\nonumber \end{eqnarray} \subsection{$M=2$} \begin{table*} \caption{\label{TableM2}Critical exponents for the case $M=2$.} \begin{tabular}{clllllll} \hline \hline \,\,$V_{N,M}$\,\, & & $\nu$ & $\alpha$ & $\beta$ & $\gamma$ & $\beta/\nu$ & $\eta$\\ \hline & This work & $0.572(6)$ & $0.284(18)$ & $0.276(4)$ & $1.165(14)$ & $0.482$ & $-0.037(14)$ \\ $V_{4,2}$ & $1/N$, $\mathcal{O}(1/N)$ \cite{Pelissetto01} & $0.676$ & $-0.03$ & $0.365$ & $1.297$ & $0.541$ & $0.081$ \\ \hline & This work & $0.621(7)$ & $0.137(21)$ & $0.308(8)$ & $1.246(17)$ & $0.496$ & $-0.008(15)$ \\ $V_{5,2}$ & PRG, $\mathcal{O}(g^4)$, PB \cite{Loison00} & $0.565$ & $0.305$ & $0.300$ & $1.095$ & $0.531$ & $0.063$ \\ & $1/N$, $\mathcal{O}(1/N)$ \cite{Pelissetto01} & $0.676$ & $-0.03$ & $0.365$ & $1.297$ & $0.541$ & $0.081$ \\ \hline & This work & $0.686(7)$ & $-0.058(21)$ & $0.354(8)$ & $1.35(2)$ & $0.516$ & $0.032(17)$ \\ & MC, STA \cite{Loison00} & $0.700(11)$ & $-0.100(33)$ & $0.359(14)$ & $1.383(36)$ & $0.505$ & $0.025(20)$ \\ & $4-\varepsilon$, $\mathcal{O}(\varepsilon^6)$, CB \cite{Kompaniets20} & $0.65(2)$ & $0.05$ & $0.34$ & $1.27(3)$ & $0.523$ & $0.047(3)$ \\ $V_{6,2}$ & PRG, $\mathcal{O}(g^4)$, PB \cite{Loison00} & $0.575$ & $0.275$ & $0.302$ & $1.121$ & $0.525$ & $0.051$ \\ & $1/N$, $\mathcal{O}(1/N)$ \cite{Pelissetto01} & $0.730$ & $-0.19$ & $0.390$ & $1.410$ & $0.534$ & $0.068$ \\ & $1/N$, $\mathcal{O}(1/N^2)$ \cite{Pelissetto01} & $0.633$ & $0.10$ & $0.336$ & $1.227$ & $0.531$ & $0.061$ \\ & NPRG, LPA' \cite{Delamotte16} & $0.695(5)$ & $-0.09$ & $0.362$ & $1.36$ & $0.521$ & $0.042(2)$ \\ \hline & This work & $0.739(7)$ & $-0.217(21)$ & $0.381(8)$ & $1.456(20)$ & $0.515$ & $0.030(17)$ \\ & $4-\varepsilon$, $\mathcal{O}(\varepsilon^5)$, PBL \cite{Calabrese04}
& $0.71(4)$ & $-0.13$ & $0.37$ & $1.39(6)$ & $0.52$ & $0.042(3)$ \\ & $4-\varepsilon$, $\mathcal{O}(\varepsilon^6)$, CB \cite{Kompaniets20} & $0.713(8)$ & $-0.139$ & $0.373$ & $1.396(14)$ & $0.523$ & $0.045(3)$ \\ & PRG, $\mathcal{O}(g^4)$, PB \cite{Loison00} & $0.566$ & $0.303$ & $0.295$ & $1.108$ & $0.521$ & $0.042$ \\ $V_{7,2}$ & PRG, $\mathcal{O}(g^7)$, CM \cite{Calabrese03} & $0.68(2)$ & $-0.04$ & $0.354$ & $1.31(5)$ & $0.521$ & $0.042(2)$ \\ & $1/N$, $\mathcal{O}(1/N)$ \cite{Pelissetto01} & $0.768$ & $-0.305$ & $0.406$ & $1.492$ & $0.523$ & $0.058$ \\ & $1/N$, $\mathcal{O}(1/N^2)$ \cite{Pelissetto01} & $0.697$ & $-0.09$ & $0.367$ & $1.357$ & $0.523$ & $0.053$ \\ & NPRG, LPA' \cite{Delamotte16} & $0.735(5)$ & $-0.21$ & $0.382$ & $1.44$ & $0.520$ & $0.039(2)$ \\ \hline & This work & $0.771(8)$ & $-0.313(24)$ & $0.400(8)$ & $1.516(20)$ & $0.518$ & $0.034(20)$ \\ & $4-\varepsilon$, $\mathcal{O}(\varepsilon^5)$, PBL \cite{Calabrese04} & $0.75(4)$ & $-0.25$ & $0.40$ & $1.45(6)$ & $0.53$ & $0.067(3)$ \\ & $4-\varepsilon$, $\mathcal{O}(\varepsilon^6)$, CB \cite{Kompaniets20} & $0.745(11)$ & $-0.235$ & $0.388$ & $1.461(17)$ & $0.521$ & $0.042(2)$ \\ $V_{8,2}$ & PRG, $\mathcal{O}(g^4)$, PB \cite{Loison00} & $0.616$ & $0.152$ & $0.319$ & $1.211$ & $0.518$ & $0.035$ \\ & PRG, $\mathcal{O}(g^7)$, CM \cite{Calabrese03} & $0.71(1)$ & $-0.13$ & $0.369$ & $1.40(2)$ & $0.520$ & $0.039(1)$ \\ & $1/N$, $\mathcal{O}(1/N)$ \cite{Pelissetto01} & $0.797$ & $-0.39$ & $0.419$ & $1.554$ & $0.525$ & $0.051$ \\ & $1/N$, $\mathcal{O}(1/N^2)$ \cite{Pelissetto01} & $0.743$ & $-0.23$ & $0.389$ & $1.451$ & $0.524$ & $0.047$ \\ \hline \hline \end{tabular} \end{table*} \begin{table*} \caption{\label{TableM3}Critical exponents for the case $M=3$.} \begin{tabular}{clllllll} \hline \hline \,\,$V_{N,M}$\,\, & & $\nu$ & $\alpha$ & $\beta$ & $\gamma$ & $\beta/\nu$ & $\eta$\\ \hline & This work & $0.564(18)$ & $0.31(6)$ & $0.264(12)$ & $1.164(40)$ & $0.468$ & $-0.063(40)$ \\ $V_{6,3}$ & $1/N$, 
$\mathcal{O}(1/N)$ \cite{Pelissetto01} & $0.640$ & $0.08$ & $0.349$ & $1.222$ & $0.545$ & $0.09$ \\ \hline & This work & $0.635(8)$ & $0.095(24)$ & $0.328(9)$ & $1.249(24)$ & $0.516$ & $0.033(20)$ \\ $V_{7,3}$ & $1/N$, $\mathcal{O}(1/N)$ \cite{Pelissetto01} & $0.691$ & $-0.073$ & $0.372$ & $1.329$ & $0.539$ & $0.077$ \\ \hline & This work & $0.701(14)$ & $-0.10(5)$ & $0.373(14)$ & $1.358(40)$ & $0.531$ & $0.063(40)$ \\ $V_{8,3}$ & $1/N$, $\mathcal{O}(1/N)$ \cite{Pelissetto01} & $0.730$ & $-0.19$ & $0.390$ & $1.410$ & $0.534$ & $0.068$ \\ & $1/N$, $\mathcal{O}(1/N^2)$ \cite{Pelissetto01} & $0.641$ & $0.076$ & $0.341$ & $1.242$ & $0.532$ & $0.064$ \\ \hline \hline \end{tabular} \end{table*} \begin{table*} \caption{\label{TableM4}Critical exponents for the case $M=4$.} \begin{tabular}{clllllll} \hline \hline \,\,$V_{N,M}$\,\, & & $\nu$ & $\alpha$ & $\beta$ & $\gamma$ & $\beta/\nu$ & $\eta$\\ \hline $V_{6,4}$ & This work & $0.54(4)$ & $0.38(12)$ & $0.27(4)$ & $1.08(11)$ & $0.50$ & $0.00(10)$ \\ \hline & This work & $0.612(17)$ & $0.16(5)$ & $0.306(12)$ & $1.225(40)$ & $0.49$ & $-0.002(20)$ \\ $V_{7,4}$ & $1/N$, $\mathcal{O}(1/N)$ \cite{Pelissetto01} & $0.614$ & $0.16$ & $0.337$ & $1.169$ & $0.548$ & $0.10$ \\ \hline & This work & $0.643(12)$ & $0.07(4)$ & $0.347(10)$ & $1.236(30)$ & $0.54$ & $0.078(30)$ \\ $V_{8,4}$ &$1/N$, $\mathcal{O}(1/N)$ \cite{Pelissetto01} & $0.662$ & $0.013$ & $0.359$ & $1.27$ & $0.542$ & $0.084$ \\ \hline \hline \end{tabular} \end{table*} \begin{figure}[t] \center \includegraphics[scale=0.30]{Fig1.pdf}% \caption{\label{fig1} Internal energy distribution in the $V_{2,2}$ model} \end{figure} \begin{figure}[t] \center \includegraphics[scale=0.30]{Fig2.pdf}% \caption{\label{fig2} Internal energy distribution in the $V_{3,2}$ model} \end{figure} As expected\cite{Loison98,Loison99,Itakura03}, we find a pronounced first-order transition for the $V_{2,2}$ and $V_{3,2}$ models. Figs.
\ref{fig1} and \ref{fig2} show a typical double-peak structure of the internal energy distributions. For the $V_{4,2}$ and $V_{5,2}$ models, we do not observe such a structure up to $L=60$. Thus, the pseudo-scaling exponents can be estimated, and we find that the Fisher exponent is negative, $\eta<0$ (see Table \ref{TableM2}). We interpret this as a weak first-order transition. For the $V_{6,2}$, $V_{7,2}$ and $V_{8,2}$ models, we find a second-order phase transition. It should be especially noted that our results are in good agreement with the results for the $N=6$ stacked-triangular antiferromagnet\cite{Loison00}, as well as with the study within the framework of the non-perturbative RG approach\cite{Delamotte16} for $N=6$ and $N=7$. The simplest fitting of the inverse critical temperature for $M=2$ is \begin{equation} K_c\approx 0.244812 N-0.082677. \end{equation} \subsection{$M=3$} \begin{figure}[t] \center \includegraphics[scale=0.30]{Fig3.pdf}% \caption{\label{fig3} Internal energy distribution in the $V_{3,3}$ model} \end{figure} \begin{figure}[t] \center \includegraphics[scale=0.30]{Fig4.pdf}% \caption{\label{fig4} Internal energy distribution in the $V_{4,3}$ model} \end{figure} \begin{figure}[t] \center \includegraphics[scale=0.30]{Fig5.pdf}% \caption{\label{fig5} Internal energy distribution in the $V_{5,3}$ model} \end{figure} We reproduce the results of ref.\cite{Loison00-2} and find a distinct first-order transition for the $V_{3,3}$ and $V_{4,3}$ models (see figs. \ref{fig3} and \ref{fig4}). However, unlike in the $M=2$ case, we obtain the same result for the $V_{5,3}$ model, i.e., a distinct first-order transition (fig. \ref{fig5}). In the case of the $V_{6,3}$ model, we find a weak first-order transition with a negative value of $\eta$ (see Table \ref{TableM3}). Somewhat more unexpectedly, we observe a second-order transition for the $V_{7,3}$ and $V_{8,3}$ models. This contradicts the results of the perturbative RG as well as the $4-\varepsilon$ and pseudo-$\varepsilon$ expansions.
Again, the simplest fitting of the inverse critical temperature is \begin{equation} K_c\approx 0.245994N-0.135670. \end{equation} \subsection{$M=4$} \begin{figure}[t] \center \includegraphics[scale=0.30]{Fig6.pdf}% \caption{\label{fig6} Internal energy distribution in the $V_{4,4}$ model} \end{figure} \begin{figure}[t] \center \includegraphics[scale=0.30]{Fig7.pdf}% \caption{\label{fig7} Internal energy distribution in the $V_{5,4}$ model} \end{figure} In this case, we also reproduce the results of ref.\cite{Loison00-2} and find a distinct first-order transition for the $V_{4,4}$ model (see fig. \ref{fig6}). A distinct first-order transition occurs also in the $V_{5,4}$ model (fig. \ref{fig7}). However, for the $V_{6,4}$ and $V_{7,4}$ models, we find a weak first-order transition (see Table \ref{TableM4}). The $V_{8,4}$ model has a continuous transition. The simplest fitting of the inverse critical temperature is \begin{equation} K_c\approx 0.250345N-0.169116. \end{equation} \section{Conclusion} \begin{table}[t] \caption{\label{TableOrder}Order of the transition in the $V_{N,M}$ model. Weak I means that we do not observe a double-peak structure of the energy distribution, but $\eta<0$. The lattice size $L$ indicates that a double-peak structure is observed only on lattices of at least this size.} \begin{tabular}{lccc} \hline \hline & $M=2$ & $M=3$ & $M=4$\\ \hline $N=2$ & I, $L\geq8$ & & \\ $N=3$ & \,\, I, $L\geq50$\,\, & I, $L\geq8$ & \\ $N=4$ & weak I & I, $L\geq20$ & I, $L\geq8$ \\ $N=5$ & weak I & \,\, I, $L\geq50$\,\, & \,\, I, $L\geq40$\,\, \\ $N=6$ & II & weak I & weak I \\ $N=7$ & II & II & weak I \\ $N=8$ & II & II & II \\ \hline \hline \end{tabular} \end{table} We performed an extensive numerical investigation of the $V_{N,M}$ model and obtained several rather interesting results. We found that the value of $N_c^+(M,3)$ is smaller than predicted by the perturbative RG and the $4-\varepsilon$ expansion for $M>2$.
Although it may be a coincidence, we found that the transition is of second order precisely in those cases where topological defects are absent. The results of determining the order of the transition are collected in Table \ref{TableOrder}. It would be interesting to compare the values of the critical exponents for $M>2$ with predictions of the non-perturbative RG and the conformal bootstrap program. We also find that for $M\geq2$ the estimates of the exponents and of the marginal dimensionality $N_c^+(M,3)$ lie between the values obtained in the first and second orders of the large-$N$ expansion without resummation. Possibly, resummation of the large-$N$ series would improve the agreement with the results of the numerical analysis, but this requires the third-order corrections, whose calculation is quite a difficult task. \begin{acknowledgments} This work was supported by the Theoretical Physics and Mathematics Advancement Foundation 'BASIS' (project No. 19-1-3-38-1). \end{acknowledgments}
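The fits quoted above admit a quick numerical cross-check: the general $M$-dependent fit $K_c\approx K_1(M)N+K_0(M)$ should reproduce the per-$M$ linear fits at $M=2,3,4$. A minimal sketch (all coefficients copied from the equations in the text):

```python
# Cross-check: general fit K_c ~ K1(M)*N + K0(M) against the per-M linear fits.
def k1(M):
    # slope of the general fit (coefficients from the text)
    return 0.247208 - 0.004056 * M + 0.001239 * M**2

def k0(M):
    # intercept of the general fit (coefficients from the text)
    return 0.020598 - 0.055813 * M + 0.001772 * M**2

# per-M fits quoted in the corresponding subsections: M -> (slope, intercept)
per_m = {2: (0.244812, -0.082677),
         3: (0.245994, -0.135670),
         4: (0.250345, -0.169116)}

for M, (slope, intercept) in per_m.items():
    print(f"M={M}: |dK1| = {abs(k1(M) - slope):.5f}, |dK0| = {abs(k0(M) - intercept):.5f}")
    assert abs(k1(M) - slope) < 1e-3
    assert abs(k0(M) - intercept) < 1e-2
```

The slopes agree to about $10^{-3}$ and the intercepts to about $5\times10^{-3}$, so the two parametrizations are mutually consistent.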
\section{Introduction} Spin-glass models describe magnetic materials with spatially random interactions. The mean-field theory of spin glasses, e.g., the Sherrington--Kirkpatrick model, has been solved rigorously by the full replica symmetry breaking solution~\cite{Parisi,Parisi2,Guerra,Talagrand}; however, it is extremely difficult to obtain analytical results for finite-dimensional models, except on the Nishimori line~\cite{Nishimori}. Although analytical approaches~\cite{ON} to two-dimensional systems have made some progress, three-dimensional systems have been studied almost exclusively by numerical methods. In ferromagnetic spin models, correlation inequalities play an important role in non-perturbative analysis and yield rigorous results for unsolvable models. Correlation inequalities are also valid for the Ising model in a random field. A recent study~\cite{KTZ} proved, based on the Fortuin--Kasteleyn--Ginibre inequality, that the random-field Ising model with two-body interactions has no spin-glass phase for any lattice and any field distribution. It is therefore expected that correlation inequalities will be important for a rigorous analysis of spin-glass models, and their establishment for such models is a very important problem. Several previous studies have addressed correlation inequalities in spin-glass models. Recent work~\cite{CG,CL} showed that the response of the quenched average of the partition function with respect to the variance is generally positive, which is regarded as the counterpart of the Griffiths first inequality for spin glasses. In addition, for various types of bond randomness, including Gaussian and binary distributions, it has been shown that the counterpart of the Griffiths second inequality holds on the Nishimori line~\cite{MNC, Kitatani}.
However, correlation inequalities as general as those for ferromagnetic spin models have not been obtained, and a satisfactory rigorous analysis based on them has yet to be carried out for spin-glass models. In this study, we obtain a lower bound on the quenched average of the local energy for the Ising model with quenched randomness. The result of a previous study~\cite{KNA}, which was limited to a symmetric distribution, is generalized to an asymmetric distribution. Furthermore, as a simple application of the acquired inequality, we obtain correlation inequalities for a Gaussian distribution. We demonstrate that the expectation of the square of the correlation function generally has a finite lower bound at any temperature. Thus, we prove that the spin-glass order parameter has a finite lower bound in the Ising model in a Gaussian random field, regardless of the forms of the other interactions. The organization of this paper is as follows. In Sec. II, we define the model and present the method to obtain the lower bound on the average of the local energy for the Ising model with quenched randomness. In Sec. III, we describe the application of the acquired inequality when the randomness of the interactions follows a Gaussian distribution. Finally, our conclusion is presented in Sec. IV. \section{Lower bound on local energy for asymmetric distribution of randomness} Following Ref. \cite{KNA}, we consider a generic form of the Ising model, \begin{eqnarray} H&=&- \sum_{B \subset{V}} J_{B} \sigma_B , \\ \sigma_B&\equiv& \prod_{i\in B} \sigma_i , \end{eqnarray} where $V$ is the set of sites, the sum over $B$ runs over all the subsets of $V$ in which interactions exist, and the lattice structure is arbitrary. The probability distribution of a random interaction $J_B$ is represented as $P_B (J_B)$.
The probability distributions can be generally different from each other, i.e., $P_B (x)\neq P_{C} (x)$, and are also allowed to have no randomness, i.e., $P_B(J_B)=\delta(J-J_B)$. The correlation function for a set of fixed interactions, $\{J_{B}\}$, is expressed as \begin{eqnarray} \langle\sigma_A \rangle_{\{J_{B}\}}&=& \frac{ \mbox{Tr}\, \sigma_A \exp\left(\beta \sum_{B \subset{V}} J_{B} \sigma_B \right)}{ \mbox{Tr}\, \exp\left(\beta \sum_{B \subset{V}} J_{B} \sigma_B \right)} . \end{eqnarray} The configurational average over the distribution of the randomness of the interactions is written as \begin{eqnarray} \mathbb{E}\left[g(\{J_{B}\}) \right] = \left(\prod_{B \subset{V}} \int_{-\infty}^\infty dJ_B P_B(J_B) \right) g(\{J_{B}\}). \end{eqnarray} For example, the quenched average of the correlation function is obtained as \begin{eqnarray} \mathbb{E}\left[ \langle\sigma_A \rangle_{\{J_{B}\}} \right]&=& \left(\prod_{B \subset{V}} \int_{-\infty}^\infty dJ_B P_B(J_B) \right) \frac{ \mbox{Tr}\, \sigma_A \exp\left(\beta \sum_{B \subset{V}} J_{B} \sigma_B \right)}{ \mbox{Tr}\, \exp\left(\beta \sum_{B \subset{V}} J_{B} \sigma_B \right)} . \end{eqnarray} Our result is the following theorem. \begin{theorem}\label{th1} When the distribution function of the randomness satisfies \begin{eqnarray} P_A(-J_A)&=& \exp(-2\beta_{\mathrm NL} J_A) P_A(J_A), \end{eqnarray} then for any even function $f(J_A)\ge0$, the system defined above satisfies the following inequality: \begin{eqnarray} \mathbb{E}\left[-J_{A} f(J_{A}) \langle\sigma_A \rangle_{\{J_{B}\}} \right] &\ge& \mathbb{E}\left[-J_{A} f(J_{A})\tanh(\beta J_{A}) -J_{A}f(J_{A})(1-e^{-\beta_{\mathrm NL}J_{A}}) \frac{1}{\sinh(2\beta J_{A})} \right] \label{inequality-2} . \end{eqnarray} \end{theorem} We note that the right-hand side of Eq. (\ref{inequality-2}) does not depend on the other interactions. When the distribution function is symmetric, i.e., $P_A(-J_A)=P_A(J_A)$ ($\beta_{\mathrm NL}=0$) and $f(J_A)=1$, Eq.
(\ref{inequality-2}) is reduced to \begin{eqnarray} \mathbb{E}\left[-J_{A} \langle\sigma_A \rangle_{\{J_{B}\}} \right] &\ge& \mathbb{E}\left[-J_{A} \tanh(\beta J_{A}) \right] , \label{pre-ieq} \end{eqnarray} which is in accordance with the result in Ref. \cite{KNA}. In this case, an intuitive explanation of the inequality is possible: the local energy is generally larger than or equal to the energy in the absence of all the other interactions. However, for $\beta_{\mathrm NL}\neq0$, it is difficult to provide an intuitive explanation, because the second term on the right-hand side of Eq. (\ref{inequality-2}) does not have a clear physical interpretation. On the other hand, the right-hand side of Eq. (\ref{inequality-2}) can be rewritten as \begin{eqnarray} &&\mathbb{E}\left[-J_{A} f(J_{A})\tanh(\beta J_{A}) -J_{A}f(J_{A})(1-e^{-\beta_{\mathrm NL}J_{A}}) \frac{1}{\sinh(2\beta J_{A})} \right] \nonumber\\ &=&\mathbb{E}\left[-J_{A}f(J_{A})\tanh(\beta J_{A}) \right] - \int_{0}^\infty dJ_A P_A(J_A) J_{A} f(J_A) e^{-2\beta_{\mathrm NL} J_A} (1-e^{\beta_{\mathrm NL}J_{A}})^2 \frac{1}{\sinh(2\beta J_{A})} \nonumber\\ &\le&\mathbb{E}\left[-J_{A}f(J_{A})\tanh(\beta J_{A}) \right], \end{eqnarray} which suggests that the local energy can be lower than the energy in the absence of all the other interactions, unlike in the case of $\beta_{\mathrm NL}=0$. We also note that the second term on the right-hand side of Eq. (\ref{inequality-2}) is numerically very small. Thus, to establish the bound for $\beta_{\mathrm NL}\neq0$, such a small correction must be considered.
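Theorem \ref{th1} can also be checked numerically on a small, exactly solvable system. The sketch below is purely illustrative: it takes a two-spin system in which two fixed fields play the role of the other interactions $J_B$, draws $J_A$ from a Gaussian distribution with mean $J_0$ and variance $\Lambda^2$ (for which the condition of the theorem holds with $\beta_{\mathrm NL}=J_0/\Lambda^2$), and evaluates both sides of Eq. (\ref{inequality-2}) with $f\equiv1$ by quadrature; all parameter values are arbitrary choices:

```python
import math

# Illustrative two-spin system: H = -J*s1*s2 - h1*s1 - h2*s2, where the fixed
# fields h1, h2 play the role of the other interactions J_B (all values arbitrary).
beta, h1, h2 = 1.0, 0.7, -0.4
J0, Lam = 0.5, 1.0                   # Gaussian J_A with mean J0, variance Lam^2
bnl = J0 / Lam**2                    # beta_NL for this Gaussian distribution

def corr(J):
    """Exact <s1*s2> at fixed coupling J."""
    num = den = 0.0
    for s1 in (1, -1):
        for s2 in (1, -1):
            w = math.exp(beta * (J * s1 * s2 + h1 * s1 + h2 * s2))
            num += s1 * s2 * w
            den += w
    return num / den

def gauss(J):
    return math.exp(-(J - J0) ** 2 / (2 * Lam**2)) / (Lam * math.sqrt(2 * math.pi))

# midpoint rule on [-8, 8]; the grid avoids J = 0, where J/sinh(2*beta*J)
# has only a removable singularity
n, a, b = 4000, -8.0, 8.0
dJ = (b - a) / n
lhs = rhs = 0.0
for i in range(n):
    J = a + (i + 0.5) * dJ
    p = gauss(J) * dJ
    lhs += p * (-J) * corr(J)
    rhs += p * (-J * math.tanh(beta * J)
                - J * (1.0 - math.exp(-bnl * J)) / math.sinh(2 * beta * J))

print(f"lhs = {lhs:.6f} >= rhs = {rhs:.6f}")
assert lhs >= rhs - 1e-9             # Eq. (inequality-2) with f = 1
```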
\begin{proof} {\rm We define $Z(\beta, J_{A})$ and $\langle\sigma_A \rangle_{J_{A}}$ as \begin{eqnarray} Z(\beta, J_{A}) &=& \sum_{\{ \sigma \}} \exp\left(\beta \sum_{B \subset{V}\setminus A} J_{B} \sigma_B +\beta J_{A} \sigma_A \right), \\ \langle\sigma_A \rangle_{J_{A}} &=& \frac{\sum_{\{ \sigma \}} \sigma_A \exp\left(\beta \sum_{B \subset{V}\setminus A} J_{B} \sigma_B +\beta J_{A} \sigma_A \right)}{\sum_{\{ \sigma \}} \exp\left(\beta \sum_{B \subset{V}\setminus A} J_{B} \sigma_B +\beta J_{A} \sigma_A \right)} . \end{eqnarray} We note that $\langle\sigma_A \rangle_{J_{A}}=\langle\sigma_A \rangle_{\{J_{B}\}}$ but $\langle\sigma_A \rangle_{-J_{A}} \neq \langle\sigma_A \rangle_{\{J_{B}\}}$. Subsequently, we obtain \begin{eqnarray} \frac{Z(\beta, J_{A})}{Z(\beta,-J_{A})} &=&\cosh(2\beta J_{A}) + \langle\sigma_A \rangle_{-J_{A}} \sinh(2\beta J_{A}) \nonumber\\ &=&e^{\beta_{\mathrm NL}J_{A}}+ \Gamma(\beta,-J_{A}) \frac{\sinh(2\beta J_{A})}{J_{A}}\ge 0 \label{part-relation-2-1}, \\ \frac{Z(\beta,-J_{A})}{Z(\beta,J_{A})} &=&\cosh(2\beta J_{A}) - \langle\sigma_A \rangle_{J_{A}} \sinh(2\beta J_{A}) \nonumber\\ &=&e^{-\beta_{\mathrm NL}J_{A}}+ \Gamma(\beta, J_{A}) \frac{\sinh(2\beta J_{A})}{J_{A}}\ge 0 \label{part-relation-2-2}, \end{eqnarray} where $\Gamma(\beta,J_{A})$ is defined as \begin{eqnarray} \Gamma(\beta,J_{A}) \equiv -J_{A} \langle \sigma_A \rangle_{J_{A}} +J_{A} \tanh(\beta J_{A}) +(1 -e^{-\beta_{\mathrm NL}J_{A}} )\frac{J_{A}}{\sinh(2\beta J_{A})} .\label{def-Gam} \end{eqnarray} Since Eq. (\ref{part-relation-2-1}) is the reciprocal of Eq. (\ref{part-relation-2-2}), we obtain \begin{eqnarray} e^{-2\beta_{\mathrm NL}J_{A}}\Gamma(\beta,-J_{A}) &=&\frac{-e^{-\beta_{\mathrm NL}J_{A}}\Gamma(\beta, J_{A}) }{e^{-\beta_{\mathrm NL}J_{A}}+ \Gamma(\beta, J_{A}) \frac{\sinh(2\beta J_{A})}{J_{A}}} \label{inverse-relation-2} . \end{eqnarray} Then, from Eq. 
(\ref{def-Gam}), we immediately obtain \begin{eqnarray} &&\mathbb{E}\left[-J_{A} f(J_A)\langle\sigma_A \rangle_{\{J_{B}\}} \right] \nonumber\\ &=& \mathbb{E}\left[f(J_A)\Gamma(\beta, J_{A})-J_{A}f(J_A) \tanh(\beta J_{A}) -J_{A}f(J_A)(1-e^{-\beta_{\mathrm NL}J_{A}} ) \frac{1}{\sinh(2\beta J_{A})} \right] \label{def-st} . \end{eqnarray} Furthermore, for any even function $f(J_A)\ge0$, we obtain $\mathbb{E}\left[ f(J_A) \Gamma(\beta, J_{A}) \right]\ge0$, because \begin{eqnarray} \mathbb{E}\left[ f(J_A)\Gamma(\beta, J_{A}) \right] &=& \int_{-\infty}^\infty dJ_A P_A(J_A) f(J_A) \mathbb{E}\left[ \Gamma(\beta, J_{A}) \right]' \nonumber\\ &=& \int_{0}^\infty dJ_A P_A(J_A) f(J_A) \mathbb{E}\left[ \Gamma(\beta, J_{A}) + \exp(-2\beta_{\mathrm NL}J_A) \Gamma(\beta, -J_{A}) \right]' \nonumber\\ &=& \int_{0}^\infty dJ_A P_A(J_A) f(J_A) \mathbb{E}\left[\frac{ \Gamma^2(\beta, J_{A}) \frac{\sinh(2\beta J_{A})}{J_{A}}}{e^{-\beta_{\mathrm NL}J_{A}}+ \Gamma(\beta, J_{A}) \frac{\sinh(2\beta J_{A})}{J_{A}}} \right]' \nonumber\\ &\ge&0 , \label{positive2} \end{eqnarray} where $\mathbb{E}[\cdots]'$ denotes the configurational average over the randomness of the interactions other than $J_A$. We used Eq. (\ref{inverse-relation-2}) in the third identity and Eq. (\ref{part-relation-2-2}) in the last inequality. Thus, Eqs. (\ref{def-st}) and (\ref{positive2}) yield Eq. (\ref{inequality-2}). } \hfill $\Box$\end{proof} \section{Application to Gaussian spin-glass model} In this section, we present the application of Eq. (\ref{inequality-2}) to a spin-glass model with a Gaussian distribution. We note that the result for $\beta_{\mathrm NL}=0$ in Ref. \cite{KNA} is sufficient to obtain the inequalities that are presented in this section. First, we consider the case where the distribution is symmetric about $J_{0,A}$, i.e., $P_A(J_{0,A}-J_A)= P_A(J_{0,A}+J_A)$.
Then, we obtain the following result: \begin{corollary} When the distribution function of the randomness satisfies \begin{eqnarray} P_A(J_{0,A}-J_A)&=& P_A(J_{0,A}+J_A), \end{eqnarray} then for any even function $f(J_A)\ge0$, the system defined above satisfies the following inequality: \begin{eqnarray} \mathbb{E}\left[\left(J_{0,A}-J_{A}\right) f(J_{A}-J_{0,A}) \langle\sigma_A \rangle_{\{J_{B}\}} \right] &\ge& \mathbb{E}\left[\left(J_{0,A}-J_{A}\right)f(J_{A}-J_{0,A}) \tanh(\beta(J_{A}-J_{0,A})) \right] \label{inequality-1} . \end{eqnarray} \end{corollary} \begin{proof}{} {\rm We regard $P_A(J_{0,A}+J_A)$ as a new probability distribution $P_A'(J_A)$, where $P_A'(J_A)$ is symmetric. Therefore, using Eq. (\ref{inequality-2}) for $\beta_{\mathrm NL}=0$, we obtain \begin{eqnarray} \mathbb{E}\left[\left(J_{0,A}-J_{A}\right) f(J_{A}-J_{0,A}) \langle\sigma_A \rangle_{\{J_{B}\}} \right] &=& \int_{-\infty}^\infty dJ_A P_A(J_{0,A}+J_A) \mathbb{E}\left[-J_{A}f(J_{A}) \langle\sigma_A \rangle_{J_A+J_{0,A}} \right]' \nonumber\\ &\ge&\int_{-\infty}^\infty dJ_A P_A(J_{0,A}+J_A) \left( -J_{A} \right) f(J_A) \tanh(\beta J_{A}) \nonumber\\ &=&\mathbb{E}\left[\left(J_{0,A}-J_{A}\right)f(J_{A}-J_{0,A}) \tanh(\beta(J_{A}-J_{0,A})) \right] . \end{eqnarray} } \hfill $\Box$\end{proof} In the following, using Eq. (\ref{inequality-1}), we obtain several inequalities. \subsection{Correlation inequality for Gaussian spin-glass model} Next, we consider the case where all the interactions follow a Gaussian distribution with mean $J_{0,B}$ and variance $\Lambda_{B}^2$. All the $J_{0,B}$ and $\Lambda_{B}^2$ may take different values. We denote the configurational average over the distribution of the randomness of the interactions as $\mathbb{E}\left[\cdots \right]_{\left\{J_{0,B},\Lambda_{B}^2 \right\}}$.
Then, we obtain the following result: \begin{corollary} For the quenched average of the square of the correlation function, we obtain a lower bound, \begin{eqnarray} \mathbb{E}\left[ \tanh^2(\beta J_{A}) \right]_{\left\{0,\Lambda_{A}^2 \right\}} \le \mathbb{E}\left[ \langle\sigma_A \rangle_{\{J_{B} \}}^2 \right]_{\left\{J_{0,B},\Lambda_{B}^2 \right\}} \label{corr-ineq} . \end{eqnarray} \end{corollary} We note that the left-hand side of Eq. (\ref{corr-ineq}) is independent of the means $\{J_{0,B}\}$. Inequality (\ref{corr-ineq}) indicates that the expectation of the square of the correlation function is generally a finite non-zero value, regardless of the other interactions. This behavior is quite different from that of the correlation functions of ferromagnetic models and may reflect the fact that the counterpart of the Griffiths second inequality has not been established in spin-glass models~\cite{CUV}. \begin{proof}{} {\rm For the Gaussian distribution with mean $J_{0,B}$ and variance $\Lambda_{B}^2$, and $f(J_{A})=1$, Eq. (\ref{inequality-1}) is reduced to \begin{eqnarray} \mathbb{E}\left[(J_{0,A}-J_{A})\langle\sigma_A \rangle_{\{ J_{B}\}} \right]_{\left\{J_{0,B},\Lambda_{B}^2 \right\}}&=&\mathbb{E}\left[ -J_{A} \langle\sigma_A \rangle_{\{J_{B}+J_{0,B}\}} \right]_{\left\{0,\Lambda_{B}^2 \right\}} \nonumber\\ &\ge& \mathbb{E}\left[ - J_{A}\tanh(\beta J_{A}) \right]_{\left\{0,\Lambda_{B}^2 \right\}} . \end{eqnarray} Furthermore, conducting integration by parts, we obtain Eq. (\ref{corr-ineq}). } \hfill $\Box$\end{proof}{} A similar calculation is possible for higher-order terms. Taking $f(J_{A})=J_{A}^2$ in Eq.
(\ref{inequality-1}), we obtain \begin{eqnarray} \mathbb{E}\left[ -(J_{A}-J_{0,A})^3 \langle\sigma_A \rangle_{\{J_{B}\}} \right]_{\left\{J_{0,B},\Lambda_{B}^2 \right\}}&=&\mathbb{E}\left[ -J_{A}^3 \langle\sigma_A \rangle_{\{J_{B}+J_{0,B}\}} \right]_{\left\{0,\Lambda_{B}^2 \right\}} \nonumber\\ &\ge& \mathbb{E}\left[ -J_{A}^3 \tanh(\beta J_{A}) \right]_{\left\{0,\Lambda_{B}^2 \right\}} . \end{eqnarray} Conducting integration by parts and using Eq. (\ref{corr-ineq}), we obtain the lower bound on the expectation of the fourth power of the correlation function, \begin{eqnarray} \mathbb{E}\left[ \frac{1}{6\beta^2\Lambda_{A}^2} \left(8\beta^2\Lambda_{A}^2 -3 \right) \left(\langle\sigma_A \rangle_{\{J_{B}+J_{0,B} \}}^2-\tanh^2(\beta J_{A}) \right) + \tanh^4(\beta J_{A}) \right]_{\left\{0,\Lambda_{A}^2 \right\}} \le \mathbb{E}\left[ \langle\sigma_A \rangle_{\{J_{B} \}}^4 \right]_{\left\{J_{0,B},\Lambda_{B}^2 \right\}}. \label{4th-corr-ineq} \end{eqnarray} For $8\beta^2\Lambda_{A}^2 \ge 3$, Eqs. (\ref{corr-ineq}) and (\ref{4th-corr-ineq}) yield \begin{eqnarray} \mathbb{E}\left[ \tanh^4(\beta J_{A}) \right]_{\left\{0,\Lambda_{A}^2 \right\}} \le \mathbb{E}\left[ \langle\sigma_A \rangle_{\{J_{B} \}}^4 \right]_{\left\{J_{0,B},\Lambda_{B}^2 \right\}}. \end{eqnarray} Thus, for sufficiently low temperatures, the quenched average of the fourth power of the correlation function has a non-zero lower bound. \subsection{Lower bound on spin-glass order-parameter in Gaussian random-field Ising model} Finally, we demonstrate that the spin-glass order-parameter in the Ising model in a Gaussian random field generally takes a non-zero value at any temperature, regardless of the forms of the other interactions. We consider the case where a random field $h_i$ is applied independently at every site, with each $h_i$ drawn from a Gaussian distribution with mean $J_{0}$ and variance $\Lambda_{}^2$.
The Hamiltonian is obtained as \begin{eqnarray} H&=&- \sum_{B \subset{V}} J_{B} \sigma_{B } \nonumber\\ &=&- \sum_{B\subset{V}\setminus{\{h_i\}}} J_{B} \sigma_{B}- \sum_{i=1}^N h_i \sigma_i , \label{random-field-system} \end{eqnarray} where the interactions $J_{B}$ other than the random fields $\{h_i\}$ may take any form. Then, Eq. (\ref{corr-ineq}) is reduced to \begin{eqnarray} \mathbb{E}\left[ \tanh^2(\beta h_i) \right]_{\left\{0,\Lambda^2 \right\}} \le \mathbb{E}\left[ \langle\sigma_i \rangle_{\{J_{B} \}}^2 \right]_{\left\{J_{0},\Lambda_{}^2 \right\}} , \end{eqnarray} which suggests that the quenched average of the square of the local magnetization has a non-zero value. Furthermore, because the same inequality holds for all the sites, we obtain the following result: \begin{corollary} For the spin-glass order-parameter, $q$, \begin{eqnarray} q=\frac{1}{N} \sum_i \mathbb{E}\left[ \langle\sigma_i \rangle_{\{J_{B} \}}^2 \right]_{\left\{J_{0},\Lambda_{}^2 \right\}}, \end{eqnarray} the system (\ref{random-field-system}) satisfies the following inequality: \begin{eqnarray} \mathbb{E}\left[ \tanh^2(\beta h_i) \right]_{\left\{0,\Lambda^2 \right\}} \le q. \label{sg-order-finite} \end{eqnarray} \end{corollary} Thus, when a Gaussian random field is applied, the spin-glass order-parameter generally has a non-zero lower bound. In ferromagnetic models, the ferromagnetic order parameter, i.e., the magnetization, has a finite value when a magnetic field is applied. Equation (\ref{sg-order-finite}) suggests that a similar phenomenon occurs in the Ising model in a Gaussian random field. This is a natural consequence; however, the existence of a finite lower bound is not obvious. In addition, we note that Eq. (\ref{sg-order-finite}) does not indicate that there is a spin-glass phase in the Ising model in a Gaussian random field. \section{Conclusions} In this study, we have obtained the lower bound on the local energy of the Ising model with quenched randomness.
We emphasize that the acquired inequality (\ref{inequality-2}) is independent of the other interactions. Our result is a natural generalization of Ref. \cite{KNA}, in which a symmetric distribution was considered. Applying the obtained inequality to a Gaussian spin-glass model, we find that the expectation of the square of the correlation function generally has a finite lower bound at any temperature. Thus, the spin-glass order-parameter in the Ising model in a Gaussian random field generally takes a non-zero value at any temperature, which is a natural but not an obvious result. It is an interesting question whether an inequality similar to Eq. (\ref{corr-ineq}) holds for a general distribution function of the random interactions. Our proof relies on properties of the Gaussian distribution, and we have not obtained proofs for other distributions. \begin{acknowledgment} The authors thank Shuntaro Okada for useful discussions. The present work was financially supported by JSPS KAKENHI Grant Nos. 18H03303, 19H01095, and 19K23418, and JST-CREST grant (No. JPMJCR1402) of the Japan Science and Technology Agency. \end{acknowledgment}
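As an illustration of Eq. (\ref{sg-order-finite}), the bound can be verified numerically on a toy system: a two-spin ferromagnet in independent Gaussian random fields, with the configurational average evaluated by quadrature. The coupling and field parameters below are arbitrary choices:

```python
import math

# Toy check of E[tanh^2(beta*h)] <= q: two spins, fixed coupling J12, and
# independent Gaussian random fields h1, h2 (all parameter values arbitrary).
beta, J12 = 1.0, 0.8
J0, Lam = 0.3, 1.0

def local_mag(h1, h2, site):
    """Exact <s_site> for fixed fields (h1, h2)."""
    num = den = 0.0
    for s1 in (1, -1):
        for s2 in (1, -1):
            w = math.exp(beta * (J12 * s1 * s2 + h1 * s1 + h2 * s2))
            num += (s1 if site == 1 else s2) * w
            den += w
    return num / den

def gauss(x):
    return math.exp(-(x - J0) ** 2 / (2 * Lam**2)) / (Lam * math.sqrt(2 * math.pi))

# midpoint grids for the configurational averages
n, a, b = 120, -6.0, 6.0
d = (b - a) / n
pts = [a + (i + 0.5) * d for i in range(n)]

bound = sum(gauss(x) * d * math.tanh(beta * x) ** 2 for x in pts)
q = 0.0
for x1 in pts:
    for x2 in pts:
        w = gauss(x1) * d * gauss(x2) * d
        q += w * 0.5 * (local_mag(x1, x2, 1) ** 2 + local_mag(x1, x2, 2) ** 2)

print(f"bound = {bound:.4f} <= q = {q:.4f}")
assert bound <= q + 1e-6             # Eq. (sg-order-finite)
```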
\section{Introduction} Suppose $h,k\ge 2$ are positive integers. The Lawson cone $M_{kh}$ is the level set \begin{align*} M_{kh}=\left\{z=(x,y)\in \R^k\times \R^h:\frac{|x|}{\sqrt{k-1}}=\frac{|y|}{\sqrt{h-1}}\right\}. \end{align*} It is known to be area-minimizing (see \cite{c1}, \cite{c2}, \cite{c3}, and \cite{c4}) provided \begin{align}\label{lc} h+k\ge 9,\text{ or }(h,k)=(3,5),(4,4),(5,3). \end{align} In their paper \cite{phi}, G. De Philippis and F. Maggi proved global quadratic stability inequalities and derived explicit lower bounds for the first eigenvalues of the stability operators for all area-minimizing Lawson cones $M_{kh}$, except for \begin{align*} (h,k),(k,h)\in S=\{&(3,5),(2,7),(2,8),(2,9),(2,10),(2,11)\}. \end{align*} They achieved this by exploiting sub-calibrations for Lawson cones. Unfortunately, the sub-calibrations that they used did not work for the cones $M_{kh}$ with $(h,k),(k,h)\in S.$ Our main results, Theorem 1 and Theorem 2 in Section 1.1, extend these inequalities to the cones $M_{kh}$ with $(h,k),(k,h)\in S.$ We achieve this by carefully choosing sub-calibrations for these Lawson cones in Lemma 2 of Section 2.1. However, our sub-calibrations do not work for the other cases in general. We first review their results and explain their methods, which we mostly follow. Consider a variation with compact support of the Lawson cone $M_{kh}.$ Suppose the variation can be realized as the boundary of a set $F$ of finite perimeter. Roughly speaking, their first result controls the volume bounded between the Lawson cone and the variation $\pd F$ by the difference between the area of the variation $\pd F$ and that of the cone $M_{kh}$, up to scaling. Their second result provides lower bounds for the first eigenvalues of the stability operators. For a detailed discussion of the significance of these results, please refer to Section 1 of \cite{phi}.
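Since area-minimizing hypersurfaces are in particular minimal, $M_{kh}$ has vanishing mean curvature away from the origin. As a purely numerical illustration (not used in the proofs), one can write $M_{kh}$ as the zero set of the smooth function $G(x,y)=(h-1)|x|^2-(k-1)|y|^2$ and check by finite differences that $\di(\na G/|\na G|)$ vanishes on the cone but not off it; the choice $(k,h)=(3,5)$ and the test points below are arbitrary:

```python
import math

k, h = 3, 5                      # a Lawson cone with (h, k) in S; any k, h >= 2 works
m = k + h

def gradG(z):
    # G(x, y) = (h-1)|x|^2 - (k-1)|y|^2 vanishes exactly on M_kh
    return [2 * (h - 1) * z[i] if i < k else -2 * (k - 1) * z[i] for i in range(m)]

def nu(z):
    g = gradG(z)
    n = math.sqrt(sum(c * c for c in g))
    return [c / n for c in g]

def div_nu(z, eps=1e-5):
    # central finite differences for div(grad G / |grad G|), which agrees
    # (up to normalization) with the mean curvature of the level set
    total = 0.0
    for i in range(m):
        zp, zm = list(z), list(z)
        zp[i] += eps
        zm[i] -= eps
        total += (nu(zp)[i] - nu(zm)[i]) / (2 * eps)
    return total

# on the cone: |x|/sqrt(k-1) = |y|/sqrt(h-1); off the cone: |y| too large
z_on = [math.sqrt(k - 1), 0, 0, math.sqrt(h - 1), 0, 0, 0, 0]
z_off = [math.sqrt(k - 1), 0, 0, 3.0, 0, 0, 0, 0]
print(div_nu(z_on), div_nu(z_off))
assert abs(div_nu(z_on)) < 1e-4      # vanishes on M_kh
assert abs(div_nu(z_off)) > 1e-2     # does not vanish away from M_kh
```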
The Lawson cone $M_{kh}$ can be realized as the boundary $\pd K_{kh}$ of the region \begin{align*} K_{kh}=&\left\{(x,y)\in \R^k\times \R^h:\frac{|x|}{\sqrt{k-1}}<\frac{|y|}{\sqrt{h-1}}\right\}. \end{align*} Let $\lb^m$ denote the $m$-dimensional Lebesgue measure, $\om_n$ denote the volume of unit $n$-ball, and $P(A;B)$ denote the perimeter of $A$ in $B.$ Their results are as follows. \begin{resu}(Theorem 5 in \cite{phi}) If $R>0,m=h+k,(h,k)\not\in S$ satisfy all the conditions in (1), then \begin{align*} \left(\frac{\lb^m(K_{kh}\De F)}{R^m}\right)^2\le C\frac{P(F;H_R)-P(K_{kh}; H_R)}{R^{m-1}}, \end{align*}whenever $F$ is a set of locally finite perimeter with symmetric difference $K_{kh}\De F\s \s H_R= B_R^k\times B_R^h.$ Possible values of $C$ are \begin{align*} C=&\frac{2^{12}\sqrt{\om_k\om_h}}{(k-1)^{1/8}}\sqrt{\frac{hk}{m-1}}(\frac{h-1}{k-1})^{3/2},\text{ if }2\le k\le h, (k,h)\not=(4,4),\\&\text{Interchange }k,h\text{ if }2\le h\le k.\\ C=&128\om_4,\text{ if }(k,h)=(4,4). \end{align*} \end{resu} \begin{resu}(Theorem 2 in \cite{phi}) If $R,m,h,k$ are as in Result 1, and \begin{align*} \lam_{k,h}(R)=\inf\bigg\{\int_{M_{kh}}^{}|\na^{M_{kh}}\vp|^2-|\textnormal{\II}_{{M_{kh}}}|^2\vp^2 d\mathcal{H}^{m-1}:\int_{M_{kh}}\vp^2=1,\textnormal{spt}\vp\s\s B^m_R\bigg\}, \end{align*}then \begin{align*} \lam_{k,h}(R)\ge \frac{c_{k,h}}{R^2}. \end{align*} Possible values of $c_{k,h}$ are \begin{align*} c_{k,h}=&\frac{1}{2^9}\bigg(\frac{k-1}{h-1}\bigg)^{9/4}\frac{(m-2)^{1/2}}{(h-1)^{1/4}},\text{ if }2\le k\le h, (k,h)\not=(4,4).\\&\text{Interchange }k,h\text{ if }2\le h\le k.\\ c_{k,h}=&\frac{\sqrt{2}}{16},\text{ if }(k,h)=(4,4). \end{align*} \end{resu} As illustrated in Figure 1, their method is based on sub-calibrating the Lawson cones with a unit-length vector field $g.$ In other words, the vector field $g$ restricts to the unit normal on $M_{kh}$, and the divergence $\di g$ does not change sign in $K_{kh}$ and $K_{kh}\cp$, respectively. 
\begin{figure}[h] \centering \includegraphics[width=1\linewidth]{drawing} \caption{A sub-calibration $g$ of the Lawson cone $M_{kh}$ and a variation.} \label{fig:drawing} \end{figure} After cleverly choosing $g$, they proved that \begin{align} |\di g(z)|\ge c_{k,h}\frac{\dist(z,M_{kh})}{|z|^2}, \end{align}where $\dist$ is the Euclidean distance. Then they exploit inequality (2) to deduce the desired results. For a beautiful discussion of sub-calibrations (also called quantitative calibrations), please refer to their paper \cite{phi}. Unfortunately, the sub-calibrations they used did not work for $(h,k)$ or $(k,h)\in S$. The main results of this paper extend their stability inequalities to include those $(k,h).$ We achieve this by using sub-calibrations inspired by \cite{c4}. \subsection{Stability Inequalities Extended to $(h,k)$ or $(k,h)\in S$} \begin{thm} If $R>0,$ $m=h+k,$ and $(h,k)$ or $(k,h)\in S,$ then \begin{align*} \left(\frac{\lb^m(K_{kh}\De F)}{R^m}\right)^2\le C\frac{P(F;H_R)-P(K_{kh}; H_R)}{R^{m-1}}, \end{align*}whenever $F$ is a set of locally finite perimeter with $K_{kh}\De F\s \s B_R^k\times B_R^h.$ A possible value of $C$ is $7^2\times 12^2\times 10^{20}.$ \end{thm} \begin{thm} If $R,m,h,k$ are as in Theorem 1, and \begin{align*} \lam_{k,h}(R)=\inf\bigg\{\int_{M_{kh}}^{}|\na^{M_{kh}}\vp|^2-|\textnormal{\II}_{{M_{kh}}}|^2\vp^2 d\mathcal{H}^{m-1}:\int_{M_{kh}}\vp^2=1,\textnormal{spt}\vp\s\s B^m_R\bigg\}, \end{align*}then \begin{align*} \lam_{k,h}(R)\ge \frac{c_{k,h}}{R^2}. \end{align*} Possible values of $c_{k,h}$ are \begin{align*} c_{3,5}=c_{5,3}&=\frac{\sqrt{3}}{21^3},\\c_{k,2}=c_{2,h}&=\frac{\sqrt{11}}{11^6}, \end{align*} for $k,h=7,8,9,10,11.$ \end{thm} \section{Proof of the Theorems} We now prove, in order, Theorem 2 and Theorem 1. 
By the symmetry of Lawson cones, it suffices to prove the cases with $(h,k)\in S.$ The following lemma is the basic tool to extract information from the sub-calibrations $g.$ \begin{lem} If $m\ge 2,$ $E$ is of locally finite perimeter in $\R^m,$ and $g\in W^{1,1}_{\textnormal{loc}}(\R^m,\R^m),$ \begin{align*} |g|&\le 1\text{ on }\R^m,\\ \di g&\ge 0,\text{ a.e. on }E^c,\\ \di g&\le 0,\text{ a.e. on }E,\\ g&=\nu_E,\text{ }\hi^{m-1}-\text{a.e. on }\pd_{1/2}E, \end{align*}then $E$ is a local minimizer of the perimeter in $\R^m,$ with \begin{align}\label{va} P(F;A)-P(E;A)=\int_{E\De F}|\di g|+\int_{A\cap \pd_{1/2}F}1-(g\ip \nu_F)d\hi^{m-1}, \end{align} whenever $F$ is a set of locally finite perimeter and $A$ is a bounded open set with $E\De F\s\s A.$ \end{lem} Here $\hi^{m-1}$ is the $(m-1)$-dimensional Hausdorff measure, and $\nu_E$ is the outward-pointing unit normal. If $|E|$ denotes the $\lb^m$-volume of a set $E$, then \begin{align*} \pd_{1/2}E=\{x\in\R^m:\lim_{r\to 0^+}\frac{|E\cap B(x,r)|}{\om_mr^m}=\frac{1}{2}\}, \end{align*}is the set of points of density $1/2$ in $E.$ For a proof of Lemma 1 and details about $\pd_{1/2}E$, please refer to the proof of Proposition 4.1 in \cite{phi} and the relevant discussions on page 416 in \cite{phi}. Roughly speaking, Lemma 1 can be proved by breaking down the integration definition of perimeter and then using the divergence theorem. The left-hand side of (\ref{va}) is a variation of area, so by Taylor expansion and a suitable choice of the variation $F$ it provides information about the second variation. The key to using this information is to find vector fields $g$ that satisfy inequality (2) in Section 1. \subsection{Sub-calibrations for $M_{kh}$ with $(h,k)\in S$} \begin{lem} For $E=K_{kh},$ the vector field $$g=\frac{\na f}{|\na f|}$$ satisfies all the hypotheses in Lemma 1. 
The function $f$ for $(h,k)=(3,5)$ is \begin{align*} f(x,y)=\begin{cases}\ds \frac{(h-1)|x|^2-(k-1)|y|^2}{4}((h-1)|x|)^{3/2},&\text{if }z\in K_{kh}\cp,\\ \ds\frac{(h-1)|x|^2-(k-1)|y|^2}{4}((k-1)|y|)^{3/2},&\text{if }z\in K_{kh}, \end{cases} \end{align*} and the functions $f$ for $(h,k)=(2,k)$ with $k=7,8,9,10,11$ are \begin{align*} f(x,y)=\begin{cases} \ds\frac{(h-1)|x|^2-(k-1)|y|^2}{4}((h-1)|x|)^{3},&\text{if }z\in K_{kh}\cp,\\ \ds\frac{(h-1)|x|^2-(k-1)|y|^2}{4}((k-1)|y|)^{2},&\text{if }z\in K_{kh}. \end{cases} \end{align*}Note that $z\in K_{kh}\cp$ corresponds to $(h-1)|x|^2>(k-1)|y|^2,$ which is the region where $\di g$ turns out to be nonnegative. Moreover, $g$ also satisfies \begin{align*} |\di g|\ge\frac{c_{k,h}}{|z|^2}\dist(z,M_{kh}), \end{align*} with values of $c_{k,h}$ the same as in Theorem 2. \end{lem} The proof of Lemma 2 is left to Section 3. The sub-calibrations we choose work well for $(h,k)\in S,$ but do not work for some other Lawson cones. In some sense, these are specifically chosen to cover the cases $(h,k)\in S.$ \subsection{Proof of Theorem 2} By Lemma 1 and Lemma 2, we have \begin{align*} P(F;H_R)-P(K_{kh};H_R)\ge& \int_{K_{kh}\De F}|\di g|\\\ge& c_{k,h}\int_{K_{kh}\De F}\frac{\dist(z,M_{kh})}{|z|^2}dz\\\ge& \frac{c_{k,h}}{R^2}\int_{K_{kh}\De F}\dist(z,M_{kh})dz, \end{align*} where in the last step we use that $K_{kh}\De F\s B_R^m$ for the variations constructed below. Now, suppose $\vp\in C^1(M_{kh}),$ with $0\not\in\text{spt}\vp\s\s B_R^m.$ For $|t|\le t_0$ with $t_0>0$ small enough, there exists an open set $F\s \R^m$ with $\pd F-\{0\}$ a $C^1$ hypersurface and $K_{kh}\De F\s\s H_R,$ such that \begin{align*} \pd F-\{0\}=\{z+t\vp(z)\nu_{K_{kh}}(z):z\in M_{kh}-\{0\}\}. \end{align*} By second variation and Taylor expansion, we have \begin{align*} P(F;H_R)-P(K_{kh};H_R)=\frac{t^2}{2}\int_{M_{kh}}|\na^{M_{kh}}\vp|^2-|\II_{M_{kh}}|^2\vp^2 d\hi^{m-1}+O(t^3). \end{align*} Calculating the integral directly by pulling back the volume form on $\R^m$, we have \begin{align*} \int_{K_{kh}\De F}\dist(z,M_{kh})dz&=(1+O(t))\int_{M_{kh}}^{}d\hi^{m-1}(z)\int_0^{t|\vp(z)|}sds\\&=\frac{t^2}{2}\int_{M_{kh}}\vp^2d\hi^{m-1}+O(t^3). \end{align*} For details, please refer to Lemma 3.1 in \cite{phi}. 
Putting these two together and letting $t\to 0,$ we deduce that \begin{align} \int_{M_{kh}}|\na^{M_{kh}}\vp|^2-|\II_{M_{kh}}|^2\vp^2 d\hi^{m-1}\ge \frac{c_{k,h}}{R^2}\int_{M_{kh}}\vp^2d\hi^{m-1}. \end{align} To extend (4) to all $\vp\in C^1(M_{kh})$ with $\textnormal{spt}\vp\s\s B_R^m,$ apply it to $\vp(1-\psi_j),$ where $\psi_j$ is a sequence of cut-off functions so that $\textnormal{spt}\psi_j\s B_{2/j}^m$ and $\psi_j=1$ on $B_{1/j}^m$ with $|D\psi_j|\le C_mj$ everywhere, where $C_m$ is a positive constant depending only on $m$. We know that $\hi^{m-1}(M_{kh}\cap B_r^m)\le c(m)r^{m-1}$ for some constant $c(m)$ depending only on $m$ and $|\II_{M_{kh}}|\le \frac{C}{|z|}$ for some constant $C$ depending only on $k,h.$ Combining these estimates, we can see that the integrand on the left-hand side of (4) is dominated by $O(\frac{1}{|z|^2}),$ and thus the error terms produced by the cut-off vanish as $j\to\infty$. Let $j\to\infty$ and use dominated convergence. We deduce that (4) is true for all such $\vp.$\qed \subsection{Proof of Theorem 1} Define \begin{align*} p(z)=&\bigg|\frac{|x|}{\sqrt{k-1}}-\frac{|y|}{\sqrt{h-1}}\bigg|. \end{align*} By Lemma 1 and Lemma 2, we have \begin{align*} |K_{kh}\De F|&\le |(K_{kh}\De F)\cap\{p>\e\}|+|H_R\cap\{p<\e\}|\\ &\le \int_{(K_{kh}\De F)\cap\{p>\e\}}\frac{p(z)}{\e}\frac{2R^2}{|z|^2}dz+|H_R\cap\{p<\e\}|\\ &=\frac{2lR^2}{\e} \int_{(K_{kh}\De F)\cap\{p>\e\}}\frac{\dist(z,M_{kh})}{|z|^2}dz+|H_R\cap\{p<\e\}|\\ &\le \frac{2lR^2}{c_{k,h}\e}\int_{(K_{kh}\De F)\cap\{p>\e\}}|\di g|dz+|H_R\cap\{p<\e\}|\\ &\le \frac{2lR^2}{c_{k,h}\e}\bigg(P(F;H_R)-P(K_{kh};H_R)\bigg)+|H_R\cap\{p<\e\}|, \end{align*} where $l=\sqrt{\frac{1}{h-1}+\frac{1}{k-1}}$ by elementary geometry, and in the second step we used that $p/\e\ge 1$ and $2R^2/|z|^2\ge 1$ on $\{p>\e\}\cap H_R.$ Now, we need to get a suitable upper bound for $|H_R\cap\{p<\e\}|$. 
We have \begin{align*} |H_R\cap\{p<\e\}|=&\int_{B_R^h}\hi^k\bigg(\bigg\{x\in B_R^k:\frac{|y|}{\sqrt{h-1}}-\e<\frac{|x|}{\sqrt{k-1}}<\frac{|y|}{\sqrt{h-1}}+\e\bigg\}\bigg)dy\\ &\le \om_k(k-1)^{k/2}\int_{B_R^h}\bigg(\frac{|y|}{\sqrt{h-1}}+\e\bigg)^k-\bigg(\frac{|y|}{\sqrt{h-1}}-\e\bigg)_+^kdy. \end{align*} We can break down the estimate into two parts, namely \begin{align*} &\int_{B_{\e\sqrt{h-1}}^h}\bigg(\frac{|y|}{\sqrt{h-1}}+\e\bigg)^k-\bigg(\frac{|y|}{\sqrt{h-1}}-\e\bigg)_+^kdy\\ =&\int_{B_{\e\sqrt{h-1}}^h}\bigg(\frac{|y|}{\sqrt{h-1}}+\e\bigg)^kdy\\ \le&2^k\e^{h+k}\om_h(h-1)^{h/2}, \end{align*} and \begin{align*} & \int_{B_R^h\backslash B_{\e\sqrt{h-1}}^h}\bigg(\frac{|y|}{\sqrt{h-1}}+\e\bigg)^k-\bigg(\frac{|y|}{\sqrt{h-1}}-\e\bigg)^k_+dy\\ =&\int_{B_R^h\backslash B_{\e\sqrt{h-1}}^h}\bigg(\frac{|y|}{\sqrt{h-1}}+\e\bigg)^k-\bigg(\frac{|y|}{\sqrt{h-1}}-\e\bigg)^kdy\\ \le&\frac{1}{(h-1)^{k/2}}\int_{B_R^h\backslash B_{\e\sqrt{h-1}}^h}|y|^k\bigg(\bigg(1+\frac{\e\sqrt{h-1}}{|y|}\bigg)^k-\bigg(1-\frac{\e\sqrt{h-1}}{|y|}\bigg)^k\bigg)dy\\ \le& \frac{2^k}{(h-1)^{k/2}}\int_{B_R^h\backslash B_{\e\sqrt{h-1}}^h}|y|^k\frac{\e\sqrt{h-1}}{|y|}dy\\ \le& \frac{2^k\e}{(h-1)^{(k-1)/2}}\int_{\e\sqrt{h-1}}^{R}r^{k-1}\hi^{h-1}(S_r^{h-1})dr\\ \le&\frac{2^kh\om_h\e}{(h-1)^{(k-1)/2}}\int_{\e\sqrt{h-1}}^Rr^{k+h-2}dr\\ \le&\frac{2^kh\om_h\e}{(h-1)^{(k-1)/2}(m-1)}R^{m-1}, \end{align*}where we use $(1+t)^k-(1-t)^k\le 2^kt$ for $t\in(0,1),k\in \N.$ Combining the two parts, we have \begin{align*} |H_R\cap\{p<\e\}|\le &2^k\om_k\om_h(k-1)^{k/2}(h-1)^{h/2}\e\bigg(\e^{m-1}+\frac{hR^{m-1}}{(h-1)^{(m-1)/2}(m-1)}\bigg). \end{align*} Now, note that $\om_j<6$ for all $2\le j\le 11,$ so by substituting the explicit values for $c_{k,h},$ we have \begin{align} |K_{kh}\De F|\le& \frac{3\times 11^5\sqrt{11}R^2}{\e}\bigg(P(F;H_R)-P(K_{kh};H_R)\bigg)\nonumber\\&+2^{11}6^210^{11/2}2^{3/2}\e(\e^{m-1}+\frac{1}{2}R^{m-1}) \\ \le& 7\times 
10^{10}\bigg(\frac{R^2}{\e}\big(P(F;H_R)-P(K_{kh}; H_R)\big)+\e(\e^{m-1}+R^{m-1})\bigg). \end{align} Let \begin{align*} \ai&=\frac{\lb^m(K_{kh}\De F)}{R^m},\\ \de&=\frac{P(F;H_R)-P(K_{kh};H_R)}{R^{m-1}}. \end{align*} Note that $\ai\le R^{-m}\lb^m(H_R)=\om_k\om_h\le 6^2.$ If $\de\ge 6^2,$ then $\ai\le\om_k\om_h\le 6\sqrt{\de}.$ Thus we assume $\de \le 6^2.$ Inequality (6) implies \begin{align} \ai\le 7\times 10^{10}\bigg(\frac{R}{\e}\de+\frac{\e}{R} \big((\e/R)^{m-1}+1\big)\bigg). \end{align} If $\e<\sqrt[13]{35}R,$ then inequality (7) implies \begin{align*} \ai\le 7\times 10^{10}(\frac{R}{\e}\de+36\frac{\e}{R} ). \end{align*} Note that \begin{align*} \frac{R}{\e}\de+36\frac{\e}{R}\ge 12\sqrt{\de}, \end{align*} with equality if and only if $\e=R\sqrt{\frac{\de}{36}}.$ Since $\frac{\de}{36}\le1,$ we can let $\e=R\sqrt{\frac{\de}{36}},$ and deduce that \begin{align*} \ai\le 7\times 12\times 10^{10}\sqrt{\de}. \end{align*}\qed \section{Proof of Lemma 2} \subsection{Calculating $\di g$ on $K_{kh}\cp$} To make calculations simpler, let $u=(h-1)|x|^2,v=(k-1)|y|^2.$ First, consider the function \begin{align*} f(z)=\frac{1}{4}(u-v)u^d. \end{align*} We have \begin{align*} \pd_{x_i}f=&\frac{h-1}{2}x_i\big((d+1)u^d-dvu^{d-1}\big),\\ \pd_{x_i}\pd_{x_j}f=&\frac{h-1}{2}\de_{ij}\big((d+1)u^d-dvu^{d-1}\big)+(h-1)^2 x_ix_j\big((d+1)du^{d-1}-d(d-1)vu^{d-2}\big),\\ \pd_{y_i}f=&-\frac{k-1}{2}y_iu^d,\\ \pd_{y_j}\pd_{y_i}f=&-\frac{k-1}{2}\de_{ij}u^d,\\ \pd_{y_j}\pd_{x_i}f=&-d(h-1)(k-1)u^{d-1}x_iy_j. 
\end{align*} This gives \begin{align*} |\na f|^2=&\frac{h-1}{4}u\big((d+1)u^d-dvu^{d-1}\big)^2+\frac{k-1}{4}vu^{2d},\\ \De f=&\frac{(h-1)k}{2}\big((d+1)u^d-dvu^{d-1}\big)-\frac{(k-1)h}{2}u^d\\& +(h-1)u\big((d+1)du^{d-1}-d(d-1)vu^{d-2}\big),\\ (\pd_{x_i}f)(\pd_{x_j}f)(\pd_{x_i}\pd_{x_j}f)=&\frac{(h-1)^2}{8}u\big((d+1)u^d-dvu^{d-1}\big)^3\\&+\frac{(h-1)^2}{4}u^2\big((d+1)u^d-dvu^{d-1}\big)^2\big((d+1)du^{d-1}-d(d-1)vu^{d-2}\big),\\ (\pd_{y_i}f)(\pd_{y_j}f)(\pd_{y_i}\pd_{y_j}f)=&-\frac{(k-1)^2}{8}vu^{3d}, \\ (\pd_{x_i}f)(\pd_{y_j}f)(\pd_{x_i}\pd_{y_j}f)=&\frac{(h-1)(k-1)}{4}du^{2d }v\big((d+1)u^{d}-dvu^{d-1}\big).\end{align*} Thus, we have \begin{align*} |\na f|^3\di g=&|\na f|^3\di\frac{\na f}{|\na f|}\\=&|\na f|^2\De f-(\pd_{x_i}f)(\pd_{x_j}f)(\pd_{x_i}\pd_{x_j}f)-(\pd_{y_i}f)(\pd_{y_j}f)(\pd_{y_i}\pd_{y_j}f)\\&-2(\pd_{x_i}f)(\pd_{y_j}f)(\pd_{x_i}\pd_{y_j}f)\\ =&\frac{ (h-1) (k-1)}{8} u^{3d-2} (u - v) \bigg((1 + d)^2 \big(-1 + d (-1 + h)\big) u^2\\& + d (-2 + d (1 + 2 d - 2 (1 + d) h) + k) u v + d^3 (-1 + h) v^2\bigg). \end{align*} \subsection{Calculating $\di g$ on $K_{kh}$} Now, define $$f(z)=\frac{1}{4}(u-v)v^d,$$ which can be obtained by interchanging $u,v$ and $h,k$ and adding an additional minus sign to $f$ in the previous subsection. Thus, by symmetry or by direct computations, we must have \begin{align*} |\na f|^2=& \frac{h-1}{4} u v^{2 d} + \frac{k-1}{4} v (d u v^{d-1} - (d+1) v^d)^2,\\ |\na f|^3\di g=&|\na f|^3\di\frac{\na f}{|\na f|}\\=&\frac{(h-1)(k-1)}{8} (u - v) v^{3d-2} \bigg(d^3 (k-1) u^2 \\&+ d \big(-2 + d + 2 d^2 + h - 2 d (1 + d) k\big) u v\\& + (d+1)^2 (-1 + d (k-1)) v^2\bigg). 
\end{align*} Note that if we set $g=\frac{\na f}{|\na f|},$ then $g$ is clearly continuous, and smooth except on $M_{kh}.$ Calculations can show that the derivative of $g$ is of order $O(|z|^{-1})$ near the origin, so $g\in W^{1,1}_{\textnormal{loc}}(\R^m,\R^m).$ \subsection{The Cases $(2,k)$} We use the basic inequalities $\max\{|x|,|y|\}\le |z|\le \sqrt{2}\max\{|x|,|y|\}$ and $(a^q+b^q)^{1/q}\le (a^2+b^2)^{1/2}$ for $a,b>0,q\ge 2.$ Also note that $$\dist(z,M_{kh})=\frac{|\sqrt{u}-\sqrt{v}|}{\sqrt{h+k-2}}$$ by elementary geometry. If $u>v,$ then choosing $d=3/2,$ we have \begin{align*} \di g=\frac{\frac{1}{64} (-1 + k) u^{5/2} (u - v) (25 u^2 + 12 (-11 + k) u v + 27 v^2)}{\bigg(\frac{1}{16} u^2 (25 u^2 + 2 (-17 + 2 k) u v + 9 v^2)\bigg)^{3/2}}. \end{align*} Let $p_2(t)=27 t^2-48 t+25.$ We have $\min_{[0,1]}p_2=p_2(8/9)=11/3.$ This gives \begin{align*} \di g\ge &(k-1)(\sqrt{u}-\sqrt{v})\frac{\sqrt{u}+\sqrt{v}}{\sqrt{u}}\frac{25 u^2 + 12 (-11 + 7) u v + 27 v^2}{ (25 u^2 + 2 (-17 + 2 k) u v + 9 v^2)^{3/2}}\\ \ge&(k-1)(\sqrt{u}-\sqrt{v})\frac{u^2 p_2(v/u)}{ (25 u^2 + 10 u v + 9 v^2)^{3/2}}\\ \ge&(k-1)(\sqrt{u}-\sqrt{v})\frac{\frac{11}{3}(|z|/\sqrt{2})^4 }{ (44u^2)^{3/2}}\\ \ge&(k-1)(\sqrt{u}-\sqrt{v})\frac{\frac{11}{3}(|z|/\sqrt{2})^4 }{ (44 |z|^4)^{3/2}}\\ \ge&\frac{(k-1)}{2^5 3\sqrt{11}}\frac{\sqrt{u}-\sqrt{v}}{|z|^2}\\ \ge&\frac{1}{2^4\sqrt{11}}\frac{\sqrt{u}-\sqrt{v}}{|z|^2}, \end{align*} where in the last step we use $k-1\ge 6.$ If $u<v,$ choosing $d=1,$ we have \begin{align*} \di g=\frac{\frac{1}{8} (k-1) (u - v) v ((k-1) u^2 + (3 - 4 k) u v + 4 (-2 + k) v^2)}{\bigg(\frac{1}{4} v ((k-1) (u - 2 v)^2 + u v)\bigg)^{3/2}}. 
\end{align*} Let $q_2(t)=(k-1)t^2+(3-4k)t+4(k-2).$ We know that $\min_{[0,1]}q_2=q_2(1)=k-6.$ This gives \begin{align*} |\di g|\ge&(k-1)|\sqrt{u}-\sqrt{v}|\frac{\sqrt{u}+\sqrt{v}}{\sqrt{v}}\frac{v^2q_2(u/v)}{((k-1)4v^2+v^2)^{3/2}}\\ \ge &(k-1)|\sqrt{u}-\sqrt{v}|\frac{(k-6)(k-1)^2(|y|)^4}{(4(k-1)^3+(k-1)^2)^{3/2}|y|^6}\\ \ge&(k-1)|\sqrt{u}-\sqrt{v}|\frac{(k-6)(k-1)^2(|z|/\sqrt{k})^4}{(4(k-1)^3+(k-1)^2)^{3/2}|z|^6}\\ \ge&\frac{(k-6)(k-1)^3}{k^2(4(k-1)^3+(k-1)^2)^{3/2}}\frac{|\sqrt{u}-\sqrt{v}|}{|z|^2}\\ \ge&\frac{6^3}{11^2(4\times 10^3+10^2)^{3/2}}\frac{|\sqrt{u}-\sqrt{v}|}{|z|^2}\\\ge&\frac{1}{11^5}\frac{|\sqrt{u}-\sqrt{v}|}{|z|^2}, \end{align*} where we use $v>u$ if and only if $|x|<\sqrt{k-1}|y|$ and thus $|z|<\sqrt{k}|y|.$ This gives $$|\di g|\ge\frac{1}{11^5} \frac{|\sqrt{u}-\sqrt{v}|}{|z|^2}\ge\frac{1}{11^5\sqrt{11} }\frac{\dist(z,M_{kh})}{|z|^2}.$$ \subsection{The Case $(h,k)=(3,5)$} Choose $d=3/4.$ If $u>v,$ we have \begin{align*} \di g=\frac{\frac{1}{32} u^{1/4} (u - v) (49 u^2 - 72 u v + 27 v^2)}{\bigg(\frac{1}{32} \sqrt{u} (49 u^2 - 10 u v + 9 v^2)\bigg)^{3/2}}. \end{align*} Let $$p_3(t)=27 t^2-72 t+49.$$ We know that $\min_{[0,1]}p_3=p_3(1)=4.$ This yields \begin{align*} \di g\ge & 4\sqrt{2}(\sqrt{u}-\sqrt{v})\frac{\sqrt{u}+\sqrt{v}}{\sqrt{u}}\frac{u^2p_3(v/u)}{\bigg(49 \times 4 |x|^4+9\times 16 |y|^4\bigg)^{3/2}}\\\ge& 4\sqrt{2}(\sqrt{u}-\sqrt{v})\frac{4 u^2}{\bigg((49\times 4)(|x|^4+|y|^4)\bigg)^{3/2}}\\ \ge&4\sqrt{2}(\sqrt{u}-\sqrt{v})\frac{4\times 4(|z|/\sqrt{2})^4}{14^3 |z|^6}\\ \ge&\frac{2\sqrt{2}}{7^3}\frac{\sqrt{u}-\sqrt{v}}{|z|^2}. \end{align*} If $u<v,$ we have \begin{align*} \di g=\frac{\frac{1}{16} (u - v) v^{1/4} (27 u^2 - 123 u v + 98 v^2)}{\bigg(\frac{1}{16} \sqrt{v} (9 u^2 - 34 u v + 49 v^2)\bigg)^{3/2}}. 
\end{align*} Let $q_3(t)=27 t^2-123 t+98.$ We have $\min_{[0,1]}q_3(t)=q_3(1)=2.$ This gives \begin{align*} |\di g|\ge &4|\sqrt{u}-\sqrt{v}|\frac{\sqrt{u}+\sqrt{v}}{\sqrt{v}}\frac{v^2q_3(u/v)}{\bigg(9\times 4|x|^4+49\times 16|y|^4\bigg)^{3/2}}\\\ge&4|\sqrt{u}-\sqrt{v}|\frac{2v^2}{\bigg(49\times 16(|x|^4+|y|^4)\bigg)^{3/2}}\\ \ge&4|\sqrt{u}-\sqrt{v}|\frac{2 \times 4^2|y|^4}{28^3|z|^6}\\ \ge&4|\sqrt{u}-\sqrt{v}|\frac{2 \times 4^2(|z|/\sqrt{3})^4}{28^3|z|^6}\\ \ge&\frac{2}{3^27^3}\frac{|\sqrt{u}-\sqrt{v}|}{|z|^2}, \end{align*}where we use $u<v\eq 2|y|^2>|x|^2$ and thus $|z|^2<3|y|^2.$ This yields \begin{align*} |\di g|\ge \frac{2}{3^2 7^3}\frac{|\sqrt{u}-\sqrt{v}|}{|z|^2}\ge\frac{\sqrt{3}}{21^3}\frac{\dist(z,M_{5,3})}{|z|^2}. \end{align*} \section*{Acknowledgements} The author is very fortunate to have been introduced to the world of minimal surfaces by Professor Hubert Bray, who has taught the author a lot about geometry and life. The author can't thank him enough for his unwavering support and constant encouragement. Also, he would like to thank Professor Guido De Philippis and Professor Francesco Maggi for many helpful discussions and comments on the problem. He also thanks Professor Robert Bryant, Professor Mark Stern, and Professor Richard Hain for countless helpful discussions. Last but not least, he is indebted to Professor David Kraines, who funded the research in the paper with PRUV Fellowship.
\section{Introduction} Explicit representations of groups have many uses in physics, chemistry, and mathematics. All representations of finite groups and compact linear groups can be expressed as unitary matrices given an appropriate choice of basis\cite{Artin9}. This makes them natural candidates for implementation using quantum circuits. Here we show that polynomial size quantum circuits can implement: \begin{itemize} \item The irreducible representations of any finite group which has an efficient quantum Fourier transform. This includes the symmetric group $S_n$. \item The irreducible representations of the alternating group $A_n$. \item The irreducible representations of polynomial highest weight of the unitary $U(n)$, special unitary $SU(n)$, and special orthogonal $SO(n)$ groups. \end{itemize} Using these quantum circuits one can find a polynomially precise additive approximation to any matrix element of these representations by repeating a simple measurement called the Hadamard test, as described in section \ref{hadamard}. More precisely, for the finite groups $S_n$ and $A_n$ we obtain any matrix element of any irreducible representation to within $\pm \epsilon$ in time that scales polynomially in $1/\epsilon$ and $n$. For the Lie groups $U(n)$, $SU(n)$, and $SO(n)$ we obtain any matrix element of any irreducible representation of polynomial highest weight to within $\pm \epsilon$ in time that scales polynomially in $1/\epsilon$ and $n$. Because the representations considered are of exponentially large dimension, one cannot efficiently find these matrix elements by classically multiplying the matrices representing a set of generators. Note that many computer science applications use multiplicative approximations. In this case, one computes an estimate $\tilde{x}$ of a quantity $x$ with the requirement that $(1-\epsilon)x \leq \tilde{x} \leq (1+\epsilon)x$. The approximations obtained in this paper are all additive rather than multiplicative. 
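The distinction between the two guarantees can be made concrete in a few lines of plain Python (the function names here are ours, purely for illustration):

```python
def within_additive(x, estimate, eps):
    """Additive guarantee: |estimate - x| <= eps."""
    return abs(estimate - x) <= eps

def within_multiplicative(x, estimate, eps):
    """Multiplicative guarantee: (1-eps)x <= estimate <= (1+eps)x, for x >= 0."""
    return (1 - eps) * x <= estimate <= (1 + eps) * x

# A typical matrix element of an exponentially large unitary is exponentially
# small, so the trivial estimate 0 already meets a polynomially small additive
# guarantee, while no multiplicative guarantee tolerates estimating 0.
x = 2.0 ** -50
assert within_additive(x, 0.0, 0.01)
assert not within_multiplicative(x, 0.0, 0.01)
```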
For some problems, the computational complexity of additive approximations can differ greatly from that of multiplicative approximations\cite{Bordewich, Tutte}. For exponentially large unitary matrices, the typical matrix element is exponentially small. Thus for average instances, a polynomially precise additive approximation provides almost no information. However, it is common that the worst case instances of a problem are hard whereas the average case instances are trivial. In section \ref{complexity} I narrow down a class of potentially hard instances for the problem of additively approximating the matrix elements of the irreducible representations of the symmetric group to polynomial precision. I also present a classical randomized algorithm to estimate normalized characters of the symmetric group $S_n$ to within $\pm \epsilon$ in $\mathrm{poly}(n,1/\epsilon)$ time. (The character is normalized by dividing by the dimension of the representation, so that the character of the identity element of the group is 1.) Thus, the techniques described here for evaluating matrix elements of irreducible representations of groups on quantum computers do not provide an obvious quantum speedup for the evaluation of the characters of $S_n$. Our results on the symmetric group relate closely to the quantum complexity of evaluating Jones polynomials and other topological invariants. Certain problems of approximating Jones and HOMFLY polynomials can be reduced to the approximation of matrix elements or characters of the Jones-Wenzl representation of the braid group, which is a $q$-deformation of certain irreducible representations of the symmetric group \cite{Aharonov1,Yard,Shor_Jordan,Jordan_Wocjan}. Figure \ref{complexities} compares the complexity of estimating matrix elements and characters of the Jones-Wenzl representation of the braid group to the complexity of the corresponding problems for the symmetric group. 
Exact complexity characterizations (\emph{i.e.} completeness results) are not known for all of these problems, and the exact relationships between the complexity classes referenced in figure \ref{complexities} are not rigorously known. Nevertheless, the results seem to suggest that in general the matrix elements are harder to approximate than the normalized characters, and that the Jones-Wenzl representation of the braid group is computationally harder than the corresponding irreducible representations of the symmetric group. \begin{figure} \begin{center} \begin{tabular}{c|c|c} & symmetric & braid \\ \hline matrix elements & in BQP & BQP-complete \cite{Aharonov1, Yard} \\ \hline normalized characters & in BPP & DQC1-complete \cite{Shor_Jordan, Jordan_Wocjan} \end{tabular} \end{center} \caption{\label{complexities} The complexity results on the symmetric group refer to arbitrary irreducible representations in Young's orthogonal form. The results on the braid group refer to the Jones-Wenzl representations, which give rise to Jones and HOMFLY polynomials. The complexity class DQC1 is the set of problems solvable in polynomial time on a one clean qubit computer. It is generally believed that one clean qubit computers are weaker than standard quantum computers but still capable of solving some problems outside of BPP.} \end{figure} \section{Hadamard Test} \label{hadamard} The Hadamard test is a standard technique in quantum computation for approximating matrix elements of unitary transformations. Suppose we have an efficient quantum circuit implementing a unitary transformation $U$, and an efficient procedure for preparing the state $\ket{\psi}$. We can then approximate the real part of $\bra{\psi} U \ket{\psi}$ using the following quantum circuit. 
\[ \mbox{\Qcircuit @C=1em @R=.7em { \lstick{\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})} & \qw & \ctrl{1} & \gate{H} & \meter \\ \lstick{\ket{\psi}} & {/} \qw & \gate{U} & {/} \qw & \qw & }} \] The probability of measuring $\ket{0}$ is \[ p_0 = \frac{1+\mathrm{Re}(\bra{\psi}U\ket{\psi})}{2}. \] Thus, one can obtain the real part of $\bra{\psi}U\ket{\psi}$ to precision $\epsilon$ by making $O(1/\epsilon^2)$ measurements and counting what fraction of the measurement outcomes are $\ket{0}$. Similarly, if the control bit is instead initialized to $\frac{1}{\sqrt{2}} (\ket{0} - i \ket{1})$, one can estimate the imaginary part of $\bra{\psi}U\ket{\psi}$. Thus the problem of estimating matrix elements of unitary representations of groups reduces to the problem of implementing these representations with efficient quantum circuits. \section{Fourier Transforms} \label{fourier} Let $G$ be a finite group and let $\hat{G}$ be the set of all irreducible representations of $G$. We choose a basis for the representations such that for any $\rho \in \hat{G}$ and $g \in G$, $g$ is represented by a $d_\rho \times d_\rho$ unitary matrix with entries $\rho_{i,j}(g)$. The quantum Fourier transform over $G$ is the following unitary operator\cite{Nielsen_Chuang} \[ U_{\mathrm{FT}} = \sum_{g \in G} \sum_{\rho \in \hat{G}} \sum_{i,j = 1}^{d_\rho} \sqrt{\frac{d_\rho}{|G|}} \rho_{i,j}(g) \ket{\rho,i,j}\bra{g}. \] Here $\ket{g}$ is a computational basis state (bitstring) indexing the element $g$ of $G$. Similarly, $\ket{\rho,i,j}$ is three bitstrings, one indexing the element $\rho \in \hat{G}$, and two writing out the numbers $i$ and $j$ in binary. The standard discrete Fourier transform is the special case where $G$ is a cyclic group. The regular representation of any $g \in G$ is \[ U_g = \sum_{h \in G} \ket{gh}\bra{h}. 
\] A short calculation shows \[ U_{\mathrm{FT}} U_g U_{\mathrm{FT}}^{-1} = \sum_{\rho \in \hat{G}} \sum_{i,j = 1}^{d_\rho} \sum_{i',j'=1}^{d_\rho} \delta_{j,j'} \rho_{i,i'}(g) \ket{\rho,i,j}\bra{\rho,i',j'}. \] In other words, by conjugating the regular representation of $g$ with the quantum Fourier transform, one recovers the direct sum of all irreducible representations of $g$. Given an efficient quantum circuit implementing $U_{\mathrm{FT}}$, one can thus efficiently estimate any matrix element of any irreducible representation of $G$ using the Hadamard test. Quantum circuits implementing the Fourier transform in $\mathrm{polylog}(|G|)$ time are known for the symmetric group\cite{Beals} and several other groups\cite{generalft}. The matrix elements of the representations depend on a choice of basis. The bases used in quantum Fourier transforms are subgroup adapted (see \cite{generalft}). In particular, the symmetric group Fourier transform described in \cite{Beals} uses the Young-Yamanouchi basis, also known as Young's orthogonal form. In section \ref{althyp} we describe a more direct quantum circuit implementation of the irreducible representations of the symmetric group, which generalizes to yield efficient implementations for the alternating group. \section{Schur Transform} \label{Schurxform} Let $\mathcal{H}$ be the Hilbert space of $n$ $d$-dimensional qudits. \[ \mathcal{H} = (\mathbb{C}^d)^{\otimes n}. \] We can act on this Hilbert space by choosing an element $u \in U(d)$ and applying it to each qudit. \[ \ket{\psi} \to u^{\otimes n} \ket{\psi} \] We can also act on this Hilbert space by choosing an element $\pi \in S_n$ and correspondingly permuting the $n$ qudits. \[ \ket{\psi} \to M_\pi \ket{\psi} \] $u^{\otimes n}$ and $M_\pi$ are reducible unitary $d^n$-dimensional representations of $U(d)$ and $S_n$, respectively. These two actions on $\mathcal{H}$ commute. 
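This commutation is easy to check numerically on small instances. The sketch below (assuming numpy; conventions for the permutation operator are our own choice and do not affect the commutation) builds $u^{\otimes n}$ and $M_\pi$ on three qubits and verifies that they commute:

```python
import numpy as np
from functools import reduce

def tensor_power(u, n):
    """u tensored with itself n times."""
    return reduce(np.kron, [u] * n)

def permutation_operator(perm, d):
    """M_pi on (C^d)^{tensor n}: output qudit s carries what input qudit
    perm[s] carried (perm is a 0-based tuple of positions)."""
    n = len(perm)
    dim = d ** n
    M = np.zeros((dim, dim))
    for I in range(dim):
        digits = [(I // d ** (n - 1 - s)) % d for s in range(n)]
        J = sum(digits[perm[s]] * d ** (n - 1 - s) for s in range(n))
        M[J, I] = 1.0
    return M

n, d = 3, 2
rng = np.random.default_rng(7)
a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
u, _ = np.linalg.qr(a)                 # a random unitary from a QR factorization
U = tensor_power(u, n)
for perm in [(1, 0, 2), (2, 0, 1), (0, 2, 1)]:
    M = permutation_operator(perm, d)
    assert np.allclose(M @ U, U @ M)   # the two actions commute
print("u^(tensor n) commutes with every tested qudit permutation")
```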
The irreducible representations of $S_n$ are in bijective correspondence with the partitions of $n$. Any partition of $n$ into at most $d$ parts indexes a unique irreducible representation of $U(d)$. $U(d)$ has infinitely many irreducible representations, so these partitions only index a special subset of them. As discussed in \cite{Schur}, there exists a unitary change of basis $U_{\mathrm{Schur}}$ such that \[ U_{\mathrm{Schur}} M_\pi u^{\otimes n} U_{\mathrm{Schur}}^{-1} = \bigoplus_\lambda \rho_\lambda(\pi) \otimes \nu_\lambda(u), \] where $\lambda$ ranges over all partitions of $n$ into at most $d$ parts. As shown in \cite{Schur}, $U_{\mathrm{Schur}}$ can be implemented by a $\mathrm{poly}(n,d)$ size quantum circuit. Thus, using the Hadamard test, one can efficiently obtain matrix elements of these representations of the symmetric and unitary groups. \section{Complexity of Symmetric Group Representations} \label{complexity} As described in section \ref{fourier}, quantum computers can solve the following problem with probability $1-\delta$ in $\mathrm{poly}(n,1/\epsilon,\log(1/\delta))$ time. Note that standard Young tableaux index the Young-Yamanouchi basis vectors, as discussed in section \ref{Young}.\\ \\ \noindent \begin{minipage}[c]{\textwidth} \textbf{Problem 1:} Approximate a matrix element in the Young-Yamanouchi basis of an irreducible representation for the symmetric group $S_n$. \\ \textbf{Input:} A Young diagram specifying the irreducible representation, a permutation from $S_n$, a pair of standard Young tableaux indicating the desired matrix element, and a polynomially small parameter $\epsilon$.\\ \textbf{Output:} The specified matrix element to within $\pm \epsilon$. \\ \end{minipage} It appears that no polynomial time classical algorithm for this problem is known. 
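Problem 1 can be made concrete on small instances. The sketch below (assuming numpy; it uses the textbook axial-distance construction of Young's orthogonal form, with our own naming, and is not one of the quantum circuits of this paper) builds the matrices of adjacent transpositions from standard Young tableaux, then mimics the Hadamard test statistics, with $p_0=(1+\mathrm{Re}\bra{\psi}U\ket{\psi})/2$, to additively estimate a matrix element by sampling:

```python
import numpy as np

def standard_tableaux(shape):
    """All standard Young tableaux of `shape`, as dicts entry -> (row, col)."""
    n = sum(shape)
    out = []
    def place(entry, fill, cells):
        if entry > n:
            out.append(dict(cells))
            return
        for r in range(len(shape)):
            # next cell in row r is valid if the row has room and the cell
            # above it (if any) is already filled
            if fill[r] < shape[r] and (r == 0 or fill[r - 1] > fill[r]):
                cells[entry] = (r, fill[r])
                fill[r] += 1
                place(entry + 1, fill, cells)
                fill[r] -= 1
                del cells[entry]
    place(1, [0] * len(shape), {})
    return out

def yor(shape, i):
    """Young's orthogonal form of the adjacent transposition (i, i+1)."""
    tabs = standard_tableaux(shape)
    index = {frozenset(t.items()): a for a, t in enumerate(tabs)}
    M = np.zeros((len(tabs), len(tabs)))
    for a, t in enumerate(tabs):
        (r1, c1), (r2, c2) = t[i], t[i + 1]
        ax = (c2 - r2) - (c1 - r1)              # axial distance
        M[a, a] = 1.0 / ax
        s = dict(t); s[i], s[i + 1] = t[i + 1], t[i]
        b = index.get(frozenset(s.items()))
        if b is not None:                        # swapped tableau still standard
            M[a, b] = np.sqrt(1 - 1.0 / ax**2)
    return M

# the 2-dimensional irrep of S_3: rho of the product of (1 2) and (2 3)
s1, s2 = yor((2, 1), 1), yor((2, 1), 2)
U = s1 @ s2
exact = U[0, 0]

# Hadamard-test statistics: sample the biased coin with p0 = (1 + exact)/2
rng = np.random.default_rng(0)
shots = 200_000
p0 = (1 + exact) / 2
estimate = 2 * (rng.random(shots) < p0).mean() - 1
assert abs(estimate - exact) < 0.02              # additive approximation
```

The $O(1/\epsilon^2)$ sample count of section \ref{hadamard} is visible here: halving the additive error requires quadrupling `shots`.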
Due mainly to applications in quantum chemistry, many exponential time classical algorithms for the exact computation of entire matrices from representations of the symmetric group have been developed\cite{Hamermesh, Boerner, Wu_Zhang1, Wu_Zhang2, Egecioglu, Clifton, Rettrup, Pauncz}. There appears to be no literature on the computation or approximation of individual matrix elements of representations of $S_n$. On the other hand, the precision of approximation achieved by the quantum algorithm is trivial for average instances. We can see this as follows. Let $\lambda$ be a Young diagram of $n$ boxes, let $\rho_\lambda$ be the corresponding irreducible representation of $S_n$, and let $d_\lambda$ be the dimension of $\rho_\lambda$. For any $\pi \in S_n$, the root mean square of the matrix elements of $\rho_\lambda(\pi)$ is \[ \mathrm{RMS}(\rho_\lambda(\pi)) = \sqrt{\frac{1}{d_\lambda^2} \sum_{a,b \in B} | \bra{a} \rho_\lambda(\pi) \ket{b} |^2}, \] where $B$ is any complete orthonormal basis for the vector space on which $\rho_\lambda$ acts. We see that \[ \sum_{a \in B} | \bra{a} \rho_\lambda(\pi) \ket{b} |^2 = 1 \] since, by the unitarity of $\rho_\lambda(\pi)$, this is just the squared norm of $\ket{b}$. Thus, \begin{equation} \label{small_rms} \mathrm{RMS}(\rho_\lambda(\pi)) = \sqrt{\frac{1}{d_\lambda^2} \sum_{b \in B} 1} = \frac{1}{\sqrt{d_\lambda}}. \end{equation} The interesting instances of problem 1 are those in which $d_\lambda$ is exponentially large. In these instances, the typical matrix element is exponentially small, by equation \ref{small_rms}. The quantum algorithm yields only polynomial precision, so on such instances one could instead simply guess zero every time, with similar results. That the average case instances are trivial does not mean that the algorithm is trivial. Hard problems that are trivial on average are a common occurrence. The most relevant example of this is the problem of estimating a knot invariant called the Jones polynomial. 
A certain problem of estimating the Jones polynomial of knots is BQP-complete\cite{Freedman, Aharonov1, Aharonov2}. The Jones polynomial algorithm is based on estimating matrix elements of certain representations of the braid group to polynomial precision. On average these matrix elements are exponentially small. Nevertheless, the BQP-hardness of the Jones polynomial problem shows that the worst-case instances are as hard as any problem in BQP. By analogy to the results on Jones polynomials, one might ask whether problem 1 is BQP-hard. The existing proofs of BQP-hardness of Jones polynomial estimation rely on the fact that the relevant representations of the braid group are dense in the corresponding unitary group. Thus, one can construct a braid whose representation implements approximately the same unitary as any given quantum circuit. Furthermore, it turns out that the number of crossings needed to achieve a good approximation scales only polynomially with the number of quantum gates in the circuit. Unlike the braid group, the symmetric group is finite. Thus, no representation of it can be dense in a continuous group. Hence, if the problem of estimating matrix elements of the symmetric group is BQP-hard, the proof will have to proceed along very different lines than the BQP-hardness proof for Jones polynomials. Lacking a hardness proof, the next best thing is to identify a class of instances in which the matrix elements are large enough to make the approximation nontrivial. As shown below, we can do this using the asymptotic character theory of the symmetric group. Note that we need not worry about the matrix elements being too large, because even if we know \emph{a priori} that a given matrix element has magnitude 1, it could still be nontrivial to compute its sign. Let $\pi$ be a permutation in $S_n$, and let $\lambda$ be a Young diagram of $n$ boxes. 
The character \[ \chi_\lambda(\pi) = \mathrm{Tr}(\rho_\lambda(\pi)) \] is clearly independent of the basis in which $\rho_\lambda$ is expressed. Furthermore, the character of a group element depends only on the conjugacy class of the group element, because for any representation $\rho$, \[ \mathrm{Tr}(\rho(h g h^{-1})) = \mathrm{Tr}(\rho(h) \rho(g) \rho(h)^{-1}) = \mathrm{Tr}(\rho(g)). \] \begin{figure} \begin{center} \includegraphics[width=0.32\textwidth]{shapeconverge.eps} \caption{\label{shapeconverge} Here is a sequence of Young diagrams, such that as the number of boxes increases, the Young diagram converges asymptotically to some fixed shape, in this case a triangle.} \end{center} \end{figure} To understand the behavior of the characters of $S_n$ as $n$ becomes large, consider a sequence of Young diagrams $\lambda_1,\lambda_2,\lambda_3,\ldots$, where $\lambda_n$ has $n$ boxes. Suppose that the diagram $\lambda_n$, when scaled down by a factor of $1/\sqrt{n}$, converges to a fixed shape $\omega$ in the limit of large $n$, as illustrated in figure \ref{shapeconverge}. Let $d_{\lambda_n}$ be the dimension of the irreducible representation corresponding to Young diagram $\lambda_n$. Let $\pi$ be a permutation in $S_k$. We can also consider $\pi$ to be an element of $S_n$ for any $n > k$ which leaves the remaining $n-k$ objects fixed. As shown by Biane\cite{Biane}, \begin{equation} \label{Bianeformula} \frac{\chi_{\lambda_n}(\pi)}{d_{\lambda_n}} = C_\pi(\omega) n^{-|\pi|/2} + O(n^{-|\pi|/2-1}). \end{equation} Here $|\pi|$ denotes the minimum number of transpositions needed to obtain $\pi$. Note that these are general transpositions, not transpositions of neighbors. $C_{\pi}(\omega)$ is a constant that only depends on $\pi \in S_k$ and the shape $\omega$. A precise definition of what it means for the sequence to converge to a fixed shape is given in \cite{Biane}, but for present purposes, the intuitive picture of figure \ref{shapeconverge} should be sufficient. 
$\chi_{\lambda_n}(\pi)/d_{\lambda_n}$ is the average of the matrix elements on the diagonal of $\rho_{\lambda_n}(\pi)$. In the present setting, where $\pi$ is fixed, $\chi_{\lambda_n}(\pi)/d_{\lambda_n}$ shrinks only polynomially with $n$. Thus polynomial precision is sufficient to provide nontrivial estimates of these matrix elements. Nevertheless, finding diagonal matrix elements of $\rho_{\lambda_n}(\pi)$ for fixed $\pi$ and large $n$ is not computationally hard. This is because, as discussed in section \ref{althyp}, the Young-Yamanouchi basis is subgroup adapted. Thus, for any $\pi$ which leaves all but the first $k$ objects fixed, $\rho_{\lambda_n}(\pi)$ is a direct sum of irreducible representations of $S_k$ evaluated at $\pi$. Because $k$ is fixed, any irreducible representation of $S_k$ has dimension $O(1)$ and can therefore be computed in $O(1)$ time by multiplying the matrices representing transpositions. To produce a candidate class of hard instances of problem 1, we recall that the character $\chi_{\lambda_n}(\pi)$ depends only on the conjugacy class of $\pi$. Thus, we consider $\pi'$ conjugate to $\pi$. Like $\pi$, $\pi' \in S_n$ leaves at least $n-k$ objects fixed, and the matrices $\rho_{\lambda_n}(\pi')$ have diagonal matrix elements with polynomially small average value. However, the objects left fixed by $\pi'$ need not be $k+1,k+2,\ldots,n$. Indeed, $\pi'$ can be chosen so that the object $n$ is not left fixed, in which case $\rho_{\lambda_n}(\pi')$ cannot be written as the direct sum of irreducible representations of $S_m$ for any $m < n$. There is an additional simple way in which an instance of problem 1 can fail to be hard. Let $r(\pi)$ be the minimal number of transpositions of neighbors needed to construct the permutation $\pi$.
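As an aside, $r(\pi)$ is the Coxeter length of $\pi$, which equals the number of inversions of $\pi$, so it is easy to compute classically. A small Python illustration (permutations written in one-line notation; this is illustrative only):

```python
def inversions(perm):
    """Number of inversions = minimal number of adjacent transpositions."""
    n = len(perm)
    return sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])

def bubble_sort_swaps(perm):
    """Count the adjacent swaps bubble sort performs; equals the inversion count."""
    p, swaps = list(perm), 0
    for _ in range(len(p)):
        for i in range(len(p) - 1):
            if p[i] > p[i + 1]:
                p[i], p[i + 1] = p[i + 1], p[i]
                swaps += 1
    return swaps

# The transposition exchanging 1 and n in S_5, in one-line notation:
pi = [5, 2, 3, 4, 1]
print(inversions(pi))         # 7, i.e. 2n - 3 for n = 5
print(bubble_sort_swaps(pi))  # 7
```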
If $r(\pi)$ is constant or logarithmic, then the matrix elements of the irreducible representations of $\pi$ can be computed classically in polynomial time by direct recursive application of equation \ref{rule}. For a class of hard instances of problem 1 I propose the following.\\ \\ \begin{minipage}[c]{\textwidth} \begin{hypothesis} Let $\pi$ be a permutation in $S_n$. We consider it to permute a series of objects numbered $1,2,3,\ldots,n$. Let $s(\pi)$ be the number of objects that $\pi$ does not leave fixed. Let $l(\pi)$ be the largest numbered object that $\pi$ does not leave fixed. Let $r(\pi)$ be the minimum number of transpositions of neighbors needed to construct $\pi$. Let $\lambda$ be a Young diagram of $n$ boxes, and let $\rho_\lambda$ be the corresponding $d_\lambda$-dimensional irreducible representation of $S_n$. I propose the problems of estimating the diagonal matrix elements of $\rho_\lambda(\pi)$ such that $s(\pi) = O(1)$, $l(\pi) = \Omega(n)$, and $r(\pi) = \Omega(n)$ as a possible class of instances of problem 1 not solvable classically in polynomial time. \end{hypothesis} \vspace{6pt} \end{minipage} Although this hypothesis contains many restrictions on $\pi$, it is clear that permutations satisfying all of these conditions exist. One simple example is the permutation that transposes 1 with $n$. \section{Characters of the Symmetric Group} \label{characters} Because characters do not depend on a choice of basis, the computational complexity of estimating characters is especially interesting. Hepler\cite{Hepler} showed that computing the characters of the symmetric group exactly is \#P-hard. It is clear that an algorithm for efficiently approximating matrix elements of a representation can aid in approximating the corresponding character. Specifically, the quantum algorithm for problem 1 yields an efficient solution for the following problem. 
\\ \\ \noindent \begin{minipage}[c]{\textwidth} \textbf{Problem 2:} Approximate a character for the symmetric group $S_n$.\\ \textbf{Input:} A Young diagram $\lambda$ specifying the irreducible representation, a permutation $\pi$ from $S_n$, and a polynomially small parameter $\epsilon$.\\ \textbf{Output:} Let $\chi^\lambda(\pi)$ be the character, and let $d_\lambda$ be the dimension of the irreducible representation. The output $\chi_{\mathrm{out}}$ must satisfy $|\chi_{\mathrm{out}} - \chi^\lambda(\pi)/d_\lambda| \leq \epsilon$ with high probability.\\ \end{minipage} However, as we show in this section, problem 2 is efficiently solvable using only classical randomized computation. Thus the techniques used for problem 1 do not offer immediate benefit for problem 2. Although this is in some sense a negative result, it provides an interesting illustration of the difference in complexity between estimating individual matrix elements of representations and estimating the characters. We can reduce problem 2 to problem 1 by sampling uniformly at random from the standard Young tableaux compatible with Young diagram $\lambda$. For each Young tableau sampled we estimate the corresponding diagonal matrix element of $\rho_\lambda(\pi)$, as described in problem 1. By averaging the diagonal matrix elements for polynomially many samples, we obtain the normalized character to polynomial precision. The problem of sampling uniformly at random from the standard Young tableaux of a given shape is nontrivial, but it has been solved. Greene, Nijenhuis, and Wilf proved in 1979 that their ``hook-walk'' algorithm produces the standard Young tableaux of any given shape with uniform probability\cite{Greene}. Examination of \cite{Greene} shows that the time needed by the hook-walk algorithm to produce a random standard Young tableau compatible with a Young diagram of $n$ boxes is upper bounded by $O(n^2)$.
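The hook walk itself is straightforward to implement. The Python sketch below is my own illustration of the Greene--Nijenhuis--Wilf procedure (not code from \cite{Greene}): to place the entry $k$, walk from a uniformly random cell of the current diagram to a corner, at each step jumping to a uniformly random other cell of the current cell's hook, then remove that corner and continue with $k-1$.

```python
import random

def hook_walk_corner(shape, rng):
    """Walk from a uniformly random cell of the diagram `shape` (a list of
    row lengths) to a corner, moving at each step to a uniformly random
    other cell of the current cell's hook."""
    cells = [(i, j) for i, r in enumerate(shape) for j in range(r)]
    i, j = rng.choice(cells)
    while True:
        hook = [(i, jj) for jj in range(j + 1, shape[i])]                      # arm
        hook += [(ii, j) for ii in range(i + 1, len(shape)) if shape[ii] > j]  # leg
        if not hook:  # (i, j) is a corner of the diagram
            return i, j
        i, j = rng.choice(hook)

def random_syt(shape, rng=None):
    """Sample a random standard Young tableau of the given shape,
    filling in the entries n, n-1, ..., 1 at successive corners."""
    rng = rng or random.Random()
    shape = list(shape)
    n = sum(shape)
    tableau = [[None] * r for r in shape]
    for k in range(n, 0, -1):
        i, j = hook_walk_corner(shape, rng)
        tableau[i][j] = k
        shape[i] -= 1  # remove the corner; rows of length zero are harmless
    return tableau
```

For example, the shape $(2,1)$ has exactly two standard tableaux, and repeated sampling produces both.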
By averaging over diagonal matrix elements we lose some information contained in the individual matrix elements. This observation gives the intuition that it should often be harder to estimate individual matrix elements of a representation than to estimate its trace. Jones polynomials provide an example in which this intuition is confirmed. As discussed in \cite{Shor_Jordan}, computing the Jones polynomial of the trace closure of a braid reduces to computing the normalized character of a certain representation of the braid group. The problem of additively approximating this normalized character is only DQC1-complete. In contrast, the individual matrix elements of this representation yield the Jones polynomial of the plat closure of the braid and are BQP-complete to approximate. We see a very similar phenomenon in the symmetric group; problem 2 is solvable by a randomized polynomial-time classical algorithm, whereas problem 1 is not, as far as we know. To construct a classical algorithm for problem 2, first recall that the character of a given group element depends only on the element's conjugacy class. We can think of any $\pi \in S_n$ as acting on the set $\{1,2,\ldots,n\}$. The sizes of the orbits of the elements of $\{1,2,\ldots,n\}$ under repeated application of $\pi$ form a partition of the integer $n$. For example, consider the permutation $\pi \in S_5$ defined by \[ \begin{array}{lllll} \pi(1) = 2 & \pi(2) = 3 & \pi(3) = 1 & \pi(4) = 5 & \pi(5) = 4. \end{array} \] This divides the set $\{1,2,3,4,5\}$ into the orbits $\{1,2,3\}$ and $\{4,5\}$. Thus it corresponds to the partition $(3,2)$ of the integer $5$. Two permutations in $S_n$ are conjugate if and only if they correspond to the same partition. Thus, we can introduce the following notation.
For any two partitions $\mu$ and $\lambda$ of $n$ define $\chi_\mu^\lambda$ to be the irreducible character of $S_n$ corresponding to the Young diagram of $\lambda$ evaluated at the conjugacy class corresponding to $\mu$. To obtain an efficient classical solution to problem 2 we use the following theorem due to Roichman\cite{Roichman2}. \begin{theorem}[From \cite{Roichman2}] \label{Roichman_rule} For any partitions $\mu = (\mu_1,\ldots,\mu_l)$ and $\lambda = (\lambda_1,\ldots,\lambda_k)$ of $n$, the corresponding irreducible character of $S_n$ is given by \[ \chi_\mu^\lambda = \sum_\Lambda W_\mu(\Lambda) \] where the sum is over all standard Young tableaux $\Lambda$ of shape $\lambda$ and \[ W_\mu(\Lambda) = \prod_{\stackrel{1 \leq i \leq k}{i \notin B(\mu)}} f_\mu(i,\Lambda) \] where $ B(\mu) = \{\mu_1 + \ldots + \mu_r | 1 \leq r \leq l \} $ and \[ f_\mu(i,\Lambda) = \left\{ \begin{array}{rl} -1 & \textrm{box $i+1$ of $\Lambda$ is in the southwest of box $i$} \\ 0 & \textrm{$i+1$ is in the northeast of $i$, $i+2$ is in the southwest of $i+1$, and $i+1 \notin B(\mu)$} \\ 1 & \textrm{otherwise} \end{array} \right. \] \end{theorem} By using the hook-walk algorithm we can sample uniformly at random from the standard Young tableaux $\Lambda$ of shape $\lambda$. By inspection of theorem \ref{Roichman_rule} we see that for each $\Lambda$ sampled we can compute $W_\mu(\Lambda)$ classically in $\mathrm{poly}(n)$ time. By averaging the values of $W_\mu(\Lambda)$ obtained during the course of the sampling we can thus obtain a polynomially accurate additive approximation to the normalized character, thereby solving problem 2. Some readers may notice that theorem \ref{Roichman_rule} is similar in form to the much older and better-known Murnaghan-Nakayama rule. However, the Murnaghan-Nakayama rule is based on a sum over all ``rim-hook tableaux'' of shape $\lambda$ (see \cite{Roichman2}).
It is not obvious how to sample uniformly at random from the rim-hook tableaux of a given shape. Thus, it is not obvious how to use the Murnaghan-Nakayama rule to obtain a probabilistic classical algorithm for problem 2. \section{Lie Groups} \subsection{Introduction} \label{lieintro} Because $U(n)$, $SU(n)$ and $SO(n)$ are compact linear groups, all of their representations are unitary given the right choice of basis\cite{Artin9}. In section \ref{Schurxform} we described how to efficiently approximate the matrix elements of certain unitary irreducible representations of $U(n)$. Here we present a more direct approach to this problem, which can handle a larger set of representations of $U(n)$ and also extends to some other compact Lie groups: $SU(n)$ and $SO(n)$. $U(n)$, $SU(n)$, and $SO(n)$ are subgroups of $GL(n)$, the group of all invertible $n \times n$ matrices. All of the irreducible representations of $U(n)$ and $SU(n)$ can be obtained by restricting the irreducible representations of $GL(n)$ to these subgroups. The best classical algorithms for computing irreducible representations of $GL(n)$ and $U(n)$ appear to be those of \cite{Burgisser} and \cite{Grabmeier}. These classical algorithms work by manipulating matrices whose dimension equals the dimension of the representation. Thus, they do not provide a polynomial time algorithm for computing matrix elements from representations whose dimension is exponentially large. The implementation of irreducible representations of $SO(3)$ and $SU(2)$ by quantum circuits has been studied previously by Zalka\cite{Zalka}. \subsection{Gel'fand-Tsetlin representation of $U(n)$} \label{Gelfand} The irreducible representations of the Lie group $U(n)$ are most easily described in terms of the corresponding Lie algebra $u(n)$. It is not necessary here to delve into the theory of Lie groups and Lie algebras, but those who are interested can see \cite{Gilmore}.
For now it suffices to say that $u(n)$ is the set of all antihermitian $n \times n$ matrices, and for any $u \in U(n)$ there exists $h \in u(n)$ such that $u = e^h$. Given any representation $a:u(n) \to u(m)$ one can construct a representation $A:U(n) \to U(m)$ as follows. For any $u \in U(n)$ find a corresponding $h(u) \in u(n)$ such that $e^{h(u)} = u$, and set $A(u) = e^{a(h(u))}$. If $a$ is an antihermitian representation of $u(n)$ then $A$ is a unitary representation of $U(n)$. Furthermore, it is clear that $A$ is irreducible if and only if $a$ is irreducible. It turns out that the irreducible representations of the algebra $gl(n)$ of all $n \times n$ complex matrices remain irreducible when restricted to the subalgebra $u(n)$. Furthermore, all of the irreducible representations of $u(n)$ are obtained this way. Let $E_{ij}$ be the $n \times n$ matrix with all matrix elements equal to zero except for the matrix element in row $i$, column $j$, which is equal to one. The set of all $n^2$ such matrices forms a basis over $\mathbb{C}$ for $gl(n)$. Thus to describe a representation of $gl(n)$ it suffices to describe its action on each of the $E_{ij}$ matrices. As described in chapter 18, volume 3 of \cite{Vilenkin}, explicit matrix representations of $gl(n)$ were constructed by Gel'fand and Tsetlin. (See also \cite{Gelfand_works}.) In their construction, one thinks of the representation as acting on the formal span of a set of combinatorial objects called Gel'fand patterns. The Gel'fand-Tsetlin representations of $E_{p,p-1}$ and $E_{p-1,p}$ are sparse and simple to compute for all $p \in \{2,3,\ldots,n\}$. This property makes the Gel'fand-Tsetlin representations particularly useful for quantum computation. A Gel'fand pattern of width $n$ consists of $n$ rows of integers\footnote{Some sources omit the top row, as it is left unchanged by the action of the representation.}. The $j\th$ row (from bottom) has $j$ entries $m_{1,j},m_{2,j},\ldots,m_{j,j}$.
(Note that, in contrast to matrix elements, the subscripts on the entries of Gel'fand patterns conventionally indicate column first, then row.) These entries must satisfy the betweenness conditions \[ m_{j,q+1} \geq m_{j,q} \geq m_{j+1,q+1} \] for all $1 \leq j \leq q \leq n-1$. Gel'fand patterns are often written out diagrammatically. For example the Gel'fand pattern of width 3 with rows \[ \begin{array}{ccc} m_{1,3} = 4 & m_{2,3} = 1 & m_{3,3} = 0 \\ m_{1,2} = 3 & m_{2,2} = 0 & \\ m_{1,1} = 2 & & \end{array} \] is represented by the diagram \[ \left( \begin{array}{ccccc} 4 & & 1 & & 0\\ & 3 & & 0 & \\ & & 2 & & \end{array} \right). \] This notation has the advantage that the entries that appear directly to the upper left and upper right of a given entry form the upper and lower bounds on the values that entry is allowed to take. We call the top row of a Gel'fand pattern its weight\footnote{It is actually the \emph{highest} weight of the representation\cite{Vilenkin}, but for brevity I just call it the weight throughout this paper.}. To each weight of width $n$ corresponds one irreducible representation of $gl(n)$. This irreducible representation acts on the formal span of all Gel'fand patterns with that weight (of which there are always finitely many). To describe the action of the representation of $gl(n)$ on these patterns let \begin{eqnarray} \label{r1} l_{p,q} & = & m_{p,q}-p \\ \label{r2} a^j_{p-1} & = & \left| \frac{\prod_{i=1}^p (l_{i,p} - l_{j,p-1}) \prod_{i=1}^{p-2} (l_{i,p-2}-l_{j,p-1}-1)} {\prod_{i \neq j} (l_{i,p-1} - l_{j,p-1}) (l_{i,p-1} - l_{j,p-1} -1)} \right|^{1/2} \\ \label{r3} b^j_{p-1} & = & \left| \frac{\prod_{i=1}^p (l_{i,p} - l_{j,p-1} + 1) \prod_{i=1}^{p-2} (l_{i,p-2}-l_{j,p-1})} {\prod_{i \neq j} (l_{i,p-1} - l_{j,p-1}) (l_{i,p-1} - l_{j,p-1}+1) } \right|^{1/2}. \end{eqnarray} Let $M$ be a Gel'fand pattern and let $M_p^{+j}$ be the Gel'fand pattern obtained from $M$ by replacing $m_{j,p}$ with $m_{j,p}+1$.
Similarly, let $M_p^{-j}$ be the Gel'fand pattern in which $m_{j,p}$ has been replaced with $m_{j,p}-1$. The representation $a_{\vec{m}}$ of $gl(n)$ corresponding to weight $\vec{m} \in \mathbb{Z}^n$ is defined by the following rules\footnote{Warning: \cite{Vilenkin} contains a misprint, in which the sums in equations \ref{r4} and \ref{r5} are taken up to $j=p$ instead of $j=p-1$.}, known as the Gel'fand-Tsetlin formulas. \begin{eqnarray} \label{r4} a_{\vec{m}}(E_{p-1,p})M & = & \sum_{j=1}^{p-1} a^j_{p-1} M_{p-1}^{+j} \\ \label{r5} a_{\vec{m}}(E_{p,p-1})M & = & \sum_{j=1}^{p-1} b^j_{p-1} M_{p-1}^{-j} \\ \label{r6} a_{\vec{m}}(E_{p,p})M & = & \left( \sum_{i=1}^p m_{i,p} - \sum_{j=1}^{p-1} m_{j,p-1} \right) M \end{eqnarray} These formulas give implicitly a representation for all of $gl(n)$, because any $E_{ij}$ can be obtained from operators of the form $E_{p-1,p}$ and $E_{p,p-1}$ by using the commutation relation $[E_{ik},E_{kl}] = E_{il}$. By restricting the representation $a_{\vec{m}}$ to the antihermitian subalgebra of $gl(n)$ and taking the exponential, one obtains an irreducible group representation $A_{\vec{m}}:U(n) \to U(d_{\vec{m}})$, where $d_{\vec{m}}$ is the number of Gel'fand patterns with weight $\vec{m}$. It should be noted that some references claim that the set of allowed weights for representations of $GL(n)$ is $\mathbb{N}^n$, whereas others identify, as we do, $\mathbb{Z}^n$ as the allowed set of weights. The reason for this is that the irreducible representations of $GL(n)$ in which the entries $m_{n,1}, m_{n,2}, \ldots, m_{n,n}$ of the weight are all nonnegative are the polynomial representations\cite{Konig37}. That is, for any $u \in GL(n)$ and any $\vec{m} \in \mathbb{N}^n$, each matrix element of the representation $\rho_{\vec{m}}(u)$ is a polynomial function of the $n^2$ matrix elements of $u$. The representations involving negative weights are called holomorphic representations, and many sources choose to neglect them.
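As a sanity check on these formulas, one can specialize them to $gl(2)$, where a pattern of weight $(m_1, m_2)$ has a single free entry $m_{1,1} \in \{m_2, \ldots, m_1\}$, and equations \ref{r2} and \ref{r3} reduce to $a^1_1 = \sqrt{(m_1 - m_{1,1})(m_{1,1} - m_2 + 1)}$ and $b^1_1 = \sqrt{(m_1 - m_{1,1} + 1)(m_{1,1} - m_2)}$. The Python sketch below (an illustration only) builds the resulting matrices and verifies the $gl(2)$ commutation relation $[E_{12}, E_{21}] = E_{11} - E_{22}$:

```python
import math

def gl2_rep(m1, m2):
    """Gel'fand-Tsetlin matrices of E11, E12, E21, E22 for gl(2) with weight
    (m1, m2), m1 >= m2.  Basis states are labelled by m11 = m2, ..., m1."""
    basis = list(range(m2, m1 + 1))
    d = len(basis)
    zero = lambda: [[0.0] * d for _ in range(d)]
    E11, E12, E21, E22 = zero(), zero(), zero(), zero()
    for c, m11 in enumerate(basis):
        E11[c][c] = float(m11)            # diagonal rule (r6) with p = 1
        E22[c][c] = float(m1 + m2 - m11)  # diagonal rule (r6) with p = 2
        if m11 < m1:                      # raising coefficient a^1_1
            E12[c + 1][c] = math.sqrt((m1 - m11) * (m11 - m2 + 1))
        if m11 > m2:                      # lowering coefficient b^1_1
            E21[c - 1][c] = math.sqrt((m1 - m11 + 1) * (m11 - m2))
    return E11, E12, E21, E22

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

E11, E12, E21, E22 = gl2_rep(3, 0)  # a 4-dimensional irreducible representation
ab, ba = matmul(E12, E21), matmul(E21, E12)
commutator = [[ab[i][j] - ba[i][j] for j in range(4)] for i in range(4)]
target = [[E11[i][j] - E22[i][j] for j in range(4)] for i in range(4)]
```

The same check passes for weights with negative entries, such as `gl2_rep(1, -1)`, consistent with the weight set being $\mathbb{Z}^n$.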
In the case that $\vec{m} \in \mathbb{N}^n$, the Gel'fand patterns of width $n$ bijectively correspond to the semistandard Young tableaux of $n$ rows (\emph{cf.} \cite{Difrancesco}, pg. 517). \subsection{Quantum Algorithm for U(n)} \label{ualg} In this section we obtain an efficient quantum circuit implementation of any irreducible representation of $U(n)$ in which the entries $m_{1,n},\ldots,m_{n,n}$ of the highest weight are all at most polynomially large. The dimension of such representations can grow exponentially with $n$. Unlike the Schur transform, the method here does not require $m_{1,n},\ldots,m_{n,n}$ to be nonnegative. We start by finding a quantum circuit implementing the Gel'fand-Tsetlin representation of an $n \times n$ unitary matrix of the form \[ u_0 = \left[ \begin{array}{ccccc} u_{11} & u_{12} & & & \\ u_{21} & u_{22} & & & \\ & & 1 & & \\ & & & \ddots & \\ & & & & 1 \end{array} \right], \] where all off-diagonal matrix elements not shown are zero. After that we describe how to extend the construction to arbitrary $n \times n$ unitaries. For a given weight $\vec{m} \in \mathbb{Z}^n$ we wish to implement the corresponding representation $A_{\vec{m}}(u_0)$ with a quantum circuit. To do this, we first find an $n \times n$ Hermitian matrix $H_0$ such that $e^{i H_0} = u_0$. It is not hard to see that $H_0$ can be computed in polynomial time and takes the form \[ H_0 = \left[ \begin{array}{ccccc} h_{11} & h_{12} & & & \\ h_{12}^* & h_{22} & & & \\ & & 0 & & \\ & & &\ddots & \\ & & & & 0 \end{array} \right]. \] Thus, \begin{equation} \label{decomp1} H_0 = h_{11} E_{11} + h_{12} E_{12} + h_{12}^* E_{21} + h_{22} E_{22}. \end{equation} Hence, \begin{equation} \label{decomp} a_{\vec{m}}(H_0) = h_{11} a_{\vec{m}}(E_{11}) + h_{12} a_{\vec{m}}(E_{12}) + h_{12}^* a_{\vec{m}}(E_{21}) + h_{22} a_{\vec{m}}(E_{22}).
\end{equation} To implement $A_{\vec{m}}(u_0)$ with a quantum circuit, we think of $a_{\vec{m}}(H_0)$ as a Hamiltonian and simulate the corresponding unitary time evolution $e^{-i a_{\vec{m}}(H_0)t}$ for $t=-1$. The Hamiltonian $a_{\vec{m}}(H_0)$ has exponentially large dimension in the cases of computational interest. However, examination of equation \ref{decomp1} shows that $H_0$ is a linear combination of operators of the form $E_{p,p-1}$ and $E_{p-1,p}$. Thus, by the Gel'fand-Tsetlin rules of section \ref{Gelfand}, $a_{\vec{m}}(H_0)$ is sparse and its individual matrix elements are easy to compute. Under these circumstances, one can use the general method for simulating sparse Hamiltonians proposed in \cite{Aharonov_Tashma}. Define row-sparse Hamiltonians to be those in which each row has at most polynomially many nonzero entries. Further, define row-computable Hamiltonians to be those such that there exists a polynomial time algorithm which, given an index $i$, outputs a list of the nonzero matrix elements in row $i$ and their locations. Clearly, all row-computable Hamiltonians are row-sparse. As shown in \cite{Aharonov_Tashma}, the unitary $e^{-iHt}$ induced by any row-computable Hamiltonian can be simulated in polynomial time provided that the spectral norm $\|H\|$ and the time $t$ are at most polynomially large. We have already noted that $a_{\vec{m}}(H_0)$ is row-computable. Furthermore, because we are considering only polynomially large highest weights, the entries of the Gel'fand patterns, and hence the matrix elements of $a_{\vec{m}}(H_0)$, are only polynomially large. Thus, by Gershgorin's circle theorem, $\|a_{\vec{m}}(H_0)\|$ is at most $\mathrm{poly}(n)$. Having shown that a quantum circuit of $\mathrm{poly}(n)$ gates can implement the Gel'fand-Tsetlin representation of an $n \times n$ unitary of the form $u_0$, the remaining task is to extend this to arbitrary $n \times n$ unitaries.
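The norm bound invoked here is the simplest form of Gershgorin's theorem: for Hermitian $H$, $\|H\| \leq \max_i \sum_j |H_{ij}|$, so a row-sparse matrix with polynomially many, polynomially large entries per row has polynomially bounded norm. A small numerical illustration (Python; the spectral norm of a symmetric matrix is estimated by power iteration, which assumes a dominant eigenvalue of unique magnitude):

```python
import math

def gershgorin_bound(h):
    """max_i sum_j |H_ij|, an upper bound on the spectral norm of Hermitian H."""
    return max(sum(abs(x) for x in row) for row in h)

def spectral_norm(h, iters=200):
    """Largest |eigenvalue| of a real symmetric matrix, via power iteration."""
    n = len(h)
    v = [1.0 / math.sqrt(n)] * n
    norm = 0.0
    for _ in range(iters):
        w = [sum(h[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return norm

h = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]  # symmetric; eigenvalues are 1, 2, 4

print(gershgorin_bound(h))  # 5.0
print(spectral_norm(h))     # close to 4.0, below the Gershgorin bound
```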
Examination of the preceding construction shows that it works just the same for any unitary of the form \[ u_p = \mathds{1}_p \oplus u \oplus \mathds{1}_{n-p-2}, \] where $\mathds{1}_p$ denotes the $p \times p$ identity matrix and $u$ is a $2 \times 2$ unitary. Corresponding to $u_p$ is again an antihermitian matrix of the form \[ H_p = 0_p \oplus h \oplus 0_{n-p-2} \] where $0_p$ is the $p \times p$ matrix of all zeros and $h$ is a $2 \times 2$ antihermitian matrix such that $e^{h} = u$. The only issue to worry about is whether $\|a_{\vec{m}}(H_p)\|$ is at most $\mathrm{poly}(n)$. By symmetry, one expects that $\|a_{\vec{m}}(H_p)\|$ should be independent of $p$. However, this is not obvious from examination of equations \ref{r1} through \ref{r6}. Nevertheless, it is true, as shown in appendix \ref{normindep}. Thus, the norm is no different from the $p=0$ case, \emph{i.e.} $H_0$. By concatenating the quantum circuits implementing $A_{\vec{m}}(u_1), A_{\vec{m}}(u_2),\ldots, A_{\vec{m}}(u_L)$, one can implement $A_{\vec{m}}(u_1 u_2 \ldots u_L)$. We next show that any $n \times n$ unitary can be obtained as a product of $\mathrm{poly}(n)$ matrices, each of the form $u_p$, thus showing that the quantum algorithm is completely general and always runs in polynomial time. For any $2 \times 2$ matrix $M$, let $\mathcal{E}(M,i,j)$ be the $n \times n$ matrix in which $M$ acts on the $i\th$ and $j\th$ basis vectors. In other words, the $k,l$ matrix element of $\mathcal{E}(M,i,j)$ is \[ \mathcal{E}(M,i,j)_{kl} = \left\{ \begin{array}{ll} M_{11} & \textrm{if $k=i$ and $l=i$} \\ M_{12} & \textrm{if $k=i$ and $l=j$} \\ M_{21} & \textrm{if $k=j$ and $l=i$} \\ M_{22} & \textrm{if $k=j$ and $l=j$} \\ \delta_{kl} & \textrm{otherwise} \end{array} \right. . \] Thus \[ u_p = \mathcal{E} \left( \left[ \begin{array}{cccc} u_{11} & u_{12} \\ u_{21} & u_{22} \end{array} \right], p+1,p+2 \right).
\] Next note that, \[ \begin{array}{l} \mathcal{E} \left( \left[ \begin{array}{cccc} u_{11} & u_{12} \\ u_{21} & u_{22} \end{array} \right],p+1,p+3 \right) = \vspace{3pt} \\ \mathcal{E} \left( \left[ \begin{array}{cccc} 0 & 1 \\ 1 & 0 \end{array} \right],p+2,p+3 \right) \mathcal{E} \left( \left[ \begin{array}{cccc} u_{11} & u_{12} \\ u_{21} & u_{22} \end{array} \right], p+1,p+2 \right) \mathcal{E} \left( \left[ \begin{array}{cccc} 0 & 1 \\ 1 & 0 \end{array} \right],p+2,p+3 \right). \end{array} \] Thus the matrix \[ \mathcal{E} \left( \left[ \begin{array}{cccc} u_{11} & u_{12} \\ u_{21} & u_{22} \end{array} \right],p+1,p+3 \right) \] is obtained as a product of three matrices of the form $u_p$. By repeating this conjugation process, one can obtain \begin{equation} \label{twolevel} \mathcal{E} \left( \left[ \begin{array}{cccc} u_{11} & u_{12} \\ u_{21} & u_{22} \end{array} \right],i,j \right) \end{equation} for arbitrary $i,j$ as a product of one matrix of the form \[ \mathcal{E} \left( \left[ \begin{array}{cccc} u_{11} & u_{12} \\ u_{21} & u_{22} \end{array} \right],p+1,p+2 \right) \] for some $p$ and at most $O(n)$ matrices of the form \[ \mathcal{E} \left( \left[ \begin{array}{cccc} 0 & 1 \\ 1 & 0 \end{array} \right],q+1,q+2 \right) \] with various $q$. A matrix of the form shown in equation \ref{twolevel} is called a two-level unitary. As shown in section 4.5.1 of \cite{Nielsen_Chuang}, any $n \times n$ unitary is obtainable as a product of $\mathrm{poly}(n)$ two-level unitaries. Thus we obtain $A_{\vec{m}}(U)$ for any $n \times n$ unitary $U$ using $\mathrm{poly}(n)$ quantum gates. One can then obtain any matrix element of $A_{\vec{m}}(U)$ to precision $\pm \epsilon$ by repeating the Hadamard test $O(1/\epsilon^2)$ times. \subsection{Special Orthogonal Group} The special orthogonal group $SO(n)$ consists of all $n \times n$ real orthogonal matrices with determinant equal to one.
The irreducible representations of $SO(n)$ are closely related to those of $U(n)$ and can also be expressed unitarily using a Gel'fand-Tsetlin basis. As discussed in chapter 18, volume 3 of \cite{Vilenkin}, the nature of the representations of $SO(n)$ depends on whether $n$ is even or odd. Following \cite{Vilenkin} and \cite{Gelfand_works}, we therefore introduce an integer $k$ and consider $SO(2k+1)$ and $SO(2k)$ separately. The irreducible representations of $SO(2k+1)$ are in bijective correspondence with the set of allowed weight vectors $\vec{m}$ consisting of $k$ entries, each of which is an integer or half-integer. Furthermore, the entries must satisfy \[ m_{1,n} \geq m_{2,n} \geq \ldots \geq m_{k,n} \geq 0. \] The irreducible representations of $SO(2k)$ correspond to the weight vectors $\vec{m}$ with $k$ entries, each of which must be an integer or half-integer, and which must satisfy \[ m_{1,n} \geq m_{2,n} \geq \ldots \geq m_{k-1,n} \geq |m_{k,n}|. \] As in the case of $U(n)$, the set of allowed Gel'fand patterns is determined by rules for how a row can compare to the one above it. For $SO(n)$ these rules are slightly more complicated, and the rule for the $j\th$ row depends on whether $j$ is odd or even. Specifically, the even rule for $j=2k$ is \[ m_{1,2k+1} \geq m_{1,2k} \geq m_{2,2k+1} \geq m_{2,2k} \geq \ldots \geq m_{k,2k+1} \geq m_{k,2k} \geq -m_{k,2k+1}, \] and the odd rule for $j=2k-1$ is \[ m_{1,2k} \geq m_{1,2k-1} \geq m_{2,2k} \geq m_{2,2k-1} \geq \ldots \geq m_{k-1,2k} \geq m_{k-1,2k-1} \geq |m_{k,2k}|. \] The Lie algebra $so(n)$ corresponding to the Lie group $SO(n)$ is the space of all $n \times n$ real antisymmetric matrices (which are automatically traceless). For any $G \in SO(n)$ there exists a $g \in so(n)$ such that $e^{g} = G$. The algebra $so(n)$ is spanned by operators of the form \[ I_{k,i} = E_{i,k}-E_{k,i} \quad 1 \leq i < k \leq n.
\] We can fully specify a representation of $so(n)$ by specifying the representations of the operators of the form $I_{q+1,q}$ because these generate $so(n)$. That is, any element of $so(n)$ can be obtained as a linear combination of commutators of such operators. The Gel'fand-Tsetlin representation $b_{\vec{m}}$ of these operators depends on whether $q$ is even or odd, and is given by the following formulas. \begin{eqnarray*} A_{2p}^j(M) & = & \frac{1}{2} \left| \frac{\prod_{r=1}^{p-1} \left[(l_{r,2p-1}-\frac{1}{2})^2-(l_{j,2p}+\frac{1}{2})^2\right] \prod_{r=1}^p \left[(l_{r,2p+1}-\frac{1}{2})^2 - (l_{j,2p}+\frac{1}{2})^2 \right]} {\prod_{r \neq j} (l_{r,2p}^2 - l_{j,2p}^2) (l_{r,2p}^2-(l_{j,2p}+1)^2)} \right|^{1/2} \\ B_{2p+1}^j(M) & = & \left| \frac{\prod_{r=1}^p (l_{r,2p}^2-l_{j,2p+1}^2) \prod_{r=1}^{p+1} (l_{r,2p+2}^2-l_{j,2p+1}^2)} {l_{j,2p+1}^2 (4l_{j,2p+1}^2-1) \prod_{r \neq j} (l_{r,2p+1}^2-l_{j,2p+1}^2) (l_{j,2p+1}^2 - (l_{r,2p+1}-1)^2)} \right|^{1/2} \\ C_{2p}(M) & = & \frac{\prod_{r=1}^p l_{r,2p} \prod_{r=1}^{p+1} l_{r,2p+2}}{\prod_{r=1}^p l_{r,2p+1} (l_{r,2p+1}-1)} \\ b_{\vec{m}}(I_{2p+1,2p}) M & = & \sum_{j=1}^p A_{2p}^j(M) M_{2p}^{+j} - \sum_{j=1}^p A_{2p}^j(M_{2p}^{-j}) M_{2p}^{-j} \\ b_{\vec{m}}(I_{2p+2,2p+1}) M & = & \sum_{j=1}^p B_{2p+1}^j(M) M_{2p+1}^{+j} - \sum_{j=1}^p B_{2p+1}^j(M_{2p+1}^{-j}) M_{2p+1}^{-j} + i C_{2p}(M) M \end{eqnarray*} By applying these rules to the set of allowed Gel'fand patterns described above one obtains the irreducible representations of the algebra $so(n)$. By exponentiating these, one then obtains the irreducible representations of the group $SO(n)$. Thus the quantum algorithm for approximating the matrix elements of the irreducible representations of $SO(n)$ is analogous to that for $U(n)$. \subsection{Special Unitary Group} \label{sun} The irreducible representations of $SU(n)$ can be easily constructed from the irreducible representations of $U(n)$, using the following facts taken from chapter 10 of \cite{BR}.
The representations of $U(n)$ can be partitioned into a set of equivalence classes of projectively equivalent representations. Two representations of $U(n)$ with weights $\vec{l} =(l_1,l_2,\ldots,l_n)$ and $\vec{m}=(m_1,m_2,\ldots,m_n)$ are projectively equivalent if and only if there exists some integer $s$ such that $m_i = l_i + s$ for all $1 \leq i \leq n$. Any irreducible representation of $U(n)$ remains irreducible when restricted to $SU(n)$. Furthermore, by choosing one representative from each class of projectively equivalent representations of $U(n)$ and restricting to $SU(n)$ one obtains a complete set of inequivalent irreducible representations of $SU(n)$. The Lie algebra $su(n)$ corresponding to the Lie group $SU(n)$ is easily characterized; it is the space of all traceless $n \times n$ antihermitian matrices. Thus the matrix elements of the irreducible representations of $SU(n)$ are obtained by essentially the same quantum algorithm given for $U(n)$ in section \ref{ualg}. \subsection{Characters of Lie Groups} As always, an algorithm for approximating matrix elements immediately gives us an algorithm for approximating the normalized characters. However, the characters of $U(n)$, $SU(n)$, and $SO(n)$ are classically computable in $\mathrm{poly}(n)$ time. As discussed in \cite{Fulton_Harris}, the characters of any compact Lie group are given by the Weyl character formula. In general this formula may involve sums of exponentially many terms. However, in the special cases of $U(n)$, $SU(n)$, and $SO(n)$ the formula reduces to simpler forms\cite{Fulton_Harris}, given below. Because characters depend only on conjugacy class, the character $\chi_{\vec{m}}(u)$ depends only on the eigenvalues of $u$. For $u \in U(n)$ let $\lambda_1,\ldots,\lambda_n$ denote the eigenvalues. Let $\vec{m} = (m_1,m_2,\ldots,m_n) \in \mathbb{Z}^n$ be the weight of a representation of $U(n)$. 
Let \begin{equation} \label{l} l_i = m_i + n - i \end{equation} for each $i \in \{1,2,\ldots,n\}$. The character of the representation of weight $\vec{m}$ is \[ \chi^{U(n)}_{\vec{m}}(u) = \frac{\det A}{\det B} \] where $A$ and $B$ are the following $n \times n$ matrices \begin{eqnarray*} A_{ij} & = & \lambda_i^{l_j} \\ B_{ij} & = & \lambda_i^{n-j}. \end{eqnarray*} This formula breaks down if $u$ has a degenerate spectrum. However, the value of the character for degenerate $u$ can be obtained by taking the limit as some eigenvalues converge to the same value. As shown in \cite{Weyl7}, one can obtain the dimension $d_{\vec{m}}$ of the representation corresponding to a given weight $\vec{m}$ by calculating $\lim_{u \to \mathds{1}} \chi_{\vec{m}}(u)$. Specifically, by choosing $\lambda_j = e^{ij \epsilon}$ for each $1 \leq j \leq n$ and taking the limit as $\epsilon \to 0$, one obtains \[ d_{\vec{m}} = \frac{\prod_{i<j} (l_j - l_i)} {\prod_{i<j} (j-i)}, \] where $l_i$ is as defined in equation \ref{l}. As discussed in section \ref{sun}, the irreducible representations of $SU(n)$ are restrictions of irreducible representations of $U(n)$; therefore, the characters of $SU(n)$ are given by the same formula as the characters of $U(n)$. $SO(n)$ consists of real matrices. The characteristic polynomials of these matrices have real coefficients, and thus their roots come in complex conjugate pairs. Thus, the eigenvalues of an element $g \in SO(2k+1)$ take the form \[ \lambda_1, \lambda_2, \ldots, \lambda_k, 1, \lambda_1^*, \lambda_2^*, \ldots, \lambda_k^*, \] and for $g \in SO(2k)$, the eigenvalues take the form \[ \lambda_1, \lambda_2, \ldots, \lambda_k, \lambda_1^*, \lambda_2^*, \ldots, \lambda_k^*.
\] As discussed in \cite{Fulton_Harris}, the characters of the special orthogonal group are given by \[ \chi_{\vec{m}}^{SO(2k+1)}(g) = \frac{\det C}{\det D} \] and \[ \chi_{\vec{m}}^{SO(2k)}(g) = \frac{\det E + \det F} {\det G} \] where $C$ and $D$ are the following $k \times k$ matrices \begin{eqnarray*} C_{ij} & = & \lambda_j^{m_i+n-i+1/2} - \lambda_j^{-(m_i+n-i+1/2)} \\ D_{ij} & = & \lambda_j^{n-i+1/2} - \lambda_j^{-(n-i+1/2)} \end{eqnarray*} and $E,F,G$ are the following $(k-1) \times (k-1)$ matrices \begin{eqnarray*} E_{ij} & = & \lambda_j^{l_i} + \lambda_j^{-l_i}\\ F_{ij} & = & \lambda_j^{l_i} - \lambda_j^{-l_i}\\ G_{ij} & = & \lambda_j^{n-i} + \lambda_j^{-(n-i)}, \end{eqnarray*} where $l_i$ is as defined in equation \ref{l}. As with $U(n)$, the character of any element with a degenerate spectrum can be obtained by taking an appropriate limit. \subsection{Open Problems Regarding Lie groups} The quantum circuits presented in the preceding sections efficiently implement the irreducible representations of $U(n)$, $SU(n)$, and $SO(n)$ that have polynomial highest weight and polynomial $n$. It is an interesting open problem to implement irreducible representations with quantum circuits that scale polynomially in the number of digits used to specify the highest weight. Alternatively, one could try to implement a Schur transform to handle exponential highest weight, which is also an open problem. It is even conceivable that Schur-like transforms could be efficiently implemented for exponential $n$. That is, there could exist a quantum circuit of $\mathrm{polylog}(n)$ gates implementing a unitary transform $V$ such that for any $U \in U(n)$, $V U V^{-1}$ is a direct sum of irreducible representations of $U$. Of course, if $n$ is exponentially large, then we cannot have an explicit description of $U$; rather, the group element $U$ could itself be defined by a quantum circuit. A completely different open problem is presented by the symplectic group.
Having constructed quantum circuits for $SO(n)$ and $SU(n)$, the symplectic group is the only ``classical'' Lie group remaining to be analyzed. Thus it is natural to ask whether its irreducible representations can be efficiently implemented by quantum circuits. Two different groups can go by the name symplectic group depending on the reference. Connected non-compact simple Lie groups have no nontrivial finite-dimensional unitary representations (see \cite{BR}, theorem 8.1.2). This applies to one of the groups that goes by the name of symplectic. On the other hand, the irreducible representations of the compact symplectic group seem promising for implementation by quantum circuits. The main task seems to be finding a basis for these representations that is subgroup adapted and makes the representations unitary. A non-unitary subgroup-adapted basis is given in \cite{Molev}. \section{Alternating Group} \label{althyp} In section \ref{fourier}, we described a method to approximate matrix elements of the irreducible representations of the symmetric group using the symmetric group quantum Fourier transform. Here we take a more direct approach to this problem, which extends to the alternating group. To do this we must first explicitly describe the Young-Yamanouchi representation of the symmetric group. \subsection{Young-Yamanouchi Representation} \label{Young} For a given Young diagram $\lambda$, let $\mathcal{V}_\lambda$ be the vector space formally spanned by all standard Young tableaux compatible with $\lambda$. 
For example, if \[ \lambda = \begin{array}{l} \includegraphics[width=0.3in]{tetris.eps} \end{array} \] then $\mathcal{V}_\lambda$ is the 3-dimensional space consisting of all formal linear combinations of \[ \begin{array}{l} \includegraphics[width=1.4in]{threetabs.eps} \end{array} \] For any given Young diagram $\lambda$, the corresponding irreducible representation in the Young-Yamanouchi basis is a homomorphism $\rho_\lambda$ from $S_n$ to the group of orthogonal linear transformations on $\mathcal{V}_\lambda$. It is not easy to directly compute $\rho_\lambda(\pi)$ for an arbitrary permutation $\pi$. However, it is much easier to compute the representation of a transposition of neighbors. That is, we imagine the elements of $S_n$ as permuting a set of objects $1,2,\ldots,n$, arranged on a line. A neighbor transposition $\sigma_i$ swaps objects $i$ and $i+1$. It is well known that the set $\{ \sigma_1,\sigma_2,\ldots,\sigma_{n-1} \}$ generates $S_n$. The matrix elements for the Young-Yamanouchi representation of transpositions of neighbors can be obtained using a single simple rule: Let $\Lambda$ be any standard Young tableau compatible with Young diagram $\lambda$; then \begin{equation} \label{rule} \rho_\lambda(\sigma_i) \Lambda = \frac{1}{\tau_i^\Lambda} \Lambda + \sqrt{1-\frac{1}{(\tau_i^\Lambda)^2}} \Lambda', \end{equation} where $\Lambda'$ is the Young tableau obtained from $\Lambda$ by swapping boxes $i$ and $i+1$, and $\tau_i^\Lambda$ is the axial distance from box $i+1$ to box $i$. That is, we are allowed to hop vertically or horizontally to nearest neighbors, and $\tau$ is the number of hops needed to get from box $i+1$ to box $i$, where going down or left counts as $+1$ hop and going up or right counts as $-1$ hop. To illustrate the use of equation \ref{rule}, some examples are given in figure \ref{examples}.
\begin{figure} \begin{center} \includegraphics[width=0.85\textwidth]{examples_nocap.eps} \caption{\label{examples} The above matrices are irreducible representations in the Young-Yamanouchi basis with Young diagram \protect \includegraphics[width=0.18in]{tetris2.eps}. Here $\sigma_i$ is the permutation in $S_4$ that swaps $i$ with $i+1$.} \end{center} \end{figure} In certain cases, starting with a standard Young tableau and swapping boxes $i$ and $i+1$ does not yield a standard Young tableau, as illustrated below. \[ \includegraphics[width=2.7in]{validity.eps} \] Some thought shows that all such cases are of one of the two types shown above. In both of these types, the axial distance is $\pm 1$. By equation \ref{rule}, the coefficient on the invalid Young tableau is $\sqrt{1-\frac{1}{(\pm 1)^2}} = 0$. Thus the representation lies strictly within the space of standard Young tableaux. \subsection{Direct Quantum Algorithm for $S_n$} \label{algorithm} We can directly implement the irreducible representations of $S_n$ by first decomposing the given permutation into a product of transpositions of neighbors. The classical bubblesort algorithm achieves this efficiently. For any permutation in $S_n$, it yields a decomposition consisting of at most $O(n^2)$ transpositions. As seen in the previous section, the Young-Yamanouchi representation of any transposition is a direct sum of $2 \times 2$ and $1 \times 1$ blocks, and the matrix elements of these blocks are easy to compute. As shown in \cite{Aharonov_Tashma}, any unitary with these properties may be implemented by a quantum circuit with polynomially many gates. By concatenating at most $O(n^2)$ such quantum circuits we obtain the representation of any permutation in $S_n$. The Hadamard test allows a measurement to polynomial precision of the matrix elements of this representation.
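For small $n$, the rule of equation \ref{rule} is also easy to check classically. The following sketch (plain Python with NumPy; a brute-force enumeration of standard tableaux intended purely as an illustration, not as part of the quantum algorithm) builds the Young-Yamanouchi matrices of the neighbor transpositions for an arbitrary partition shape:

```python
import math
from itertools import permutations

import numpy as np

def standard_tableaux(shape):
    """All standard Young tableaux of a partition shape (non-increasing row
    lengths), each returned as a dict mapping entry k -> (row, col)."""
    cells = [(r, c) for r, length in enumerate(shape) for c in range(length)]
    n = len(cells)
    tabs = []
    for perm in permutations(range(1, n + 1)):
        entry = dict(zip(cells, perm))  # cell -> entry written in it
        # standard: entries increase along each row and down each column
        if all((c == 0 or entry[(r, c - 1)] < entry[(r, c)]) and
               (r == 0 or entry[(r - 1, c)] < entry[(r, c)])
               for (r, c) in cells):
            tabs.append({v: k for k, v in entry.items()})  # entry -> cell
    return tabs

def rep_sigma(shape, i):
    """Matrix of rho(sigma_i) (swap of i and i+1) in the Young-Yamanouchi
    basis, column k giving the image of the k-th standard tableau."""
    tabs = standard_tableaux(shape)
    index = {tuple(sorted(t.items())): k for k, t in enumerate(tabs)}
    M = np.zeros((len(tabs), len(tabs)))
    for col, t in enumerate(tabs):
        (r0, c0), (r1, c1) = t[i], t[i + 1]
        tau = (r0 - r1) + (c1 - c0)     # axial distance from box i+1 to box i
        M[col, col] = 1.0 / tau
        swapped = dict(t)
        swapped[i], swapped[i + 1] = t[i + 1], t[i]
        key = tuple(sorted(swapped.items()))
        if key in index:                # only standard tableaux survive
            M[index[key], col] = math.sqrt(1.0 - 1.0 / tau ** 2)
    return M
```

For the shape $(2,1)$ this reproduces the familiar two-dimensional representation of $S_3$, and the output can be checked against the Coxeter relations $\rho(\sigma_i)^2 = I$ and $\rho(\sigma_i)\rho(\sigma_{i+1})\rho(\sigma_i) = \rho(\sigma_{i+1})\rho(\sigma_i)\rho(\sigma_{i+1})$.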
\subsection{Algorithm for Alternating Group} \label{alt} Any permutation $\pi$ corresponds to a permutation matrix with matrix element $i,j$ given by $\delta_{\pi(i),j}$. The determinant of any permutation matrix is $\pm 1$, and is known as the sign of the permutation. The permutations of sign $+1$ are called even, and the permutations of sign $-1$ are called odd. This is because a transposition has determinant $-1$, and therefore any product of an odd number of transpositions is odd and any product of an even number of transpositions is even. The even permutations in $S_n$ form a subgroup called the alternating group $A_n$, which has size $n!/2$. For $n \geq 5$, $A_n$ is a simple group (\emph{i.e.}, it contains no nontrivial proper normal subgroup), and it is the only normal subgroup of $S_n$ other than $\{ \mathds{1} \}$ and $S_n$. As one might guess, the irreducible representations of the alternating group are closely related to the irreducible representations of the symmetric group. Consequently, as shown in this section, the quantum algorithm of section \ref{algorithm} can be easily adapted to approximate any matrix element of any irreducible representation of $A_n$ to within $\pm \epsilon$ in $\mathrm{poly}(n,1/\epsilon)$ time. Explicit orthogonal matrix representations of the alternating group are worked out in \cite{Thrall} and recounted nicely in \cite{Headley}. Any representation $\rho$ of $S_n$ is automatically also a representation of $A_n$. However, an irreducible representation $\rho$ of $S_n$ may no longer be irreducible when restricted to $A_n$. Each irreducible representation of $S_n$ either remains irreducible when restricted to $A_n$ or decomposes into a direct sum of two irreducible representations of $A_n$. All of the irreducible representations of $A_n$ are obtained in this way.
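The bubblesort decomposition mentioned in section \ref{algorithm} also gives a direct way to compute the sign, and hence to test membership in $A_n$. A minimal sketch (plain Python; the number of recorded swaps equals the number of inversions of the permutation, which is at most $n(n-1)/2$):

```python
def neighbor_decomposition(perm):
    """Bubblesort a permutation (tuple of 0..n-1) to the identity, recording
    the index i of each neighbor swap sigma_i used. The number of swaps
    equals the number of inversions, hence is at most n(n-1)/2."""
    p = list(perm)
    swaps = []
    n = len(p)
    for _ in range(n):
        done = True
        for i in range(n - 1):
            if p[i] > p[i + 1]:
                p[i], p[i + 1] = p[i + 1], p[i]
                swaps.append(i)
                done = False
        if done:
            break
    return swaps

def sign(perm):
    """Sign of a permutation: +1 if it is a product of an even number of
    transpositions (i.e., it lies in the alternating group), else -1."""
    return 1 if len(neighbor_decomposition(perm)) % 2 == 0 else -1
```

The parity of the swap count is independent of the particular decomposition, so `sign` is well defined even though bubblesort produces only one of many possible decompositions.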
\begin{figure} \begin{center} \includegraphics[width=0.3\textwidth]{conjugate.eps} \caption{\label{conjugate} To obtain the conjugate $\hat{\lambda}$ of Young diagram $\lambda$, reflect $\lambda$ about its diagonal. In other words the number of boxes in the $i\th$ column of $\hat{\lambda}$ is equal to the number of boxes in the $i\th$ row of $\lambda$.} \end{center} \end{figure} The conjugate of Young diagram $\lambda$ is obtained by reflecting $\lambda$ about the main diagonal, as shown in figure \ref{conjugate}. If $\lambda$ is not self-conjugate then the representation $\rho_{\lambda}$ of $S_n$ remains irreducible when restricted to $A_n$. In this case we can simply use the algorithm of section \ref{algorithm}. If $\lambda$ is self-conjugate then the representation $\rho_\lambda$ of $S_n$ becomes reducible when restricted to $A_n$. It is a direct sum of two irreducible representations of $A_n$, called $\rho_{\lambda+}$ and $\rho_{\lambda-}$. The two corresponding invariant subspaces of the reducible representation are the $+1$ and $-1$ eigenspaces, respectively, of the ``associator'' operator $S$ defined as follows. \begin{figure} \begin{center} \includegraphics[width=0.12\textwidth]{typewriter.eps} \caption{\label{typewriter} For a given Young diagram, there is a unique Young tableau in ``typewriter'' order, in which the boxes are numbered from left to right across the top row then from left to right across the next row, and so on, as illustrated in the example above.} \end{center} \end{figure} Let $\lambda$ be a self-conjugate Young diagram of $n$ boxes. Let $\Lambda_0$ be the ``typewriter-order'' Young tableau obtained by numbering the boxes from left to right across the first row, then left to right across the second row, and so on, as illustrated in figure \ref{typewriter}. For any standard Young tableau $\Lambda$ of shape $\lambda$, let $w_\Lambda \in S_n$ be the permutation that brings the boxes into typewriter order. 
That is, $w_{\Lambda} \Lambda = \Lambda_0$. Let $\hat{\Lambda}$ be the conjugate of $\Lambda$, obtained by reflecting $\Lambda$ about the main diagonal. If $\Lambda$ is standard then so is $\hat{\Lambda}$. Let $d(\lambda)$ be the length of the main diagonal of $\lambda$. $S$ is the linear operator on $\mathcal{V}_\lambda$ defined by \begin{equation} \label{S} S \Lambda = i^{(n-d(\lambda))/2} \mathrm{sign}(w_\Lambda) \hat{\Lambda}. \end{equation} An orthonormal basis for each of the eigenspaces of $S$ can be easily constructed from the Young-Yamanouchi basis. When $(n-d(\lambda))/2$ is odd, every standard Young tableau $\Lambda$ of shape $\lambda$ has the property $\mathrm{sign}(w_\Lambda) = -\mathrm{sign}(w_{\hat{\Lambda}})$, and $S$ is a direct sum of $2 \times 2$ blocks of the form \[ \left[ \begin{array}{cc} 0 & -i \\ i & 0 \end{array} \right] \] interchanging $\Lambda$ and $\hat{\Lambda}$. In this case, the linear combinations $ \frac{1}{\sqrt{2}} (\Lambda + i \hat{\Lambda}) $ for each conjugate pair of standard Young tableaux form an orthonormal basis for the $+1$ eigenspace of $S$, and the linear combinations $ \frac{1}{\sqrt{2}}(\Lambda - i \hat{\Lambda}) $ form an orthonormal basis for the $-1$ eigenspace of $S$. Similarly, when $(n-d(\lambda))/2$ is even, $\mathrm{sign}(w_\Lambda) = \mathrm{sign}(w_{\hat{\Lambda}})$ for all standard Young tableaux $\Lambda$ of shape $\lambda$. Thus $S$ is a direct sum of $2 \times 2$ blocks of the form \[ \left[ \begin{array}{cc} 0 & -1 \\ -1 & 0 \end{array} \right] \] interchanging $\Lambda$ and $\hat{\Lambda}$. In this case the linear combinations $ \frac{1}{\sqrt{2}} (\Lambda - \hat{\Lambda}) $ form an orthonormal basis for the $+1$ eigenspace of $S$ and the linear combinations $ \frac{1}{\sqrt{2}} (\Lambda + \hat{\Lambda}) $ form an orthonormal basis for the $-1$ eigenspace of $S$. Suppose $\lambda$ is self-conjugate and $(n-d(\lambda))/2$ is even. 
Any matrix element of the irreducible representation $\rho_{\lambda+}$ of $A_n$ is given by \[ \frac{1}{2} (\Lambda + \hat{\Lambda}) \rho_\lambda(\pi) (\Gamma + \hat{\Gamma}), \] where $\Lambda, \Gamma$ is some pair of standard Young tableaux and $\pi$ is some element of $A_n$. This is a linear combination of only four Young-Yamanouchi matrix elements of $\rho_\lambda(\pi)$. One can use the algorithm of section \ref{algorithm} to calculate each of these and then simply add them up with the appropriate coefficients. The cases where $(n-d(\lambda))/2$ is odd and/or we want a matrix element of $\rho_{\lambda-}$ are analogous. \section{Acknowledgements} I thank Greg Kuperberg and anonymous referees for suggesting the approaches described in sections \ref{fourier} and \ref{Schurxform}. I thank Daniel Rockmore, Cris Moore, Andrew Childs, Aram Harrow, John Preskill, and Jeffrey Goldstone for useful discussions. I thank Isaac Chuang and Vincent Crespi for comments that helped to inspire this work, and anonymous referees for useful comments. Parts of this work were completed at the Center for Theoretical Physics at MIT, the Digital Materials Laboratory at RIKEN, and the Institute for Quantum Information at Caltech. I thank these institutions as well as the Army Research Office (ARO), the Disruptive Technology Office (DTO), the Department of Energy (DOE), and Franco Nori and Sahel Ashab at RIKEN.
\section{Introduction} \label{sec:introduction} Estimating Regions of Attraction (RoA) of a dynamical system is needed in robotics for understanding the conditions under which a controller can be safely applied to solve a task. It is also needed for composing controllers and forming hybrid solutions that work from a wider swath of the underlying state space. Estimating such RoAs, however, is challenging. Computing a Lyapunov function (LF) can provide an RoA, but obtaining an analytical expression for an LF is difficult for general non-linear systems. This motivated numerical solutions, which often still require access to the system's differential equation for computing an LF. Recent advances in data-driven control provide effective learned controllers \cite{Haarnoja2018SoftAO,gillen2020combining}, which do not have analytical expressions. Machine learning methods have also been proposed to learn RoAs \cite{chen2021learning_nonlinear,berkenkamp2016safe} without access to the control law's expression as well as for composing controllers \cite{perkins2002lyapunov,7330913}. These methods, however, tend to be sensitive to parameters, are computationally demanding in time and memory, and lack guarantees. Therefore, there is a need for identifying RoAs, especially of data-driven systems, in a robust, mathematically rigorous, and computationally efficient way. {\bf Contributions:} This work sidesteps the estimation of an LF. It builds on top of recent progress in combinatorial dynamics and order theory \cite{kalies:mischaikow:vandervorst:14,kalies:mischaikow:vandervorst:15,kalies:mischaikow:vandervorst:21} to propose a combinatorial analysis of the global dynamics of black-box robot controllers and describe attractors and their corresponding RoAs. The approach is based on a finite combinatorial representation of the state space and its nonlinear dynamics.
It only requires access to a discrete-time representation of the dynamics and can handle analytical, data-driven and hybrid controllers. For instance, consider the phase space of a pendulum as in Fig. \ref{fig:pendulum12} given an LQR controller for driving the system to the $(0,0)$ state. The proposed framework generates the combinatorial representation on the right, where nodes correspond to regions in a state space decomposition. This information is then summarized in more compact, annotated, acyclic directed graphs called Morse graphs, shown on the left. The nodes of Morse graphs can contain attractors of interest. The associated RoAs can also be inferred automatically from the combinatorial representation. The resulting information is finite and graphical in nature; thus, it can be easily queried and understood by a person. The accompanying evaluation shows that the proposed tools are relatively computationally efficient, provide a more global, explainable understanding of the dynamics, often achieve higher accuracy and provide stronger guarantees than alternatives. They also allow the composition of hybrid controllers with wider RoAs. {\bf Related Work:} Multiple numerical methods exist for estimating RoAs given direct access to the system's expression \cite{giesl2015review}. For instance, maximal Lyapunov functions (LFs) \cite{vannelli1985maximal} compute the RoA by increasing the estimate in each iteration. Construction of an ellipsoidal approximation reduces to a linear matrix inequalities (LMI) problem \cite{pesterev2017attraction,pesterev2019estimation}. This has been applied to wheeled robots \cite{rapoport2008estimation} and for NASA's generic transport model \cite{pandita2009reachability}. There are also convex formulations that rely on LMI relaxations to solve a convex linear program and approximate the RoA of systems with polynomial dynamics and semialgebraic inputs \cite{henrion2013convex}.
An LF can be restricted to be a sum-of-squares (SoS) polynomial so as to be constructed via semi-definite programming \cite{parrilo2000structured}. SoS approaches can be used to build randomized stabilized trees with LQR feedback \cite{tedrake2010lqr}, to precompute funnel libraries \cite{majumdar2017funnel}, and to acquire certificates of stability for rigid bodies with impacts and friction \cite{posa2013lyapunov}. State space samples satisfying a Lyapunov-type inequality can be used to construct neighborhoods where the candidate LF is certified \cite{bobiti2016sampling}. The proposed approach does not require access to the expression of the underlying control law and avoids computing an LF. Reachability analysis \cite{bansal2017hamilton} is also relevant as it computes the backward reachable tube of a dynamical system and can recover the maximal RoA without imposing shape restrictions. Applications include computing the RoA of dynamical walkers \cite{choi2022computation} and, combined with machine learning, maintaining the system's safety over a specified horizon \cite{gillulay2011guaranteed}. Barrier functions ensure safety given an unknown dynamical system. They can be learned with a Gaussian process (GP) and combined with reachability analysis to obtain safe control policies \cite{akametalu2014reachability}. Barrier certificates (BCs) can identify areas needing more exploration to expand the safe set~\cite{wang2018safe}. Machine learning can be used to compute both LFs and BCs. One approach is to alternate between a learner and a verifier to search within a set of LFs \cite{chen2021learning_hybrid}. An alternative approximates the dynamics map by a piecewise linear neural network and uses a counterexample-guided method as a verifier to synthesize the LF \cite{chen2021learning_nonlinear}. Construction of LFs can be performed from stable data-driven Koopman operators \cite{mamakoukas2020learning}.
LFs and BCs can be synthesized by combining the training of a neural network with an SMT solver acting as a verifier \cite{abate2021fossil}. LFs for piecewise linear dynamical systems can be synthesized as the outputs of neural networks with leaky ReLU activations \cite{dai2020counter}. For general dynamical systems and an initial safe set, a neural network LF is trained to adapt to the RoA's shape \cite{richards2018lyapunov}. Gaussian processes can be used to obtain a Lyapunov-like function \cite{lederer2019local}. Finally, LFs can be synthesized while learning a controller to prove the controller's stability and generate counter-examples to improve the controller \cite{dai2021lyapunov}. This paper compares performance against a state-of-the-art ML approach that computes an LF \cite{richards2018lyapunov}. This work builds on top of topological tools. Topology has been used for various problems in robotics before, such as deformable manipulation \cite{bhattacharya2015topological,antonova2021sequential}, robot perception \cite{ge2021enhancing}, multi-robot problems \cite{varava2017herding}, determining homotopy-inequivalent trajectories in motion planning \cite{pokorny2016high} and to extract higher-order dynamics for motion prediction \cite{carvalho2019long}. To the best of the authors' knowledge, this is the first application of recent advancements in topology to summarize the global dynamics of robot controllers. \section{Problem Setup} \label{sec:setup} This work aims to systematically analyze the global dynamics of robot controllers based on combinatorial dynamics and order theory \cite{kalies:mischaikow:vandervorst:14,kalies:mischaikow:vandervorst:15,kalies:mischaikow:vandervorst:21}. The prior theory is very general and applies to any continuous dynamical system defined over a locally compact metric space. The material is adapted and applied to the restricted setting of robot control problems.
In particular, consider a nonlinear continuous-time system: \vspace{-.15in} \begin{equation}\label{eq:dyn} \dot{x} = f(x,u), \vspace{-.05in} \end{equation} where $x(t)\in X$ is the state at time $t$ in a domain $X\subseteq \mathbb{R}^n$, $u: X \rightarrow \mathbb{U} \subseteq \mathbb{R}^m$ is a Lipschitz continuous control as defined by a control policy $u(x)$, and $f:X\times \mathbb{U} \rightarrow \mathbb{R}^n$ is a Lipschitz continuous function, where $\mathbb{U}$ is an open set in $\mathbb{R}^m$. The dynamical system consists of the model $f(\cdot)$, which can be accessed but is not necessarily known analytically, and a control policy $u=u(x)$ that can be either analytical or learned from data. For a given time $\tau >0$, let $\phi_\tau: X \rightarrow X$ denote the function derived from solving Eq. \eqref{eq:dyn} forward in time for duration $\tau$ from everywhere. Given that $f$ and $u$ are Lipschitz continuous, $\phi_\tau$ is also Lipschitz continuous. Denote the global Lipschitz constant of $\phi_\tau$ by $L_\tau$. Observe that an RoA for Eq. \eqref{eq:dyn} is an RoA under $\phi_\tau$. Therefore, and w.l.o.g., the rest of this work focuses on the dynamics of $\phi_\tau$, which is not assumed, however, to be computable and available. The objective is to identify a combinatorial approach, which can capture meaningful aspects of the dynamics of interest according to $\phi_\tau\colon X\to X$, which are continuous in nature. In this context, a subset of the state space $N\subset X$ is an \emph{attracting block} for $\phi_\tau$, if $\phi_\tau(N)\subset \mathrm{int}(N)$, where $\mathrm{int}$ denotes topological interior. This means that the system of Eq. \eqref{eq:dyn} will not escape the subset $N$ once it has entered it. Denote the set of attracting blocks of $\phi_\tau$ by ${\mathsf{ ABlock}}(\phi_\tau)$.
Given $N\in {\mathsf{ ABlock}}(\phi_\tau)$, its \emph{omega limit set} is an invariant set for $\phi_\tau$ defined as: \vspace{-.1in} \[ \omega(N):= \bigcap_{n \in \mathbb{Z}^+} \mathrm{cl}\left(\bigcup_{k=n}^\infty \phi_\tau^k(N)\right) \vspace{-.125in} \] where $\phi_\tau^k$ is the composition $\phi_\tau\circ \cdots \circ\phi_\tau$ ($k$ times) and $\mathrm{cl}$ is topological closure. \emph{The attracting block $N$ is an RoA for $\omega(N)$}. In general, ${\mathsf{ ABlock}}(\phi_\tau)$ is huge, containing uncountably many elements, and is too large to work with. Thus, the {\bf problem} is to systematically identify a minimal finite subset of ${\mathsf{ ABlock}}(\phi_\tau)$ that both represents as tightly as possible the attractors and captures the maximal RoAs of these attractors. {\bf Running Example:} For exposition purposes, the following discussion will use the second-order pendulum as an example to explain the corresponding definitions and the proposed method (Fig~\ref{fig:pendulum12}). The pendulum is modeled by the differential equation $m\ell^2 \Ddot{\theta} = mG \ell \sin{\theta} - \beta\dot{\theta} + u$, with state $x := (\theta, \dot\theta)$, where $\theta$ is the angle from the upright equilibrium $\theta_o = 0$, $u$ is the input torque, $m$ is the pendulum mass, $G$ is the gravitational acceleration, $\ell$ is the pole length, and $\beta$ is the friction coefficient. The control $u$ in the running example is computed by the LQR approach described in Section \ref{sec:results}. In Fig.~\ref{fig:pendulum12}, we use the time-$1$ map $\phi_1$ of the flow of the pendulum under the LQR controller. The proposed method is not limited to this or similar low-dimensional systems/controllers. \section{Proposed Framework and Method} \label{sec:FM} {\bf Overview:} The method first approximates $\phi_\tau$ by decomposing the state space $X$ into regions $\xi$.
For multiple initial states within each $\xi$, the system is propagated forward for time $\tau$ to identify regions reachable from $\xi$. Given the reachability information and the Lipschitz continuity of $\phi_\tau$, a directed multi-valued graph representation ${\mathcal F}$ stores each region $\xi$ as a vertex and edges point from $\xi$ to all regions in an outer (conservative) approximation of its true reachable set. The method then computes the strongly connected components (SCC) of ${\mathcal F}$. An SCC is a maximal set of vertices of ${\mathcal F}$ such that every pair of vertices in the SCC are reachable from each other. The non-trivial SCCs of ${\mathcal F}$, i.e., those with at least one edge, are called \emph{recurrent sets}, and capture the recurrent dynamics of $\phi_\tau$. Every region $\xi$ not in a recurrent set exhibits non-recurrent behavior. The same algorithm that computes SCCs also provides a topological sort of the vertices in ${\mathcal F}$, which makes it possible to define reachability relationships between recurrent sets and non-recurrent regions. This gives rise to a condensation graph $\mathsf{CG}({\mathcal F})$, where each SCC is condensed to a single vertex and edges reflect reachability according to the topological sort. Typically, $\mathsf{CG}({\mathcal F})$ is roughly the same size as ${\mathcal F}$ and is cumbersome to maintain. The implementation avoids explicitly storing either graph. The method succinctly captures the recurrent and non-recurrent dynamics in the Morse graph ${\mathsf{ MG}}({\mathcal F})$, whose vertices are the recurrent sets of ${\mathcal F}$ and whose edges reflect reachability according to the topological sort.
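The only access to the dynamics that this pipeline requires is the ability to evaluate $\phi_\tau$ numerically at individual states. For the pendulum running example, such an evaluator can be sketched as follows (Python with classical RK4 integration; the mass, length, friction, and hand-picked linear feedback gains below are illustrative assumptions standing in for the LQR controller, not the values used in the experiments):

```python
import math

# Illustrative pendulum parameters (assumed, not the experimental values)
M, L, G, BETA = 1.0, 1.0, 9.8, 0.1
K1, K2 = 20.0, 5.0   # hand-picked stabilizing gains (stand-in for LQR)

def f(x):
    """Closed-loop vector field: m*l^2*thdd = m*G*l*sin(th) - beta*thd + u."""
    th, thd = x
    u = -K1 * th - K2 * thd          # simple linear state feedback
    thdd = (M * G * L * math.sin(th) - BETA * thd + u) / (M * L ** 2)
    return (thd, thdd)

def phi(x, tau, dt=0.01):
    """Time-tau map phi_tau, approximated by RK4 steps of size ~dt."""
    steps = max(1, round(tau / dt))
    h = tau / steps
    for _ in range(steps):
        k1 = f(x)
        k2 = f((x[0] + 0.5 * h * k1[0], x[1] + 0.5 * h * k1[1]))
        k3 = f((x[0] + 0.5 * h * k2[0], x[1] + 0.5 * h * k2[1]))
        k4 = f((x[0] + h * k3[0], x[1] + h * k3[1]))
        x = (x[0] + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
             x[1] + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)
    return x
```

With these (assumed) gains the closed-loop system is locally asymptotically stable at the upright equilibrium, so repeated application of `phi` from nearby states contracts toward $(0,0)$.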
Overall, the proposed method can be divided into the four steps described below: \vspace{-.1in} \begin{itemize} \item {\bf Step 1.} State space decomposition and generation of input to represent $\phi_\tau$; \item {\bf Step 2.} Construction of the combinatorial representation ${\mathcal F}$ of the dynamics given an outer approximation of $\phi_\tau$; \item {\bf Step 3.} Computation of Condensation Graph $\mathsf{CG}({\mathcal F})$ and Morse Graph ${\mathsf{ MG}}({\mathcal F})$ via identification of recurrent sets/SCCs of ${\mathcal F}$ and topological sort; \item {\bf Step 4.} Derivation of RoAs for the recurrent sets from $\mathsf{CG}({\mathcal F})$; \end{itemize} \vspace{-.05in} \noindent {\bf Step 1 - State Space Decomposition and Generation of Input Data:} This paper considers the control system of Eq. \eqref{eq:dyn} restricted to a state space given by an orthotope $X = \prod_{i=1}^n [a_i, b_i]$, allowing for the possibility of periodic boundary conditions. This accommodates torus-like spaces, such as for the running example of the 2$^{nd}$-order pendulum. For simplicity, the accompanying implementation uses a uniform discretization of the state space based on $2^{k_i}$ subdivisions in the $i$-th component, resulting in a decomposition of the state space into $\prod_{i=1}^n 2^{k_i}$ cubes of dimension $n$. The term ${\mathcal X}$ denotes the collection of these cubes. The method then generates the set of values of $\phi_\tau$ at the corner points of cubes in ${\mathcal X}$. More precisely, let $V({\mathcal X})$ denote the set of all corner points of cubes in ${\mathcal X}$. The method computes the set of ordered pairs $\Phi_\tau({\mathcal X}):=\{(v,\phi_\tau(v)) \mid v \in V({\mathcal X})\}$, by forward propagating the dynamics for time $\tau$ from all $V({\mathcal X})$. In this way, this work does not assume exact, analytical knowledge of $\phi_\tau$, but rather exploits its existence.
It only requires the ability to generate the set $\Phi_\tau({\mathcal X})$. \noindent {\bf Step 2 - Combinatorial Representation ${\mathcal F}$ via outer approximation:} The dynamics of the continuous $\phi_\tau$ are approximated by a \emph{combinatorial multivalued map} ${\mathcal F} \colon {\mathcal X}\rightrightarrows {\mathcal X}$, where vertices are $n$-cubes $\xi \in {\mathcal X}$. The map ${\mathcal F}$ contains directed edges $\xi \to \xi'$ for all $\xi' \in {\mathcal F}(\xi)$. The set of cubes identified by ${\mathcal F}(\xi)$ is meant to capture the possible states of $\phi_\tau(\xi)$. To obtain mathematically rigorous results about the dynamics of $\phi_\tau$, it is sufficient for ${\mathcal F}$ to be an \emph{outer approximation} of $\phi_\tau$, i.e., for \vspace{-.125in} \begin{equation} \label{eq:OuterApproximation} \phi_\tau(\xi) \subset \mathrm{int}\left({\mathcal F}(\xi)\right)\quad \text{for all $\xi\in{\mathcal X}$} \vspace{-.025in} \end{equation} where $\mathrm{int}$ denotes topological interior. The left side of Eq. \eqref{eq:OuterApproximation} indicates the set of states that can be achieved in time $\tau$ according to \eqref{eq:dyn}, which is unknown exactly. On the right side, ${\mathcal F}(\xi)$ is a list of vertices that can be identified with a set of $n$-cubes in $X$. The inclusion relation and $\mathrm{int}$ express the constraint that the collection of cubes ${\mathcal F}(\xi)$ must be chosen large enough to enclose $\phi_\tau(\xi)$ with at least an arbitrarily small overestimation. The minimal outer approximation of $\phi_\tau$ \cite{kalies:mischaikow:vandervorst:05} is: \vspace{-.125in} \[ {\mathcal F}_{min}(\xi) := \{ \xi' \in {\mathcal X} \mid \xi'\cap \phi_\tau(\xi) \neq \emptyset \}, \vspace{-.025in} \] that is, ${\mathcal F}_{min}(\xi)$ indicates the minimal set of cubes that contain the set of all states that can be reached in time $\tau$ starting in $\xi$. See Fig. \ref{fig:outer_approx}(left) for an illustration.
Even with complete knowledge of $\phi_\tau$, computation of ${\mathcal F}_{min}$ is typically prohibitively expensive. Nevertheless, from a mathematical perspective it suffices to work with any ${\mathcal F}\colon {\mathcal X}\rightrightarrows {\mathcal X}$ that satisfies ${\mathcal F}_{min}(\xi) \subset {\mathcal F}(\xi)$ for all $\xi\in{\mathcal X}$. In general, the objective is to achieve a tight outer approximation: the larger the images of ${\mathcal F}$, the less information is retained about the dynamics of $\phi_\tau$. \begin{wrapfigure}{r}{0.6\textwidth} \vspace{-.3in} \includegraphics[width=0.295\columnwidth]{figs/Multivalued_map6.pdf} \includegraphics[width=0.295\columnwidth]{figs/Multivalued_map4.pdf} \vspace{-0.25in} \caption{\small The set $\phi_{\tau}(\xi)$ denotes the reachable states from all states in the cell $\xi$ after time $\tau$. Multivalued maps: (left) minimal/ideal outer approximation ${\mathcal F}_{min}(\xi)$ and (right) outer approximation obtained by using a Lipschitz constant.} \vspace{-.3in} \label{fig:outer_approx} \end{wrapfigure} The definition of an outer approximation presumes that ${\mathcal F}(\xi)\neq\emptyset$ for all $\xi\in{\mathcal X}$; in practice this need not hold, as trajectories may leave $X$. Determining ${\mathcal F}$ represents the major computational bottleneck, as it involves numerical simulations of Eq. \eqref{eq:dyn} or real-world experiments with the robotic system. The flexibility in the definition of an outer approximation provides flexibility in its construction. This work computes ${\mathcal F}$ as follows: Recall that $\phi_\tau$ is Lipschitz with constant $L = L_{\tau}$ ($L$ depends on $\tau$). Given $x\in X$, let $\overline{B(x,\delta)} = \{x'\in X\mid \|x-x'\| \leq \delta \}$ denote the $\delta$-closed ball at state $x$. Define the \emph{diameter of $\xi \in {\mathcal X}$} by $d(\xi) := \max_{x,x'\in \xi}\|x-x'\|$ and the \emph{diameter of ${\mathcal X}$} by $d := \max_{\xi \in {\mathcal X}} d (\xi)$.
Note that for a uniform grid, $d = d(\xi)$, independently of the choice of $\xi$. Let $V(\xi)$ be the set of corner points of the cube $\xi$ and: \vspace{-.075in} \[ {\mathcal F}(\xi) := \left\{ \xi' \mid \xi' \cap \overline{B\left(\phi_\tau(v),Ld/2 \right)} \neq \emptyset\ \text{for some $v\in V(\xi)$} \right\}. \vspace{-.075in} \] Fig. \ref{fig:outer_approx}(right) illustrates this construction. The definition of ${\mathcal F}$ above is guaranteed to provide an outer approximation if $L$ is (an upper bound for) the Lipschitz constant $L_\tau$. In practice, however, only an estimate for the Lipschitz constant $L_\tau$ is available. In this case, an outer approximation can be obtained by evaluating $\phi_\tau$ on a fine enough grid in $\xi$ instead of just the corner points. \noindent {\bf Step 3 - Identification of recurrent \& non-recurrent behavior of ${\mathcal F}$:} Identifying all the recurrent sets ${\mathcal M}$ of ${\mathcal F}$ is performed using Tarjan's Strongly Connected Components (SCC) algorithm, which is linear in the number of elements of ${\mathcal X}$ plus the number of edges in ${\mathcal F}$. The accompanying implementation uses a modified version of the algorithm, which does not store the whole digraph ${\mathcal F}$ in memory and yet evaluates ${\mathcal F}$ only once for each node \cite{Bush:Gameiro:Harker,CMGDB}. An indexing set ${\mathsf P}$ is introduced to distinguish and enumerate the recurrent sets: $\{{\mathcal M}(p)\mid p\in {\mathsf P}\}$. A partial order relation $\leq$ is imposed on ${\mathsf P}$ via reachability between the corresponding recurrent sets ${\mathcal M}(p)$. In particular, $q \leq p$ if there exists a path in ${\mathcal F}$ from a $\xi \in {\mathcal M}(p)$ to a $\xi' \in {\mathcal M}(q)$.
Identifying the partial order $\leq$ for two recurrent sets is a question of reachability on ${\mathcal F}$ between the two recurrent sets and is resolved by taking advantage of the fact that Tarjan's algorithm also performs a topological sort on the vertices. For the examples of this paper, the number of recurrent sets is on the order of tens. \begin{figure}[h] \vspace{-.25in} \centering \includegraphics[width=0.9\columnwidth]{figs/morse_graph_1d_full.png} \vspace{-.15in} \caption{\small From left to right: An example decomposition of a state space into regions and transitions between regions. The corresponding multivalued map ${\mathcal F}$. The condensation graph $\mathsf{CG}({\mathcal F})$ where SCCs have been condensed to a single vertex. The corresponding Morse graph where node 0 corresponds to the SCC \{1,2\} with RoA=\{0,1,2,3\} and node 1 corresponds to the SCC \{5,6\} with RoA=\{5,6,7\}. Initial conditions in region 4 (identified with node 2) may end up either in regions \{1,2\} (node 0) or \{5,6\} (node 1).} \label{fig:my_label} \vspace{-.3in} \end{figure} Thus, the output of the SCC algorithm is a new graph representation, a condensation graph $\mathsf{CG}({\mathcal F})$ of ${\mathcal F}$, which is formed by contracting each strongly connected component of ${\mathcal F}$ into a single vertex. The condensation graph $\mathsf{CG}({\mathcal F})$ is by definition a directed acyclic graph. The reachability on ${\mathcal F}$ defines the direction of the edges in the condensation graph $\mathsf{CG}({\mathcal F})$ and relates to the partial order relation, where $v \leq w$, if there is a directed edge $w \to v$ in $\mathsf{CG}({\mathcal F})$. While the graph $\mathsf{CG}({\mathcal F})$ condenses each SCC into a single vertex, it is still a huge graph representation, as it stores all the vertices of ${\mathcal F}$ that are not in a recurrent set.
For this reason, the proposed method outputs the sub-graph derived only from the recurrent sets (i.e., the non-trivial SCCs), as these components are the only possible candidates for containing attractors, given the level of discretization. This is referred to as the \emph{Morse graph} ${\mathsf{ MG}}({\mathcal F})$ of ${\mathcal F}\colon {\mathcal X}\rightrightarrows {\mathcal X}$ (shown in Fig.~\ref{fig:pendulum12}) and is the partially ordered set: \vspace{-.1in} \begin{equation} \label{eq:MorseDec} {\mathsf{ MG}}({\mathcal F}) = \{{\mathcal M}(p)\subset {\mathcal X}\mid p\in ({\mathsf P},\leq)\}. \vspace{-.1in} \end{equation} Since $({\mathsf P},\leq)$ is a partially ordered set, ${\mathsf{ MG}}({\mathcal F})$ can be represented as a directed graph: the Hasse diagram of $({\mathsf P},\leq)$, i.e., the minimal directed graph from which $({\mathsf P},\leq)$ can be reconstructed. The Morse graph for the inverted pendulum is presented in Fig.~\ref{fig:pendulum12} and is indexed by ${\mathsf P} = \{ 0, \ldots, 6 \}$ with order relation: $p < q$ iff there is a path from $q$ to $p$ in the digraph. Hence, $4$, $2$, and $0$ are the minimal elements and $1 < 3$ and $5 < 6$. The ${\mathsf{ MG}}({\mathcal F})$ is computed by Algorithm~\ref{alg:MorseGraph}, which takes as input the decomposition ${\mathcal X}$, the dataset $\Phi_\tau({\mathcal X})$ (representing the map $\phi_\tau$), and an estimate for the Lipschitz constant $L$ of $\phi_\tau$. The dataset $\Phi_\tau({\mathcal X})$ is used to compute the outer approximation ${\mathcal F}$, as described in Step 2. When the condensation graph is computed, the non-trivial SCCs (components with at least one edge) are flagged, as they become nodes of the Morse graph. Then, the only step remaining is to determine the reachability of recurrent sets as discussed above.
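To make Steps 2--3 concrete, the following sketch recomputes the condensation graph, the Morse graph nodes, and the RoAs for the 1-D example of Fig.~\ref{fig:my_label}; it relies on the \texttt{networkx} library for SCCs and condensation (an assumption made purely for illustration, since the accompanying implementation uses the memory-efficient variant of Tarjan's algorithm \cite{Bush:Gameiro:Harker,CMGDB}).

```python
import networkx as nx

# The multivalued map F of the 1-D example figure, entered directly
# as a digraph on the grid cells 0..7.
F = nx.DiGraph([(0, 1), (1, 2), (2, 1), (3, 2),
                (4, 4), (4, 3), (4, 5),
                (5, 6), (6, 5), (7, 6)])

# Condensation graph CG(F): each SCC collapses to a single vertex;
# networkx records the original cells in the node attribute 'members'.
CG = nx.condensation(F)
members = nx.get_node_attributes(CG, "members")

# Morse-graph nodes = non-trivial SCCs: more than one cell, or a
# single cell carrying a self-loop (the candidate recurrent sets).
morse = {p for p, cells in members.items()
         if len(cells) > 1 or any(F.has_edge(c, c) for c in cells)}

def roa(p):
    """Cells whose only reachable Morse node is p; this is the maximal
    identifiable RoA when p is a minimal element of the Morse graph."""
    out = set()
    for n in CG.nodes:
        reachable = nx.descendants(CG, n) | {n}
        if reachable & morse == {p}:
            out |= set(members[n])
    return out

p_left = next(p for p in morse if 1 in members[p])  # the SCC {1, 2}
print(sorted(roa(p_left)))  # -> [0, 1, 2, 3]
```

As in the figure, cell 4 reaches both non-trivial SCCs, so it is not assigned to either RoA.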
\begin{algorithm}[H] \small \DontPrintSemicolon ${\mathcal F} \gets \FuncSty{OuterApproximation}({\mathcal X}, \Phi_\tau({\mathcal X}), L)$ \tcp*[h]{${\mathcal F}$ as a digraph but not stored in memory explicitly}\; $\mathrm{SCC}({\mathcal F}) \gets \FuncSty{StronglyConnectedComponents}({\mathcal F})$\; $\mathrm{CG}({\mathcal F}) \gets \FuncSty{CondensationGraph}(\mathrm{SCC}({\mathcal F}))$ \tcp*[h]{flag non-trivial SCCs}\; ${\mathsf{ MG}}({\mathcal F}) \gets \FuncSty{Reachability}(\mathrm{CG}({\mathcal F}))$\; \KwRet{${\mathsf{ MG}}({\mathcal F}), \mathrm{CG}({\mathcal F})$} \caption{\FuncSty{MorseGraph}(${\mathcal X}$, $\Phi_\tau({\mathcal X})$, $L$)} \label{alg:MorseGraph} \end{algorithm} \noindent {\bf Step 4 - Derivation of RoAs:} Define $O_\bullet\colon {\mathcal X} \rightrightarrows {\mathsf P}$ and $O^\bullet\colon {\mathcal X} \rightrightarrows {\mathsf P}$ as:\\ $O_\bullet(\xi) := \min \{p\in {\mathsf P} \mid \text{there exists a path in}~ {\mathcal F} ~\text{from}~ \xi ~\text{to}~ \xi' \in {\mathcal M}(p)\}$ and\\ $O^\bullet(\xi) := \max \{p\in {\mathsf P} \mid \text{there exists a path in}~ {\mathcal F} ~\text{from}~ \xi ~\text{to}~ \xi' \in {\mathcal M}(p)\}$. Note that since ${\mathsf P}$ is a poset it is possible that $O_\bullet(\xi)$ and $O^\bullet(\xi)$ have multiple values. Under the assumption that ${\mathcal F}$ is an outer approximation of $\phi_\tau$, if $p \not\in O_\bullet(\xi)$, then for every $x\in \xi$ and any $n\geq 0$, $\phi_\tau^n(x)\not\in {\mathcal M}(p)$. \vspace{-.1in} \begin{theorem} \label{thrm:RoA} If $p$ is a minimal element of $({\mathsf P},\leq)$ and $O^\bullet(\xi) = \{ p \}$, then for every $x\in \xi$, there exists $n\geq 0$ such that $\phi_\tau^n(x)\in {\mathcal M}(p)$.
As a consequence, if $p$ is a minimal element of $({\mathsf P},\leq)$, then $\left\{ \xi\in {\mathcal X} \mid O^\bullet(\xi) = \{ p \} \right\}$ is the maximal RoA for ${\mathcal M}(p)$ that can be rigorously identified using ${\mathcal F}$. \vspace{-.1in} \end{theorem} Returning to the example of Fig.~\ref{fig:pendulum12}, $O^\bullet(\xi) = \{p\}$ when $\xi$ lies in the region corresponding to the Morse set ${\mathcal M}(p)$, for $p=0, \ldots, 6$. For $\xi$ in the RoAs indicated by $a$, $b$, $c$, $d$, and $e$ the following map arises: $O^\bullet(\xi) = \{0\}$ for $a$, $O^\bullet(\xi) = \{2\}$ for $b$, $O^\bullet(\xi) = \{4\}$ for $c$, $O^\bullet(\xi) = \{3\}$ for $d$, and $O^\bullet(\xi) = \{6\}$ for $e$. This is indicated by the graph in Fig.~\ref{fig:pendulum12}(right): $O^\bullet(\xi)$ is the maximal Morse node reachable from the corresponding region in the graph. It follows from Theorem~\ref{thrm:RoA} that the regions in Fig.~\ref{fig:pendulum12} labeled $4$ and $c$ form the maximal RoA of $4$, the regions labeled $2$ and $b$ form the maximal RoA of $2$, and the regions labeled $0$ and $a$ form the maximal RoA of $0$. To obtain $O^\bullet$, the graph $\mathsf{CG}({\mathcal F})$ is explored with a depth first search (DFS) approach and for each visited vertex $v \in \mathsf{CG}({\mathcal F})$ the maximal reachable Morse nodes are identified, that is, the collection of $p\in {\mathsf P}$ such that there exists a path from $v$ to a cube $\xi\in{\mathcal M}(p)$. See Appendix \ref{sec:appendix-algorithm} for an implementation of the DFS. \noindent {\bf Relationship to Continuous Dynamics:} The \emph{condensation graph} $\mathsf{CG}({\mathcal F})$ and the \emph{Morse graph} ${\mathsf{ MG}}({\mathcal F})$ of ${\mathcal F}$ are highlighted in the right column of Fig.~\ref{diag:framework} as the combinatorial objects computed given the outer approximation ${\mathcal F}$ of $\phi_\tau$. Each element of $\mathsf{CG}({\mathcal F})$ is identified with a region of $X$.
This collection of regions is denoted by ${\mathsf T}$ and, as shown in Fig~\ref{diag:framework}, it is isomorphic as a poset to $\mathsf{CG}({\mathcal F})$. To find attracting blocks, the method uses the following fact \cite{kalies:mischaikow:vandervorst:15}. Let ${\mathcal F}$ be an \emph{outer approximation} of $\phi_\tau$: If $x\in T\in {\mathsf T}$ and $\phi_\tau(x)\in T'\in {\mathsf T}$, then $T'\leq T$ where $\leq$ is the order relation on ${\mathsf T}$, i.e., forward orbits under $\phi_\tau$ can be tracked by descending the order relation on ${\mathsf T}$. Stated more concisely: ${\mathsf O}(T)\in {\mathsf{ ABlock}}(\phi_\tau)$, where ${\mathsf O}(T):= \{T'\in {\mathsf T} \mid T'\leq T\}$ is the \emph{downset} of $T\in{\mathsf T}$. \begin{wrapfigure}{l}{0.4\textwidth} \vspace{-.35in} \centering \begin{tikzpicture}[scale=1.4] \node[] at (0.15,1.8) {{\color{blue} Dynamics}}; \node[] at (1.6,1.8) {{\color{green} Combinatorics}}; \draw[green!60!white] (0.9,-0.6) to (0.9,1.6); \draw[green!60!white] (2.3,-0.6) to (2.3,1.6); \draw[green!60!white] (0.9,-0.6) to (2.3,-0.6); \draw[green!60!white] (0.9,1.6) to (2.3,1.6); \draw[blue!60!white] (0.65,-0.6) to (0.65,1.6); \draw[blue!60!white] (-0.2,-0.6) to (-0.2,1.6); \draw[blue!60!white] (-0.2,-0.6) to (0.65,-0.6); \draw[blue!60!white](-0.2,1.6) to (0.65,1.6); \filldraw[blue!70!white,opacity=0.1] (-0.2,-0.6) to (-0.2,1.6) to (0.65,1.6) to (0.65,-0.6) to (-0.2,-0.6); \filldraw[green!70!white,opacity=0.1] (2.3,-0.6) to (2.3,1.6) to (0.9,1.6) to (0.9,-0.6) to (2.3,-0.6); $ \begin{diagram} \node{{\mathsf T}} \node{\mathsf{CG}({\mathcal F})} \arrow{w,l}{\simeq} \\ \node{{\mathsf M}} \arrow{n,l,J}{} \node{{\mathsf{ MG}}({\mathcal F})} \arrow{n,l,J}{} \arrow{w,l}{\simeq} \\ \end{diagram} $ \end{tikzpicture} \caption{\small Diagram relating the discretized \emph{dynamics} in state space (left) to the proposed \emph{combinatorial representation} of the global dynamics (right). 
The arrows $\xhookrightarrow{}$ represent inclusion maps and the arrows $\xrightarrow{\simeq}$ indicate poset isomorphisms.} \label{diag:framework} \vspace{-.35in} \end{wrapfigure} The set ${\mathsf{ ABlock}}(\phi_\tau)$ has the structure of a finite distributive lattice; that is, if $N, N'\in{\mathsf{ ABlock}}(\phi_\tau)$ then $N\cup N', N\cap N'\in {\mathsf{ ABlock}}(\phi_\tau)$. Furthermore, the collection $\{{\mathsf O}(T) \mid T\in {\mathsf T} \}$ generates a finite but huge sublattice of ${\mathsf{ ABlock}}(\phi_\tau)$. The elements of ${\mathsf T}$ that are identified by elements of ${\mathsf{ MG}}({\mathcal F})$ are denoted by ${\mathsf M}$ and referred to as \emph{Morse cells} (see Fig.~\ref{fig:pendulum12}). By prior work \cite{kalies:mischaikow:vandervorst:15}, if ${\mathcal F}$ is an outer approximation of $\phi_\tau$ and $x\in X$ belongs to the chain recurrent set of $\phi_\tau$ (recurrence allowing for an arbitrarily small error \cite{conley:cbms}), then $x$ belongs to a Morse cell. Thus, the collection of recurrent sets of ${\mathcal F}$ identifies the location in state space of recurrent dynamics of $\phi_\tau$. As in Fig.~\ref{diag:framework}, ${\mathsf M}$ inherits a partial order $\leq$ from ${\mathsf{ MG}}({\mathcal F})$. Its dynamical implications are derived from the dynamical implications of the partial order on ${\mathsf T}$. If $M,M'\in {\mathsf M}$, $M < M'$ and $x$ is an initial condition that lies in $M$, then $\phi_\tau^n(x)\notin M'$ for all $n \geq 0$. \begin{comment} \subsection{Algorithms} Algorithm \ref{alg:CMGDB}, Conley Morse Graph DataBase (CMGDB), is the procedure described by the proposed framework. More specifically, lines 1, 2, 3, and 4 implement the descriptions given by \textbf{S1}, \textbf{S2}, \textbf{S3} and \textbf{S4}, respectively.
\begin{algorithm} \caption{CMGDB(X, $g$, $s$, $K$)} \label{alg:CMGDB} \small \DontPrintSemicolon ${\mathcal X} \gets$ \FuncSty{Grid}($X, s$) ${\mathcal F} \gets \FuncSty{OuterApproximation}({\mathcal X}, g, K)$ $\mathrm{CG}({\mathcal F}) \gets \FuncSty{StronglyConnectedComponents}({\mathcal X}, {\mathcal F})$ $({\mathsf M}({\mathcal F}), \mathrm{G}({\mathcal F})) \gets \mathrm{CG}({\mathcal F})$ ${\mathsf{ MG}}({\mathcal F}) \gets \FuncSty{Reachability}({\mathsf M}({\mathcal F}), \mathrm{CG}({\mathcal F}))$ \KwRet{${\mathsf{ MG}}({\mathcal F}), \mathrm{CG}({\mathcal F})$} \end{algorithm} \begin{remark} In practice, storing the edges of $\mathrm{CG}({\mathcal F})$ can be memory intensive, it queries for each vertex its set of out-edges exactly once. \end{remark} The following algorithm is a procedure to find the map $O^\bullet$. It is a modified depth first search, where for each vertex $\xi$ of the condensation graph $\mathrm{CG}({\mathcal F})$: it follows all the path that ends in ${\mathsf{ MG}}({\mathcal F})$, denote by $E$ the set of the end points; and labels $\xi$ by a maximal element of $E$. For instance, the vertex e in Fig. \ref{fig:pendulum12} has paths that ends in $\{4, 6, 2\}$ and the maximal element is $6$, then assign $\xi$ to $6$. \end{comment} \section{Results} \label{sec:results} \textbf{Robotic Systems:} The systems considered for evaluation are: (i) a second-order pendulum (\texttt{Pendulum}), (ii) a first-order car with Ackermann steering without reverse velocities (\texttt{Ackermann}) \cite{corke2011robotics}, and (iii) a second-order Acrobot (\texttt{Acrobot}) \cite{spong_acrobot}. The dynamics are simulated via numerical integration \cite{ML4KP}. The state and control space limits for each system are given in Table~\ref{table:systems}. 
\vspace{-.25in} \begin{table}[] \begin{tabular}{|l|l|l|l|l|} \hline \textbf{System} & \textbf{Bounds on $X$} & \textbf{Bounds on $\mathbb{U}$} & \textbf{Controllers} & \textbf{Goal state} \\ \hline \texttt{Pendulum} & $[\{$-$\pi,$-$2\pi\},\{\pi,2\pi\}]$ & $[$-$0.6372,0.6372]$ & \texttt{Learned, LQR} & $[0,0]$ \\ \hline \texttt{Ackermann} & $[\{$-10,-10,-$\pi\},\{10,10,\pi\}]$ & $[\{$-$1.05,0\},\{1.05,30\}]$ & \makecell{{\tt Learned, LQR}, \\{\tt Corke}} & $[0,0,\pi/2]$ \\ \hline \texttt{Acrobot} & $[\{0,$-$\pi,$-6,-6$\},\{2\pi,\pi,6,6\}]$ & $[$-$14,14]$ & \texttt{Hybrid, LQR} & $[0,\pi,0,0]$ \\ \hline \end{tabular} \caption{\small Systems and controllers considered in the evaluation.} \label{table:systems} \vspace{-.45in} \end{table} \textbf{Controllers:} For each system, alternative controllers are considered:\\ (i) An \texttt{LQR} controller linearizes the system around the goal: $\dot{x} = Ax + Bu\ (A \in \mathbb{R}^{n \times n}, B \in \mathbb{R}^{n \times m})$ and the controller $u = -Kx$ minimizes the cost $\mathcal{J}_{LQR} = x(T)^TFx(T) + \int_{0}^T (x(t)^TQx(t) + u(t)^TRu(t)) dt$ ($F, Q \in \mathbb{R}^{n \times n}, R \in \mathbb{R}^{m \times m}$).\\ (ii) A \texttt{Learned} controller trained using the Soft Actor-Critic (SAC) \cite{Haarnoja2018SoftAO} algorithm to maximize the expected return $\mathcal{J}(\pi) = \mathbb{E}_{\tau \sim \rho_\pi} [\sum_{t=0}^T \mathcal{R}(x_t)]$, where the reward function is $\mathcal{R}: X \rightarrow \{0,1\}$.
$\mathcal{R}(x_t) = 1$ if $x_t$ is within an $\epsilon$ distance from the goal state, and $0$ otherwise.\\ (iii) A {\tt Hybrid} controller for the \texttt{Acrobot} uses the {\tt Learned} controller to drive the system to a relaxed goal distance, from where {\tt LQR} takes over.\\ (iv) The \texttt{Corke} controller for \texttt{Ackermann} \cite{corke2011robotics} transforms the state into polar coordinates $(\rho, \alpha, \beta)$ to define linear control laws for the velocity $v=K_{\rho}\rho$ and steering angle $\omega= K_{\alpha}\alpha + K_{\beta}\beta$. It is able to drive a reverse-capable system to the goal assuming $K_{\rho} > 0, K_{\beta} < 0, K_{\alpha} - K_{\rho} > 0$. The experiments, however, consider only positive velocities, limiting its reachability. \textbf{Comparison Methods:} \texttt{L-LQR} and \texttt{L-SoS} use a linearized, unconstrained form of the dynamics to compute a Lyapunov function (LF) for the controller being considered. \texttt{L-LQR} uses the solution of the Lyapunov equation $v_{LQR}(x)=x^TPx$ for the linearized, unconstrained version of the system. \texttt{L-SoS} computes the LF according to $v_{sos}(x)=m(x)^TQm(x)$ where $m(x)$ are monomials on $x$ and $Q$ is a positive semidefinite matrix. It uses SOSTOOLS \cite{prajna2002introducing} with SeDuMi \cite{sturm1999using} as the SDP solver. These methods cannot be used with data-driven controllers (like {\tt Learned}) since they require a closed-form expression for the controller. The Lyapunov Neural Network ({\tt L-NN}) \cite{richards2018lyapunov} is a state-of-the-art machine-learning approach with available software for identifying RoAs of black-box controllers. It returns a parametrized function that is trained to adapt to the RoA of a closed-loop dynamical system. Given an initial safe set around the desired equilibrium, a subset of non-safe states are forward propagated, classified and used to reshape the Lyapunov candidate in each iteration.
The method needs access to a known safe set and there is no guarantee that the safe region will not shrink. \textbf{``Ground Truth'' RoAs:} For each benchmark (i.e., a controller-system pair), an approximation of the ground truth RoA for the goal state is computed by a high-resolution discretization of the state space, and forward propagating the controller for a very long, fixed time horizon, or until the goal is reached. Appendix \ref{sec:appendix-experiments} provides the parameters for this ground truth evaluation. \textbf{Metrics:} Given the ``ground truth'' RoA, the following metrics are reported:\\ (a) Table~\ref{tab:tp} provides the ratio of $X$'s volume correctly identified to belong to the RoA (True Positives - TP) - its complement gives the ratio of $X$'s volume \textit{incorrectly} identified as not being in the RoA (False Negatives - FN); \begin{wraptable}{r}{0.6\textwidth} \centering \vspace{-.35in} \small \begin{tabular}{|c||c|c|c|c|} \hline \textbf{Benchmark} & {\tt L-NN} & {\tt L-LQR} & {\tt L-SOS} & Ours: {\tt MG} \\ \hline \hline Pend (LQR) & \textbf{97.54\%} & 69.91\% & 3.07\% & 97.49\% \\ \hline Pend (Learned) & 30.18\% & \cellcolor{gray!} & \cellcolor{gray!} & \textbf{98.5\%} \\ \hline Acro (LQR) & 89.06\% & 26.84\% & 25.66\% & \textbf{96.36\%} \\ \hline Acro (Hybrid) & 13.79\% & \cellcolor{gray!} & \cellcolor{gray!} & \textbf{98.75\%} \\ \hline Ack (LQR) & 7.55\% & \textbf{21.78\%} & 2.43\% & 0\% \\ \hline Ack (Corke) & 10.23\% & \cellcolor{gray!} & 41.36\% & \textbf{86.69\%} \\ \hline Ack (Learned) & 91.47\% & \cellcolor{gray!} & \cellcolor{gray!} & \textbf{100\%} \\ \hline \end{tabular} \vspace{-.1in} \caption{\small RoA ratios identified by the different methods.
Best values per row in bold.} \label{tab:tp} \vspace{-.35in} \end{wraptable} \noindent (b) Table~\ref{tab:tn} provides the ratio of $X$'s volume for which the dynamics have not been identified (Unidentified);\\ (c) Table~\ref{tab:steps} provides the amount of computational resources required for {\tt L-NN} and {\tt Morse Graph} ({\tt MG}), measured using the number of forward propagations performed (Steps), which is the dominant computational primitive. Given their analytical nature, the {\tt L-LQR} and {\tt L-SOS} alternatives do not require forward propagations of the system and tend to be computationally faster, but they require access to an expression for the controller. \begin{wraptable}{r}{0.6\textwidth} \centering \small \begin{tabular}{|c||c|c|c|c|} \hline \textbf{Benchmark} & {\tt L-NN} & {\tt L-LQR} & {\tt L-SOS} & Ours: {\tt MG} \\ \hline \hline Pend (LQR) & 61.33\% & 61.33\% & 61.33\% & \textbf{1.66\%} \\ \hline Pend (Learned) & 81.44\% & \cellcolor{gray!} & \cellcolor{gray!} & \textbf{1.25\%} \\ \hline Acro (LQR) & 10.94\% & 73.16\% & 74.34\% & \textbf{3.64\%} \\ \hline Acro (Hybrid) & 86.21\% & \cellcolor{gray!} & \cellcolor{gray!} & \textbf{1.25\%} \\ \hline Ack (LQR) & 83.03\% & \textbf{82.87\%} & \textbf{82.87\%} & 100\% \\ \hline Ack (Corke) & 27.81\% & \cellcolor{gray!} & 29.35\% & \textbf{24.5\%} \\ \hline Ack (Learned) & 8.53\% & \cellcolor{gray!} & \cellcolor{gray!} & \textbf{0.00\%}\\ \hline \end{tabular} \vspace{-.15in} \caption{\small $X$'s ratio returned as \textit{unidentified} by the different methods. Best values per row in bold.} \label{tab:tn} \vspace{-.35in} \end{wraptable} \noindent \textbf{Quantitative Results:} The proposed method tends to estimate larger volumes of the RoA compared to the alternatives per Table \ref{tab:tp}. It also consistently identifies the dynamics for larger volumes of $X$ compared to the comparison points, which cover lower volumes of $X$ per Table \ref{tab:tn}.
Moreover, the proposed method is broadly applicable to different controllers, and finds a larger volume of the RoA when compared to {\tt L-NN} for the Pendulum (Learned), Acrobot (Hybrid), and Ackermann (Learned). \begin{wraptable}{l}{0.45\textwidth} \centering \vspace{-.35in} \small \begin{tabular}{|c||c|c|} \hline \textbf{Benchmark} & {\tt L-NN} & Ours: {\tt MG} \\ \hline \hline Pend (LQR) & 667.1M & \textbf{6.6M} \\ \hline Pend (Learned) & 341.9M & \textbf{6.6M} \\ \hline Acro (LQR) & 5.7B & \textbf{1.1B} \\ \hline Acro (Hybrid) & \textbf{533M} & 2.1B \\ \hline Ack (LQR) & \textbf{9.9M} & 520M \\ \hline Ack (Corke) & 37.5M & \textbf{13M} \\ \hline Ack (Learned) & 704.6M & \textbf{520M} \\ \hline \end{tabular} \vspace{-.1in} \caption{\small Number of propagations required. Best values per row in bold.} \label{tab:steps} \vspace{-.35in} \end{wraptable} The computational needs of the Morse Graph are one to two orders of magnitude lower than those of {\tt L-NN}. The learned controllers benefit the most from the topological approach, as it provides in all cases a higher coverage of the RoA with fewer propagations. There are two cases where {\tt MG} takes a larger number of propagation steps compared to {\tt L-NN}. In the case of Ackermann (LQR), this is because {\tt MG} is unable to find a unique attractor for the system (see discussion below). In the case of Acrobot (Hybrid), {\tt L-NN} fails to identify the true RoA accurately. Some of the comparison points may incorrectly identify a volume of $X$ as belonging to the RoA (False Positives - FP). This is not true for the Pendulum, since the attractor of interest for both controllers is not at the boundary of the RoA. But for the LQR and Corke controllers of the Ackermann, the desired goal is not an attractor, since some trajectories close to the goal region escape $X$ (a consequence of not allowing negative velocities). Therefore, the comparison methods fail to conservatively estimate the RoA, resulting in FPs.
The topological framework, however, does not result in FPs (explained below as well). The Ackermann Learned controller and the Acrobot controllers present no FPs since the goal region is an attractor. \begin{figure}[h!] \vspace{-.2in} \centering \includegraphics[width=0.49\textwidth]{figs/pendulum_roa_1.png} \includegraphics[width=0.49\textwidth]{figs/pendulum_roa_2.png} \vspace{-.1in} \caption{\small RoA estimated by {\tt L-NN} (1$^{st}$ and 3$^{rd}$ image) and {\tt Morse Graph} (2$^{nd}$ and 4$^{th}$ image) for the LQR (left) and Learned controllers (right) of the pendulum.} \label{fig:pendulum-qual} \end{figure} \textbf{Pendulum Study:} The RoAs computed for the two pendulum controllers are shown in Fig~\ref{fig:pendulum-qual}. For {\tt Morse Graph}, the attractor discovered is shown at the center of the state space, and it contains the goal region. Note that {\tt L-NN} does not find the attractor, but assumes one exists containing the goal region. Moreover, {\tt L-NN} is unable to cover the full RoA for the Learned controller, making it an undesirable solution. {\tt Morse Graph} is also shown to work well when (a subset of) the state space is periodic. For instance, the RoA of the Pendulum (LQR) also includes the regions at the corners of the planar representation of the cylinders in Fig~\ref{fig:pendulum-qual} (Left). This is captured by {\tt Morse Graph}, but not by {\tt L-NN}. \begin{figure}[h!] \vspace{-.3in} \centering \includegraphics[width=1\textwidth]{figs/pendulum_mult_taus.png} \vspace{-.3in} \caption{\small RoA for the Pendulum when the upper torque bound is: (left) $0.637$; (center) $0.724$; (right) $0.736$. For all torques the RoA found is $97\%$ of the true one.} \label{fig:pendulum-torques} \vspace{-.35in} \end{figure} \emph{Pendulum with different torques:} Fig~\ref{fig:pendulum-torques} illustrates the robustness of the topological framework to different bounds for the allowed torque of the LQR controller.
The proposed method consistently finds the RoA and the attractor that contains the goal region. Moreover, it consistently covers $97\%$ of the true RoA even when the dynamics change: in Fig. \ref{fig:pendulum-torques}, one attractor and one saddle eventually collide and disappear. \begin{comment} \begin{figure}[!h] \vspace{-.3in} \centering \includegraphics[width=0.44\textwidth]{figs/lc_ackermann_roa_lyapnn.png} \includegraphics[width=0.54\textwidth]{figs/Torus_lc_Ack.png} \vspace{-.1in} \caption{\small (left) RoA of the {\tt Learned} controller for Ackermann computed by L-NN. (right) Same benchmark: The entire state space is correctly identified by the Morse Graph as the RoA for the shown Unique Morse set (Torus-like shape).} \label{fig:ackermann_sys} \vspace{-.35in} \end{figure} \end{comment} \begin{wrapfigure}{r}{0.45\textwidth} \centering \vspace{-0.45in} \includegraphics[width=0.44\textwidth]{figs/Torus_lc_Ack.png} \vspace{-0.1in} \caption{\small For Ackermann (Learned), the entire state space is correctly identified by the Morse Graph as an RoA for the shown unique Morse set (torus-like shape).} \label{fig:ackermann_sys} \vspace{-0.35in} \end{wrapfigure} \noindent \textbf{Ackermann Study: }\\ \emph{Learned controller for Ackermann:} Although the RoA is the full state space, only the Morse Graph identifies this. The unique attractor obtained also provides insight into how the controller works. At a coarse discretization of $X$, it presents as a torus-like shape, which suggests recurrent behavior. This shape can be explained by the behavior of the car when it gets close to the goal region with the wrong orientation, where it tries to fix this orientation by performing a loop. For a more refined discretization, the method can distinguish long trajectories from recurrent behavior, finding a smaller attractor that contains the goal region.
\emph{LQR controller for Ackermann:} The Morse Graph results in a large, unique attractor that contains $74\%$ of $X$, indicating that the discretization is insufficient. The comparison points fail by producing false positives (FP). The proposed method is conservative and safe: it avoids FPs, but needs more subdivisions for a more comprehensive understanding of these global dynamics. \begin{figure}[!h] \centering \vspace{-.25in} \includegraphics[width=0.75\columnwidth]{figs/ackermann_corke_multi.png} \vspace{-.15in} \caption{\small Morse Graph, Morse sets and 2D projection of the RoA for the Corke controller applied to the Ackermann. (top) Corke controller with the goal set to be $(0, 0, 1.57)$ and (bottom) Corke controller with the goal set to be $(6, -10, -1.57)$.} \label{fig:ackermann_sys2} \vspace{-.25in} \end{figure} \emph{Corke controller for Ackermann:} In this experiment, some trajectories leave the bounds of $X$, and as a result, some modifications are needed to compute the RoA. A node $\star$ is included in $\mathsf{CG}({\mathcal F})$, and for every cube $\xi \in {\mathcal X}$ s.t. $\Phi_{\tau}(\xi) \cap X^c \neq \emptyset$, edges are added from $\xi$ to $\star$ before computing the Morse Graph. As a result, $\star$ is a minimal node of ${\mathsf{ MG}}({\mathcal F})$. Then, a modified version of the RoA computation is applied where $O^\bullet$ and $\max$ are changed to $O_\bullet$ and $\min$, respectively, and the resulting output is $O_\bullet$. Finally, for each element $R$ of the set $\mathsf{mRoA}$ of maximal RoAs, the method computes $R_\bullet = R - R_\star$, where $R_\star = \{\xi \in {\mathcal X} \ |\ O_\bullet(\xi) = \{\star\}\}$. Consequently, for each maximal RoA, the cubes in $R_\star$ are removed since they have some trajectories that escape $X$. Thus, $R_\bullet$ is a conservative estimate of the RoA. In Fig~\ref{fig:ackermann_sys2}, the white region is the collection of cubes in $R_\star$.
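The $\star$-node bookkeeping above can be illustrated on a toy map. The sketch below (hypothetical cell indices and escape set) is slightly more conservative than the $O_\bullet$-based computation: it removes from the RoA every cube with a possible escaping trajectory.

```python
import networkx as nx

STAR = "star"  # extra sink node for cubes whose image leaves X

# Toy map on cells 0..4: {0, 1} is a recurrent set, cell 2 falls into
# it, cell 3 may either fall into it or escape X, and cell 4 only
# escapes.
F = nx.DiGraph([(0, 1), (1, 0), (2, 1), (3, 2)])
for cell in (3, 4):          # Phi_tau(cell) intersects X^c
    F.add_edge(cell, STAR)   # STAR becomes a minimal node

reach = {n: nx.descendants(F, n) | {n} for n in F.nodes}

# R_star: cubes all of whose trajectories may only end outside X.
R_star = {n for n in set(F.nodes) - {STAR}
          if STAR in reach[n] and not reach[n] & {0, 1}}

# Conservative RoA of {0, 1}: cubes that reach the recurrent set and
# have no trajectory that may escape X.
R_bullet = {n for n in set(F.nodes) - {STAR}
            if {0, 1} <= reach[n] and STAR not in reach[n]}
print(sorted(R_bullet), sorted(R_star))  # -> [0, 1, 2] [4]
```

Cell 3 is excluded from the conservative RoA because one of its possible trajectories exits $X$, mirroring the white region of the figure.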
\emph{Devising a Hybrid controller for Ackermann given the Morse Graph output: } Given the information from the Morse Graph, it is possible to synthesize a hybrid controller that has a bigger RoA than the original RoA$_{\text{Corke}}$ of the Corke controller $u_\text{init}$. The strategy selects a state in RoA$_{\text{Corke}}$, different from the original goal, as the goal for a new Corke controller $u_\text{inter}$. Define as RoA$_{\text{inter}}$ the RoA of the new Corke controller. If RoA$_{\text{inter}}$ overlaps with $X - $RoA$_{\text{Corke}}$, the hybrid controller is defined as follows: for states in RoA$_{\text{Corke}}$, apply $u_\text{init}$; for states in $X - $RoA$_{\text{Corke}}$, apply $u_\text{inter}$, and then apply $u_\text{init}$ once the system enters RoA$_{\text{Corke}}$. Fig~\ref{fig:ackermann_sys2} (bottom) shows the RoA for $u_\text{inter}$ with goal $(6, -10, -1.57)$, which is in RoA$_{\text{Corke}}$. A new controller $u_\text{inter}$ was devised for this goal and its RoA$_{\text{inter}}$ contained $X -$RoA$_{\text{Corke}}$. The integration of $u_\text{inter}$ with the original $u_\text{init}$ results in a hybrid solution that covers the entire state space, which was verified empirically using the Morse Graph. \begin{wrapfigure}{r}{0.07\textwidth} \centering \vspace{-0.45in} \includegraphics[width=0.06\textwidth]{figs/2_nodes_MG.png} \vspace{-0.1in} \caption{\small Morse graph for the Acrobot controllers.} \label{fig:2_node_MG} \vspace{-0.35in} \end{wrapfigure} \textbf{Acrobot study:} The Morse graph representing the dynamics of both controllers for the Acrobot is shown in Fig~\ref{fig:2_node_MG}. Both controllers have a high success rate, with the estimated RoA for node 0 covering more than $93\%$ of the true RoA. The possible recurrent dynamics described by node $1$ are long trajectories interpreted as recurrent by the proposed method. This can be addressed by increasing the discretization of the state space $X$ for an additional computational cost.
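The controller composition described above reduces to a simple switching rule on RoA membership. A toy sketch follows (hypothetical names; \texttt{in\_roa\_init} stands for a membership query against the cubical RoA returned by the Morse graph):

```python
def hybrid_controller(x, in_roa_init, u_init, u_inter):
    """Apply the original controller inside its RoA; elsewhere apply
    the intermediate controller, which steers the state into the
    original RoA, after which the policy switches automatically."""
    return u_init(x) if in_roa_init(x) else u_inter(x)

# Toy 1-D illustration: u_init is only valid near the goal x = 0.
in_roa_init = lambda x: abs(x) < 1.0
u_init = lambda x: -2.0 * x    # aggressive controller, local RoA
u_inter = lambda x: -0.5 * x   # drives distant states inward

u = hybrid_controller(4.0, in_roa_init, u_init, u_inter)
print(u)  # -> -2.0 (intermediate controller is active far away)
```

Because the rule depends only on the current state, no extra mode variable is needed: the switch to $u_\text{init}$ happens as soon as the trajectory enters its RoA.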
\emph{Hybrid controller for Acrobot:} The learned Acrobot controller is unable to find a solution within the given time horizon for the goal condition $\mathcal{B}(x_G,0.1)$. Hence, a hybrid solution is proposed. The learned controller is first applied until the system reaches a relaxed goal $\mathcal{B}(x_G,0.6)$. The proposed method finds that the RoA for the relaxed goal condition covers $100\%$ of the state space and is inside the RoA of the LQR controller. Hence, once the trajectory reaches $\mathcal{B}(x_G,0.6)$ using the learned controller, LQR is applied to drive the system into the smaller region $\mathcal{B}(x_G,0.1)$. This results in a hybrid controller with a $100\%$ success rate and a reduction in trajectory length, relative to executing LQR alone, of around $50\%$. \section{Conclusion} \label{sec:conclusion} This work presents a novel topology-based method to identify attractors and their RoAs for robotic systems controlled by black-box controllers. Experimental evaluation on simulated benchmarks shows that the proposed method efficiently identifies the global dynamics with fewer samples from the dynamics model than data-driven alternatives. It does not require knowledge of the system or controller dynamics, such as differentiability or the guaranteed presence of an attractor in the system's goal region. This makes it suitable for data-driven controllers, where it significantly outperforms alternatives in identifying their RoA. Moreover, the proposed method provides a compact description of the global dynamics, which enables composing multiple controllers into hybrid solutions that reach the goal from the full state space. The evaluation section presents two such hybrid controllers designed based on the Morse Graph output that yield notable properties: one increased the RoA to the whole state space, and the other halved the length of solution trajectories.
Even though the proposed method requires fewer samples from the dynamics model, it still explores the entirety of the state space. This can become limiting for large and high-dimensional state spaces. Future work will explore extensions of the current topological approach, focusing on finding the RoA for a single attractor, thereby requiring fewer computational resources. Integration with Gaussian Processes and machine learning primitives can help identify a smaller set of states from which the system is propagated so as to further reduce data requirements. \bibliographystyle{splncs04}
\section{Introduction} This document is a template for \LaTeXe. If you are reading a paper or PDF version of this document, please download the electronic file \texttt{ifacconf.tex}. You will also need the class file \texttt{ifacconf.cls}. Both files are available on the IFAC web site. Please stick to the format defined by the \texttt{ifacconf} class, and do not change the margins or the general layout of the paper. It is especially important that you do not put any running header/footer or page number in the submitted paper.\footnote{ This is the default for the provided class file.} Use \emph{italics} for emphasis; do not underline. Page limits may vary from conference to conference. Please observe the page limits of the event for which your paper is intended. \section{Procedure for Paper Submission} Next we see a few subsections. \subsection{Review Stage} For submission guidelines, follow instructions on paper submission system as well as the event website. Note that conferences impose strict page limits, so it will be better for you to prepare your initial submission in the camera ready layout so that you will have a good estimate for the paper length. Additionally, the effort required for final submission will be minimal. \subsection{Equations} Some words might be appropriate describing equation~(\ref{eq:sample}), if we had but time and space enough. \begin{equation} \label{eq:sample} {{\partial F}\over {\partial t}} = D{{\partial^2 F}\over {\partial x^2}}. \end{equation} See \cite{Abl:56}, \cite{AbTaRu:54}, \cite{Keo:58} and \cite{Pow:85}. \subsubsection{Example.} This equation goes far beyond the celebrated theorem ascribed to the great Pythagoras by his followers. \begin{thm} The square of the length of the hypotenuse of a right triangle equals the sum of the squares of the lengths of the other two sides. \end{thm} \begin{pf} The square of the length of the hypotenuse of a right triangle equals the sum of the squares of the lengths of the other two sides. 
\end{pf} Of course LaTeX manages equations through built-in macros. You may wish to use the \texttt{amstex} package for enhanced math capabilities. \subsection{Figures} To insert figures, use the \texttt{graphicx} package. Although other graphics packages can also be used, \texttt{graphicx} is simpler to use. See Fig.~\ref{fig:bifurcation} for an example. \begin{figure} \begin{center} \includegraphics[width=8.4cm]{bifurcation} \caption{Bifurcation: Plot of local maxima of $x$ with damping $a$ decreasing} \label{fig:bifurcation} \end{center} \end{figure} Figures must be centered, and have a caption at the bottom. \subsection{Tables} Tables must be centered and have a caption above them, numbered with Arabic numerals. See table~\ref{tb:margins} for an example. \begin{table}[hb] \begin{center} \caption{Margin settings}\label{tb:margins} \begin{tabular}{cccc} Page & Top & Bottom & Left/Right \\\hline First & 3.5 & 2.5 & 1.5 \\ Rest & 2.5 & 2.5 & 1.5 \\ \hline \end{tabular} \end{center} \end{table} \subsection{Final Stage} Authors are expected to mind the margins diligently. Papers need to be stamped with event data and paginated for inclusion in the proceedings. If your manuscript bleeds into margins, you will be required to resubmit and delay the proceedings preparation in the process. \subsubsection{Page margins.} See table~\ref{tb:margins} for the page margins specification. All dimensions are in \emph{centimeters}. \subsection{PDF Creation} All fonts must be embedded/subsetted in the PDF file. Use one of the following tools to produce a good quality PDF file: \subsubsection{PDFLaTeX} is a special version of LaTeX by Han The Thanh which produces PDF output directly using Type-1 fonts instead of the standard \texttt{dvi} file. It accepts figures in JPEG, PNG, and PDF formats, but not PostScript. Encapsulated PostScript figures can be converted to PDF with the \texttt{epstopdf} tool or with Adobe Acrobat Distiller. 
\subsubsection{Generating PDF from PostScript} is the classical way of producing PDF files from LaTeX. The steps are: \begin{enumerate} \item Produce a \texttt{dvi} file by running \texttt{latex} twice. \item Produce a PostScript (\texttt{ps}) file with \texttt{dvips}. \item Produce a PDF file with \texttt{ps2pdf} or Adobe Acrobat Distiller. \end{enumerate} \subsection{Copyright Form} IFAC will put in place an electronic copyright transfer system in due course. Please \emph{do not} send copyright forms by mail or fax. More information on this will be made available on IFAC website. \section{Units} Use SI as primary units. Other units may be used as secondary units (in parentheses). This applies to papers in data storage. For example, write ``$15\,\mathrm{Gb}/\mathrm{cm}^2$ ($100\,\mathrm{Gb}/\mathrm{in}^2$)''. An exception is when English units are used as identifiers in trade, such as ``3.5 in disk drive''. Avoid combining SI and other units, such as current in amperes and magnetic field in oersteds. This often leads to confusion because equations do not balance dimensionally. If you must use mixed units, clearly state the units for each quantity in an equation. The SI unit for magnetic field strength $\mathbf{H}$ is $\mathrm{A}/\mathrm{m}$. However, if you wish to use units of $\mathrm{T}$, either refer to magnetic flux density $\mathbf{B}$ or magnetic field strength symbolized as $\mu_0\,\mathbf{H}$. Use the center dot to separate compound units, e.g., ``$\mathrm{A} \cdot \mathrm{m}^2$''. \section{Helpful Hints} \subsection{Figures and Tables} Figure axis labels are often a source of confusion. Use words rather than symbols. As an example, write the quantity ``Magnetization'', or ``Magnetization M'', not just ``M''. Put units in parentheses. Do not label axes only with units. For example, write ``Magnetization ($\mathrm{A}/\mathrm{m}$)'' or ``Magnetization ($\mathrm{A} \mathrm{m}^{-1}$)'', not just ``$\mathrm{A}/\mathrm{m}$''. 
Do not label axes with a ratio of quantities and units. For example, write ``Temperature ($\mathrm{K}$)'', not ``$\mbox{Temperature}/\mathrm{K}$''. Multipliers can be especially confusing. Write ``Magnetization ($\mathrm{kA}/\mathrm{m}$)'' or ``Magnetization ($10^3 \mathrm{A}/\mathrm{m}$)''. Do not write ``Magnetization $(\mathrm{A}/\mathrm{m}) \times 1000$'' because the reader would not know whether the axis label means $16000\,\mathrm{A}/\mathrm{m}$ or $0.016\,\mathrm{A}/\mathrm{m}$. \subsection{References} Use Harvard style references (see at the end of this document). With \LaTeX, you can process an external bibliography database using \texttt{bibtex},\footnote{In this case you will also need the \texttt{ifacconf.bst} file, which is part of the \texttt{ifacconf} package.} or insert it directly into the reference section. Footnotes should be avoided as far as possible. Please note that the references at the end of this document are in the preferred referencing style. Papers that have not been published should be cited as ``unpublished''. Capitalize only the first word in a paper title, except for proper nouns and element symbols. \subsection{Abbreviations and Acronyms} Define abbreviations and acronyms the first time they are used in the text, even after they have already been defined in the abstract. Abbreviations such as IFAC, SI, ac, and dc do not have to be defined. Abbreviations that incorporate periods should not have spaces: write ``C.N.R.S.'', not ``C. N. R. S.'' Do not use abbreviations in the title unless they are unavoidable (for example, ``IFAC'' in the title of this article). \subsection{Equations} Number equations consecutively with equation numbers in parentheses flush with the right margin, as in (\ref{eq:sample}). To make your equations more compact, you may use the solidus ($/$), the $\exp$ function, or appropriate exponents. Use parentheses to avoid ambiguities in denominators.
Punctuate equations when they are part of a sentence, as in \begin{equation} \label{eq:sample2} \begin{array}{ll} \int_0^{r_2} & F (r, \varphi ) dr d\varphi = [\sigma r_2 / (2 \mu_0 )] \\ & \cdot \int_0^{\infty} \exp(-\lambda |z_j - z_i |) \lambda^{-1} J_1 (\lambda r_2 ) J_0 (\lambda r_i ) d\lambda \end{array} \end{equation} Be sure that the symbols in your equation have been defined before the equation appears or immediately following. Italicize symbols ($T$ might refer to temperature, but T is the unit tesla). Refer to ``(\ref{eq:sample})'', not ``Eq. (\ref{eq:sample})'' or ``equation (\ref{eq:sample})'', except at the beginning of a sentence: ``Equation (\ref{eq:sample}) is \ldots''. \subsection{Other Recommendations} Use one space after periods and colons. Hyphenate complex modifiers: ``zero-field-cooled magnetization''. Avoid dangling participles, such as, ``Using (1), the potential was calculated'' (it is not clear who or what used (1)). Write instead: ``The potential was calculated by using (1)'', or ``Using (1), we calculated the potential''. A parenthetical statement at the end of a sentence is punctuated outside of the closing parenthesis (like this). (A parenthetical sentence is punctuated within the parentheses.) Avoid contractions; for example, write ``do not'' instead of ``don't''. The serial comma is preferred: ``A, B, and C'' instead of ``A, B and C''. \section{Conclusion} A conclusion section is not required. Although a conclusion may review the main points of the paper, do not replicate the abstract as the conclusion. A conclusion might elaborate on the importance of the work or suggest applications and extensions. \begin{ack} Place acknowledgments here. \end{ack} \section{Introduction} Model predictive control (MPC) approaches have been considered a viable modern control design method that can handle complex dynamic behavior and constraints.
The presence of uncertainties in the system dynamics has led to various MPC designs that are robust to different non-idealities. Robust MPC (RMPC) approaches such as \citep{chisci2001systems,mayne2005robust} consider bounded disturbances and use constraint set tightening to provide stability guarantees. \cite{pin2009robust} proposed a tube-based approach for nonlinear systems with state-dependent uncertainties. An event-triggered robust MPC approach has been discussed in \cite{liu2017aperiodic} for constrained continuous-time nonlinear systems. Robust MPC approaches, however, do not necessarily exploit the existing statistical properties of the uncertainty, thereby leading to conservative designs. Stochastic MPC (SMPC), on the other hand, accounts for some admissible level of constraint violation by employing chance constraints. An overview of SMPC-based approaches can be found in \cite{mesbah2016stochastic}. Works such as \cite{fleming2018stochastic,lorenzen2017stochastic,farina2016stochastic,streif2014stochastic} developed methodologies that use a sufficient number of randomly generated samples to satisfy the chance constraints with probabilistic guarantees. Others~\citep{GOULART2006523,Primbs2009,Hokayem2009} have proposed feedback parametrizations to render the underlying stochastic optimal control problem tractable. Some authors proposed approximation schemes that allow obtaining explicit solutions to stochastic MPC problems with linear~\citep{Drgona_SEMPC2013} or nonlinear~\citep{Grancharova2010} system models via parametric programming solvers. Unfortunately, these explicit stochastic MPC methods do not scale to systems with a large number of states, inputs, and constraints. Recently, more focus has been given to learning-based MPC (LBMPC) methods~\citep{ASWANI20131216}. These methods primarily learn the system dynamics model from data, following the framework of classical adaptive MPC~\citep{aswani2014practical}.
A recent review of LBMPC approaches can be found in~\cite{Hewing2020} and references therein. Approaches combining LBMPC and RMPC include the formulation of robust MPC with state-dependent uncertainty for data-driven linear models~\citep{SOLOPERTO2018442}, iterative model updates for linear systems with bounded uncertainties and robustness guarantees~\citep{Bujarbaruah2018AdaptiveMF}, Gaussian process-based approximations for tractable MPC~\citep{hewing2019cautious}, and approximate model predictive control via supervised learning~\citep{DRGONA2018,LUCIA2018511}. Backpropagating through a learned model parametrized via convex neural networks~\citep{AmosXK16} was investigated in~\cite{chen2018optimal}. \cite{Zanon2019,ZanonRLMPC2019} use reinforcement learning (RL) and automatic differentiation (AD) for tuning of the MPC parameters, while \cite{Lenz2015DeepMPCLD} discussed a recurrent neural model for learning latent dynamics for MPC. In a learning-based MPC setting, various forms of probabilistic guarantees are discussed in recent works such as \cite{hertneck2018learning, rosolia2018stochastic, Karg2021}. \textit{Contributions.} In this paper, we provide a computationally efficient, offline learning algorithm for obtaining parametric solutions to stochastic explicit model predictive control problems. In particular, we combine the classical ideas of deterministic sampling of chance constraints together with feedback parametrization from the stochastic MPC literature with recent ideas from learning-based approaches for obtaining approximate MPC policies via sampling the parametric space. However, as opposed to supervised learning-based methods such as~\cite{hertneck2018learning,LUCIA2018511}, we propose an unsupervised sampling-based approach to solve the underlying stochastic parametric optimal control problem by leveraging differentiable programming~\citep{DiffProg2019}.
The proposed method, called stochastic parametric differentiable predictive control (SP-DPC), is an extension of the deterministic DPC policy optimization algorithm introduced in~\cite{drgona2021learning,DRGONA202114}. The main idea of DPC is to cast the MPC problem as a differentiable program implemented in a programming framework supporting AD for efficient computation of gradients of the underlying constrained optimization problem. The differentiability in DPC allows us to backpropagate the MPC objective and constraint penalties through the unrolled closed-loop system, consisting of the system dynamics model and neural control policy, to compute the sensitivities of control objectives and constraints to changes in the policy weights. From a theoretical perspective, we provide stochastic feasibility and stability guarantees based on the chance constraint framework and Hoeffding's inequality. Specifically, we adapt the probabilistic performance guarantees introduced in~\cite{hertneck2018learning} to the context of unsupervised learning of constrained control policies for the proposed DPC method with chance constraints. From an empirical perspective, we demonstrate the computational efficiency of the proposed SP-DPC policy optimization algorithm compared to implicit MPC via online solvers, with scalability beyond the limitations of explicit MPC via classical parametric programming solvers. In three numerical examples, we demonstrate the stochastic robustness of the proposed SP-DPC method to additive uncertainties and its capability to deal with a range of control tasks such as stabilization of unstable systems, stochastic reference tracking, and stochastic parametric obstacle avoidance with nonlinear constraints.
\section{Method} \subsection{Problem Statement} Let us consider the following uncertain discrete-time linear dynamical system: \begin{equation} \label{eq:truth:model:uncertain} {\bf x}_{k+1} = {\bf A} {\bf x}_k + {\bf B } {\bf u}_k + \boldsymbol \omega_k, \end{equation} where ${\bf x}_k \in \mathbb{X} \subseteq \mathbb{R}^{n_{x}}$ are state variables, and ${\bf u}_k \in \mathbb{U} \subseteq \mathbb{R}^{n_{u}}$ are control inputs. The state matrices $\bf A$ and $\bf B$ are assumed to be known; however, the dynamical system is corrupted by $\boldsymbol \omega_k \in \Omega \subseteq \mathbb{R}^{n_{\omega}}$, a time-varying additive unmeasured uncertainty. We consider that states and inputs are subject to joint parametric nonlinear chance constraints in their standard form: \begin{subequations} \label{eq:chance_con} \begin{align} \textbf{Pr}( h({\bf x}_k, {\bf p}_k) \le {\bf 0}) \ge \beta, \\ \textbf{Pr}( g({\bf u}_k, {\bf p}_k) \le {\bf 0}) \ge \beta, \end{align} \end{subequations} where $ {\bf p}_k \in \Xi \subset \mathbb{R}^{n_{p}}$ represents a vector of parameters, and $\beta \in (0,1]$ is the user-specified probability requirement. The choice $\beta = 1$ corresponds to hard constraints, where the state- and parameter-dependent constraints have to be satisfied at all times. However, with a relaxed probabilistic requirement, i.e., with $\beta < 1$, we can allow for constraint violation with probability $1-\beta$, thereby trading off performance against robustness. Unfortunately, joint chance constraints~\eqref{eq:chance_con} are in general non-convex and intractable~\citep{BAVDEKAR2016270}.
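As a hedged illustration of what a chance constraint of the form~\eqref{eq:chance_con} demands, the following pure-Python sketch estimates the satisfaction probability by Monte Carlo over sampled disturbances; the scalar system, constraint, and bounds are toy choices, not from the examples in this paper:

```python
# Hedged illustration: empirical check of Pr(h(x) <= 0) >= beta by sampling
# the additive disturbance. All numbers below are invented for illustration.

import random

random.seed(0)
a, b, beta = 0.5, 1.0, 0.9
h = lambda x: x - 2.0          # constraint h(x) <= 0  <=>  x <= 2

samples, satisfied = 1000, 0
for _ in range(samples):
    w = random.uniform(-0.1, 0.1)          # bounded additive uncertainty
    x_next = a * 1.0 + b * 0.5 + w         # one step from x = 1 with u = 0.5
    satisfied += h(x_next) <= 0.0

prob = satisfied / samples                 # empirical Pr(h(x) <= 0)
chance_ok = prob >= beta                   # chance constraint holds
```

For this toy system the disturbance band is narrow enough that every sample satisfies the constraint, so the empirical probability is $1$ and the chance constraint holds for any $\beta \le 1$.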
One of the approaches for dealing with joint chance constraints in constrained optimization is to use tractable deterministic surrogates via sampling~\citep{streif2014stochastic,Drgona_SEMPC2013}: \begin{subequations} \label{eq:det_chance_con} \begin{align} h({\bf x}_k^j, {\bf p}_k) & \le {\bf 0}, \ j \in \mathbb{N}_1^{s} \\ g({\bf u}_k^j, {\bf p}_k) & \le {\bf 0}, \ j \in \mathbb{N}_1^{s} \end{align} \end{subequations} where $j$ represents the index of a deterministic realization of the chance constraints with uncertainties sampled from the distribution $\boldsymbol\omega_k^j \thicksim P_{\boldsymbol \omega}$ using $s$ samples. For control, we consider an arbitrary differentiable performance metric given as a function of states, control actions, and problem parameters $ \ell( {\bf x}_k, {\bf u}_k, {\bf p}_k) $. A standard example is the parametric reference tracking objective: \begin{equation} \label{eq:param_obj} \ell( {\bf x}_k, {\bf u}_k, {\bf p}_k) = || {\bf r}_{k} - {\bf x}_k ||_{Q_{r}}^2 + || {\bf u}_k ||_{Q_{u}}^2 \end{equation} where the reference signal ${\bf r}_{k} \in \mathbb{X} \subset \mathbb{R}^{n_{x}}$ belongs to the parameter vector ${\bf p}_k$, and $|| \cdot||_Q^2$ is the squared $2$-norm weighted by a scalar factor $Q$. Given the system dynamics model~\eqref{eq:truth:model:uncertain}, our aim is to find a parametric predictive optimal control policy represented by a deep neural network $ \pi_{\boldsymbol \theta}({\bf x}_0, \boldsymbol \xi): \mathbb{R}^{n_x + Nn_{p}} \to \mathbb{R}^{Nn_u} $~\eqref{eq:dnn} that minimizes the parametric control objective function~\eqref{eq:param_obj} over a finite prediction horizon of $N$ steps, while satisfying the parametric chance constraints~\eqref{eq:chance_con}.
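The sampled deterministic surrogate~\eqref{eq:det_chance_con} can be illustrated with the following hedged pure-Python sketch: the chance constraint is replaced by $s$ per-sample constraints, one for each drawn disturbance realization. The scalar system and bounds are illustrative placeholders:

```python
# Hedged sketch of the sampled deterministic surrogate: enforce h(x^j) <= 0
# for every sampled disturbance realization j = 1..s. Numbers are toy values.

import random

random.seed(1)
a, b, s = 0.8, 1.0, 20
h = lambda x: abs(x) - 1.5     # toy state constraint: keep |x| <= 1.5

def surrogate_feasible(x0, u, samples):
    """True iff h holds for every sampled one-step disturbance realization."""
    return all(h(a * x0 + b * u + w) <= 0.0 for w in samples)

disturbances = [random.uniform(-0.2, 0.2) for _ in range(s)]
feasible = surrogate_feasible(1.0, 0.0, disturbances)     # x+ in [0.6, 1.0]
infeasible = surrogate_feasible(1.0, 1.0, disturbances)   # x+ in [1.6, 2.0]
```

Each sample adds one deterministic inequality, so the surrogate is differentiable in the decision variables and can be handled by gradient-based optimization, at the cost of conservativeness growing with $s$.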
We assume a fully connected neural network policy given as: \begin{subequations} \label{eq:dnn} \begin{align} {\bf U} = \boldsymbol{\pi}_{ \boldsymbol \theta}({\bf x}_0, \boldsymbol {\xi}) & = \mathbf{W}_{L} \mathbf{z}_L + \mathbf{b}_{L} \\ \mathbf{z}_{l} &= \boldsymbol\sigma(\mathbf{W}_{l-1} \mathbf{z}_{l-1} + \mathbf{b}_{l-1}) \label{eq:dnn:layer}\\ \mathbf{z}_0 &= [ {\bf x}_0^T, \boldsymbol {\xi}^T ] \end{align} \end{subequations} where ${\bf U} = [{\bf u}_0, \ldots, {\bf u}_{N-1} ]$ is an open-loop control sequence, and $\boldsymbol {\xi} = [{\bf p}_0, \ldots, {\bf p}_{N-1} ]$ represents a forecast of the problem parameters, e.g., reference and constraints preview, over the $N$-step ahead prediction horizon. Vector ${\bf x}_0 = {\bf x}(t)$ represents full state feedback of the initial conditions measured at time $t$. In the policy architecture, $\mathbf{z}_i$ represent hidden states, with $\mathbf{W}_i$ and $\mathbf{b}_i$ being the weights and biases of the $i$-th layer, respectively, compactly represented as the policy parameters $\boldsymbol \theta$ to be optimized. The nonlinearity $\boldsymbol\sigma: \mathbb{R}^{n_{z_l}} \rightarrow \mathbb{R}^{n_{z_l}}$ is given by element-wise application of a differentiable activation function $\sigma: \mathbb{R} \rightarrow \mathbb{R}$.
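As a concrete illustration of the forward pass in~\eqref{eq:dnn}, the following is a minimal pure-Python sketch; the layer sizes, weights, and the $\tanh$ activation are arbitrary placeholders, not trained values from the paper:

```python
# Minimal pure-Python sketch of the fully connected policy in eq. (dnn):
# z0 = [x0, xi], hidden layers z_l = sigma(W z + b), linear output layer.
# All weights and sizes below are invented for illustration only.

import math

def dense(W, b, z):
    """Affine layer: W z + b, with W a list of rows and b a list of biases."""
    return [sum(wij * zj for wij, zj in zip(row, z)) + bi
            for row, bi in zip(W, b)]

def policy(x0, xi, layers):
    z = list(x0) + list(xi)                         # z0 = [x0^T, xi^T]
    for W, b in layers[:-1]:
        z = [math.tanh(v) for v in dense(W, b, z)]  # sigma, elementwise
    W, b = layers[-1]
    return dense(W, b, z)                           # linear output: U

# Tiny policy: 2 inputs (n_x = 1 state, one parameter) -> 2 hidden -> 1 action.
layers = [([[0.5, -0.5], [0.3, 0.1]], [0.0, 0.0]),  # hidden layer
          ([[1.0, 1.0]], [0.0])]                    # linear output layer
u = policy([1.0], [0.0], layers)
```

In practice the weights would be the optimized parameters $\boldsymbol \theta$ and the output would stack all $N$ control actions; here a single scalar action keeps the sketch self-contained.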
Our objective is to optimize the explicit neural control policy~\eqref{eq:dnn} by solving the following stochastic parametric optimal control problem: \begin{subequations} \label{eq:SPOCP} \begin{align} \min_{\boldsymbol \theta} \ & \mathbb{E} \ \sum_{k=0}^{N-1}\ell( {\bf x}_k, {\bf u}_k, {\bf p}_k), & \\ \text{s.t.} \ & {\bf x}_{k+1} = {\bf A} {\bf x}_k + {\bf B } {\bf u}_k + \boldsymbol \omega_k, \ k \in \mathbb{N}_{0}^{N-1}, & \\ \ & [{\bf u}_0, \ldots, {\bf u}_{N-1} ] = \boldsymbol{\pi}_{ \boldsymbol \theta}({\bf x}_0, \boldsymbol {\xi}), \\ & \textbf{Pr}( h({\bf x}_k, {\bf p}_k) \le {\bf 0}) \ge \beta, \\ & \textbf{Pr}( g({\bf u}_k, {\bf p}_k) \le {\bf 0}) \ge \beta, \\ \ & {\bf x}_0 \thicksim P_{{\bf x}_0}, \\ \ &\boldsymbol \xi = [{\bf p}_0, \ldots, {\bf p}_{N-1} ] \thicksim P_{\boldsymbol \xi}, \\ \ &\boldsymbol \omega_k \thicksim P_{\boldsymbol \omega}, \ k \in \mathbb{N}_{0}^{N-1}. \end{align} \end{subequations} Before we present the policy optimization algorithm that solves problem~\eqref{eq:SPOCP}, we consider the following assumptions. \textit{Assumption 1:} The nominal system dynamics model~\eqref{eq:truth:model:uncertain} is controllable. \textit{Assumption 2:} The parametric control performance metric $ \ell( {\bf x}_k, {\bf u}_k, {\bf p}_k) $ and the constraints $ h({\bf x}_k, {\bf p}_k)$ and $ g({\bf u}_k, {\bf p}_k)$ are at least once differentiable functions. \textit{Assumption 3:} The input disturbances are bounded, i.e., $\boldsymbol \omega_k \in \Omega := \{ \boldsymbol \omega : ||\boldsymbol \omega||_\infty \leq \nu \}$ for any time instant $k \in \mathbb{N}_{0}^{N-1}$.
\textit{Assumption 4: \citep{hertneck2018learning}} There exists a local Lyapunov function $V({\bf x}_k) = || {\bf x}_k ||_{P}^2$ with terminal set $\mathcal{X}_T = \{ {\bf x}_k : V({\bf x}_k) \leq \kappa \}$ and control law $\pi_{\boldsymbol \phi}({\bf x}_k, \boldsymbol \xi)$ such that $\forall {\bf x}_k \in \mathcal{X}_T$, ${\bf A} {\bf x}_k + {\bf B } \pi_{\boldsymbol \phi}({\bf x}_k, \boldsymbol \xi) + \boldsymbol \omega_k \in \mathcal{X}_T$ for all $\boldsymbol \omega_k \in \Omega$, and the decrease of the Lyapunov function is bounded by $ -\ell( {\bf x}_k, {\bf u}_k, {\bf p}_k) $ in the terminal set $\mathcal{X}_T$. Assumption $4$ guarantees that once the chance constraints are satisfied during the transient phase of the closed loop and the trajectories reach the terminal set $\mathcal{X}_T$ under the bounded disturbances, the terminal set is robustly positively invariant. To simplify the exposition, Table~\ref{tab:variables} summarizes the notation used in this paper. \begin{table}[htb!] \begin{center} \caption{Overview of the notation used.}\label{tab:variables} \begin{tabular}{cccc} notation & meaning & sampled from & belongs to \\ \hline ${\bf U}^{i,j}$ & control trajectories & - & $\mathbb{U}$ \\ ${\bf x}_0^{i,j}$ & initial system states & $P_{{\bf x}_0}$ & $\mathbb{X}$ \\ ${\bf p}_k^i$ & problem parameters & $P_{\boldsymbol \xi}$ & $\Xi$\\ $\boldsymbol \omega^j_k$ & additive uncertainties & $P_{\boldsymbol \omega}$ & $\Omega$\\ $i$ & parametric scenario & - & $\mathbb{N}_{1}^{m}$ \\ $j$ & uncertainty scenario & - & $\mathbb{N}_{1}^{s}$ \\ $k$ & time index & - & $\mathbb{N}_{0}^{N}$ \\ \hline \end{tabular} \end{center} \end{table} \subsection{Stochastic Parametric Differentiable Predictive Control} In this section, we present the stochastic parametric differentiable predictive control (SP-DPC) method, cast as the following parametric stochastic optimal control problem in the Lagrangian form: \begin{subequations} \label{eq:DPC} \begin{align} \min_{\boldsymbol \theta} \ &
\mathbb{E} \ J({\bf x}_0, \boldsymbol \xi, \boldsymbol\Omega) & \label{eq:DPC:objective} \\ \text{s.t.} \ & {\bf x}_{k+1}^{i,j} = {\bf A} {\bf x}_k^{i,j} + {\bf B } {\bf u}_k^{i,j} + \boldsymbol \omega_k^j, \ k \in \mathbb{N}_{0}^{N-1} \label{eq:dpc:x} & \\ \ & [{\bf u}_0^{i,j}, \ldots, {\bf u}_{N-1}^{i,j} ] = \boldsymbol{\pi}_{ \boldsymbol \theta}({\bf x}^{i,j}_0, \boldsymbol {\xi}^i) \label{eq:dpc:pi} \\ \ & {\bf x}_0^{i,j} \thicksim P_{{\bf x}_0}, \label{eq:dpc:x0} \\ \ &\boldsymbol \xi^i = [{\bf p}_0^i, \ldots, {\bf p}_{N-1}^i ] \thicksim P_{\boldsymbol \xi}, \label{eq:dpc:xi} \\ \ & \boldsymbol \Omega^j = [\boldsymbol \omega_0^j, \ldots, \boldsymbol \omega_{N-1}^j] \thicksim P_{\boldsymbol \omega} \label{eq:dpc:omega} \\ \ & k \in \mathbb{N}_{0}^{N-1}, \ i \in \mathbb{N}_{1}^{m}, \ j \in \mathbb{N}_{1}^{s} \end{align} \end{subequations} The stochastic evolution of the state variables ${\bf x}_k^{i,j}$ is obtained via rollouts of the system dynamics equation~\eqref{eq:dpc:x} with initial conditions sampled from the distribution $P_{{\bf x}_0}$. Vector ${\bf p}_k^i$ represents a realization of the parameters of the optimal control problem sampled from the distribution $P_{\boldsymbol \xi}$. Samples of the initial conditions~\eqref{eq:dpc:x0} and problem parameters~\eqref{eq:dpc:xi} define $m$ known unique environment scenarios. On the other hand, $\boldsymbol \omega_k^j$ represents unmeasured additive uncertainties independently sampled from the distribution $P_{\boldsymbol \omega}$, defining $s$ unique disturbance scenarios, with each scenario leading to one uncertain episode. Overall, problem~\eqref{eq:DPC} has $ms$ unique scenarios, indexed by tuples $(i,j)$, that parametrize the expected value of the loss function~\eqref{eq:DPC:objective} through their effect on the system dynamics rollouts over $N$ steps via the model equation~\eqref{eq:dpc:x}.
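The batched scenario rollouts described above can be sketched as follows; this is a hedged pure-Python illustration in which the scalar dynamics, the trivial policy, and all sample sizes are invented placeholders:

```python
# Hedged sketch: rolling out the closed-loop model over m parametric and s
# uncertainty scenarios to obtain the batched trajectories x_k^{i,j}.

import random

random.seed(2)
A, B, N, m, s = 0.9, 1.0, 5, 3, 4
policy = lambda x0, xi: [-0.2 * x0] * N        # stand-in for pi_theta

x0_samples = [random.uniform(-1, 1) for _ in range(m)]        # ~ P_x0
omega = [[random.uniform(-0.05, 0.05) for _ in range(N)]      # ~ P_omega
         for _ in range(s)]

trajectories = {}
for i, x0 in enumerate(x0_samples):
    for j in range(s):
        x, u = x0, policy(x0, None)
        traj = [x]
        for k in range(N):
            x = A * x + B * u[k] + omega[j][k]  # model rollout step
            traj.append(x)
        trajectories[(i, j)] = traj             # trajectory for scenario (i,j)
```

Each of the $ms$ scenario pairs produces one $N$-step episode; in SP-DPC these batched rollouts are the quantities the loss is averaged over.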
Hence, the parametric loss function is defined over the sampled distributions of the problem's initial conditions, measured time-varying parameters, and unmeasured additive and parametric uncertainties as follows: \begin{equation} \begin{split} \label{eq:policy_loss} J({\bf x}_0, \boldsymbol \xi, \boldsymbol\Omega) = \frac{1}{msN} \sum_{i=1}^{m} \sum_{j=1}^{s} \sum_{k=0}^{N-1} \big( \ell( {\bf x}_k^{i,j}, {\bf u}_k^{i,j}, {\bf p}_k^i ) + \\ p_x(h({\bf x}_k^{i,j}, {\bf p}_k^i)) + p_u(g({\bf u}_k^{i,j}, {\bf p}_k^i)) \big) \end{split} \end{equation} where $ \ell( {\bf x}_k, {\bf u}_k, {\bf p}_k): \mathbb{R}^{n_x + n_u + n_p} \to \mathbb{R} $ defines the control objective, while $p_x(h({\bf x}_k, {\bf p}_k)): \mathbb{R}^{n_h + n_{p}} \to \mathbb{R} $ and $ p_u(g({\bf u}_k, {\bf p}_k)): \mathbb{R}^{n_g + n_{p}} \to \mathbb{R} $ define penalties of the parametric state and input constraints, given as: \begin{subequations} \label{eq:ReLU_ineq} \begin{align} p_x(h({\bf x}_k, {\bf p}_k)) & = || \texttt{ReLU}(h({\bf x}_k, {\bf p}_k))||_{Q_h}^2 \\ p_u(g({\bf u}_k, {\bf p}_k )) & = || \texttt{ReLU}(g({\bf u}_k, {\bf p}_k))||_{Q_g}^2 \end{align} \end{subequations} where $|| \cdot||_Q^2$ represents the squared $2$-norm weighted by a scalar factor $Q$, and $\texttt{ReLU}$ denotes the rectified linear unit function. \subsection{Stochastic Parametric DPC Policy Optimization} The advantage of the model-based approach of DPC is that we can directly compute the policy gradient using automatic differentiation.
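The ReLU constraint penalties~\eqref{eq:ReLU_ineq} reduce, in the scalar case, to a squared hinge on the violation; the following hedged sketch uses toy values for the constraint and weight:

```python
# Hedged sketch of the ReLU constraint penalties: violations h(x) > 0 incur a
# Q-weighted squared penalty, satisfied constraints incur none.
# The constraint and weight below are illustrative placeholders.

relu = lambda v: max(v, 0.0)

def penalty(h_val, Q=10.0):
    """Q-weighted squared ReLU penalty for a scalar constraint value."""
    return Q * relu(h_val) ** 2

h = lambda x: x - 1.0            # toy state constraint x <= 1
p_ok  = penalty(h(0.5))          # constraint satisfied -> zero penalty
p_bad = penalty(h(1.3))          # violated by 0.3 -> 10 * 0.3**2 = 0.9
```

Because the penalty is zero and flat inside the feasible region, gradients only push the policy when a sampled rollout actually violates a constraint.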
For a simpler exposition of the policy gradient, we start by defining the following proxy variables: \begin{subequations} \begin{align} L & = \frac{1}{msN} \sum_{i=1}^{m} \sum_{j=1}^{s} \sum_{k=0}^{N-1}\ell( {\bf x}_k^{i,j}, {\bf u}_k^{i,j}, {\bf p}_k^i) \\ P_x & = \frac{1}{msN} \sum_{i=1}^{m} \sum_{j=1}^{s} \sum_{k=0}^{N-1} p_x(h({\bf x}_k^{i,j}, {\bf p}_k^i)) \\ P_u & = \frac{1}{msN} \sum_{i=1}^{m} \sum_{j=1}^{s} \sum_{k=0}^{N-1} p_u(g({\bf u}_k^{i,j}, {\bf p}_k^i)) \end{align} \end{subequations} where $L$, $P_x$, and $P_u$ represent scalar values of the control objective, state constraints, and input constraints, respectively, evaluated on the $N$-step ahead rollouts of the closed-loop system dynamics over the batches of sampled problem parameters ($i$-th index) and uncertainties ($j$-th index). This parametrization now allows us to express the policy gradient by using the chain rule as follows: \begin{equation} \label{eq:grad} \begin{split} \nabla_{{\bf W}} J = \frac{ \partial L}{\partial {\bf W}} + \frac{ \partial P_x}{\partial {\bf W}} + \frac{ \partial P_u }{\partial {\bf W}} = \\ \frac{ \partial L}{\partial {\bf x}} \frac{ \partial {\bf x}}{\partial {\bf u}} \frac{ \partial {\bf u}}{\partial {\bf W}} + \frac{ \partial L}{\partial {\bf u}} \frac{ \partial {\bf u}}{\partial {\bf W}} + \frac{ \partial P_x}{\partial {\bf x}} \frac{ \partial {\bf x}}{\partial {\bf u}} \frac{ \partial {\bf u}}{\partial {\bf W}} + \frac{ \partial P_u}{\partial {\bf u}} \frac{ \partial {\bf u}}{\partial {\bf W}} \end{split} \end{equation} where $\frac{ \partial {\bf u}}{\partial {\bf W}}$ denotes the partial derivatives of the neural policy outputs with respect to the policy weights, computed via standard backpropagation through the neural network architecture. Having the fully parametrized policy gradient~\eqref{eq:grad} allows us to train the neural control policies offline to obtain approximate solutions of the stochastic parametric optimal control problem~\eqref{eq:DPC} using gradient-based optimization.
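The chain rule in~\eqref{eq:grad} can be verified on a toy scalar stand-in: for a one-step rollout $x_1 = A x_0 + B u$ with policy $u = \theta x_0$ and loss $J = x_1^2$, the analytic gradient matches a finite-difference estimate, which is what automatic differentiation computes exactly in DPC. All numbers below are illustrative:

```python
# Hedged illustration of the policy-gradient chain rule on a scalar one-step
# rollout. This is a toy stand-in for automatic differentiation, not the
# actual SP-DPC implementation.

A, B, x0, theta = 0.9, 1.0, 2.0, -0.5

def loss(th):
    u = th * x0                 # policy output u = theta * x0
    x1 = A * x0 + B * u         # one-step model rollout
    return x1 ** 2              # control objective

# Chain rule: dJ/dtheta = dJ/dx1 * dx1/du * du/dtheta = 2*x1 * B * x0.
x1 = A * x0 + B * theta * x0
grad_analytic = 2.0 * x1 * B * x0

eps = 1e-6
grad_fd = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
```

For this quadratic loss the central difference agrees with the chain-rule gradient to numerical precision.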
Specifically, we propose a policy optimization algorithm (Algorithm~\ref{algo:DPC_optim}) that is based on joint sampling of the problem parameters and initial conditions from the distributions $P_{\boldsymbol \xi}$ and $P_{{\bf x}_0}$, creating $m$ parametric scenarios, and independent sampling of the uncertainties from the distribution $P_{\boldsymbol \omega}$, creating $s$ uncertainty scenarios. Then we construct a differentiable computational graph of problem~\eqref{eq:DPC} with the known matrices of the nominal system model~\eqref{eq:truth:model:uncertain}, a randomly initialized neural control policy~\eqref{eq:dnn}, and the parametric loss function~\eqref{eq:policy_loss}. This model-based approach allows us to directly compute the policy gradients~\eqref{eq:grad} by backpropagating the loss function values~\eqref{eq:policy_loss} through the unrolled closed-loop system dynamics model over the $N$-step prediction horizon window. The computed policy gradients are then used to train the weights of the control policy $\boldsymbol \theta$ via a gradient-based optimizer $\mathbb{O}$, e.g., stochastic gradient descent and its popular variants. \begin{algorithm}[htb!]
\caption{Stochastic parametric differentiable predictive control (SP-DPC) policy optimization.}\label{algo:DPC_optim} \begin{algorithmic}[1] \State \textbf{input} training datasets with $m$ samples of initial conditions ${\bf x}_0$ and problem parameters $\boldsymbol \xi$ sampled from the distributions $P_{{\bf x}_0}$, and $P_{\boldsymbol \xi}$, respectively \State \textbf{input} training datasets of $s$ realizations of uncertainties $\boldsymbol \Omega$ sampled from the distribution $P_{\boldsymbol \omega}$ \State \textbf{input} nominal system model $({\bf A} ,{\bf B })$ \State \textbf{input} neural feedback policy architecture $\pi_{\boldsymbol \theta}({\bf x}_0, \boldsymbol \xi)$ \State \textbf{input} stochastic parametric DPC loss function $J$~\eqref{eq:policy_loss} \State \textbf{input} gradient-based optimizer $\mathbb{O}$ \State \textbf{differentiate} parametric DPC loss $J$~\eqref{eq:policy_loss} over the sampled distributions of the initial conditions, problem parameters, and uncertainties to obtain the policy gradient $ \nabla_{{\bf W}} J $~\eqref{eq:grad} \State \textbf{learn} policy $\pi_{\boldsymbol \theta}$ via optimizer $\mathbb{O}$ using gradient $ \nabla_{\boldsymbol \theta} J $ \State \textbf{return} optimized parameters $\boldsymbol \theta$ of the policy $\pi_{ \boldsymbol \theta}$ \end{algorithmic} \end{algorithm} Algorithm~\ref{algo:DPC_optim} can be implemented in any programming language supporting reverse-mode automatic differentiation on user-defined computational graphs, e.g., Julia, Swift, or Python. In our case, we implemented the proposed algorithm using PyTorch~\citep{paszke2019pytorch} in Python.
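The essence of Algorithm~\ref{algo:DPC_optim} (sample scenarios, unroll the closed-loop dynamics, descend the policy gradient) can be sketched on a toy scalar system. The snippet below substitutes a single-gain linear policy $u = -Kx$ and finite-difference gradients for the neural policy and the automatic differentiation used in the actual PyTorch implementation; all numbers are illustrative:

```python
import random

def rollout_loss(K, A, B, x0s, ws, N, Qx=5.0, Qu=0.2):
    """Average quadratic cost of N-step closed-loop rollouts of the scalar
    system x+ = A x + B u + w under the linear policy u = -K x."""
    J, m, s = 0.0, len(x0s), len(ws)
    for x0 in x0s:                # parametric scenarios
        for w in ws:              # uncertainty scenarios
            x = x0
            for k in range(N):    # unrolled closed-loop dynamics
                u = -K * x
                J += Qx * x * x + Qu * u * u
                x = A * x + B * u + w[k]
    return J / (m * s * N)

random.seed(1)
A, B, N = 1.2, 1.0, 5            # unstable open loop (A > 1)
x0s = [random.uniform(-1.0, 1.0) for _ in range(8)]
ws = [[random.gauss(0.0, 0.1) for _ in range(N)] for _ in range(4)]

# gradient descent on the policy parameter K; finite differences stand in
# for the reverse-mode automatic differentiation used in practice
K, lr, eps = 0.0, 5e-3, 1e-4
J0 = rollout_loss(K, A, B, x0s, ws, N)
for step in range(300):
    g = (rollout_loss(K + eps, A, B, x0s, ws, N)
         - rollout_loss(K - eps, A, B, x0s, ws, N)) / (2.0 * eps)
    K -= lr * g
J1 = rollout_loss(K, A, B, x0s, ws, N)
```

The trained gain drives the loss well below its initial value, mirroring how the neural policy weights are updated offline in the full algorithm.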
\textit{Remark 1}: Thanks to the known system dynamics model and the known parametric forms of the control objective and chance constraints in the proposed policy optimization Algorithm~\ref{algo:DPC_optim}, there is no need for value function approximation based on an explicit reward signal as in actor-critic reinforcement learning (RL) algorithms. Instead, the parametric Lagrangian loss~\eqref{eq:policy_loss} represents the value function, which can be directly sampled without approximation error. \section{Probabilistic Guarantees} In a learning-based MPC setting, recent works such as \citep{hertneck2018learning, rosolia2018stochastic, Karg2021} discuss a few probabilistic considerations. In this paper, we bring a novel stochastic sampling-based design for the differentiable predictive control architecture, along with chance constraints on the closed-loop state evolution, and provide appropriate probabilistic guarantees motivated by \citep{hertneck2018learning}. We use the differentiable predictive control policy optimization Algorithm~\ref{algo:DPC_optim} to solve the stochastic optimization problem~\eqref{eq:DPC}. The algorithm uses a forward simulation of the trajectories, which follows the uncertain dynamical behavior. The system is perturbed by the bounded disturbance input $\boldsymbol \omega_k^j$. Therefore, for a given initial condition, the uncertainty samples satisfy $ \boldsymbol \omega_k^j \in \Phi$, where $\Phi$ represents a bounded set. Moreover, during the batch-wise training, we consider different trajectories belonging to different parametric scenarios with varying initial conditions ${\bf x}_0^i \in \mathbb{X} \subseteq \mathbb{R}^{n_{x}}$ and problem parameters $ \boldsymbol \xi^i \in \boldsymbol \Xi \subseteq \mathbb{R}^{Nn_{p}}$.
We consider variations of the initial conditions~\eqref{eq:dpc:x0} and problem parameters~\eqref{eq:dpc:xi} over $m$ known unique environment scenarios during training, denoted with superscript $i$, and variations over $s$ different disturbance scenarios, denoted with superscript $j$. Before discussing the feasibility and stability theorem, let us denote the set of state trajectories over the stochastic closed-loop system rollouts ${\bf X}^{i,j}$, with control action sequences generated using the learned explicit SP-DPC policy $[{\bf u}_0^{i,j}, \ldots, {\bf u}_{N-1}^{i,j}] = \pi_{\boldsymbol \theta}({\bf x}_0^{i,j}, \boldsymbol \xi^i)$, for given samples of parametric $\ i \in \mathbb{N}_{1}^{m}$ and disturbance $j \in \mathbb{N}_{1}^{s} $ scenarios, i.e., \begin{align} {\bf X}^{i,j}: \begin{cases} &\{ {\bf x}_k^{i,j}\}, \ \forall k \in \mathbb{N}_{0}^{N-1}, \nonumber\\ &{\bf x}_{k+1}^{i,j} = {\bf A} {\bf x}_k^{i,j} + {\bf B } {\bf u}_k^{i,j} + \boldsymbol \omega_k^j. \end{cases} \end{align} We also compactly denote the constraint satisfaction condition over the sampled trajectories, \begin{align} {\bf P}^{i,j}= \mbox{True} : \begin{cases} h({\bf x}_k^{i,j}, {\bf p}_k^i) \le {\bf 0}, \ \forall k \in \mathbb{N}_{0}^{N-1}, \\ g({\bf u}_k^{i,j}, {\bf p}_k^i) \le {\bf 0}, \ \forall k \in \mathbb{N}_{0}^{N-1}, \\ {\bf x}_N^{i,j} \in \mathcal{X}_T. \end{cases} \end{align} Now we define the indicator function \begin{align} \mathcal{I}({\bf X}^{i,j}) := \begin{cases} \label{eq:Indicator} 1 \;\; \text{if} \;\; {\bf P}^{i,j}=\mbox{True}, \\ 0 \;\; \text{otherwise}, \end{cases} \end{align} which signifies whether the learned control law $\pi_{\boldsymbol \theta}({\bf x}_0^{i,j}, \boldsymbol \xi^i)$ satisfies the constraints along a sample trajectory until the terminal set $\mathcal{X}_T$ is reached.
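As a hypothetical illustration of the indicator $\mathcal{I}({\bf X}^{i,j})$ and of the empirical risk $\tilde{\mu}$ used in the sequel, the following sketch rolls out a scalar closed-loop system with a saturated linear policy and counts the fraction of sampled trajectories satisfying box state/input constraints and a terminal set; all bounds, gains, and sample sizes are assumptions:

```python
import random

def indicator(A, B, policy, x0, w_seq, N, x_max=10.0, u_max=1.0, x_term=0.5):
    """1 if the rollout satisfies the state/input box constraints at every
    step and lands in the terminal box |x_N| <= x_term, else 0."""
    x = x0
    for k in range(N):
        u = policy(x)
        if abs(x) > x_max or abs(u) > u_max:
            return 0
        x = A * x + B * u + w_seq[k]
    return 1 if abs(x) <= x_term else 0

random.seed(2)
A, B, N, m, s = 1.0, 1.0, 8, 50, 10
policy = lambda x: max(-1.0, min(1.0, -0.8 * x))   # saturated linear policy
x0s = [random.uniform(-2.0, 2.0) for _ in range(m)]
ws = [[random.gauss(0.0, 0.01) for _ in range(N)] for _ in range(s)]
# empirical risk: fraction of the m*s sampled trajectories with P^{i,j} True
mu_tilde = sum(indicator(A, B, policy, x0, w, N)
               for x0 in x0s for w in ws) / (m * s)
```

For this well-behaved toy policy every sampled trajectory satisfies the constraints, so the empirical risk evaluates to one.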
\textbf{Theorem 1:} Consider the sampling-based approximation of the stochastic parametric constrained optimal control problem~\eqref{eq:DPC} along with Assumptions 1--4, and the SP-DPC policy optimization Algorithm~\ref{algo:DPC_optim}. Choose a chance constraint violation probability $1-\beta$ and a level of confidence parameter $\delta$. Then if the empirical risk $\tilde{\mu}$ on the indicator function \eqref{eq:Indicator}, computed with a sufficiently large number of sample trajectories $r=ms$, satisfies $\beta \le \tilde{\mu} - \sqrt{-\frac{\ln{\frac{\delta}{2}}}{2r}}$, then the learned SP-DPC policy $\pi_{\boldsymbol \theta}({\bf x}_0, \boldsymbol \xi)$ will guarantee satisfaction of the chance constraints~\eqref{eq:chance_con} and closed-loop stability in a probabilistic sense. \qed \textit{Proof:} We consider $m$ different scenarios encompassing sampled initial conditions and parameters, and $s$ different disturbance scenarios, thereby generating $r = ms$ sample trajectories. Moreover, the initial conditions, parameters, and disturbance scenarios are each sampled in an iid fashion, thereby making ${\bf X}^{i,j}$ and $\mathcal{I}({\bf X}^{i,j})$ iid under the learned control policy $\pi_{\boldsymbol \theta}({\bf x}_0^{i,j}, \boldsymbol \xi^i)$. The empirical risk over the sampled trajectories is defined as \begin{align} \Tilde{\mu} = \frac{1}{ms}\sum_{i=1}^m\sum_{j=1}^s \mathcal{I}({\bf X}^{i,j}). \end{align} Constraint satisfaction and stability are guaranteed if for ${\bf x}_0^{i,j} \thicksim P_{{\bf x}_0}, \ \boldsymbol \xi^i \thicksim P_{\boldsymbol \xi}, \ \boldsymbol \omega_k^j \thicksim P_{\boldsymbol \omega}$, we have $\mathcal{I}({\bf X}^{i,j}) = 1$, i.e., the deterministic samples of the chance constraints are satisfied: $h({\bf x}_k^{i,j}, {\bf p}_k^i) \le {\bf 0}$, $ g({\bf u}_k^{i,j}, {\bf p}_k^i) \le {\bf 0}$, ${\bf x}_N^{i,j} \in \mathcal{X}_T$, $\forall i \in \mathbb{N}_1^m$, and $\forall j \in \mathbb{N}_1^s$.
Denoting $\mu := \textbf{Pr}(\mathcal{I}({\bf X}^{i,j}) = 1)$, we recall Hoeffding's inequality \citep{hertneck2018learning} to estimate $\mu$ from the empirical risk $\tilde{\mu}$, leading to: \begin{align} \textbf{Pr}(|\tilde{\mu} - \mu| \geq \alpha) \leq 2\mbox{exp}(-2r\alpha^2) \;\; \forall \alpha > 0. \end{align} Therefore, denoting $\delta := 2\mbox{exp}(-2r\alpha^2)$, with confidence $1 - \delta$ we will have, \begin{align} \label{eq:prob_guarantee} \textbf{Pr}(\mathcal{I}({\bf X}^{i,j}) = 1) = \mu \geq \tilde{\mu} - \alpha. \end{align} Thus for a chosen confidence level $\delta$ and risk lower bound $\beta \leq \textbf{Pr}(\mathcal{I}({\bf X}^{i,j}) = 1)$, we can evaluate the empirical risk bound: \begin{equation} \label{eq:mu_bound} \beta \le \tilde{\mu} - \alpha = \tilde{\mu} - \sqrt{-\frac{\ln{\frac{\delta}{2}}}{2r}} \end{equation} Hence, for a fixed chosen level of confidence $\delta$ and risk lower bound $\beta$, the empirical risk $\tilde{\mu}$ and margin $\alpha$ can be computed for an experimental value of $r$. When \eqref{eq:mu_bound} holds for the policies trained via Algorithm~\ref{algo:DPC_optim}, then with confidence at least $1-\delta$, for at least a fraction $\beta$ of the trajectories ${\bf X}^{i,j}$ we will have ${\bf P}^{i,j}=\mbox{True}$. In other words, the chance constraints~\eqref{eq:chance_con} are satisfied with confidence $1-\delta$. Furthermore, along with the constraint satisfaction of the closed-loop trajectories, Assumption 4 ensures the existence of a positively invariant terminal set in the presence of bounded disturbances, thereby maintaining stability once the constraints are satisfied in a probabilistic sense. This concludes the proof guaranteeing stability and constraint satisfaction of the policy learned using the SP-DPC policy optimization Algorithm~\ref{algo:DPC_optim}.
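The Hoeffding margin $\alpha$ and the empirical risk bound~\eqref{eq:mu_bound} reduce to a one-line computation; a minimal sketch follows, where the sample counts and confidence level are illustrative:

```python
import math

def hoeffding_margin(r, delta):
    """Margin alpha solving delta = 2 exp(-2 r alpha^2), i.e.
    |mu_tilde - mu| < alpha with confidence at least 1 - delta
    over r iid sample trajectories (Hoeffding's inequality)."""
    return math.sqrt(-math.log(delta / 2.0) / (2.0 * r))

def risk_lower_bound(mu_tilde, r, delta):
    # beta <= mu_tilde - alpha: lower bound on the satisfaction probability
    return mu_tilde - hoeffding_margin(r, delta)

# e.g. m = 3333 parametric scenarios times s = 10 uncertainty scenarios,
# evaluated at 99% confidence (delta = 0.01); numbers are illustrative
r, delta = 3333 * 10, 0.01
beta = risk_lower_bound(1.0, r, delta)
```

Note that the margin shrinks as $1/\sqrt{r}$, so quadrupling the number of sampled trajectories halves $\alpha$.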
\qed \section{Numerical Case Studies} In this section, we present three numerical case studies to showcase the flexibility of the proposed SP-DPC method. In particular, we demonstrate stochastically robust stabilization of an unstable system, reference tracking and scalability to stochastic systems with a larger number of states and control inputs, and stochastic parametric obstacle avoidance with nonlinear constraints. All examples are implemented in NeuroMANCER~\citep{Neuromancer2021}, a PyTorch-based~\citep{paszke2019pytorch} toolbox for solving constrained parametric optimization problems using sampling-based algorithms such as the proposed Algorithm~\ref{algo:DPC_optim}. All examples below have been trained using the stochastic gradient descent optimizer AdamW~\citep{loshchilov2017decoupled}. \subsection{Stabilizing an Unstable Constrained Stochastic System} \label{sec:ex_1_stabilize} In this example, we show the ability of the SP-DPC policy optimization Algorithm~\ref{algo:DPC_optim} to learn offline a stabilizing neural feedback policy for an unstable double integrator: \begin{equation} \label{eq:double_int_un} {\bf x}_{k+1} = \begin{bmatrix} 1.2 & 1.0 \\ 0.0 & 1.0\end{bmatrix} {\bf x}_k + \begin{bmatrix} 1.0 \\ 0.5 \end{bmatrix} {\bf u}_k + \boldsymbol \omega_k \end{equation} with normally distributed additive uncertainties $\boldsymbol \omega_k \thicksim \mathcal{N}(0, 0.01)$. For stabilizing the system~\eqref{eq:double_int_un} we consider the following quadratic control performance metric: \begin{equation} \label{eq:empc_qp} \ell( {\bf x}_k, {\bf u}_k, {\bf p}_k) = \sum_{k=0}^{N-1} \big( || {\bf x}_k||_{Q_x}^2 + || {\bf u}_k ||_{Q_u}^2 \big) \end{equation} with the prediction horizon $N=2$.
We also consider the following static state and input constraints: \begin{subequations} \label{eq:double_int_constr} \begin{align} h({\bf x}_k, {\bf p}_k) & : \ -{\bf x}_k -{\bf 10} \le {\bf 0} , \ {\bf x}_k - {\bf 10} \le {\bf 0} \\ g({\bf u}_k, {\bf p}_k) & : \ -{\bf u}_k -{\bf 1} \le {\bf 0} , \ {\bf u}_k - {\bf 1} \le {\bf 0} \end{align} \end{subequations} Additionally, we consider the terminal constraint: \begin{equation} \label{eq:terminal_con} {\bf x}_N \in \mathcal{X}_f: -{\bf 0.1} \le {\bf x}_N \le {\bf 0.1}. \end{equation} Using the known system dynamics matrices~\eqref{eq:double_int_un}, the control metric~\eqref{eq:empc_qp}, and the constraints~\eqref{eq:double_int_constr} and~\eqref{eq:terminal_con}, we construct the loss function $J$~\eqref{eq:policy_loss} with the constraint penalties~\eqref{eq:ReLU_ineq}. For the relative weights of the loss function terms, we use $Q_x = 5.0$, $Q_u = 0.2$, $Q_h = 10.0$, $Q_g = 100.0$, $Q_f = 1.0$, where $Q_f$ refers to the terminal penalty weight. This allows us to use a fully parametrized loss function $J$ in Algorithm~\ref{algo:DPC_optim} to train the full-state feedback neural policy ${\bf u}_k = \pi_{\boldsymbol \theta}({\bf x}_k): \mathbb{R}^{2} \to \mathbb{R}^{1}$ with $4$ layers, each with $20$ hidden units and \texttt{ReLU} activation functions. For generating the synthetic training dataset we use $m=3333$ samples of uniformly distributed initial conditions from the interval ${\bf x}_0^i \in [-{\bf 10}, {\bf 10}]$. Then for each initial condition ${\bf x}_0^i$ we sample $s=10$ realizations of the uncertainties $\boldsymbol \omega_k^j$ and form a dataset with a total of $sm = 33{,}330$ iid samples. Thus in this example, the problem parameters represent the initial conditions $\boldsymbol \xi = [{\bf x}_0^T]$. Then for the given number of samples $r=ms$ and a chosen confidence level $1-\delta=0.99$ we evaluate the empirical risk bound via~\eqref{eq:mu_bound}.
Thus we can say with $99.0 \%$ confidence that we satisfy the closed-loop stability and chance constraints with probability at least $\Tilde{\mu} -\alpha = 0.9951 $. The stochastic closed-loop control trajectories of the system~\eqref{eq:double_int_un} controlled by the trained neural feedback policy, for $20$ realizations of the additive uncertainties, are shown in Figure~\ref{fig:DPC_cl_ex1}. This example demonstrates the stochastically robust control performance of the stabilizing neural policy trained using the SP-DPC policy optimization Algorithm~\ref{algo:DPC_optim}. \begin{figure}[htb!] \centering \includegraphics[width=.40\textwidth]{./figs/closed_loop_sdpc.pdf} \caption{Closed-loop trajectories of the stochastic double integrator system~\eqref{eq:double_int_un} controlled by a stabilizing neural feedback policy trained using Algorithm~\ref{algo:DPC_optim} with the DPC problem formulation~\eqref{eq:DPC}. Different colors represent the $j$-th realization of the additive uncertainty scenario. } \label{fig:DPC_cl_ex1} \end{figure} \subsection{Stochastic Constrained Reference Tracking} \label{sec:ex_2_tracking} In this case study, we demonstrate the scalability of the proposed SP-DPC policy optimization Algorithm~\ref{algo:DPC_optim} to systems with a larger number of states and control actions. In particular, we consider the linear quadcopter model\footnote{https://osqp.org/docs/examples/mpc.html} with ${\bf x}_k \in \mathbb{R}^{12}$ and ${\bf u}_k \in \mathbb{R}^4$, subject to additive uncertainties $\boldsymbol \omega_k \in \mathbb{R}^{12}$, $\boldsymbol \omega_k \thicksim \mathcal{N}(0, 0.02^2)$. The objective is to track the reference with the $3$-rd state, while keeping the rest of the states stable.
Hence, for training the policy via Algorithm~\ref{algo:DPC_optim}, we consider the SP-DPC problem~\eqref{eq:DPC} with the quadratic control objective: \begin{equation} \label{eq:reference} \ell({\bf x}_k, {\bf u}_k, {\bf p}_k ) = \sum_{k=0}^{N-1} \big( || {\bf y}_k - {\bf r}_k||_{Q_r}^2 + || {\bf x}_k ||_{Q_x}^2 \big) \end{equation} with horizon $N =10$ and weights $Q_r = 20$, $Q_x = 5$. We consider the following state and input constraints: \begin{subequations} \label{eq:quadcopter_constr} \begin{align} h({\bf x}_k, {\bf p}_k) & : \ -{\bf x}_k -{\bf 10} \le {\bf 0} , \ {\bf x}_k - {\bf 10} \le {\bf 0} \\ g({\bf u}_k, {\bf p}_k) & : \ -{\bf u}_k -{\bf 1} \le {\bf 0} , \ {\bf u}_k - {\bf 2.5} \le {\bf 0} \end{align} \end{subequations} penalized via~\eqref{eq:ReLU_ineq} with weights $Q_{h} = 1$ and $Q_{g} = 2$, respectively. To promote stability, we impose the contraction constraint with penalty weight $Q_{c} = 1$: \begin{equation} \label{eq:state_contract_con} h({\bf x}_k, {\bf p}_k) : \ || {\bf x}_{k+1} ||_p \le 0.8 || {\bf x}_{k} ||_p \end{equation} We trained the open-loop full-state feedback neural policy~\eqref{eq:dnn} $ \pi_{{\bf W}}({\bf x}): \mathbb{R}^{12} \to \mathbb{R}^{N \times 4}$ with $2$ layers, $100$ hidden units, and \texttt{ReLU} activation functions using a training dataset with $m=3333$ samples of initial conditions drawn uniformly from the interval ${\bf x}_0^i \in [-{\bf 2}, {\bf 2}]$, with $s=10$ samples of uncertainties $\boldsymbol \omega_k \thicksim \mathcal{N}(0, 0.02^2)$ for each parametric scenario. Then for the given number of samples $r=ms$ and a chosen confidence level $1-\delta=0.99$ we evaluate the empirical risk bound via~\eqref{eq:mu_bound}. Thus we can say with $99.0 \%$ confidence that we satisfy the closed-loop stability and chance constraints with probability at least $\Tilde{\mu} -\alpha = 0.9832$.
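As a small illustration, the contraction constraint~\eqref{eq:state_contract_con} can be penalized with the same ReLU construction as the other inequality constraints; a minimal sketch follows, where the norm order and contraction factor match the text and the test points are assumptions:

```python
def contraction_penalty(x_next, x_curr, rho=0.8, p=2):
    """ReLU penalty for the contraction constraint
    ||x_{k+1}||_p <= rho ||x_k||_p; zero when the step contracts."""
    norm = lambda v: sum(abs(c) ** p for c in v) ** (1.0 / p)
    return max(norm(x_next) - rho * norm(x_curr), 0.0)

# a contracting step incurs no penalty; a non-contracting step does
ok = contraction_penalty([0.5, 0.5], [1.0, 1.0])
bad = contraction_penalty([1.0, 1.0], [1.0, 1.0])
```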
Figure~\ref{fig:e1:DPC_cl} then shows the closed-loop simulations with the trained policy implemented using receding horizon control (RHC). We demonstrate robust performance in tracking the desired reference for the stochastic system while keeping the overall system stable under perturbation. \begin{figure}[htb!] \centering \includegraphics[width=.40\textwidth]{./figs/quadcopter_sdpc_psim30k_wsim3_sigma0.02.pdf} \caption{Closed-loop trajectories of the stochastic quadcopter model controlled by a reference tracking neural feedback policy trained using Algorithm~\ref{algo:DPC_optim} with the SP-DPC problem formulation~\eqref{eq:DPC}. Different colors represent the $j$-th realization of the additive uncertainty scenario. } \label{fig:e1:DPC_cl} \end{figure} In Table~\ref{tab:cpu_quadcopter} we demonstrate the computational scalability of the proposed approach compared to implicit deterministic MPC implemented in CVXPY~\citep{diamond2016cvxpy} and solved online via the OSQP solver~\citep{osqp2020}. We outperform the online deterministic MPC by more than an order of magnitude in both mean and maximum evaluation time. Please note that due to the larger prediction horizon and state and input dimensions, the problem being solved is far beyond the reach of classical parametric programming solvers. \begin{table}[htb!] \begin{center}\caption{Comparison of online computational time of the proposed SP-DPC policy against implicit MPC solved with OSQP.} \label{tab:cpu_quadcopter} \begin{tabular}{l|ll} {online evaluation time} & mean [$1e^{-3}$ s] & max [$1e^{-3}$ s] \\ \hline SP-DPC & 0.272 & 1.038 \\ MPC (OSQP) & 9.196 & 82.857 \\ \hline \end{tabular} \end{center} \end{table} \subsection{Stochastic Parametric Obstacle Avoidance} \label{sec:ex_3_obstacle} Here we demonstrate that the proposed SP-DPC policy optimization Algorithm~\ref{algo:DPC_optim} can be applied to stochastic parametric obstacle avoidance problems with nonlinear constraints.
We assume the double integrator system: \begin{equation} \label{eq:double_int} {\bf x}_{k+1} = \begin{bmatrix} 1.0 & 0.1 \\ 0.0 & 1.0\end{bmatrix} {\bf x}_k + \begin{bmatrix} 1.0 & 0.0 \\ 0.0 & 1.0 \end{bmatrix} {\bf u}_k + \boldsymbol \omega_k \end{equation} with ${\bf x} \in \mathbb{R}^2$ and ${\bf u} \in \mathbb{R}^2$ subject to the box constraints~\eqref{eq:double_int_constr}. Furthermore, let us assume an obstacle parametrized by the nonlinear state constraint: \begin{equation} \label{eq:obstacle} h({\bf x}_k, {\bf p}_k) : p^2 \le b ({\bf x}_{1,k} - c)^ 2 + ({\bf x}_{2,k} - d) ^ 2 \end{equation} where ${\bf x}_{i,k}$ denotes the $i$-th state, $p$ and $b$ are parameters defining the size and shape of the obstacle, and $c$ and $d$ define its center. For training the SP-DPC neural policy via Algorithm~\ref{algo:DPC_optim} we use the following objective: \begin{equation} \label{eq:obstacle_loss} \begin{split} \ell({\bf x}_k, {\bf u}_k, {\bf p}_k ) = || {\bf x}_N - {\bf r}_N||_{Q_r}^2 + \sum_{k=0}^{N-2} || {\bf u}_{k+1} - {\bf u}_k ||_{Q_{du}}^2 +\\ \sum_{k=0}^{N-1} \big( || {\bf x}_{k+1} - {\bf x}_k ||_{Q_{dx}}^2 + || {\bf u}_k ||_{Q_{u}}^2 \big) \end{split} \end{equation} with the prediction horizon $N=20$. The first term penalizes the deviation of the terminal state from the target position parametrized by ${\bf r}_N$, while the second and third terms penalize the change in control actions and states, thus promoting solutions with smoother trajectories. The last term penalizes the control energy. We assume the following scaling factors: $Q_r = 1.0$, $Q_{du} = 1.0$, $Q_{dx} =1.0$, $Q_u = 1.0$, and $Q_h = 100.0$, for the reference, input and state smoothing, energy minimization, and constraint penalties, respectively. The uncertainty scenarios $\boldsymbol \omega_k^j $ are sampled from the normal distribution $\mathcal{N}(0, 0.01^2)$, with a total of $s = 100$ samples.
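As a small illustration, the obstacle constraint~\eqref{eq:obstacle} translates into a ReLU penalty that is positive only inside the obstacle; a minimal sketch with assumed parameter values:

```python
def obstacle_violation(x1, x2, p, b, c, d):
    """ReLU penalty for p^2 <= b (x1 - c)^2 + (x2 - d)^2:
    positive inside the obstacle, zero outside it."""
    return max(p * p - b * (x1 - c) ** 2 - (x2 - d) ** 2, 0.0)

# a point at the obstacle center violates the constraint; a distant one does not
center = obstacle_violation(0.0, 0.0, p=1.0, b=1.0, c=0.0, d=0.0)
outside = obstacle_violation(3.0, 0.0, p=1.0, b=1.0, c=0.0, d=0.0)
```

Because the penalty is a smooth function of the states except on the obstacle boundary, it can be backpropagated through the unrolled dynamics like the other loss terms.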
The vector of sampled parameters for this problem is given as $\boldsymbol\xi^i = [{\bf x}_0^T, {\bf r}_N^T, p, b, c, d]^i$, with a total of $m=1000$ samples. This yields $r=ms=100{,}000$ scenarios in total, of which $33{,}000$ are used for training. Due to the constraint~\eqref{eq:obstacle}, the resulting parametric optimal control problem~\eqref{eq:DPC} becomes nonlinear. To demonstrate the computational efficiency of the proposed SP-DPC method, we evaluate the computational time of the neural policies trained via Algorithm~\ref{algo:DPC_optim} against a deterministic nonlinear MPC implemented in the CasADi framework~\citep{Andersson2019} and solved online using the IPOPT solver~\citep{Wchter2006OnTI}. Table~\ref{tab:cpu_obstacle} shows the mean and maximum online computational time associated with the evaluation of the learned SP-DPC neural policy, compared against the implicit nonlinear MPC. The learned neural control policy is roughly $5$ times faster in the worst case and an order of magnitude faster on average than the deterministic NMPC solved online via IPOPT. \begin{table}[htb!] \begin{center}\caption{Online computational time of the SP-DPC neural policy, compared against implicit nonlinear MPC solved online via IPOPT.} \label{tab:cpu_obstacle} \begin{tabular}{l|ll} {online evaluation time} & mean [$1e^{-3}$ s] & max [$1e^{-3}$ s] \\ \hline SP-DPC & 2.555 & 10.144 \\ MPC (IPOPT) & 28.362 & 53.340 \\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[htb!]
\centering \includegraphics[width=0.49\linewidth]{./figs/stoch_obstacle_2.pdf} \includegraphics[width=0.49\linewidth]{./figs/stoch_obstacle_1.pdf} \includegraphics[width=0.49\linewidth]{./figs/stoch_obstacle_3.pdf} \includegraphics[width=0.49\linewidth]{./figs/stoch_obstacle_4.pdf} \caption{Examples of trajectories with different parametric scenarios of the stochastic obstacle avoidance problem~\eqref{eq:obstacle} obtained using neural policy trained via SP-DPC Algorithm~\ref{algo:DPC_optim}. Nominal nonlinear MPC without uncertainties is computed online using IPOPT solver.} \label{fig:obstacle} \end{figure} \section{Summary} In this paper, we presented a learning-based stochastic parametric differentiable predictive control (SP-DPC) methodology for uncertain dynamic systems. We consider probabilistic chance constraints on the state trajectories and use a sampling-based approach encompassing variations in initial conditions, problem parameters, and disturbances. Our proposed SP-DPC policy optimization algorithm employs automatic differentiation for efficient computation of policy gradients for the constrained stochastic parametric optimization problem, which forms the basis for the differentiable programming paradigm for predictive control of uncertain systems. We provided rigorous probabilistic guarantees for the learned SP-DPC neural policies for constraint satisfaction and closed-loop system stability. Our approach enjoys better scalability than classical parametric programming solvers and is computationally more efficient than state-of-the-art nonlinear programming solvers in the online evaluation. We substantiate these claims in three numerical examples, including stabilization of unstable systems, parametric reference tracking, and parametric obstacle avoidance with nonlinear constraints. \begin{ack} This research was supported by the U.S. 
Department of Energy, through the Office of Advanced Scientific Computing Research's ``Data-Driven Decision Control for Complex Systems (DnC2S)'' project. Pacific Northwest National Laboratory is operated by Battelle Memorial Institute for the U.S. Department of Energy under Contract No. DE-AC05-76RL01830. \end{ack}
\section{Event Simulation} The signal process of interest is the production of various resonances in association with a jet, viz. $pp \to X(\to jj)+ j$ with $X \in \{Z^{\prime}_{\mu},C_{\mu},\Phi_{6},q^*,X^{\mu\nu},S_{8}\}$\footnote{We also performed an analogous analysis of the production of the resonances in association with a W boson, which will be reported in a future work.}, where the resonance is boosted sufficiently that its decay products lie within a single ``fat jet''. The dominant background originates from QCD multijet events. The various resonance models were implemented in {\tt Feynrules}~\cite{Alloul:2013bka}. Parton-level events for both signal and background were simulated using {\tt MADGRAPH\_AMC@NLO}~\cite{Alwall:2014hca} assuming a 13 TeV LHC energy, with subsequent showering and hadronization performed using {\tt PYTHIA8}~\cite{Sjostrand:2014zea}. We use {\tt FASTJET}~\cite{Cacciari:2011ma} to reconstruct jets and calculate JECs. Additionally, jet energy smearing and detector granularity are simulated using {\tt Delphes3}~\cite{deFavereau:2013fsa} with parameters similar to the ATLAS detector. We use the Cambridge-Aachen algorithm~\cite{Dokshitzer:1997in} to construct fat jets of radius $R=1.0$ and use the mass-drop tagger~\cite{Butterworth:2008iy} to resolve the fat jets into subjets, reconstructing the mass of the resonance $X$ within $\rm M_{X}\pm 20~GeV$ to help reduce the background. Importantly, we find that the mass-drop tagger does not significantly affect the JEC distributions of unfiltered signal fat jets. Further, the acceptance of the tagger does not depend significantly on the nature of the resonance. We require $\rm H_{T} = \Sigma p_{T} > 900\ \text{GeV}$ and $p^{\text{fatjet}}_{T} > 500~\text{GeV}$. We use {\tt MCFM}~\cite{Campbell:2015qma} to determine K-factors for NLO production of the $V+$jets, $t\bar{t}$ and single top backgrounds.
NLO K-factors for the dijet production cross-section were determined using {\tt POWHEGBOX}~\cite{Nason:2004rx,Frixione:2007vw,Alioli:2010xd}. Further, we use the MLM~\cite{Mangano:2002ea} matching procedure in {\tt PYTHIA8} for the multi-jet events generated in {\tt MADGRAPH\_AMC@NLO}. For the purpose of demonstration, the mass of the resonance is set to $M_{X} = 250$~GeV. The current $95\%$ CL bound on a 250 GeV leptophobic $Z'$ from $35.9 ~\text{fb}^{-1}$ of $13$~TeV data is $g_b \stackrel{<}{\sim} 1.5$ $(g_q\stackrel{<}{\sim} 0.22)$, compared with an expected bound of $g_b\stackrel{<}{\sim} 1.1$~\cite{Sirunyan:2017dnz}. We therefore consider a $Z'$ resonance with $g_b=0.6$, which is still allowed by the data. For this coupling, we find that the cross section after all cuts is $25$~fb. For all other resonances we adjust the values of the couplings such that the cross-section after cuts is likewise $25$~fb. We find that our total background is $\sim 50$~pb, with the dominant contribution coming from QCD multi-jet processes. With our cuts we expect $S/\sqrt{B}\sim 1.9\sigma$, which is comparable to the expectations of experimental results. JECs were originally introduced in \cite{Banfi:2004yd,Jankowiak:2011qa} as a two-point correlator, and generalized in \cite{Larkoski:2013eya}. Studies of JECs have focused on Standard Model processes, especially distinguishing quark jets from gluon jets. Additionally, JECs have been shown to be able to differentiate boosted Higgs bosons and top quarks from QCD backgrounds \cite{Larkoski:2013eya}. The N-point generalized JEC is defined as \cite{Larkoski:2013eya}, % {\footnotesize \begin{equation}\label{eq:JEC} ECF(N,\beta)= \sum_{(i_1<..<i_N\in J)}\prod_{a=1}^{N} p_{T{i_a}}\Bigg(\prod_{b=1}^{N-1}\prod_{c=b+1}^{N}R_{i_{b}i_{c}}\Bigg)^{\beta}.
\end{equation} } The sum runs over all objects (tracks\footnote{Here we define the JECs in terms of the individual particles in the ``fat jet'' in the simulated event, after using the detector simulation as noted above.} or calorimeter cells) within a system J (individual jets or all final states of the collision). $p_{T_{i}}$ is the transverse momentum of each constituent object. The variable $R_{ij}= \sqrt{(\eta_{i}-\eta_{j})^{2} + (\theta_{i} - \theta_{j})^{2}}$ denotes a pairwise distance measure and is raised to the power $\beta$. Here $\eta_i$ is the pseudo-rapidity while $\theta_{i}$ is the azimuthal angle of particle $i$. The entire function is infrared and collinear safe for $\beta > 0$. Using Eq.~\ref{eq:JEC} one can construct a dimensionless double ratio as \begin{equation} \label{eq:CNbeta} C_{N}^{(\beta)} = \frac{ECF(N+1,\beta)ECF(N-1,\beta)}{ECF(N,\beta)^{2}}\ . \end{equation} In general, $C_{N}^{(\beta)}$ quantifies radiation of higher order $\alpha_{s}^{n}$ emerging from the leading-order hard subjets. In a boosted $Z^{\prime}\to j_{1} j_{2}$-like system, if $C_{2}^{(\beta)} < C_{1}^{(\beta)}$, the fat jet has two resolved hard subjets, and higher-order substructure is mostly soft or collinear. With subsequent soft emissions in the final state, one can assume $p_{T}^{j_1}\simeq p_{T}^{j_2} \gg p_{T}^{j_i}$, where $i> 2$. Thus, the leading approximation can be written as \begin{equation}\label{eq:C11} C_{1}^{(1)}\simeq R_{12}/4\ . \end{equation} Since $R_{12}\simeq 2 m_{Z^{\prime}}/p_{T}^{j}$, $C_{1}^{(1)}$ is directly related to the boost of the resonance. We show the distribution of $C_{1}^{(1)}$ for the various resonances in Fig.~\ref{fig:c11}. Since we require $R\le 1.0$ and $p_{T}^{\text{fat-jet}}> 500$~GeV, we see that $C_{1}^{(1)}\lesssim 0.25$. Further, since the $p_{T}$ spectrum is almost identical for all resonances under consideration, the distributions of $C_{1}^{(1)}$ look nearly the same.
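A direct transcription of Eq.~\ref{eq:JEC} and the double ratio~\eqref{eq:CNbeta} is straightforward; the sketch below checks the relation $C_{1}^{(1)}\simeq R_{12}/4$ for an idealized fat jet consisting of two equal-$p_T$ subjets (the constituent kinematics are assumptions):

```python
import itertools
import math

def ecf(particles, N, beta):
    """N-point energy correlation function ECF(N, beta); particles are
    (pT, eta, phi) tuples and ECF(0) = 1 by convention."""
    total = 0.0
    for combo in itertools.combinations(particles, N):
        term = 1.0
        for pt, _, _ in combo:
            term *= pt                       # product of transverse momenta
        for (_, e1, f1), (_, e2, f2) in itertools.combinations(combo, 2):
            term *= math.hypot(e1 - e2, f1 - f2) ** beta   # pairwise R_ij
        total += term
    return total

def c_ratio(particles, N, beta):
    # dimensionless double ratio C_N^(beta) = ECF(N+1) ECF(N-1) / ECF(N)^2
    return (ecf(particles, N + 1, beta) * ecf(particles, N - 1, beta)
            / ecf(particles, N, beta) ** 2)

# idealized fat jet: two equal-pT subjets separated by R12 = 0.4
jet = [(250.0, 0.0, 0.0), (250.0, 0.0, 0.4)]
c11 = c_ratio(jet, 1, 1.0)   # reproduces C_1^(1) = R12 / 4 = 0.1
```

In an actual analysis the constituents would be the Delphes-level particles of the reconstructed fat jet rather than two ideal subjets.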
The $p_{T}$ distributions for $q^{\star}$ and $X^{\mu\nu}$ are slightly harder (and therefore $C_{1}^{(1)}$ is shifted to smaller values) since their interactions are mediated by dimension-5 operators. We also note that information about the initial state, and therefore the nature of the resonance, can be gleaned by comparing the $p_T$ distributions for cases when the resonance is produced in association with other particles such as a $W$-boson. The lower end of the $C_{1}^{(1)}$ distribution is bounded by detector resolution: this is the minimal separation between subjets that can be resolved, and is encoded in our implementation of the mass-drop tagger. \begin{figure}[t] \centering \includegraphics[width= 0.45 \textwidth]{Correlators/ptc11_j.png} \caption{The double ratio distribution for $C_{1}^{(1)}$ for the different kinds of resonances under consideration: $Z^{\prime}$ in pink (small-dashed), sextet diquark $\Phi_{6}$ in black (dotted), coloron ($C_{\mu}$) in red (bold, thick), excited quark ($q^{\star}$, Xquark) in green (large-dashed), spin-2 ($X^{\mu\nu}$) in blue (bold, thin), and scalar color octet ($S_8$) in black (dot-dashed). The cyan shaded region corresponds to the distribution of the multi-jet background. \label{fig:c11}} \end{figure} Higher-point moments of the JEC depend crucially on the nature of the resonance, in particular the color structure not only of the resonance but also of its decay products: since $C_F< C_A$, a color octet will radiate more widely than a color triplet. This implies that the correlator double ratios $C_{N}^{(\beta)}$ should in general be larger for a color octet than for a color triplet, and smallest for a color singlet.
\begin{figure}[t] \centering \includegraphics[width= 0.45 \textwidth]{Correlators/ptc22_j.png} \caption{ The double ratio distributions $C_{2}^{(2)}$ for the resonances and the multi-jet background.\label{fig:c22}} \end{figure} In Fig.~\ref{fig:c22} we present distributions for the double ratios $C_{2}^{(2)}$. To understand the behavior of $C_{2}^{(\beta)}$, consider a simplified scenario of the two-body hadronic decay of a resonance X with one soft emission, $\rm X \to 1 + 2 + 3_{\text{soft}}$, where $3_{\text{soft}}$ originates from $1$. In the soft and collinear approximation we expect the distance measure $R_{13}$ to be small and $\rm p_{T}^{j1}\simeq p_{T}^{j2} (=p_{T}) \gg p_{T}^{j3}$. $C_{2}^{(\beta)}$ can then be approximated as \begin{equation} C_{2}^{(\beta)} \simeq \frac{ 2 \varepsilon R_{12}^{\beta}R_{13}^{\beta}R_{23}^{\beta} } { (R_{12}^{\beta} + \varepsilon R_{13}^{\beta} + \varepsilon R_{23}^{\beta} )^{2} }~; \end{equation} note that $\varepsilon R_{13}= (p_{T}^{j3}/{p_{T}})R_{13} \ll 1$ is doubly suppressed since the third jet, $3_{\text{soft}}$, is both low-momentum and collinear with jet 1. We therefore expect $C^{(2)}_{2}$ to peak near $0$, as seen in Fig.~\ref{fig:c22}. As discussed earlier, a small $C_2^{(2)}$ implies that the event is mostly a two-prong subjet system. In Fig.~\ref{fig:c22} we also see, as expected, that the color singlet $Z^{\prime}$ has the smallest values of $C_{2}^{(2)}$, whereas, due to the presence of more radiation, the colored objects have larger values. Although the spin-2 resonance is a color singlet, its distribution is not identical to that of the $Z^{\prime}$ and instead has larger values of $C_{2}^{(2)}$. This is because the spin-2 resonance predominantly decays to gluons, which themselves produce broader jets (since $C_F < C_A$), whereas the coloron and $Z^{\prime}$ decay to quarks, which produce narrower radiation patterns.
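The soft/collinear approximation for $C_{2}^{(\beta)}$ can be checked numerically against the exact double ratio for a three-constituent configuration; in the sketch below the momenta and angular distances are illustrative assumptions:

```python
def c2_exact(pts, R, beta):
    """Exact C_2^(beta) for three constituents with transverse momenta pts
    and pairwise angular distances R[(i, j)], built from ECF(1..3)."""
    p1, p2, p3 = pts
    e1 = p1 + p2 + p3
    e2 = (p1 * p2 * R[(0, 1)] ** beta + p1 * p3 * R[(0, 2)] ** beta
          + p2 * p3 * R[(1, 2)] ** beta)
    e3 = p1 * p2 * p3 * (R[(0, 1)] * R[(0, 2)] * R[(1, 2)]) ** beta
    return e3 * e1 / e2 ** 2

def c2_soft(eps, R, beta):
    # leading soft/collinear approximation quoted in the text
    r12, r13, r23 = R[(0, 1)] ** beta, R[(0, 2)] ** beta, R[(1, 2)] ** beta
    return 2.0 * eps * r12 * r13 * r23 / (r12 + eps * r13 + eps * r23) ** 2

# two hard subjets (1, 2) plus a soft emission (3) collinear with subjet 1
pT, eps, beta = 500.0, 0.01, 2.0
R = {(0, 1): 0.4, (0, 2): 0.05, (1, 2): 0.38}
exact = c2_exact([pT, pT, eps * pT], R, beta)
approx = c2_soft(eps, R, beta)
```

For this configuration the two agree at the half-percent level, consistent with corrections of relative size $\varepsilon$.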
As expected, the color octet scalar resonance has the largest values of $C_{2}^{(2)}$ since it is itself an octet which decays to a pair of octets (gluons). Also shown in Fig.~\ref{fig:c22} is the distribution of $C_{2}^{(2)}$ for the dominant multi-jet background. We see that its distribution is significantly different from most of the signal distributions, and therefore the JECs can be used not only to discriminate between different signals but also to discriminate signal from background.\footnote{CMS~\cite{Sirunyan:2017dnz} uses JECs in its search to discriminate between a $Z^{\prime}$ and background. The behavior of $C_2^{(2)}$ suggests that in addition to enhancing $S/\sqrt{B}$ we can simultaneously use it to discriminate between resonances ($S_8$ being an exception).} The scalar octet behaves most like the QCD multi-jet background since, at low masses, the background is mostly gluonic in origin. \begin{figure}[t] \centering \includegraphics[width= 0.45 \textwidth]{Correlators/ptc23_j.png} \caption{The double ratio distributions for $C_{3}^{(2)}$ for the resonances and the multi-jet background.\label{fig:c23}} \end{figure} Further discrimination between resonances can be achieved by looking at the distribution of the higher-moment correlator $C_{3}^{(\beta)}$ shown in Fig.~\ref{fig:c23}. In contrast to $C_{2}^{(\beta)}$, we see that the peak of the distribution is shifted away from $0$. This behavior can be better understood by considering the scenario where $\rm X \to 1 + 2 + 3_{\text{soft}} + 4_{\text{soft}}$.
In this case, we assume that the transverse momenta satisfy $\rm p_{T}^{j1}\simeq p_{T}^{j2} (=p_{T}) \gg (p_{T}^{j3}, p_{T}^{j4} = p_{T'})$. We can then approximate $C_{3}^{(\beta)}$ as (up to order $\varepsilon = \frac{p_{T'}}{p_{T}}$) \begin{eqnarray} C_{3}^{(\beta)} & \simeq & \frac{(R_{13}R_{14}R_{23}R_{24}R_{34})^{\beta}}{[(R_{13}R_{23} )^{\beta} + (R_{14}R_{24} )^{\beta}]^2} + \mathcal{O}(\varepsilon). \label{eq:C3approx} \end{eqnarray} Thus the leading term is not suppressed by $\varepsilon$; it is determined by the relative opening angles, resulting in a peak that is shifted away from 0. Similar to what we saw for the lower-moment correlator, we find that the distribution of $C_{3}^{(2)}$ is shifted to larger values depending on the dimensionality of the $SU(3)$ representation of the resonance as well as its decay products. The color singlet $Z^{\prime}$ decaying to a pair of quarks peaks closer to $0$, whereas the distribution for the others, which either are octets or decay to gluons, is shifted away from $0$. \begin{figure}[t] \centering \includegraphics[width= 0.45 \textwidth]{figs/cplot3_s+b_j.png} \caption{The $p$-values testing hypothetical identities of various resonances as a function of luminosity. Horizontal lines indicate $2$ and $3~\sigma$ exclusion of the alternate hypothesis. Vertical lines show where $S/\sqrt{B} =$ 3 or 4.}\label{fig:cls} \end{figure} A final important point is the dependence of the JEC on the exponent $\beta$. As $\beta \to 0$, the dependence on the relative angles vanishes, and the JEC double ratio approaches an (approximately) constant value away from 0. The exponent should therefore be viewed as a weighting factor that controls the size of the variation of the JEC. Note that we have not optimized $\beta$ for maximal discrimination in this analysis.
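Both limits discussed here, the angle-dependent peak at finite $\beta$ and the approach to an angle-independent constant as $\beta \to 0$, can be checked numerically on the leading term of the approximation above; the angular separations below are arbitrary placeholder values:

```python
def c3_leading(beta, r13, r14, r23, r24, r34):
    """Leading term of the C_3^(beta) approximation for the
    X -> 1 + 2 + 3_soft + 4_soft configuration (toy angles)."""
    num = (r13 * r14 * r23 * r24 * r34) ** beta
    den = ((r13 * r23) ** beta + (r14 * r24) ** beta) ** 2
    return num / den

configs = [(0.3, 0.7, 0.9, 0.4, 0.6), (0.1, 0.2, 0.8, 0.5, 0.3)]

# For beta -> 0 every angular factor tends to 1, so the leading term
# tends to 1/(1+1)^2 = 1/4 for any configuration of angles ...
for cfg in configs:
    assert abs(c3_leading(1e-6, *cfg) - 0.25) < 1e-3

# ... while at beta = 2 it depends on the relative opening angles.
v1, v2 = (c3_leading(2.0, *cfg) for cfg in configs)
assert abs(v1 - v2) > 0.01
```

This makes explicit why the $C_3^{(\beta)}$ peak sits away from zero and why $\beta$ acts as a weighting factor for the angular variation.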
Another aspect that we have not investigated and have reserved for future study is the use of JECs (or other jet observables) on unfiltered subjets to identify quark and gluon jets. The ability to discern the decay products of these resonances would further enhance our ability to pinpoint the nature of the resonance. In order to test the ability of JECs to characterize the nature of the resonance, we perform a multi-variable likelihood analysis. We do not include $C_{1}^{(1)}$ in our likelihood function, since we are trying to test the information provided by radiation patterns and not kinematics. We therefore include only $C_{2}^{(2)}$ and $C_{3}^{(2)}$ in our likelihood function and test the ability of these two jet observables to differentiate the resonances. The result of our analysis is shown in Fig.~\ref{fig:cls}. The horizontal dotted lines indicate where one can distinguish between various signal hypotheses at the $2~\sigma$ or $3~\sigma$ level; for example, one could tell a $Z^{\prime}$ from an excited quark at the $3~\sigma$ level with about $180~\text{fb}^{-1}$ of integrated luminosity. The vertical lines indicate the value of $S/\sqrt{B}$ provided by a given integrated luminosity; for instance, achieving $S/\sqrt{B} = 3$ for our resonances (since we assume a signal cross section of $25~\text{fb}$) would require $720~\text{fb}^{-1}$ of data. The figure shows that it is very easy to tell apart a coloron from a $Z^{\prime}$, whereas the weakest discrimination is that between a spin-2 and a diquark. In summary, we conclude that JECs are a powerful tool to both discover and identify new resonances at the LHC. \begin{acknowledgements} This material is based in part upon work supported by the National Science Foundation under Grant No. 1519045. We thank Wade Fisher and Joey Huston for useful discussions. \end{acknowledgements} \bibliographystyle{apsrev}
\section{Introduction} \label{sec:introduction} In cyber-physical systems (CPS), we deal with the common task of directing a controllable entity through its environment towards a target state defined by local or global goals, without violating frame properties imposed by safety requirements. For applications of limited complexity with directly measurable environmental parameters (e.g., thermostats), practical control solutions already exist \cite{LuiY2017}. However, the environment of complex systems is often uncertain, changing, and not entirely measurable, rendering a comprehensive model of such environments for safety prediction infeasible. Deriving safety guarantees via formal methods imposes further limitations on feasible model complexities. If a partial model is used instead, global safety guarantees might no longer hold as soon as reality and model deviate. In practice, safety is only required for the actual trace of the real system. An \textit{online} approach based on model updates on the fly splits the task of deriving global guarantees into a series of safety verifications on locally valid models with limited scope. That way, we guarantee safety for all possible near futures of the current system state. However, three main problems persist: First, one still needs to guarantee that the system does not reach critical sections due to local deviations of the predicting model. Second, one needs to decide when to adapt the model or the real system. Third, among the locally safe solutions, one should choose those that keep the potential number of adaptations low. We approach these problems by splitting the environment into discrete regions, i.e., unknown, target, safe, critical, and detection regions, based on which we can derive safety-preserving entity actions and perform cost optimizations.
Overall, we approach the safety problem of dynamic CPS by the following four contributions: \begin{enumerate}[noitemsep, leftmargin=*] \item We introduce online strategy synthesis (OnSS) based on frequent model updates and local synthesis \item We introduce a region interpretation of the environment (for the real system and model) to guarantee safety in the OnSS workflow \item We investigate options for choosing optimal action plans from synthesized strategies \item We apply OnSS and the region interpretation to the application of autonomous medical needle steering \end{enumerate} The remainder of this paper is structured as follows: We provide preliminary definitions and related work in \cref{sec:preliminaries}. Then, in \cref{sec:online-strategy-synthesis}, we discuss the modeling and workflow as well as the safety and optimization aspects of OnSS. Afterwards, we perform the needle steering experiments in \cref{sec:experiments}, and conclude our work in \cref{sec:conclusion}. \section{Preliminaries and Related Work} \label{sec:preliminaries} A \textbf{game} is a mathematical model of interaction between different decision makers. A common setting is that of two (usually one \textit{controllable} and one \textit{uncontrollable}) adversarial players with alternating turns. One established use case is a game of a controllable user against an uncontrollable environment; in needle steering, these entities are the needle and the tissue, respectively. Evaluating a game allows determining winning or losing conditions for a particular party involved. A \textbf{timed automaton (TA)} is a finite-state machine extended by (real-valued) clocks \cite{Alur1994}. It is a simple type of hybrid system, with only one differential equation, i.e., a constant change rate for all clocks.
While clocks usually provide a notion of time (where clock invariants in locations and clock guards and resets on edges determine the timed transitions between states), they can be used to model continuous variables in general, including physical variables and accumulated costs. A \textbf{timed game (TG)} extends the classical timed automaton with a notion of controllable and uncontrollable transitions. That way, one can model adversarial games between a controller and a (stochastic) environment in timed systems \cite{Bertrand2012}\cite{Bouyer2009}. The combined formalism applies well to autonomous systems in particular, as they perform actions over time in a (partially) unknown and reactive environment. \textbf{Offline strategy synthesis (OffSS)} is the automated approach of deriving a \textit{winning strategy}, i.e., a strategy satisfying a given target property \cite{Asarin1995}\cite{Cassez2005}\cite{Maler1995}. Strategy synthesis on timed games in particular is supported, e.g., by \textit{Uppaal Stratego} \cite{David2015}. There, a strategy consists of concrete decisions on actions in each state with controllable outgoing transitions. In needle steering, these decision points are motion actions, i.e., the push, rotate, and pull actions. OffSS provides the basis for our online synthesis approach. Our problem is an instance of controller synthesis and verification of timed games with partial information (about the environment). Environmental abstractions in static models inevitably lead to incomplete representations of real systems and possibly wrong system behavior at run time \cite{Ferrando2021}. Several approaches therefore exist that consider environmental uncertainties already during the modeling phase.
\cite{Cassez2007}\cite{David2009} present a transformation of a timed game with partial information into a timed game with complete information, \cite{Finkbeiner2012} introduces a template-based controller synthesis approach based on automatic abstraction refinement, and \cite{Bacci2021} presents a new framework for synthesizing strategies for weighted timed games \cite{Bouyer2004} with uncertainties restricted to weights. Other work introduces algebraic frameworks (e.g., based on risk factors \cite{Gleirscher2021}) for the design of correct safety controllers. Tools for static controller synthesis and verification include Kronos \cite{Daws1996}, FlySynth \cite{Altisen2002}, Synthia \cite{Peter2011}, and the tool suite UPPAAL \cite{Larsen2018}. Timed controller synthesis and verification has been applied, among others, to online floor heating \cite{Larsen2016} and vehicle rerouting \cite{Bischopink2020}\cite{Bilgram2021}. The latter involves uncertain traffic volume and periodically regenerates new strategies based on current traffic data. Our approach is similar but applies to cognitive or physiological uncertainties. Specifically, we transfer the static model of timed game needle steering \cite{Rogalla2020} based on a nonholonomic model \cite{Webster2006} into a model that is adapted online. \section{Online Strategy Synthesis}\label{sec:online-strategy-synthesis} \label{subsec:model-and-workflow} For online strategy synthesis, we use a single model combining the processes of synthesizing strategies and matching observation data. The general model consists of five components: \vspace{6pt} \noindent \begin{tabular}{@{}ll} 1. \textbf{Decision Maker} & The decision maker who gives instructions to the controlled device \\ 2. \textbf{Controlled Device} & The entity controlled by the decision maker \\ 3. \textbf{Environment} & The uncontrollable environment \\ 4. \textbf{State Checker} & The acceptor model which checks properties on individual states \\ 5. 
\textbf{Data Matcher} & The acceptor model which matches observations against the action plan \end{tabular} \vspace{6pt} \noindent The \texttt{Decision Maker}, \texttt{Controlled Device}, and \texttt{Environment} implement the concrete system entities, the \texttt{State Checker} accepts or rejects paths during strategy synthesis, and the \texttt{Data Matcher} validates the correctness of the current prediction model with observation data. Note that for autonomous devices, the \texttt{Decision Maker} and \texttt{Controlled Device} can be merged into a single \texttt{Actor}. In needle steering, the \texttt{Decision Maker} is the surgeon (manual) or a stochastic action selector (autonomous), the \texttt{Controlled Device} is the needle, and the \texttt{Environment} is the tissue. The \texttt{State Checker} checks if critical states are entered, and whether the target is still reachable or already reached, and the \texttt{Data Matcher} matches the observed needle position data against the model-predicted motion. In a game setting, the environment is especially important as it determines the safety and reachability of individual system states. It further determines when and to which extent particular characteristics can be measured. In an online setting, we need to know in which sections re-evaluation or adaptation of the model or real system is required to still ensure safe operation. Therefore, we split the complete state space into five regions based on the environment characteristics (see \cref{fig:needle-steering-workflow}). \vspace{6pt} \noindent \begin{tabular}{@{}ll} 1. \textbf{Unknown regions (UR)} & Regions not yet classified via a-priori knowledge or discovery \\ 2. \textbf{Safe regions (SR)} & Regions which do not violate safety \\ 3. \textbf{Critical regions (CR)} & Regions which violate safety \\ 4. \textbf{Detection regions (DR)} & Transitional regions between SRs and CRs where upcoming CRs \\ & (= safety violations) can be detected \\ 5. 
\textbf{Target regions (TR)} & Regions which we want to reach with the controllable entity \end{tabular} \vspace{6pt} \noindent In needle steering, the regions map to spatial areas, i.e., the SRs are the uncritical tissue areas, the CRs are hardened tissue and organs, the DRs are pre-rupture deformation sections at which force increases steadily, and the TR is the targeted placement position. \begin{figure}[t] \includegraphics[width=\textwidth]{./res/images/needle-steering-workflow} \caption{The workflow of the online needle steering application.} \label{fig:needle-steering-workflow} \end{figure} Based on the system model and the regions, the workflow of online strategy synthesis is shown in \cref{fig:needle-steering-workflow}. Generally speaking, the task is to reach a TR via SRs (or URs if knowledge on SRs is missing) without entering any CRs, which are detected early in surrounding DRs. The concrete workflow is as follows: First, the model is initialized (\texttt{Controlled Device}, \texttt{Environment}) with data of the real system's starting state. Then, an initial strategy is synthesized via a reachability check on \texttt{State Checker}. Up to this point, the approach works offline. Afterwards, the system is executed and its tracked trace matched against the current motion plan model via \texttt{Data Matcher}. If an excessive deviation is detected, the model is again updated, and a new strategy is synthesized. If no strategy can be found, or if a DR is reached, the system is \textit{readjusted}, i.e., rolled back to a previously visited and known state. In needle steering, a local pullback of the needle is used for readjustment. Furthermore, the model is updated to the new system state after readjustment. If the initial state is re-reached via rollbacks, and again no strategy is found, the process is aborted (and may be started anew under different conditions). If the target region is reached, the process has succeeded.
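The workflow just described can be condensed into the following control-loop skeleton. This is a sketch only: the callback names (\texttt{synthesize}, \texttt{step}, \texttt{matches}, \texttt{region\_of}) are hypothetical, and in our implementation strategy synthesis is delegated to \textit{Uppaal Stratego}:

```python
def onss_loop(initial, synthesize, step, matches, region_of, max_iters=100):
    """Sketch of the online strategy synthesis (OnSS) workflow."""
    state = initial
    visited = [state]                  # known-safe states for rollbacks
    plan = synthesize(state)           # initial (offline) synthesis
    for _ in range(max_iters):
        if region_of(state) == "TR":
            return "success", state    # target region reached
        if plan is None or region_of(state) == "DR":
            if len(visited) == 1:      # initial state re-reached, no plan
                return "aborted", state
            visited.pop()
            state = visited[-1]        # readjust: roll back one step
            plan = synthesize(state)   # model updated, re-synthesize
            continue
        state = step(state, plan)      # execute next action of the plan
        visited.append(state)
        if not matches(state, plan):   # observation deviates from model
            plan = synthesize(state)   # update model and re-synthesize
    return "timeout", state
```

For example, on a trivial one-dimensional corridor with the target at state 5 and no critical regions, the loop terminates with success; if the synthesizer returns no plan at the initial state, the run is aborted, mirroring the abort condition above.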
\vspace{-6pt} \paragraph{Online Safety Guarantees}\label{subsec:safety} The online strategy synthesis combines safety and reachability requirements but prioritizes safety of the system over reaching a TR. We assume that every CR has been constructed so that it is surrounded by a DR. For each CR, we define a safe margin, e.g., a minimum required distance so that the CR can be detected early enough and that proper reaction in its DR is possible. The region sizes are computed based on the underlying system; for safety guarantees, it is necessary that the DRs are larger than the measurable step size under the given time resolution to ensure that the system can be halted in time when detection-related parameters change. \begin{figure}[b] \newcommand\gr{\cellcolor{green}} \newcommand\bk{\cellcolor{black}} \newcommand\gy{\cellcolor{black!20}} \newcommand\bl{\cellcolor{blue}} \newcommand\rd{\cellcolor{red}} \small \centering \begin{minipage}{0.42\textwidth} \begin{tabular}[t]{cccccc} \hline & & \multicolumn{4}{c}{Real System} \\ & & SR & CR & DR & TR \\ \hline \multirow{5}{*}{\rotatebox[origin=c]{90}{Model}} & UR & (3) & \gy (1) & (3,*) & (4) \\ & SR & (3) & \gy (1) & (3,*) & (4) \\ & CR & (2,3) & \gy (1,2) & (2,3) & (2) \\ & DR & (2,3) & \gy (1,2) & (2,3) & (2) \\ & TR & (3) & \gy (1) & (3,*) & (4) \\ \hline \end{tabular} \end{minipage} % \hfill \begin{minipage}{0.53\textwidth} \footnotesize \begin{itemize}[noitemsep, leftmargin=*] \item[(1)] Not possible as CRs not reachable in real system due to readjustment in surrounding DRs. \item[(2)] Not possible as plans leading to CRs or DRs in the model are discarded during synthesis. \item[(3)] Safe due to trivial safety of SRs and DRs. \item[(4)] TR was safely reached and the process finishes successfully. \item[(*)] Could affect termination (due to infinitely repeated readjustments), but is solved by safety margins. \end{itemize} \end{minipage} \caption{Case distinction on regions for safety proof.
(Note: Column for unknown regions (UR) in real system is left out, as each state of the real system is either known in advance or, when reached, directly classified as SR, CR, or DR based on measured data. Furthermore, URs in the model are treated as ``safe'' during strategy synthesis until they are further classified.)} \label{fig:region-safety} \end{figure} \begin{theorem}[Safety]\label{thm:safety} A motion plan classified as safe via offline strategy synthesis is indeed safe under the momentary system assumptions (``local safety''). The system never reaches an unsafe state via online strategy synthesis (``global safety''). \end{theorem} Local safety follows directly from the soundness of the underlying strategy synthesizer. Global safety follows from the case distinction in \cref{fig:region-safety} if one can show that the safety margins are determined such that deviations from the model cannot lead the real system into known CRs and DRs. The needle steering application indicates that the system assumptions are reasonable: We normally observe an increasing force when moving to another -- possibly critical -- tissue type due to deformations, which allows detection of upcoming CRs. Spanning around $5 - 6mm$, the deformation section is larger than the measurable step size of the used sensors, which provide new force and position data with a frequency of $150Hz$ (i.e., every $33.3{\mu}m$ of needle progress at a speed of $5mm/s$) with a conservative maximum error of $3mm$; the DRs are thus always measurable. Note that the error is mostly static and can thus be accounted for, unless optical tracking fails, e.g., due to air bubbles or surface reflections in gelatin, which may be prevented by ultrasound measurements in the future. Finally, rollbacks are possible as needle pullbacks will always follow the inverted insertion path, whose safety we have already established.
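The timing argument can be checked with the numbers given above (insertion speed, sampling rate, deformation width, and tracking error); a minimal sketch:

```python
# Values from the needle-steering setup described above.
needle_speed = 5.0      # mm/s, insertion speed
sample_rate = 150.0     # Hz (force/position updates)
tracking_error = 3.0    # mm, conservative maximum position error
dr_width = 5.0          # mm, lower bound of the 5--6 mm deformation section

step = needle_speed / sample_rate   # needle progress per sample: ~0.033 mm
# Even in the worst case (one sampling step plus the full tracking error),
# the needle cannot cross a detection region unnoticed.
assert step + tracking_error < dr_width
```

This is exactly the condition that the DRs exceed the measurable step size under the given time resolution, with a comfortable margin left for the (mostly static) measurement error.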
Rollbacks increase the chance of finding safe plans but the OnSS algorithm does not guarantee that every safe plan is discovered. At the same time, termination of OnSS becomes non-trivial in the presence of rollbacks since one has to show that only a finite number of DRs is added. Due to space restrictions, we just state the following property without further proof. \begin{theorem}[Incompleteness and termination]\label{thm:incompleteness} OnSS based on safety margins and on-the-fly discovered URs is incomplete. The OnSS instance for the needle-steering problem terminates. \end{theorem} \vspace{-6pt} \paragraph{Action Plan Optimization} \label{subsec:optimization} In general, one can distinguish between \textit{hard (H) and soft (S) requirements} for concrete action plans. In case of needle steering, the following requirements are given: \begin{itemize}[noitemsep, leftmargin=*] \begin{minipage}[t]{0.34\linewidth} \item \textbf{H1}: The target region is reached \item \textbf{S1}: As few rotations as possible are needed \item \textbf{S2}: The path is as short as possible \end{minipage} \hspace{1cm} \begin{minipage}[t]{0.58\linewidth} \item \textbf{H2}: No critical region (e.g., critical tissue) is pierced \item \textbf{S3}: The critical regions are circumvented most spaciously \item \textbf{S4}: The path needs the fewest amount of readjustments \item \textbf{S5}: The target center is reached as close as possible \end{minipage} \end{itemize} A synthesized strategy usually contains more than one possible action plan, which all automatically satisfy the binary hard requirements H1 and H2 regarding reachability and safety, respectively, based on the reachability query \texttt{EF StateChecker.Final\_TR\_Reached} and critical paths leading to deadlocks before. The soft requirements, in contrast, are continuous by nature, and their satisfaction differs for the accepted action plans. 
While some soft requirements are directly connected (e.g., S3 and S4), others usually contradict each other (e.g., S2 and S3), so that no universal optimum can be found. Therefore, a fixed weighting, i.e., cost assignment, of the soft requirements turns the action plan choice into an optimization problem. Depending on whether the system is discrete (using fixed time steps) or continuous (using ordinary differential equations), the costs assigned to the actions and distances in the model can be implemented either as integer variables incremented by cost deltas, or hybrid clocks, respectively. The concrete cost values need to be provided a-priori. For needle steering, one source of knowledge for cost weighting are the underlying biological aspects. While the precise cost vector is still subject to ongoing research, plausible assumptions for cost assignments are that needle rotations inside the tissue impose more damage than a slightly longer path, and that -- in the scope of ``safe'' regions -- readjustments impose the highest damage. 
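As an illustration of the resulting optimization problem, consider selecting among candidate action plans by a weighted sum of soft-requirement features. Both the feature values and the weights below are invented for illustration, since the concrete cost vector is still subject to ongoing research:

```python
# Hypothetical candidate plans with soft-requirement features:
# (rotations S1, path length in mm S2, inverse clearance S3,
#  expected readjustments S4, distance to target center in mm S5).
plans = {
    "A": (4, 52.0, 0.8, 1, 0.5),
    "B": (1, 58.0, 0.3, 0, 1.0),
    "C": (0, 70.0, 0.2, 2, 0.3),
}
# Invented weights reflecting the plausibility argument above:
# readjustments weigh most, rotations more than path length.
weights = (5.0, 0.2, 3.0, 10.0, 1.0)

def cost(features):
    """Weighted-sum cost of a candidate action plan."""
    return sum(w * f for w, f in zip(weights, features))

# All candidates already satisfy the hard requirements H1/H2;
# the cheapest plan under the fixed weighting is chosen.
best = min(plans, key=lambda name: cost(plans[name]))
```

With these toy numbers, plan B wins because it avoids readjustments despite a slightly longer path; changing the weights can shift the optimum, which is why the cost vector must be fixed a priori.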
\section{Experiments} \label{sec:experiments} \begin{figure}[t] \hspace*{\fill}% \subfloat[System setup]{ \begin{minipage}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{./res/images/system-setup} \label{subfig:system-setup} \end{minipage}} \hfill \hspace*{\fill}% \subfloat[Initial strategy]{ \begin{minipage}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth, draft=false]{./res/images/experiment-initial-strategy.pdf} \label{subfig:exp-initial-motion-plans} \end{minipage}} \hfill \subfloat[Final needle trace]{ \begin{minipage}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth, draft=false]{./res/images/experiment-final-trace.pdf} \label{subfig:exp-final-trace} \end{minipage}} \hspace*{\fill}% \caption{Experiment setup and needle motion plans with initially unknown (red), known, and discovered (both blue) CRs.} \label{fig:example-experiment} \end{figure} Following the model structure introduced in \cref{subsec:model-and-workflow}, we implemented a needle steering model in \textit{Uppaal Stratego}. We developed an online strategy synthesis framework in \textit{Python}, which then uses verification queries on the frequently updated model for the individual synthesis steps. We conducted two types of experiments based on randomly generated (\textbf{Experiment 1}) and real-measured reference needle paths (\textbf{Experiment 2}), where the setup shown in \cref{subfig:system-setup} is used for the latter. The reference paths are used to randomly place target and critical regions on and around that path, respectively, as the basis for each individual experiment run; that way, we ensure that at least one safe motion plan towards the target region exists initially, so that the experiment is not aborted immediately. An experiment run is considered \textit{failed} if the starting state is re-reached via readjustments, or if the target region is not reached within $2$ minutes.
The experiment runs differ in the number ($\{0, 1, 2, \mathbf{5}, 10, 20\}$) and size ($\{1mm, 2mm, \mathbf{3mm}, 4mm,$ $5mm, 10mm\}$) of CRs, assumed size of identified CRs ($\{1mm, 2mm, \mathbf{3mm}, 4mm, 5mm, 10mm\}$), distance of the TR along the reference trace ($\{10mm, 20mm, \mathbf{30mm}, 40mm, 50mm\}$), and percentage of initially known CRs ($\{\mathbf{0\%}, 20\%, 40\%, 60\%, 80\%, 100\%\}$). One parameter is changed at a time, and the bold values are used for the other parameters that remain unchanged. For each of the $29$ parametrizations derived this way, we perform $20$ experiment runs, i.e., $580$ needle steering runs in total. \begin{table}[b] \centering \begin{tabular}[t]{lcccc} \hline & TR Reach & Readjustments & Synthesis Time & Overall Time \\ \hline Experiment 1 & $88.28\%$ & $(0.00,0.57,10.00)$ & $(0.04s,2.37s,5.18s)$ & $(7.42s,20.45s,85.82s)$ \\ Experiment 2 & $68.39\%$ & $(0.00,2.94,19.00)$ & $(0.04s,2.34s,6.99s)$ & $(7.61s,36.03s,119.50s)$ \\ \hline \end{tabular} \caption{Experiment results for each measure with (min,avg,max) data.} \label{fig:general-result-table} \end{table} \cref{subfig:exp-initial-motion-plans} and \cref{subfig:exp-final-trace} show, for a single example run, the set of initially (offline) synthesized motion plans as well as the actually (online) traversed path. Furthermore, the measurement results are shown in \cref{fig:general-result-table}, and highlight four aspects: First, for experiments 1 and 2, the success rates of reaching a target are between $68 - 88\%$; the rates stay below $100\%$ because of incompleteness (cf. \cref{subsec:safety}) and imprecision of the underlying geometric model (experiment 2 only). Second, the number of readjustments on average is comparatively low ($0.57$ and $2.94$, respectively); some runs even need no readjustment at all. Third, the synthesis times lie between $0.04s$ and $6.99s$, which appear acceptable for an online application.
Fourth, the successful runs took between $20.45s$ and $36.03s$ on average, so that the complete procedure (from insertion to target reaching) can be finished in reasonable time. \section{Conclusion and Future Work} \label{sec:conclusion} We presented an online workflow to realize controller synthesis for real-timed games in (changing) environments of partial knowledge. We classify the environment into unknown, safe, critical, detection, and target regions and ensure safe action planning to reach a target state by periodically validating and updating our model. We applied our approach to medical needle steering. In future work, we plan to replace the geometric approximation of needle motion by a physically accurate model and increase the ``hostility'' of the environment by permitting displacement of critical regions due to respiration, deformed tissue layers (which affect the needle tip speed), and inhomogeneous properties of multi-layered tissue. Work on human-robot collaboration \cite{Askarpour2020} may then provide a starting point for formal modeling and integration of more advanced human behavior. \bibliographystyle{eptcs}
\section{Jet production at HERA} In \textit{ep} collisions at HERA one distinguishes two processes according to the virtuality $Q^2$ of the exchanged boson: deep inelastic scattering (DIS) and photoproduction. In DIS a highly virtual boson ($Q^2>1$~GeV$^2$) interacts with a parton carrying a momentum fraction of the proton. The Born-level contribution to DIS generates no transverse momentum in the Breit frame, where the virtual boson and the proton collide head on. Significant transverse momentum $P_T$ in the Breit frame is produced at leading order (LO) in the strong coupling $\alpha_s$ by the QCD-Compton and boson-gluon fusion processes. In direct photoproduction the quasi-real photon ($Q^2<1$~GeV$^2$) interacts with a parton from the proton. In resolved photoproduction the photon behaves as a hadron: a parton from the photon, carrying a fraction of its momentum, enters the hard scattering with the proton and gives rise to jet production. In the analyses presented here jets are defined using the $k_T$ clustering algorithm. The associated cross sections are infrared and collinear safe and therefore well suited for comparison with predictions from fixed-order QCD calculations. For DIS, the jet algorithm is applied in the Breit frame, and for photoproduction in a photon-proton collinear frame, the laboratory frame. \section{Gauge Structure of QCD} The angular correlations in the 3-jet cross sections were measured by the ZEUS collaboration in photoproduction and DIS based on a sample of about $130$~pb$^{-1}$ collected between 1995 and 2000~\cite{bib:Angular}. The transverse momentum $P_T$ of jets is required to exceed 14 GeV, if $Q^2<1~$GeV$^2$, or \mbox{8 GeV} for the first jet and 5 GeV for the second and third jets, if $Q^2>125~$GeV$^2$. The differential cross sections were normalised in shape to the total 3-jet cross section in order to reduce the sensitivity to the scales, the parton density functions (PDFs), and the strong coupling constant and its running.
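For reference, the inclusive $k_T$ clustering used for the jet definition above can be sketched as follows. This is illustrative only: it uses a simple $p_T$-weighted recombination, ignores the azimuthal wrap-around in the average, and toy input particles; real analyses use dedicated implementations:

```python
from math import pi

def kt_cluster(particles, R=1.0):
    """Minimal sketch of inclusive kT clustering on (pt, y, phi) tuples."""
    parts = [list(p) for p in particles]
    jets = []
    while parts:
        # Beam distance d_iB = pt_i^2; pairwise distance
        # d_ij = min(pt_i^2, pt_j^2) * DeltaR_ij^2 / R^2.
        kind, idx, dmin = "beam", 0, parts[0][0] ** 2
        for i, a in enumerate(parts):
            if a[0] ** 2 < dmin:
                kind, idx, dmin = "beam", i, a[0] ** 2
            for j in range(i + 1, len(parts)):
                b = parts[j]
                dphi = abs(a[2] - b[2])
                if dphi > pi:
                    dphi = 2 * pi - dphi
                dij = min(a[0], b[0]) ** 2 * ((a[1] - b[1]) ** 2 + dphi ** 2) / R ** 2
                if dij < dmin:
                    kind, idx, dmin = "pair", (i, j), dij
        if kind == "beam":
            jets.append(parts.pop(idx))        # promote to a final jet
        else:
            i, j = idx                         # merge the closest pair
            a, b = parts[i], parts[j]
            pt = a[0] + b[0]
            merged = [pt, (a[0] * a[1] + b[0] * b[1]) / pt,
                          (a[0] * a[2] + b[0] * b[2]) / pt]
            parts = [p for k, p in enumerate(parts) if k not in (i, j)]
            parts.append(merged)
    return jets
```

Two nearby soft and hard particles are merged first, while a well-separated particle is promoted directly to its own jet, which is the origin of the infrared and collinear safety noted above.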
In a given angular distribution it may be possible to distinguish a particular component of the hard scattering; e.g., its shape may distinguish between a two-boson-fermion vertex and a tri-boson vertex. The absolute contribution of each of those vertices to the cross sections is given by the colour factors, which represent a signature of the underlying gauge group. An example of such angular correlations is given in figure \ref{fig:angcorr}. \begin{figure} \centering \psfig{figure=fig1.eps,height=2.8in} \caption{Differential 3-jet cross sections, normalised in shape, in photoproduction as a function of $\Theta_H$, the angle between the plane determined by the highest-transverse-energy jet and the beam and the plane determined by the two jets with lowest transverse energy (left), and the angle between the second and third jet (right). \label{fig:angcorr}} \end{figure} The theoretical uncertainties are typically of the order of $5\%$ and thus smaller than the experimental uncertainties of typically $10\%$. The main contributions to the experimental uncertainty come from the limited statistics (inner error bars) and from systematic uncertainties (outer error bars), dominated by the model dependence of the data correction. This measurement rules out some choices of the underlying gauge group, like SU(N) with large N or $C_F=0$, but further improvements in sensitivity are needed to distinguish between SU(3), SO(3) and U(1)$^3$. \section{Strong Coupling Determination} \subsection{Jet cross sections} In a ZEUS photoproduction analysis~\cite{bib:ZEUSPhP} the inclusive jet cross sections were measured by requiring jet $P_T$ above 17~GeV and the jet pseudorapidity within \mbox{$-1.0<\eta^{\rm Lab}<2.5$}. The measured cross sections are corrected for detector acceptance using leading-order Monte Carlo event generators.
The overall experimental systematic uncertainty of typically 10 to 15\% is dominated by the uncertainty on the absolute energy scale of the hadronic calorimeters and by the model dependence of the data correction. A jet measurement in DIS was recently performed by the H1 collaboration in two kinematic regimes. The low $Q^2$ data~\cite{bib:LowQ2}, corresponding to $5 < Q^2 < 100$~GeV$^2$, are selected by requiring the scattered electron to be measured in the Spaghetti endcap Calorimeter. The high $Q^2$ data~\cite{bib:HighQ2}, corresponding to $150 < Q^2 < 15000$~GeV$^2$, are selected by requiring the scattered electron to be measured in the Liquid Argon barrel Calorimeter. At low $Q^2$ a sample of $44$~pb$^{-1}$ collected between 1999 and 2000 is used, whereas at high $Q^2$ the analysis is based on nearly the full H1 data sample of about $400$~pb$^{-1}$ collected between the years 1999 and 2007. The inclusive jet cross sections were measured in the low $Q^2$ regime by requesting \mbox{$P_T>5$~GeV} and $-1.0 < \eta^{\rm Lab} < 2.5$. At high $Q^2$ the cross sections were measured using the inclusive jets with $7 < P_{T} < 50$~GeV and 2-jet (3-jet) events containing at least 2 (3) jets with $5 < P_{T} < 50$~GeV. A more restrictive pseudorapidity cut, $-0.8 < \eta^{\rm Lab} < 2.0$, was applied to ensure a good calibration of the jets. The jet cross sections at high $Q^2$ are normalised to the inclusive DIS cross sections in order to reduce the sensitivity to the normalisation uncertainties. The normalised jet cross sections as functions of $Q^2$ and $P_T$ are shown in figure \ref{fig:InclJets}. One of the main sources of experimental uncertainties at low and high $Q^2$ remains the uncertainty on the absolute calibration of the hadronic energy scale, with an impact on the cross sections of about 1 to 5\%. The detector correction factors carry an uncertainty due to the MC model dependence which typically amounts to $1$ to 10$\%$.
\begin{figure} \centering \psfig{figure=fig2.eps,height=3.4 in} \begin{flushleft} \caption{The normalised inclusive jet cross sections in DIS as functions of the jet $P_T$ in different $Q^2$ regions. \label{fig:InclJets}} \end{flushleft} \end{figure} \subsection{Determination of the strong coupling} The strong coupling is extracted from the data by a $\chi^2$ minimisation procedure where the value of the strong coupling at the $Z$ boson mass, $\alpha_s(M_Z)$, is taken to be the only free parameter of the theory. In the ZEUS photoproduction analysis the experimental uncertainty is estimated by the \textsl{offset method}, adding in quadrature the deviations of $\alpha_s$ from the central value when the fit is repeated with independent variations of the various experimental sources: \begin{eqnarray} \nonumber \alpha_s(M_Z) = 0.1223 ~\pm 0.0022 \,\mathrm{(exp.)} ~ ^{+0.0029}_{-0.0030}\,\mathrm{(th.)}\,. \end{eqnarray} \noindent The theoretical uncertainty is dominated by the contribution from terms beyond NLO, estimated using the band method of \textsl{Jones et al.} \cite{bib:Jones}, added in quadrature to the uncertainties on the hadronisation corrections and on the parameterisation of the proton and photon PDFs. The total theoretical uncertainty amounts to $2.5\%$. In the H1 analysis based on DIS jets the experimental uncertainty of $\alpha_s$ is defined by that change in $\alpha_s$ which increases the minimal $\chi^2$ by one unit. The strong coupling is extracted individually from the inclusive jets at low $Q^2$ and from the inclusive jet, 2-jet and 3-jet cross sections at high $Q^2$. The experimentally most precise determination of $\alpha_s(M_Z)$ is derived from the combined fit to all three observables at high $Q^2$.
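The single-parameter fit described above can be sketched in a toy setting. Assuming, purely for illustration, a model that is linear in the coupling (the real NLO predictions are not, and the experiments minimise $\chi^2$ numerically), the best-fit value and the $\Delta\chi^2=1$ experimental uncertainty take a closed form:

```python
import math

def fit_alpha(data, err, coeff):
    """chi^2 fit of a single parameter alpha in the toy model
    model_i = coeff_i * alpha. Returns (alpha_hat, sigma), where sigma
    is defined by the Delta chi^2 = 1 criterion used in the H1 analysis.
    For a model linear in alpha, chi^2(alpha) is an exact parabola."""
    s_cc = sum((c / e) ** 2 for c, e in zip(coeff, err))
    s_cd = sum(c * d / e ** 2 for c, d, e in zip(coeff, data, err))
    alpha_hat = s_cd / s_cc            # minimum of the parabola
    sigma = 1.0 / math.sqrt(s_cc)      # Delta chi^2 = 1 <=> one-sigma shift
    return alpha_hat, sigma
```

For the real, non-linear predictions the same $\Delta\chi^2=1$ criterion is applied to the numerically scanned $\chi^2$ profile.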
The extracted value is slightly lower than that obtained from photoproduction jets by ZEUS, but compatible within two standard deviations: \begin{eqnarray} \nonumber \alpha_s(M_Z) = 0.1168 ~\pm 0.0007 \,\mathrm{(exp.)} ~ ^{+0.0046}_{-0.0030}\,\mathrm{(th.)}~ \pm 0.0016\,(\textnormal{\scshape pdf})\,. \end{eqnarray} \noindent The theory uncertainty is estimated by the \textit{offset method}, adding in quadrature the deviations due to various choices of scales and hadronisation corrections. The largest contribution is the uncertainty arising from terms beyond NLO, which amounts to 3$\%$. The PDF uncertainty, estimated using CTEQ6.5, amounts to $1.5\%$. The $\chi^2$ \textit{variation method} leads to a smaller uncertainty estimate than the \textit{offset method} for the experiment, while the \textit{offset method} leads to a more conservative uncertainty estimate than the \textit{Jones et al.} method for the theory. The value extracted at low $Q^2$, \mbox{$\alpha_s(M_Z) = 0.1186 ~\pm 0.0014 \,\mathrm{(exp.)} ^{+0.0132}_{-0.0101}\,\mathrm{(th.)} \pm 0.0021\,(\textnormal{\scshape pdf})$}, is compatible with the high $Q^2$ result, but the uncertainty arising from the renormalisation scale variation reaches 10$\%$. The measurement of the strong coupling over a large $Q^2$ range allows a test of the running of $\alpha_s(Q)$ between 2 and 100 GeV, as shown in figure \ref{fig:alphas}. The results for jets at HERA, summarised in figure \ref{fig:alphas}, are competitive with those from $e^+ e^-$ data~\cite{bib:BETHKE} and are in good agreement with different world averages~\cite{bib:BETHKE,bib:WORLD}. \begin{figure} \psfig{figure=fig3.1.eps,height=2.2in} \hskip 1.cm \psfig{figure=fig3.2.eps,height=2.3in} \begin{flushleft} \caption{The running of $\alpha_s(Q)$ (left) and different recent extractions of $\alpha_s(M_Z)$ from HERA, compared to a LEP measurement and the world average (right). \label{fig:alphas}} \end{flushleft} \end{figure} \section*{References}
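The running tested in figure \ref{fig:alphas} can be reproduced qualitatively with the one-loop renormalisation group solution. The analyses themselves use two-loop running; the following is only an illustrative one-loop sketch, with the central value of the H1 combined fit as input:

```python
import math

def alpha_s(Q, alpha_mz=0.1168, mz=91.1876, nf=5):
    """One-loop QCD running of the strong coupling from alpha_s(M_Z)
    down/up to scale Q (GeV). nf is the number of active flavours;
    threshold matching is ignored in this sketch."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return alpha_mz / (1 + 2 * b0 * alpha_mz * math.log(Q / mz))
```

The coupling grows towards low scales, which is why the low-$Q^2$ jet data, probing $Q \sim 2$--$10$~GeV, are particularly sensitive to the running.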
\section{Introduction} \label{sect:intro} The mass of a star is the parameter that, to a first approximation, is most important in determining its evolution. However, the mass cannot be dynamically determined for a single star, so indirect methods have been developed for estimating stellar masses. The most widely used of them is to estimate the mass from observations of the distribution of another parameter for a stellar ensemble under study (field stars, cluster stars). The stellar luminosity is the most commonly used parameter, and the subsequent transition to masses of stars is made using the so-called mass-luminosity relation (MLR). Independent determination of the mass of a star and its luminosity is only possible for components of binary systems of certain types. One suitable type of binary system is a {visual binary star with known orbital parameters and trigonometric parallax.} Such stars are usually wide pairs, whose components do not interact with each other and are evolutionarily similar to single stars. In addition, usually they are in the nearest solar neighborhood and, therefore, are mostly low-mass stars. The problem of determining the masses of visual binaries was discussed, for example, in \citet{2016MNRAS.459.1580D,2012A&A...546A..69M,1998A&A...338..455F}. The construction of the MLR for low-mass stars based on observational data is discussed in \citet{2004ASPC..318..159H,2000A&A...364..217D,1999ApJ...512..864H,1997A&A...320...79M}. Another major source of independently defined stellar masses is detached eclipsing binary stars with components on the main sequence, where the spectral lines of both components are observed (hereafter double-lined eclipsing binaries, DLEB). These stars are usually relatively massive ($M/M_\odot > 1.5$) and their parameters are used to construct the stellar MLR for intermediate and large masses. 
The exact parameters of DLEB stars and the MLR based on them can be found, for example, in \citet{2010A&ARv..18...67T,2001ARep...45..972K, 1998ARep...42..793G,1991A&ARv...3...91A,1980ARA&A..18..115P}. When these two MLRs (based on the visual binaries and on the DLEB stars with components on the main sequence) are jointly analyzed and used (in particular, in order to compare the theoretical MLRs with empirical data), it is generally assumed by default that the components of the detached close binaries and the wide binaries evolve in a similar way. It should be noted, however, that DLEB are close pairs whose components' rotation is synchronized by tidal interaction, and, due to rotational deceleration, they evolve differently from ``isolated'' (i.e., single or wide binary systems) stars. When comparing the radii of DLEB and single stars \citep{2003A&A...402.1055M}, a noticeable difference between the observed parameters of B0V--G0V components of DLEB and of single stars of similar spectral classes was found. This difference was confirmed by analysis of independent studies published by other authors. This difference also explains the disagreement between the published scales of bolometric corrections. The larger radii and higher temperatures of A--F components of DLEB stars can be explained by the synchronization and associated slowing down of rotation of such components in close systems. Another possible reason is the effect of observational selection: due to the non-sphericity of the rotating stars, the parameters determined from the observations depend on the relative orientations of their rotation axes. Isolated stars are oriented randomly, while components of eclipsing binaries are usually observed from near the equatorial plane.
Systematically smaller observed radii of DLEB stars of spectral class B can be explained by the fact that stars with large radii do not occur with companions on the main sequence: most of them have already filled their Roche lobe (which stopped their further growth) and have become semi-detached systems (which excluded them from the discussed statistics). Then, in~\citet{2007MNRAS.382.1073M} data for the fundamental parameters of the components of a few currently known long-period DLEB have been collected. These stars presumably have not undergone synchronization of rotation with the orbital period and therefore spin rapidly, and evolve similarly to single stars. {The theory of synchronization (and circularization) in close binary systems developed by \citet{1975A&A....41..329Z, 1977A&A....57..383Z} is based on the mechanism of energy dissipation via dynamic tides in non-adiabatic surface layers of the component stars. Another theory was developed by \citet{1987ApJ...322..856T, 1988ApJ...324L..71T}, and is based on tidal dissipation of the kinetic energy of large-scale meridional flows. In their critical reviews, \citet{2007MNRAS.382..356K, 2010MNRAS.401..257K} point out that both the circularization and synchronization time-scales implied by these mechanisms differ by almost three orders of magnitude, and, based on an analysis of the observed rates of apsidal motion, show that the observed synchronization times agree with Zahn's theory but are inconsistent with the shorter time-scale proposed by Tassoul. The synchronization time depends primarily on the stellar mass and the binary separation. So, for example, according to \citet{1987ApJ...322..856T}, the synchronization time for orbital periods up to about 25 days is smaller than one-tenth of the main-sequence life-time of a $3 \: M_\odot$ star. The theories of synchronization mentioned above have been developed for early-type, massive stars with radiative envelopes (i.e., for stars with $M/M_\odot > 1.5$). 
In the current work, to construct the mass-luminosity relation for ``isolated'' stars, we study DLEB stars in the range $M/M_\odot > 2.7$, as the masses of components of other types of binary stars (visual binaries, resolved spectroscopic binaries) rarely exceed this limit (stars in the range $1.5 < M/M_\odot < 2.7$ will be considered later). According to the shorter time-scale theory of Tassoul, the synchronization time becomes comparable to the main-sequence life-time of a $2.7 \: M_\odot$ star for orbital periods of the order of 50-70 days (the longer time-scale theory of Zahn predicts even shorter periods).} Currently there is no way to properly estimate the degree to which the effect on the IMF may be important for $M > 2.7 \: M_\odot$, as available observational data for that mass range are too poor to draw definite conclusions. For that reason we have started a pilot project to study long-period massive eclipsing binaries to construct radial velocity curves and determine the masses of their components. Using published photometric data or light-curve solutions we expect to obtain luminosities for the individual components. With accurate luminosity determinations we plan to compare their location on the mass-luminosity diagram with the ``standard'' MLR. As a result of this pilot study we plan to confirm that rapid and slow rotators satisfy different MLRs, which should be used for different purposes. Then the feasibility of a larger project, the construction of a reliable ``fast rotators'' MLR, will be considered. The data we obtain will also be used to establish mass-radius and mass-temperature relations. 
\section{The Test Sample, Observations and Data reduction} \label{sect:Obs} \begin{table}[thb] \begin{center} \caption[]{The Test Sample.}\label{Tab:Sample} \begin{tabular}{ccllrcr} \hline\hline \# & Name & RA (2000.0) &DEC (2000.0)& mag & e &Period \\ (1)& (2) & (3) & (4) & (5) & (6) & (7) \\ \hline 01 & V883 Ara & 16:51:45.10 &-50:17:46.5 & 8.55 & $\gg$0 & 61.8740 \\ 02 & KV CMa & 06:50:52.67 &-20:54:37.4 & 7.16 & $\gg$0 & 68.3842 \\ 03 & V338 Car & 11:13:52.31 &-58:36:30.4 & 9.30 & $=$0 & 74.6429 \\ 04 & V884 Mon & 07:05:11.84 &-11:06:02.4 & 9.13 & $=$0 &123.2100 \\ 05 & V766 Sgr & 17:51:57.00 &-28:17:02.0 & 10.80 & $=$0 &147.1050 \\ 06 & FP Car & 11:04:35.87 &-62:34:22.2 & 9.70 & $=$0 &176.0270 \\ 07 &V1108 Sgr & 19:12:43.63 &-18:08:12.0 & 11.50 & $\gg$0 & 46.5816 \\ 08 & PW Pup & 07:49:06.00 &-31:07:42.6 & 9.20 & $=$0 &158.0000 \\ 09 & mu Sgr & 18:13:45.81 &-21:03:31.8 & 3.80 & $\gg$0 &180.5500 \\ 10 & AL Vel & 08:31:11.28 &-47:39:57.4 & 8.60 & $=$0 & 96.1070 \\ 11 & NN Del & 20:46:49.22 &+07:33:10.4 & 8.39 & $\gg$0 & 99.2684 \\ \hline\hline \end{tabular} \end{center} \end{table} To compile the test sample for our pilot project we used the Catalog of Eclipsing Variables \cite[hereafter CEV;][]{2007A&A...465..549M,2013AN....334..860A,2014MNRAS.444.1982A} from which we have carefully selected 11 massive long-period (i.e., presumably non-synchronised) detached main-sequence eclipsing systems that are presented in Table~\ref{Tab:Sample}. The selected systems should have components of similar luminosities (i.e., can be observed as SB2 systems -- spectroscopic binaries, where spectral lines from both components are visible) and guarantee an accurate determination of stellar parameters (in particular masses to 3\%) of early-type stars composing them. We planned to obtain a minimum of five spectra for targets with circular orbits ($e=0$) and a minimum of 10 spectra for targets with non-circular orbits ($e \gg 0$). 
This number of spectra should be enough to find a credible orbital solution and to complete the science objectives. All observations were obtained with the High Resolution Spectrograph \citep[HRS;][]{Ba08, Br10, Br12, Cr14} at the Southern African Large Telescope \citep[SALT;][]{Buck06,Dono06}. The HRS was used in the medium resolution (MR) mode, that gives a spectral resolution R$\sim$36\,500--39\,000; it has an input fiber diameter of 2.23 arcsec for both object and sky. All our \'echelle data were obtained during 2017--2019 and cover the total spectral range $\approx$3900--8900~\AA, where both blue and red CCDs were used with 1$\times$1 binning. All science observations were supported by the HRS Calibration Plan, which includes a set of bias frames at the beginning of each observational night and a set of flat-fields and a spectrum of a ThAr lamp once per week. Since the HRS is a vacuum \'echelle spectrograph installed inside a temperature-controlled enclosure such a set of calibrations is enough to give an average external velocity accuracy of 300 m~s$^{-1}$ \citep{KUKB19}. HRS data underwent a primary reduction with the SALT science pipeline \citep{Cra2010}, which includes overscan correction, bias subtractions and gain correction. After that \'echelle spectroscopic reduction was carried out using the HRS pipeline described in detail in \citet{KGB16, KUKB19}. \section{Fitting Binary Stars: full pixel fitting method} \label{sect:analys} To analyze fully reduced HRS spectra of binary systems and to determine stellar atmosphere parameters for each component such as effective temperature $T_\mathrm{eff}$, surface gravity $\log g$, metallicity [Z/H] as well as stellar rotation $v \sin i$ and line-of-sight velocities $V_j$ we developed a dedicated \textsc{Python}-based package, Fitting Binary Stars (\textsc{fbs}) (Katkov et al., 2020 in preparation). 
The \textsc{fbs}\ package implements a full pixel fitting approach to the simultaneous approximation of multiple-epoch spectra of a binary system by a combination of two synthetic stellar models using $\chi^2$ minimization. The \textsc{fbs}\ is developed on top of the non-linear minimization \textsc{lmfit} package \citep{lmfit}, which provides a high-level interface to many optimization methods (e.g. Levenberg-Marquardt, Powell, Downhill simplex Nelder-Mead method, Differential evolution etc.). During the evaluation of $\chi^2$ the \textsc{fbs}\ proceeds by the following steps: First, the \textsc{fbs}\ interpolates two stellar templates from the grid of synthetic stellar spectra for given sets of stellar atmosphere parameters (T$_\mathrm{eff}$, $\log g$, [Z/H])$_{1,2}$. The interpolation has to be fast to work with high-resolution stellar spectra containing tens to hundreds of thousands of pixels in the spectral range under analysis. Therefore, we propose an algorithm where the \textsc{fbs}\ pre-calculates the Delaunay triangulation in the 3-dimensional space of stellar model parameters ($T_\mathrm{eff}$, $\log g$, [Z/H]) using the nodes of the synthetic grid. To interpolate, the \textsc{fbs}\ finds the simplex containing the given point and then averages the spectra from the simplex vertices with weights inversely proportional to the squared distance to each vertex. Such an algorithm is very fast and works on regular as well as irregular model grids with missing nodes. Then, the model templates are broadened by the individual stellar rotation {$v \sin i$}$_{1,2}$ and shifted by the line-of-sight velocities $V_1^j$, $V_2^j$ at the epoch of the $j$-th spectrum. The last two steps are to sum the templates with weights $w_{1,2}$ and to multiply the spectrum by the extinction curve appropriate to the assumed $E(B-V)$, or to multiply the final spectrum by a polynomial continuum to match the difference between the observed and synthetic spectra.
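The weighting step of the interpolation can be sketched as follows. This is an illustrative reconstruction of the inverse-squared-distance averaging described above, not the actual \textsc{fbs}\ code; the simplex search itself would use the precomputed Delaunay triangulation (e.g. \textsc{scipy}'s implementation) and is omitted here:

```python
def interpolate_spectrum(point, vertices, spectra, eps=1e-12):
    """Average the spectra attached to the vertices of the simplex that
    contains `point`, with weights proportional to 1/d^2 (inverse squared
    distance from `point` to each vertex). `point` and each vertex are
    tuples of (Teff, log g, [Z/H]); `spectra` is one flux array per vertex.
    eps regularises the weight when `point` coincides with a vertex."""
    d2 = [sum((p - v) ** 2 for p, v in zip(point, vtx)) for vtx in vertices]
    w = [1.0 / (d + eps) for d in d2]
    norm = sum(w)
    w = [x / norm for x in w]
    npix = len(spectra[0])
    return [sum(w[k] * spectra[k][i] for k in range(len(spectra)))
            for i in range(npix)]
```

A point coinciding with a grid node reproduces that node's spectrum, and a point equidistant from the vertices gives their plain average, which is the behaviour one wants from such a scheme on irregular grids.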
In such an approach the $\chi^2$ value can be written as follows: \begin{equation} \chi^2 = \sum_j \chi^2_j = \sum_j \sum_\lambda \left( \frac{F^j_\lambda - M^j_\lambda}{\delta F^j_\lambda} \right)^2 \end{equation} \begin{equation} M^j_\lambda = C_\lambda \sum_{k={1,2}} w_k \cdot S\left(T_{\mathrm{eff} k}, \log g_k, [Z/H]_k\right) * \mathcal{L}(V^j_k, v \sin i_k), \end{equation} where $F^j_\lambda$, $\delta F^j_\lambda$, $M^j_\lambda$ represent the observed spectrum at the $j$-th epoch, its uncertainties and the model, respectively; $S$ is the stellar template interpolated from the grid of stellar models; $\mathcal{L}$ is the convolution kernel that describes the broadening effect due to stellar rotation \citep{Gray92} and shifts the templates by the line-of-sight velocity of each binary star component $k$ at epoch $j$; ``$*$'' denotes convolution; $C_\lambda$ is a polynomial multiplicative continuum or the extinction curve for the given $E(B-V)$. The approach produces the following parameters: $T_\mathrm{eff}$, $\log g$, [Z/H], $v \sin i$ for both components of the binary star, $n$ pairs of line-of-sight radial velocities $V^j_1$, $V^j_2$ for $n$ spectra observed at different epochs, and $E(B-V)$ for the system. Hereafter, we usually employ the \cite{Coelho14} stellar models (stars with T$_{\mathrm{eff}} = 3000-26000$~K), which match the HRS MR instrumental resolution \citep{KUKB19} well. We also adapted the high-resolution \textsc{tlusty} models \citep[T$_{\mathrm{eff}} = 15000-55000$~K;][]{2003ApJS..146..417L,2007ApJS..169...83L} and \textsc{phoenix} models \citep[T$_{\mathrm{eff}} = 3000-23000$~K;][]{phoenix}, convolving them to match the HRS MR instrumental resolution. Due to the extensive functionality of the \textsc{lmfit} package, the \textsc{fbs}\ provides flexible control over the model parameters, including upper/lower bounding, parameter fixing and tying parameters together.
As an example, tying the stellar component metallicities together ($[Z/H]_1 \equiv [Z/H]_2$) might be a reasonable approximation for binary stars formed in the same gas cloud. An example of the use of the \textsc{fbs}\ software is shown in Figure~\ref{fig:examples} for three spectra of the binary system FP\,Car from our test sample. The \textsc{fbs}\ is a full pixel fitting approach and allows us to approximate the full spectral range of a given spectrum or one or several spectral intervals, as well as to easily mask bad pixels and/or spectral regions. The \textsc{fbs}\ also contains basic functionality to determine orbital parameters from the radial velocity curves. \begin{figure} \begin{minipage}[h]{0.490\linewidth} \centering \includegraphics[angle=0, width=\textwidth, clip=]{RAA-2020-0036fig1.pdf} \caption{Example of the processing of three observed spectra of FP\,Car (three epochs) with the \textsc{fbs}\ software. Each panel shows the part of the observed spectrum in the region of the H$\beta$ line in black. The result of modelling is shown in red. The two components are shown in blue and orange, respectively. The difference between the observed and modelled spectra is shown at the bottom of the panel in grey, together with errors that were propagated from the HRS data reduction (continuous dark blue lines).\label{fig:examples} } \end{minipage} \begin{minipage}[h]{0.490\linewidth} \centering \includegraphics[angle=-90, width=\textwidth, clip=]{RAA-2020-0036fig2a.pdf} \includegraphics[angle=-90, width=\textwidth, clip=]{RAA-2020-0036fig2b.pdf} \includegraphics[angle=-90, width=\textwidth, clip=]{RAA-2020-0036fig2c.pdf} \caption{A comparison of the calculated $T_\mathrm{eff}$, $\log g$ and $v \sin i$ with previously published results for the sample of early and late B-type stars.} \label{fig:Comparison} \end{minipage}% \end{figure} \section{Check-up of the External Accuracy} To check the external accuracy of our \textsc{fbs}\ software, we performed several tests.
For one of them we fitted with the \textsc{fbs}\ program 18 \'echelle spectra of early and late B-type stars that were obtained with the Fibre-fed Extended Range Optical Spectrograph \citep[FEROS;][]{Kaufer96} and were previously modelled and published \citep{HH03,NP12,BL13}. For this fit we used models from \cite{Coelho14} for the late B-stars and models from \cite{2003ApJS..146..417L,2007ApJS..169...83L} for the early B-stars. The comparison of our results for $T_\mathrm{eff}$, $\log g$ and $v \sin i$ with the results published earlier is shown in Figure~\ref{fig:Comparison}. The $T_\mathrm{eff}$ values we found agree with the previously published ones with an rms of 650~K, the $\log g$ values with an rms of 0.27~dex and the $v \sin i$ values with an rms of 5.4~km s$^{-1}$, all very close to the cited errors, which are shown with horizontal bars. No obvious systematic issues are visible in this figure. \begin{figure}[t] \centering{ \includegraphics[clip=,angle=0,width=0.9\textwidth]{RAA-2020-0036fig3.pdf} } \caption{An example of a fully processed spectrum of FP\,Car. The spectrum consists of 70 \'echelle orders from both the blue and red arms merged together and corrected for the sensitivity curve. \label{fig:FP_Car_spec}} \end{figure} \begin{figure*} \centering{ \includegraphics[clip=,angle=0,width=0.9\textwidth]{RAA-2020-0036fig4.pdf} } \caption{The results of the analysis of one spectrum of FP\,Car obtained with HRS. The panel shows the result of the fit in the spectral region 4000-5300~\AA. Designations are the same as in Figure~\ref{fig:examples}.
\label{fig:FP_Car_spec_fit}} \end{figure*} \begin{figure}[t] \centering \includegraphics[angle=0, width=0.9\textwidth, clip=]{RAA-2020-0036fig5.pdf} \caption{The calculated radial velocity curves for the FP\,Car binary system from our test sample.} \label{fig:FP_Car} \end{figure} \begin{figure}[t] \centering \includegraphics[clip=,angle=0,width=0.7\textwidth]{RAA-2020-0036fig6.pdf} \caption{Photometric data from the ASAS survey folded with the period P=176.027~days. There are only a few points that indicate the shape of the primary and secondary minima. \label{fig:FP_Car_phot}} \end{figure} \begin{table*} \centering \caption{Best-fit orbital elements.} \begin{tabular}{llc} \hline\hline Parameter & Value & \% \\ \hline Epoch at radial velocity maximum $T_0$ (d) & $2455094.47\pm0.15$ & 0.00 \\ Orbital period $P$ (d) & 176.032$\pm$0.010 & 0.00 \\ Eccentricity $e$ & 0 (fixed) & 0.00 \\ Radial velocity semi-amplitude $K1$ (km s$^{-1}$) & 22.92$\pm$0.73 & 3.20 \\ Radial velocity semi-amplitude $K2$ (km s$^{-1}$) & 35.30$\pm$0.26 & 0.72 \\ Systemic heliocentric velocity $\gamma$ (km s$^{-1}$) &$-$15.31$\pm$0.15 & 0.96 \\ Root-mean-square residuals of Keplerian fit (km s$^{-1}$) & 0.976 & -- \\ \hline \end{tabular} \label{tab:FP_Car_orb} \end{table*} \section{First Results: the FP\,Car system} As a first result we would like to present here some of our results on the study of the FP\,Car binary system, which belongs to our test sample (see Table~\ref{Tab:Sample}). FP\,Car (HD96214) was discovered as a variable star with a period of $\sim$176~days by \citet{1926BHarO.837....1C}. The period of this system was measured more accurately by \citet{2004IBVS.5542....1D} using data from the ASAS survey \citep{1997AcA....47..467P}. The spectral type was estimated by \citet{1975mcts.book.....H} as B5/7(V).
Approximate masses $M_1 = 15.61 M_\odot$ and $M_2 = 7.49 M_\odot$ for both components were calculated by \citet{1980AcA....30..501B} using their iterative method for the computation of geometric and physical parameters of the components of eclipsing binary stars. Our spectral observations of FP\,Car were made during 2017--2019 with HRS at SALT (see Section~\ref{sect:Obs}). Ten spectra were obtained in total, covering all phases of the binary's orbit. After the standard HRS reduction, each HRS spectrum of FP\,Car was additionally corrected for bad columns and pixels, and for the spectral sensitivity curve obtained closest to the date of the observation. Spectrophotometric standards for HRS were observed once a week as a part of the HRS Calibration Plan. Figure~\ref{fig:FP_Car_spec} presents one fully processed spectrum of FP\,Car, which was used in the further analysis. The spectrum consists of 70 \'echelle orders from both the blue and the red arms of HRS merged together and corrected for sensitivity. Unfortunately, SALT is a telescope where the unfilled entrance pupil moves during the observation, and for that reason absolute flux calibration is not feasible with SALT. At the same time, since all optical elements are always the same, relative flux calibration can be used for SALT data. All HRS observations of the FP\,Car system were used simultaneously for the calculation of radial velocity curves with the \textsc{fbs}\ package. The determination of the orbital parameters from the radial velocity curves was also done with the \textsc{fbs}\ package, as shown in Figure~\ref{fig:FP_Car} and presented in Table~\ref{tab:FP_Car_orb}. The period found, P=176.032$\pm$0.010~days, is in agreement within uncertainties with the photometric period presented in Table~\ref{Tab:Sample}. Our spectral data show that the system has a circular orbit (e=0) and this parameter was fixed for the last iteration.
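The minimum masses $M \sin^3 i$ follow from the best-fit elements of Table~\ref{tab:FP_Car_orb} via the standard spectroscopic mass-function relation. A short numerical check (the constant $1.0361 \times 10^{-7}$ is the standard one for $K$ in km s$^{-1}$ and $P$ in days) reproduces the values derived from our orbital solution:

```python
def min_masses(P_days, K1, K2, e=0.0):
    """Minimum masses M sin^3 i (in solar masses) for both components of a
    double-lined spectroscopic binary. K1, K2 are the velocity
    semi-amplitudes in km/s, P_days the orbital period in days, e the
    eccentricity. Returns (M1 sin^3 i, M2 sin^3 i)."""
    f = 1.0361e-7 * (1.0 - e * e) ** 1.5 * (K1 + K2) ** 2 * P_days
    # the more massive component corresponds to the SMALLER semi-amplitude
    return f * K2, f * K1
```

With $P = 176.032$~d, $K_1 = 22.92$ and $K_2 = 35.30$~km s$^{-1}$ this gives $M_1 \sin^3 i \approx 2.18\,M_\odot$ and $M_2 \sin^3 i \approx 1.42\,M_\odot$, consistent with the values quoted below.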
The velocity semi-amplitudes we found have small errors, 0.7\% for component B (blue) and 3.2\% for component A (orange), which is fully consistent with the fit shown in Figure~\ref{fig:examples}, where the spectrum of component B shows many narrow lines while the spectrum of component A shows only wide Balmer and helium lines with $v \sin i \sim 100$~km~s$^{-1}$. Finally, we can calculate the masses of both components of the FP\,Car system as $M_1 = (2.19\pm0.06) \sin^{-3}(i)\,\, M_\odot$ and $M_2 = (1.42\pm0.06) \sin^{-3}(i)\,\, M_\odot$, where $i$ is the orbital inclination angle that can only be determined from the modeling of photometric data. Unfortunately, there are no good photometric data for FP\,Car among all existing public surveys. The best available data are from the ASAS survey \citep{1997AcA....47..467P}, as shown in Figure~\ref{fig:FP_Car_phot}. However, even these data have too few points outlining the positions and shapes of the narrow primary and secondary minima, and it is impossible to use them for any modeling. For that reason we are actively accumulating photometric data for FP\,Car and the other stars from our test sample using the LCO telescope network \citep{2013PASP..125.1031B}. \section{Conclusions} \label{sect:conclusion} We present our new project to study long-period massive eclipsing binaries, whose components are not synchronized and therefore have not changed each other's evolutionary scenario. A small sample of eleven binary systems, compiled for the pilot spectroscopy with HRS/SALT, is described here. The software package \textsc{fbs}\ (Fitting Binary Stars) was developed by us for the analysis of the spectral data. We describe this package and demonstrate its external accuracy in the determination of stellar parameters. As a first result we present the radial velocity curves and the best-fit orbital elements for both components of the FP\,Car binary system from our test sample.
} \begin{acknowledgements} All spectral observations reported in this paper were obtained with the Southern African Large Telescope (SALT) under programs 2016-1-MLT-002, 2017-1-MLT-001 and 2019-1-SCI-004 (PI: Alexei Kniazev). AK acknowledges support from the National Research Foundation of South Africa. OM acknowledges support by the Russian Foundation for Basic Researches grant 20-52-53009. IK acknowledges support by the Russian Science Foundation grant 17-72-20119. LB acknowledges support by the Russian Science Foundation grants 18-02-00890 and 19-02-00611. \end{acknowledgements} \bibliographystyle{raa}
\section*{Appendix - list of propositions from \emph{Elements} mentioned in the text of \emph{Solutio theorematum}} \begin{flushleft} \emph{The first number indicates proposition number, the second one is the book number. Latin text from \cite{Euclid-latin}, English translation of propositions from \cite{Euclid}. Pappus generalization of 47.1 and Clavius' scholium for Prop. 31.3 translated by H.F. } \end{flushleft} \begin{pairs} \begin{Leftside} \selectlanguage{latin} \input{propositiones-lat.tex} \end{Leftside} \begin{Rightside} \selectlanguage{english} \input{propositiones-eng.tex} \end{Rightside} \Columns \end{pairs}
\section{Introduction} Among the various uses of Path Integrals the earliest and foremost is the application to quantum evolution. The Path Integral formalism is appreciated because of its compact operational notation with all the gamut of the finite dimensional Integral Calculus, such as integration by parts, repeated integration, canonical substitutions, analytic continuation (Wick rotation), stationary phase approximations etc. However, in R.~Feynman's words, the Path Integral is ``an intuitive leap at mathematical formalism''. A natural justification would be by suitable integral approximations, the route chosen originally by Feynman himself in the 40's via time-slicing and discretization. Unfortunately the discretization ambiguities along with the convergence problems have plagued the endeavour from the start (the notable exception~\cite{Nel} is for rather special hamiltonians). \smallskip Nevertheless we propose a \emph{rigorous time-slicing} construction of the (flat) phase space Path Integral for propagators both in Quantum Mechanics and Quantum Field Theory for a fairly general class of quasi-dissipative quantum observables (e.g. the Schr\"{o}dinger hamiltonians with smooth scalar potentials of any power growth). Moreover we allow time-dependent hamiltonians and a great variety of discretizations, in particular, the standard, Weyl, and normal ones. \paragraph{Abstract Cauchy Problem.} Consider the Initial Value Problem $$\fder{\psi}{t} + A(t)\psi (t) = 0,\quad \psi (0) = \psi_0,\quad 0\leq t\leq T, \quad \psi (t) \in \mathcal H ,$$ wherein $\mathcal H$ is a Hilbert space and $A(t)$ is a family of (usually) unbounded operators on $\mathcal H$. The Cauchy Problem is called {\it proper} relative to a dense subspace $\mathcal S$ in $\mathcal H$ if there is a unique solution $\psi$ for every $\psi_0 \in \mathcal S$ and the {\it Evolution Operator} $$U(t'',t')\psi (t')=\psi (t''),\quad \psi (t') \in {\mathcal S}, \quad t''>t',$$ is bounded on $\mathcal H$.
The Evolution Operator may be sought in the time-slicing form of the strong operator limit \emph{Product Integral}: \begin{align*} U(t'',t') &= \prod_{t''\ge t \ge t'}\exp [-A(t)dt] \\ &:= \lim_{|{\mathcal P}|\rightarrow 0}U^{\mathcal P}(t'',t') : =\lim_{|{\mathcal P}|\rightarrow 0} \prod_{t_{j+1}>t_j} \exp [-A(t_j) \Delta t_j]. \end{align*} Here $\mathcal P$ is a finite partition $0 \leq t'=t_0 < \dots <t_j<t_{j+1}< \dots <t''=t_{\mathcal P} \leq T$ of the interval $t' \leq t \leq t''$, with $\Delta t_j=t_{j+1}-t_j$ and $| \mathcal P|= \max_{j} | \Delta t_{j}|$. \paragraph{From Product to Path Integrals on the Phase Space.} Consider a quantum evolution equation on $\mathcal L^2 (\mathbb R^d)$ $$ \pder{\psi}{t}(t,q)+\frac{i}{\hbar}f(t,q,\frac{i}{\hbar}\pder{}{q})\psi (t,q)=0,\quad \psi (0,q)=\psi_0, $$ with a pseudodifferential operator $f(t,q,\frac{i}{\hbar} \pder{}{q})$ in the standard form of the $qp-$quantization of a complex-valued function $f(t,q,p)$ on the phase space $\mathbb R^{2d}$, the standard symbol of $f(t,q,\frac{i}{\hbar} \pder{}{q})$. \smallskip Tobocman's version of the Dirac-Feynman Ansatz: {\it For small $\Delta t $ the standard symbol $<p|U(t+\Delta t,t)|q>$ of the propagator \mbox{$U(t+\Delta t,t)$} is approximately equal to $\exp [-\frac{i}{\hbar} f(t,q,p)\Delta t]$.} \smallskip Then according to the product rule for the standard symbols, the standard symbol of $U^{\mathcal P}$ is approximately equal to the distributional multiple integral $$ \int \prod_{j=1}^{\mathcal P -1} d\lambda _{\hbar} (t_j) \exp \frac{i}{\hbar}\sum_{j=0}^{\mathcal P -1}[ p(t_{j+1})\Delta q(t_j)-f(t_j,q(t_j), p(t_{j+1}))\Delta t_j] $$ where $d\lambda _{\hbar} (t_j)=(2\pi \hbar)^{-d} dq(t_j)dp (t_j)$ is the Lebesgue-Liouville measure on the phase space ${\mathbb R}^{2d}$.
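The strong-limit Product Integral above can be probed numerically in a finite-dimensional toy model. The sketch below is not from the paper: a $2\times 2$ time-dependent Hermitian matrix plays the role of the hamiltonian, so each time slice is an exact unitary factor, and refining the mesh shows the first-order convergence of the partial products.

```python
import numpy as np

hbar = 1.0

def H(t):
    # toy time-dependent 2x2 Hermitian "hamiltonian" (an assumption for illustration)
    return np.array([[1.0, 0.5 * np.cos(t)],
                     [0.5 * np.cos(t), -1.0]])

def step(t, dt):
    # one slice exp[-(i/hbar) H(t) dt], via the spectral decomposition of H(t)
    w, V = np.linalg.eigh(H(t))
    return (V * np.exp(-1j * w * dt / hbar)) @ V.conj().T

def product_integral(t1, t2, n):
    # partial product U^P over a uniform partition with n slices (time-ordered)
    dt = (t2 - t1) / n
    U = np.eye(2, dtype=complex)
    for j in range(n):
        U = step(t1 + j * dt, dt) @ U
    return U

U_fine = product_integral(0.0, 1.0, 4096)   # reference value at a very fine mesh
err = [np.linalg.norm(product_integral(0.0, 1.0, n) - U_fine) for n in (16, 64, 256)]
assert err[0] > err[1] > err[2]             # errors shrink as |P| -> 0
assert np.allclose(U_fine.conj().T @ U_fine, np.eye(2), atol=1e-10)  # the limit is unitary
```

Each factor is exactly unitary here, so the partial products stay uniformly bounded; the only error is the time-slicing itself, which decays like $O(|\mathcal P|)$ for a smoothly varying generator.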
\smallskip As the mesh $|\mathcal P |\rightarrow 0$, the multiple integrals are presumed to converge to a Hamiltonian Path Integral for the standard symbol of the evolution operator $U(t'',t')$ $$ \int \prod_{t''\geq t\geq t'} d\lambda _{\hbar} (t) \exp \frac{i}{\hbar} \int_{t'}^{t''} [p(t) \dot{q} (t)-f(t,q(t),p(t))]dt $$ where $d\lambda _{\hbar} (t) = (2\pi \hbar)^{-d} dq(t)dp (t)$ is {\it ``the Feynman-Liouville measure''} on the space of paths from $q(t')=q$ to $p(t'') = p $ in $\mathbb R^{2d}$ and $$\int_{t'}^{t''} [p(t)\dot{q} (t)-f(t,q(t),p(t))]dt$$ is the hamiltonian symplectic action functional on that space.\smallskip The standing physical presumption: all calculus rules are valid in the limit. However there are two fundamental mathematical problems: \newline \emph{The validity of the DFT-ansatz} and \emph{the existence of the limit.} So far both problems have been settled only for special $f$ (cf.~\cite{Nel,Mor,Hida}). \paragraph{Euler Detour.} The Euler polygonal approximation for the solution of the quantum Cauchy Problem $$ \fder{\psi}{t}+\hat {f}(t)\psi (t)=0 $$ is the finite difference approximation $$ \frac{\psi (t_{j+1})- \psi(t_j)}{\Delta t_j}+ \hat {f}(t_j)\psi (t_j)=0, \quad \psi (t_0)=\psi _0, $$ or $$\psi (t_{j+1})=( 1 - \hat {f}(t_j)\Delta t_j)\psi (t_j), $$ so that the Evolution Operator might be the strong operator limit $$ U(t'',t')=\lim_ {|\mathcal P|\rightarrow 0} \prod_j ( 1 - \hat {f}(t_j)\Delta t_j) :=\prod_{t''\geq t\geq t'}[ 1 - \hat {f}(t)dt]. $$ If $\hat {f}(t)$ is a standard pseudodifferential operator then the partial product approximations are pseudodifferential operators again.
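The contrast between the forward product $\prod(1-\hat f\,\Delta t)$ and the resolvent-based backward product $\prod(1+\hat f\,\Delta t)^{-1}$ is already visible for a stiff dissipative matrix generator. A minimal sketch (the diagonal generator is an assumption for illustration; its large eigenvalue mimics an operator of positive order):

```python
import numpy as np

# stiff dissipative generator: Re <psi|A psi> >= 0, but ||A|| is large
A = np.diag([100.0, 1.0])
dt, n = 0.1, 50

fwd = np.linalg.matrix_power(np.eye(2) - dt * A, n)                  # forward Euler product
bwd = np.linalg.matrix_power(np.linalg.inv(np.eye(2) + dt * A), n)   # backward Euler product

assert np.linalg.norm(fwd) > 1e10           # forward product blows up on the stiff mode
assert np.linalg.norm(bwd) <= 1.0 + 1e-12   # backward product is a contraction
exact = np.diag(np.exp(-np.diag(A) * dt * n))
assert np.linalg.norm(bwd - exact) < 0.1    # and it still approximates e^{-A t}
```

The backward factors $(1+A\,\Delta t)^{-1}$ have norm at most one for any dissipative $A$ regardless of the step size, which is the finite-dimensional shadow of the "zero order approximation symbols" exploited below.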
However, if the order of $\hat {f}(t)$ is positive, then the order of these pseudodifferential operator approximations increases to infinity and the convergence of their symbols is out of control.\smallskip Fortunately, the backward Euler approximation \[ \frac{\psi(t_{j+1})-\psi (t_j)}{\Delta t_j} + A(t_{j+1})\psi (t_{j+1})=0 \] suggests $$ U(t'',t')=\prod_{t''\geq t\geq t'} ( 1 +\hat {f}(t)dt)^{-1} $$ with zero order approximation symbols.\smallskip Our main result is that for \emph{apt} functions $f$ this backward approximation entails (in the spirit of the DFT ansatz) \[ U(t'',t')=\prod_{t''\geq t\geq t'} \left [( 1 + f(t)dt)^{-1} \right] \widehat{} \] leading to a Path Integral representation of the symbol $<\! p|U(t'',t')|q\! >$. \smallskip Incidentally, the Green function (the coordinate propagator) can be easily expressed via the symbol: $$<\! q''|U(t'',t')|q'\! >=\int dp<\!q''|p\! ><\! p|U(t'',t')|q'\! >.$$ \section{Rigorized $\Omega$-symbolic calculus} This section provides the necessary technical tools. \smallskip For $z=(q,p) \in \mathbb R^d \times \mathbb R^d$ introduce the complex coordinates $z^+=2^{-1/2}(q+ip), \quad z^-=2^{-1/2}(q-ip)$ so that the standard symplectic form $[(p_1,q_1), (p_2, q_2)]=p_1q_2-p_2q_1 $ on $\mathbb R^{2d}$ becomes \[ \frac{1}{i} [z_1,z_2]= \frac{1}{i} (z_{1}^{+}z_{2}^{-} -z_{1}^{-} z_{2}^+). \] The \emph{$\hbar$-Symplectic Fourier transform} is defined as \[ {\tilde{f}}(\zeta)=\int f(z) e^{\frac{1}{\hbar}[z,\zeta]}d\lambda _{\hbar}(z),\quad d\lambda _{\hbar} (z) = \frac{1}{(\pi \hbar )^d } dz^+ dz^-. \] \paragraph{Heisenberg Canonical Commutation Relations.} A representation $z \mapsto \hat z$ of the Canonical Commutation Relations over $\mathbb R^{2d}$ in a Hilbert space $\mathcal H$ is a linear map of $\mathbb R^{2d}$ to essentially self-adjoint operators on a common invariant subspace $\mathcal G$ of $\mathcal H$ (the G\aa rding domain) such that $[\hat{z} _1,{\hat{z} _2}]=\hbar [z_1,z_2] \mathbf 1$.
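For $d=1$ in the Schr\"odinger position representation (multiplication by $x$ for $\hat q$, and $\frac{\hbar}{i}\frac{d}{dx}$ for $\hat p$) the commutation relation just stated reduces to the familiar $[\hat q,\hat p]=i\hbar$. A quick symbolic check, a SymPy sketch not part of the paper:

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
psi = sp.Function('psi')(x)

q_hat = lambda f: x * f                          # position: multiplication by x
p_hat = lambda f: (hbar / sp.I) * sp.diff(f, x)  # momentum: (hbar/i) d/dx

comm = q_hat(p_hat(psi)) - p_hat(q_hat(psi))     # [q, p] applied to a test function
assert sp.simplify(comm - sp.I * hbar * psi) == 0   # [q, p] psi = i hbar psi
```

The commutator acts as a multiple of the identity on every smooth test function, which is the $d=1$ content of $[\hat z_1,\hat z_2]=\hbar[z_1,z_2]\mathbf 1$ up to the paper's normalization of the symplectic form.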
\smallskip E.g., for the Schr\"{o}dinger (position) representation on ${\mathcal L}^2(\mathbb R^d)$ \[ \hat{z} \psi (x)=(q\cdot x)\psi (x) + \frac{\hbar}{i}\, p\cdot \pder{\psi }{x} (x), \] the Schwartz space $\mathcal S (\mathbb R^d)$ may be chosen as a G\aa rding domain. \smallskip Other examples are the momentum or mixed momentum-position representations, the holomorphic Bargmann-Segal representation (conducive to the coherent states Path Integral), the Gelfand-Zak representation in Solid State Physics, and the Cartier compact representation in $\theta $-functions. \smallskip By the Stone-von Neumann theorem, for given $\hbar >0$ any representation of the Heisenberg Canonical Commutation Relations is unitarily equivalent to a direct sum of the Schr\"{o}dinger representations. Thus we may choose the G\aa rding domain $\mathcal G (\mathcal H)$ to be unitarily equivalent to a direct sum of the spaces $\mathcal S (\mathbb R^d)$. Correspondingly, the dual space $\mathcal G' (\mathcal H)$ is unitarily equivalent to a direct sum of the spaces $\mathcal S' (\mathbb R^d)$. \paragraph{Weyl operators $\hat{f}$ on $\mathcal H$} associated with generalized functions $f\in \mathcal S' (\mathbb R^{2d})$ are continuous linear operators from $\mathcal G (\mathcal H)$ to $\mathcal G' (\mathcal H)$ \[ \hat{f} = \int \tilde{f} (\zeta ) \ex [\zeta,\hat{z} ] d\lambda _{\hbar} (\zeta ) \] wherein \[ [\zeta ,\hat{z}] = \zeta ^+ \hat{z}^- - \zeta ^-\hat{z}^+. \] A version of the {\it Schwartz Kernel Theorem} states that a linear operator from $\mathcal G (\mathcal H)$ to $\mathcal G' (\mathcal H)$ is continuous if and only if it is a Weyl operator $\hat{f}$. \paragraph{$\Omega$-symbols.} Consider a formal power series over $\mathbb C$ $$ \Omega (\zeta ) = 1 + \sum_{|\alpha | >0} c_\alpha \zeta^\alpha. $$ A {\it formal} $\Omega $-{\it symbol} of $f\in {\mathcal S'}(\mathbb R^{2d})$ is the formal power series over $\mathcal S'(\mathbb R^{2d})$ defined via \[ \tilde{f} ^{\Omega} (\zeta ) = \tilde{f}(\zeta)/\Omega (\zeta).
\] Obviously, this makes sense for polynomial $f(z)$ when various $\Omega$ provide common ordering rules according to the following table (cf.~\cite{Aga}): \[ \ctablec{\textrm{Name} &\Omega (\zeta ) &\textrm{Ordering}~(d=1)\cr \noalign{\hrule} &&\cr \textrm{Weyl} &1 &q^np^m\leftrightarrow\cr &\textrm{} &\frac{1}{2^n} \sum_{j=0}^{n} \left (\! \begin{array}{c} n\\j \end{array} \! \right ) \hat{q}^{n-j} \hat{p}^m\hat{q}^j\cr &&\cr \noalign{\hrule} &&\cr \textrm{Standard} &e^{\frac{1}{4}[(\zeta ^+)^2-(\zeta ^-)^2]} &q^np^m\leftrightarrow \hat{q}^n\hat{p}^m\cr (qp~\textrm{or~Kohn-Nirenberg}) &\textrm{} &\cr &&\cr \noalign{\hrule} &&\cr \textrm{Antistandard} &e^{-\frac{1}{4}[(\zeta ^+)^2-(\zeta ^-)^2]} &q^np^m \leftrightarrow \hat{p}^m\hat{q}^n\cr (\textrm{or}~pq) &\textrm{} &\cr &&\cr \noalign{\hrule} &&\cr \textrm{Normal} &e^{\frac{1}{2}\zeta ^+ \zeta ^-} &(z^+)^n(z^-)^m \leftrightarrow\cr \textrm{(or~Wick)} &\textrm{} &(\hat{z}^+)^n(\hat{z}^-)^m\cr &&\cr \noalign{\hrule} &&\cr \textrm{Antinormal} &e^{-\frac{1}{2}\zeta ^+\zeta ^-} &(z^+)^n (z^-)^m \leftrightarrow\cr (\textrm{or~Anti-Wick}) &\textrm{} &(\hat{z}^-)^m(\hat{z}^+)^n\cr &&\cr \noalign{\hrule} &&\cr \textrm{Symmetric} &\cos\frac{1}{4}[(\zeta ^+)^2-(\zeta ^-)^2] &q^np^m \leftrightarrow\cr &&\frac{1}{2}(\hat{q}^n\hat{p}^m+\hat{p}^m\hat{q}^n)\cr &&\cr \noalign{\hrule} &&\cr \textrm{Born-Jordan} &\frac{\sin\frac{1}{4} [(\zeta ^+)^2-(\zeta ^-)^2]} {\frac{1}{4} [(\zeta ^+)^2- (\zeta ^-)^2]} &q^np^m\leftrightarrow\cr && \frac{1}{m+1} \sum_{j=0}^{m} \widehat{p}^{m-j}\widehat{q}^n\widehat{p}^j\cr &&\cr} \] \newpage Suppose now that $\Omega (\zeta )\neq 0$ for all $\zeta \in \mathbb R^{2d}$. Then $\tilde{f} (\zeta) /\Omega (\zeta )$ is meaningful for $f\in \mathcal S'(\mathbb R^{2d})$ if and only if $1/\Omega$ is a multiplier in $\mathcal S(\mathbb R^{2d})$. In such a case $f^\Omega $ is called the {\it strict $\Omega $-symbol} of the distribution $f\in \mathcal S'(\mathbb R^{2d})$. E.g.
every $f\in \mathcal S'(\mathbb R^{2d})$ has strict standard, antistandard and normal symbols. More generally $f^ {\Omega }$ is called the {\it strict $\Omega$-symbol} of $f$ if $\Omega (\zeta )\tilde{f} ^\Omega (\zeta ) \in \mathcal S' (\mathbb R ^{2d}).$ \smallskip Of course not every Weyl operator has either an antinormal, or symmetric, or Born-Jordan symbol. \paragraph{Quasi-polynomials.} Define (cf.~\cite{shu}, Appendix 2) for $m=(m_1,m_2),\ r=(r_1,r_2),\ r_1 \geq 0,\ r_2<1/2$, the class $S(m,r)$ of {\it quasi-polynomials} $f=\{ f_\hbar (z):\ 0<\hbar \leq \hbar (f) \}$ in $\mathcal S (\mathbb R ^{2d})$ such that \[ \partial _z^\alpha f =\mathcal O _\alpha (1)(1+|z|)^{m_1 - r_1 |\alpha |} \hbar ^{m_2 - r_2 |\alpha |} \] wherein $\partial _z = \partial /\partial z$ and $\alpha $ is a multiindex. \newline As usual, $S(-\infty ) = \cap S(m,r),\quad S(\infty ) = \cup S(m,r)$. \smallskip A quasi-polynomial $f$ is said to be {\it asymptotic} to a series $\sum _{\alpha \geq \mu } f^\alpha $ $$ f\simeq \sum_ {\alpha \geq \mu } f^\alpha $$ with $f^\alpha \in S(m_\alpha,r_\alpha), \quad m_\alpha \searrow -\infty , \quad r_\alpha \searrow -\infty $ if for all $\nu$ \[ f- \sum_{\alpha < \nu} f^\alpha \in S(m_\nu,r_\nu). \] The classical Borel-H\"{o}rmander construction leads to the following \paragraph{Proposition.} {\it For every $f\in S(m,r)$ and a formal $\Omega $ there is a $g\in S(m,r)$ asymptotic to $[1/\Omega (\frac{\hbar }{i} \partial _z )]f $.} \smallskip Such a function $g$ is called an {\it asymptotic symbol} $f^{\Omega }$ of $f$. It is defined $\mathrm{mod}\ S(-\infty).$ \paragraph{$\Omega$-products of quasi-polynomials.} If $f_j$ are quasi-polynomials then $\hat{f_j} $ act from $\mathcal G (\mathcal H)$ to $\mathcal G (\mathcal H)$ and therefore from $\mathcal G' (\mathcal H)$ to $\mathcal G' (\mathcal H)$, so that $\hat{f} = \hat{f} _1 \hat{f} _2 \dots \hat{f} _N$ is well defined.
Actually $f$ is a quasi-polynomial, and \[ f^\Omega (z) = \int \mathcal K ^\Omega(z - z_1, \dots z - z_N) \prod_{j=1}^{N} f_j^{\Omega} (z_j) d\lambda _{\hbar} z_j, \] wherein \[ \tilde{\mathcal K} ^\Omega (\zeta _1, \dots ,\zeta _N) \simeq \frac {\prod_j \Omega (\zeta _j)}{\Omega (\sum _j \zeta _j)} \exp \left \{ \frac{1}{2} \sum_{j<k} [\zeta _j, \zeta _k] \right \}, \] (with equality if $\Omega$ is strict). Generally, the integral is distributional, but converges absolutely for some $\Omega $, e.g., the normal one. The integral representation entails the asymptotic expansion \[ f^\Omega (z) \simeq \tilde { \mathcal K}^{\Omega }\left ( \frac{\hbar}{i} \pder{}{z^+} ,- \frac{\hbar}{i} \pder{}{z^-} \right ) \prod_j f_j^\Omega (z_j) | _{z_j =z}, \] wherein $ \pder{}{z^+} = \frac{1}{\sqrt 2} (\pder{}{q} + \frac{1}{i} \pder{}{p} ), \quad \pder{}{z^-} = \frac{1}{\sqrt 2} (\pder{}{q} - \frac{1}{i} \pder{}{p}).$ \paragraph{Trace.} A \emph{density operator} $\hat{\rho}: \mathcal G' \rightarrow \mathcal G $ is a Weyl operator with $\rho \in \mathcal S (\mathbb R^{2d})$. The operator trace of $\hat{f} \hat{\rho }$ is well defined and may be evaluated for strict $\Omega $ via the \emph{Trace formula}: $\mathrm{Tr} (\hat{f} \hat{\rho } )= <\! f^\Omega |\rho ^\Omega \!>$. \section{Main Theorem.} A quasi-polynomial $f\in S(m,r),\ m>0,$ is called {\it apt} if for sufficiently small $\hbar $ it satisfies the following three conditions uniformly: \begin{itemize} \item \emph{Quasi-dissipativity}: $\mathrm{Re}(if) >\delta $ for some constant $\delta$. \item \emph{Hypoellipticity}: for all multi-indices $\alpha $ and $0\leq t',t'' \leq T$ \[ \partial _z^\alpha f(t'',z) =\mathcal O _\alpha (1) |if(t',z) - \delta | (1+|z|)^{ - r_1 |\alpha |} \hbar ^{ - r_2 |\alpha |}. \] \item \emph{$t$-Continuity} of $f(t,\cdot )$ in $S(m,r)$. \end{itemize} {\it Law of Inertia}: If $f$ is apt then all its asymptotic symbols $f^\Omega $ are apt as well, albeit on different intervals of $\hbar $.
Also if $f$ is hypoelliptic and \emph{real} then the $\hat{f}(t)$ are essentially self-adjoint (cf.~\cite{shu}, Proposition A2.1) in $\mathcal H$. \paragraph{Main Theorem.} {\it If $f$ is an apt quasi-polynomial, then for sufficiently small $\hbar$ \begin{description} \item (1) The Cauchy Problem $$\fder{\psi}{t} + \hat{f} (t)\psi (z,t) = 0,\quad \psi (z,0) = \psi_0,\quad 0\leq t\leq T, $$ is proper on $\mathcal H $ relative to $\mathcal G (\mathcal H)$. \item (2) The evolution operator is the strong product integral \[ U(t'',t') =\prod_{t''\geq t \geq t'} [ (\mathbf 1 + \frac{i\,dt}{\hbar} {f} (t,\cdot ))^{-1}]\ \widehat{\ }. \] \item (3) A strict $\Omega$-symbol $u^\Omega (t'',t',z)$ of the evolution operator is the limit in $\mathcal S' (\mathbb R ^{2d})$ of the strict $\Omega $-symbols $u^\mathcal P (t'',t',z)$ of the partial operator products $\prod_{t''\geq t_j \geq t'} [ (\mathbf 1 + \frac{i\Delta t_j}{\hbar} {f} (t_j,z))^{-1}]\ \widehat{\ }$ as $|\mathcal P| \rightarrow 0$. \end{description} } \paragraph{Proof (outline).} We apply the $\Omega $-calculus along with the theory of Abstract Cauchy Problems (cf.~\cite{Fat}) and the theory of Finite Difference Methods for Initial Value Problems (cf.~\cite{Ric}) with the terminology thereof. {\it The following statements hold for various intervals of positive $\hbar$.} By the Law of Inertia, the \emph{anti-normal} symbol of $f$ is quasi-dissipative. Then (cf.~\cite{shu}, Proposition 24.1) the real part of \mbox{$<\! \psi |\delta _1 \mathbf 1 +i\hat{f} (t) |\psi \! >$} is greater than \mbox{$\gamma \! <\psi |\psi \!>$} with some constants $\gamma >0$ and $\delta _1$. It is safe to assume that $\delta _1 =0$. Together with the hypoellipticity (cf.~\cite{shu}, Theorem 25.4) this entails that $|\!|[ \lambda \mathbf 1 +i\hat{f} (t)]^{-1}| \!| <1/\lambda $ for positive $\lambda $ so that the operators $\hat{f} (t)$ are a (1,0)-stable family in $\mathcal H$.
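The resolvent bound $\|[\lambda\mathbf 1+i\hat f(t)]^{-1}\|<1/\lambda$ used in the outline is, in finite dimensions, the statement that an operator whose Hermitian part is positive semi-definite has a contractive resolvent. A numerical sketch with assumed random matrices (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = rng.standard_normal((n, n)); C = C + C.T     # real symmetric, hence Hermitian
A = B.conj().T @ B + 1j * C   # Hermitian part B^†B >= 0, i.e. Re <psi|A psi> >= 0

for lam in (0.5, 1.0, 10.0):
    R = np.linalg.inv(lam * np.eye(n) + A)
    # spectral norm of the resolvent is bounded by 1/lambda
    assert np.linalg.norm(R, 2) <= 1.0 / lam + 1e-12
```

The bound follows from $\|(\lambda+A)x\|\,\|x\|\ge \mathrm{Re}\langle x,(\lambda+A)x\rangle\ge\lambda\|x\|^2$, which is exactly the (1,0)-stability estimate needed for the Hille-Yosida argument.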
When both $\psi $ and $\hat{f} (t)\psi$ belong to $\mathcal H$ for some $t=t_0$ then it is so for all $t$ by virtue of the hypoellipticity. The space $\mathcal F$ of all such $\psi $ is dense in $\mathcal H$ and is a Hilbert space relative to the new Hermitian product $<\! \psi |\psi \!>_0=<\! \hat{f} (t_0)\psi |\hat{f} (t_0)\psi \!>$. Now the $\hat{f} (t): \mathcal F \rightarrow \mathcal H$ form a $t$-continuous family of bounded operators. Moreover, \mbox{$<\!\hat{f}(t)\psi |\psi \!>_0$} =\mbox{$<\! \hat{g}(t)\psi |\psi \! >$} with $\hat{g}(t)$=\mbox{$\hat{f}(t_0)^\dagger \hat{f}(t_0)\hat{f}(t)$} so that $g$ is apt again and thus (as above) is (1,0)-stable in $\mathcal F$. By the Hille-Yosida theorem~\cite{hil}, $\hat{f} (t)$ generates for every $t$ a contractive operator semi-group in $\mathcal F$. Since the family \mbox{$\hat{f} (t)-\hat{f} (T) $} has similar properties, Theorem 7.7.13 of~\cite{Fat} establishes (1), the \emph{properness} of the Cauchy problem. This leads to a preliminary Product Integral representation $U(t'',t') =\prod_{t''\geq t \geq t'} [\mathbf 1 +\frac{i\,dt}{\hbar} \hat{f}(t)]^{-1}$ (cf. the proof of Theorem 7.7.5 of~\cite{Fat}). It implies the Product Integral representation (2) via the Lax Equivalence Theorem~\cite{Ric} whereby the required consistency is checked via the \emph{Weyl} calculus. The last statement (3) follows from the \emph{trace formula}. \section{Path Integrals in Quantum Field Theory.} \paragraph{Infinite dimensional phase spaces.} In the case of $d=\infty$ there are non-isomorphic phase spaces and the symplectic structures usually appear with extra features. Our phase space is based on a separable nuclear Fr\'echet space $\mathcal Z$ over $\mathbb C$ with a ``dotless'' hermitian product $zw^*$.
If $\mathcal H$ is the corresponding Hilbert space completion of $\mathcal Z$, and $\mathcal Z^*$ is the corresponding anti-dual of $\mathcal Z$ then $$\mathcal Z \hookrightarrow \mathcal H \hookrightarrow \mathcal Z^*$$ is a Gelfand nuclear triplet. The phase space is $\mathcal Z$ taken over $\mathbb R$ with the symplectic form $-\mathrm{Im}(zw^*)$. It is also a pre-Hilbert space with the scalar product $\mathrm{Re}(zw^*)$. \paragraph{Complex Gaussian rigging.} (Cf.~\cite{Hid}.) \emph{The Gaussian measure} $\gamma _\hbar $ of covariance $1/\hbar$ is defined via its characteristic function $$ \int_{\mathcal Z^*} e^{i\mathrm{Re} (z\zeta ^*)} d\gamma _\hbar (\zeta ^*) = e^{-zz^*/2\hbar },\quad z\in \mathcal Z, $$ so it stands for the non-existent $(\frac{\hbar}{2\pi} )^{\infty } \exp (-\hbar \zeta \zeta ^* ) d\zeta ^* $. The Bargmann-Segal space $[\mathcal H]$ is the closure of the subspace of the continuous complex analytic polynomials on $\mathcal Z ^*$ in $\mathcal L ^2 (\mathcal Z ^*)$. Its elements are entire functions $h(z^*)$ of order 2 and type $<\hbar /2$: $$ h(z^*)=\mathcal O (e^{ \hbar p(z^*)^2 /2}) $$ for some dual semi-norm $p$ on $\mathcal Z^*$. Let $[\mathcal Z]$ denote the space of entire functions $h(z^*)$ of order 2 and minimal type, and $[\mathcal Z]^*$ denote the space of entire functions $h(z^*)$ of order 2 and maximal type. Then $[\mathcal Z]$ is naturally a separable nuclear Fr\'echet space, and $[\mathcal Z] ^*$ its anti-dual. Thus \[ [\mathcal Z] \hookrightarrow [\mathcal H] \hookrightarrow [\mathcal Z] ^* \] is another Gelfand triplet, the Complex Gaussian rigging of the triplet \[ \mathcal Z \hookrightarrow \mathcal H \hookrightarrow \mathcal Z ^*. \] The \emph{coherent states} $e_w (z^*):=\exp (-\hbar wz^*), \ w\in \mathcal Z$, form a total (overcomplete) set in $[\mathcal Z]$.
\paragraph{Free bosonic field over $\mathcal Z$ in $[\mathcal Z]^*$.} Let $z\rightarrow \bar{z} $ be an antilinear conjugation on $\mathcal Z$ and correspondingly on $\mathcal Z^*$. Set \[ z^+=z/\sqrt 2 \in \mathcal Z,\ z^- = \bar{z}/\sqrt 2 \in \bar{\mathcal Z}. \] The operators $\hat{z} ^+$ and $\hat{z} ^-$ are defined on $f\in [\mathcal Z ]^*$ as \[ \hat{z}^+ f(\zeta ^*) = (z\zeta ^*)f(\zeta ^*), \ \hat{z} ^- f(\zeta ^*) =\hbar \partial _z f(\zeta ^*). \] They represent the Canonical Commutation Relations (CCR): \[ [\hat{z} ^-,\hat{z} ^+]=\hbar \mathbf 1. \] The coherent states are entire vectors for the CCR. \paragraph{Wick Operators} are, by definition, the continuous linear operators $W$ from $[\mathcal Z]$ to $[\mathcal Z]^*$. The \emph{Wick symbol} of $W$ is \[ w(z^+,z^-):=e^{-z^+z^-}\int _{\mathcal Z^*}[We_{z^+}(\zeta ^-)]e_{z^-} (\zeta ^+)d\gamma _{\hbar }(\zeta ),\quad z\in \mathcal Z. \] The Wick symbol $w$ is an entire function on $\mathcal Z \times \mathcal Z$, so that the operator $\hat{w}: = w(-\hat{z}^+,\hat{z}^-)$ is well defined on the coherent states, and $W$ is the closure of $\hat{w}$. \paragraph{$\Omega$-symbols.} Consider a formal complex power series $\Omega (z) = 1+\sum_{|\alpha |>0}c_\alpha z^{\alpha }$ on $\mathcal Z^*$. The formal $\Omega $-symbol of $w(\zeta )$ is \[ w^{\Omega }:= \left [1/\Omega (\frac{\hbar }{i}\partial _z)\right ]w(\zeta ). \] \paragraph{Quasi-polynomials.} A \emph{quasi-polynomial} $w(\zeta )$ is a family $\{ w_{\hbar } (\zeta ) :0<\hbar \leq \hbar (w)\}$ such that for a dual semi-norm $p$ on $\mathcal Z ^*$ \[ \partial _z ^{\alpha }w(\zeta ) = \mathcal O _{\alpha ,p}(1)(1+p(\zeta )) ^{m_1 -r_1 |\alpha |} \hbar ^{m_2 -r_2|\alpha |}. \] The class of such families is denoted $S(m,r),\ m=(m_1,m_2),\ r=(r_1,r_2).$ \paragraph{Weyl symbols and operators.} The Weyl symbol $f(z)$ of $w(z)$ corresponds to $\Omega (z) = \exp (-\frac{1}{2} z^+z^-)$ (cf. the table above): \[ f(z) = \exp \left [\frac {\hbar ^2}{2} \frac {\partial ^2} {\partial z^+ \partial z^-} \right ] w(z).
\] Conversely \[ w(z) =\int_{\mathcal Z ^*}f(z-\zeta )d\gamma _{\hbar ^2 /2} (\zeta ). \] (Note: not every Wick operator has a strict Weyl symbol.)\smallskip The corresponding Wick operators are the \emph{Weyl operators} $\hat{f}$. The Borel-H\"{o}rmander constructions for a countable fundamental family of dual Gaussian semi-norms followed by the Cantor diagonal trick imply that for every $\Omega $ and Weyl $f\in \mathcal S (m,r)$ there is $g\in \mathcal S (m,r)$ asymptotic to $f^{\Omega }$.\smallskip The $\Omega $-symbols of the operator product $\hat{f}_1 \hat{f}_2 ... \hat{f}_N$ have the asymptotic expansions just as in the case $d<\infty $. However their integral representations are rarely known. Fortunately, for the \emph{normal symbols} $w$ of $\hat{w}_1\hat{w}_2 ... \hat{w}_N $ \[ w(z) = \int \prod_{j=1}^{N} e^{z_j^- z_{j-1}^+} w_j (z_j^-,z_{j-1}^+) \prod_{j=1}^{N-1} d\gamma_{\hbar} (z_j^-,z_{j-1}^+),\quad z_0^+:=z^+,\ z_N^-:=z^-. \] Finally, as in the finite-dimensional case, \emph{the Main Theorem} holds in infinite dimensions (with the same proof) at least for the strict Wick symbols of evolution operators in $\mathcal H=[\mathcal H]$. In the latter case the symbol approximations are absolutely convergent multiple integrals with respect to $d\gamma _\hbar $ over the infinite-dimensional phase space $\mathcal Z$. \smallskip \section{Conclusion and outlook.} \begin{enumerate} \item The phase space Path Integral (according to L.~Schulman, ``a difficult form'' of the path integral) was introduced in different ways by Feynman himself~\cite{Fey} in 1951 and by Tobocman~\cite{Tob} in 1956. The Coherent State discretization was introduced in 1960 by Klauder~\cite{kla} in the Schr\"{o}dinger representation and in 1962 by Schweber~\cite{sch} in the Bargmann-Fock representation. In the 70's Berezin~\cite{Ber} considered various discretizations on the basis of pseudo-differential analysis. However no convergence of the discretizations has been proved until now.
On the other hand, Daubechies \& Klauder~\cite{Dau} established in 1984 that a wide class of coherent state path integrals (essentially with self-adjoint polynomial hamiltonians) on a flat finite-dimensional phase space are limits of Wiener Integrals on the space of paths in the phase space. They even suggested that a Feynman-type time-slicing construction is impossible for the phase space Path Integrals. \item We have presented a \emph{rigorous time-slicing} Phase Space Path Integral construction for the symbols of the Evolution Operators with a wide variety of smooth hamiltonians, both for finitely and infinitely many degrees of freedom. The convergence is established only for small $\hbar $, in agreement with the postulated semi-classical nature of the Path Integral which relates the classical and quantum dynamics. \item According to the $\Omega $-calculus, the discretizations of the Path Integral are distributional multiple integrals. E.g., as mentioned in the Introduction, the traditional discretization of the Phase Space Integral comes from the standard $\Omega $-calculus. Similarly, the Coherent State Path Integral discretization is associated with the normal $\Omega $-symbol in which case the multiple integrals are absolutely convergent. \item The last statement in the Main Theorem is equivalent to a modified DFT-ansatz: the $\Omega $-symbol of the short time propagator is approximately equal to $[\mathbf 1 +\frac{i}{\hbar}f(t,z)\Delta t]^{-1}$. However, in the case of the normal $\Omega$-calculus (because of the absolute convergence) one may consistently replace it with the more customary ansatz $\exp [-\frac{i}{\hbar} f(t,q,p)\Delta t]$. \item Our Path Integrals are ``pathless'', in agreement with the Uncertainty Principle: there is no quantum path in the phase space (cf.~\cite{Kla} for an illuminating, somewhat different point of view).
Yet they are semiclassical in the following sense: the principal terms in the $\hbar $-expansions of the partial products symbols are the backward Euler approximations of the corresponding classical Hamilton-Jacobi equations. \item We have rigorized the $\Omega $-calculus of Agarwal \& Wolf~\cite{Aga} to justify numerous Path Integral discretizations and as an important technique. However, in infinitely many degrees of freedom we have been able to prove the convergence only for normal (Wick) symbols. Actually the formal $\Omega $-calculus is a special case of the formal $\ast $-calculus~\cite{Bay}. Since on the finite-dimensional flat symplectic space all $\ast $-products are formally equivalent, our results yield a {\it construction of the formal \mbox{$\ast $-exponential}}, a solution of a well known problem (cf.~\cite{sha} for an interpretation of the Evolution Operator symbol as a $\ast$-exponential). \item Most of the other mathematical interpretations of the Path Integral are primarily in terms of various distributional integrals on the paths in the configuration space: first, by Kac~\cite{Kac} via analytic continuation to a Wiener integral (the Feynman-Kac formula), followed by DeWitt-Morette~\cite{Mor} in terms of prodistributions, by Albeverio and H\o egh-Krohn~\cite{Alb} in terms of the Parseval equation for the oscillatory Gaussian integrals, and by Hida \& Streit~\cite{Hid} in terms of White Noise distributions. Notably, these Path Integrals are associated only with the Schr\"{o}dinger hamiltonians (essentially) of quadratic growth, with the presumed ``Feynman measure'' built from the kinetic energy term. \end{enumerate} \nocite{*}
\section{Introduction} Almost complex manifolds with Norden metric were first studied by A.~P.~Norden \cite{N} and are introduced in \cite{Gri-Mek-Dje} as generalized $B$-manifolds. A classification of these manifolds with respect to the covariant derivative of the almost complex structure is obtained in \cite{Gan} and two equivalent classifications are given in \cite{Gan-Mih,Gan-Gri-Mih2}. An important problem in the geometry of almost complex manifolds with Norden metric is the study of linear connections preserving the almost complex structure or preserving both the structure and the metric. The first ones are called almost complex connections, and the second ones are known as natural connections. A special type of natural connection is the canonical one. In \cite{Gan-Mih} it is proved that on an almost complex manifold with Norden metric there exists a unique canonical connection. The canonical connection (called also the $B$-connection) and its conformal group on a conformal K\"{a}hler manifold with Norden metric are studied in \cite{Gan-Gri-Mih2}. In \cite{Teo3} we have obtained a two-parametric family of complex connections on a conformal K\"{a}hler manifold with Norden metric and have proved that the curvature tensors corresponding to these connections coincide with the curvature tensor of the canonical connection. In the present work we continue our research on complex connections on complex manifolds with Norden metric by focusing our attention on the class of the conformal K\"ahler manifolds, i.e. manifolds which are conformally equivalent to K\"ahler manifolds with Norden metric. We introduce an eight-parametric family of complex connections on such manifolds and consider their curvature properties. We also study the conformal group of these connections and obtain some conformal invariants. In the last section we give an example of a four-dimensional conformal K\"ahler manifold with Norden metric, on which the considered complex connections are flat.
\section{Preliminaries} Let $(M,J,g)$ be a $2n$-dimensional almost complex manifold with Norden metric, i.e. $J$ is an almost complex structure and $g$ is a pseudo-Riemannian metric on $M$ such that \begin{equation}\label{11} J^{2}x=-x,\qquad g(Jx,Jy)=-g(x,y) \end{equation} for all differentiable vector fields $x$, $y$ on $M$, i.e. $x,y\in \mathfrak{X}(M)$. The associated metric $\widetilde{g}$ of $g$ is given by $\widetilde{g}(x,y)=g(x,Jy)$ and is a Norden metric, too. Both metrics are necessarily neutral, i.e. of signature $(n,n)$. If $\nabla $ is the Levi-Civita connection of the metric $g$, the fundamental tensor field $F$ of type $(0,3)$ on $M$ is defined by \begin{equation}\label{F} F(x,y,z)=g\left((\nabla _{x}J)y,z\right) \end{equation} and has the following symmetries \begin{equation}\label{Fp} F(x,y,z)=F(x,z,y)=F(x,Jy,Jz). \end{equation} Let $\left\{ e_{i}\right\} $ ($i=1,2,\ldots ,2n$) be an arbitrary basis of $ T_{p}M$ at a point $p$ of $M$. The components of the inverse matrix of $g$ are denoted by $g^{ij}$ with respect to the basis $\left\{ e_{i}\right\} $. The Lie 1-forms $\theta $ and $\theta^{\ast}$ associated with $F$, and the Lie vector $\Omega$, corresponding to $\theta$, are defined, respectively, by \begin{equation}\label{1-3} \theta (x)=g^{ij}F(e_{i},e_{j},x), \qquad \theta^{\ast}=\theta \circ J, \qquad \theta (x)=g(x,\Omega ). \end{equation} The Nijenhuis tensor field $N$ for $J$ is given by \cite{Ko-No} \begin{equation*} N(x,y)=[Jx,Jy]-[x,y]-J[Jx,y]-J[x,Jy]. \end{equation*} It is known \cite{N-N} that an almost complex structure is a complex structure if and only if it is integrable, i.e. iff $N=0$. A classification of the almost complex manifolds with Norden metric is introduced in \cite{Gan}, where eight classes of these manifolds are characterized according to the properties of $F$.
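The defining identities (\ref{11}) admit a quick numerical check in dimension four. The sketch below (not the example from the last section of the paper) takes $J$ to be the standard complex structure and $g$ an assumed neutral diagonal metric:

```python
import numpy as np

# 4-dimensional model: standard complex structure J and a neutral metric g
I2 = np.eye(2)
J = np.block([[np.zeros((2, 2)), -I2], [I2, np.zeros((2, 2))]])
g = np.diag([1.0, 1.0, -1.0, -1.0])

assert np.allclose(J @ J, -np.eye(4))   # J^2 x = -x
assert np.allclose(J.T @ g @ J, -g)     # g(Jx, Jy) = -g(x, y): the Norden condition

# the associated metric ~g(x, y) = g(x, Jy) is again symmetric and Norden
gt = g @ J
assert np.allclose(gt, gt.T)            # ~g is symmetric
assert np.allclose(J.T @ gt @ J, -gt)   # ~g satisfies the Norden condition too
```

In matrix language the Norden condition is $J^{T}gJ=-g$, and the computation for $\widetilde g=gJ$ uses only $J^2=-\mathbf 1$ and that condition, mirroring the statement that the associated metric is a Norden metric as well.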
The three basic classes $\mathcal{W}_{i}$ ($i=1,2,3$) are given by $\bullet$ the class $\mathcal{W}_{1}$: \begin{equation}\label{w1} \begin{array}{l} F(x,y,z)=\frac{1}{2n}\left[ g(x,y)\theta (z)+g(x,Jy)\theta (Jz)\right. \medskip \\ \quad \qquad \qquad \quad \left. +g(x,z)\theta (y)+g(x,Jz)\theta (Jy)\right]; \end{array} \end{equation} $\bullet$ the class $\mathcal{W}_{2}$ of the \emph{special complex manifolds with Norden metric}: \begin{equation}\label{w2} F(x,y,Jz)+F(y,z,Jx)+F(z,x,Jy)=0,\quad \theta =0 \quad\Leftrightarrow\quad N=0,\quad \theta=0; \end{equation} $\bullet$ the class $\mathcal{W}_{3}$ of the \emph{quasi-K\"{a}hler manifolds with Norden metric}: \begin{equation}\label{w3} F(x,y,z)+F(y,z,x)+F(z,x,y)=0. \end{equation} The special class $\mathcal{W}_{0}$ of the \emph{K\"{a}hler manifolds with Norden metric} is characterized by the condition $F=0$ ($\nabla J=0$) and is contained in each one of the other classes. Let $R$ be the curvature tensor of $\nabla $, i.e. $R(x,y)z=\nabla _{x}\nabla _{y}z-\nabla _{y}\nabla _{x}z-\nabla _{\left[ x,y\right] }z$. The corresponding (0,4)-type tensor is defined by $R(x,y,z,u)=g\left( R(x,y)z,u\right)$. A tensor $L$ of type (0,4) is called a \emph{curvature-like} tensor if it has the properties of $R$, i.e. \begin{equation}\label{L} \begin{array}{l} L(x,y,z,u)=-L(y,x,z,u)=-L(x,y,u,z),\medskip\\ L(x,y,z,u)+L(y,z,x,u)+L(z,x,y,u)=0. \end{array} \end{equation} The Ricci tensor $\rho(L)$ and the scalar curvatures $\tau(L)$ and $ \tau^{\ast}(L)$ of $L$ are defined by: \begin{equation} \begin{array}{c} \rho(L)(y,z)=g^{ij}L(e_{i},y,z,e_{j}),\medskip\\ \tau(L)=g^{ij}\rho(L)(e_{i},e_{j}),\quad \tau^{\ast}(L)=g^{ij}\rho(L) (e_{i},Je_{j}). \end{array} \label{Ricci, tao} \end{equation} A curvature-like tensor $L$ is called a \emph{K\"{a}hler tensor} if \begin{equation}\label{Ka} L(x,y,Jz,Ju) = - L(x,y,z,u). \end{equation} Let $S$ be a tensor of type (0,2).
We consider the following tensors \cite{Gan-Gri-Mih2}: \begin{equation}\label{psi} \begin{array}{l} \psi_{1}(S)(x,y,z,u) =g(y,z)S(x,u)-g(x,z)S(y,u) \smallskip\\ \phantom{\psi_{1}(S)(x,y,z,u)}+ g(x,u)S(y,z) - g(y,u)S(x,z), \medskip\\ \psi_{2}(S)(x,y,z,u) =g(y,Jz)S(x,Ju) - g(x,Jz)S(y,Ju) \smallskip\\ \phantom{\psi_{1}(S)(x,y,z,u)} + g(x,Ju)S(y,Jz) - g(y,Ju)S(x,Jz), \medskip\\ \pi_{1}=\frac{1}{2}\psi_{1}(g), \qquad \pi_{2}=\frac{1}{2}\psi_{2}(g),\qquad \pi_{3}=-\psi_{1}(\widetilde{g})=\psi_{2}(\widetilde{g}). \end{array} \end{equation} The tensor $\psi_{1}(S)$ is curvature-like if $S$ is symmetric, and the tensor $\psi_{2}(S)$ is curvature-like if $S$ is symmetric and hybrid with respect to $J$, i.e. $S(x,Jy)=S(y,Jx)$. In the last case the tensor $\{\psi_1 - \psi_2\}(S)$ is K\"ahlerian. The tensors $\pi_{1} - \pi_{2}$ and $\pi_{3}$ are also K\"{a}hlerian. The usual conformal transformation of the Norden metric $g$ (conformal transformation of type I \cite{Gan-Gri-Mih2}) is defined by \begin{equation}\label{conf} \overline{g}=e^{2u}g, \end{equation} where $u$ is a pluriharmonic function, i.e. the 1-form $du\circ J$ is closed. A $\mathcal{W}_1$-manifold with closed 1-forms $\theta$ and $\theta^{\ast}$ (i.e. $\mathrm{d}\theta=\mathrm{d}\theta^\ast=0$) is called a \emph{conformal K\"ahler manifold with Norden metric}. Necessary and sufficient conditions for a $\mathcal{W}_1$-manifold to be conformal K\"ahlerian are: \begin{equation}\label{cK} (\nabla_x\theta)y=(\nabla_y\theta)x,\qquad (\nabla_x\theta)Jy=(\nabla_y\theta)Jx. \end{equation} The subclass of these manifolds is denoted by $\mathcal{W}_{1}^{\hspace{0.01in}0}$. It is proved \cite{Gan-Gri-Mih2} that a $\mathcal{W}_{1}^{\hspace{0.01in}0}$-manifold is conformally equivalent to a K\"ahler manifold with Norden metric by the transformation (\ref{conf}). 
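The claim that $\psi_1(S)$ is curvature-like for symmetric $S$ can be spot-checked numerically on a basis. The sketch below (an assumed random symmetric $S$, not from the paper) verifies the antisymmetries and the first Bianchi identity of (\ref{L}) componentwise:

```python
import numpy as np

rng = np.random.default_rng(1)
g = np.diag([1.0, 1.0, -1.0, -1.0])           # a neutral metric on basis vectors
S = rng.standard_normal((4, 4)); S = S + S.T  # arbitrary symmetric (0,2)-tensor

# psi_1(S)(x,y,z,u) = g(y,z)S(x,u) - g(x,z)S(y,u) + g(x,u)S(y,z) - g(y,u)S(x,z)
# with index convention (x,y,z,u) = (i,j,k,l) on the basis
L = (np.einsum('jk,il->ijkl', g, S) - np.einsum('ik,jl->ijkl', g, S)
     + np.einsum('il,jk->ijkl', g, S) - np.einsum('jl,ik->ijkl', g, S))

assert np.allclose(L, -L.transpose(1, 0, 2, 3))   # antisymmetry in (x, y)
assert np.allclose(L, -L.transpose(0, 1, 3, 2))   # antisymmetry in (z, u)
# first Bianchi identity: L(x,y,z,u) + L(y,z,x,u) + L(z,x,y,u) = 0
assert np.allclose(L + L.transpose(1, 2, 0, 3) + L.transpose(2, 0, 1, 3), 0)
```

The cyclic sum cancels pairwise using only the symmetry of $g$ and of $S$, which is why the curvature-like property fails in general for non-symmetric $S$.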
It is known that on a pseudo-Riemannian manifold $M$ ($\dim M=2n \geq 4$) the conformal invariant Weyl tensor has the form \begin{equation}\label{Weyl} W(R)=R-\frac{1}{2(n-1)}\big \{\psi_{1}(\rho)-\frac{\tau}{2n-1}\pi_{1}\big \}. \end{equation} Let $L$ be a K\"ahler curvature-like tensor on an almost complex manifold with Norden metric $(M,J,g)$, $\dim M=2n\geq 6$. Then the Bochner tensor $B(L)$ for $L$ is defined by \cite{Gan-Gri-Mih2}: \begin{equation}\label{Bochner} \begin{array}{l} B(L)= L - \frac{1}{2(n-2)}\big\{\psi_{1}-\psi_{2}\big\}\big(\rho(L)\big)\medskip\\ \phantom{B(L)}+ \frac{1}{4(n-1)(n-2)}\big\{\tau(L)\big(\pi_{1}-\pi_{2}\big) + \tau^{\ast}(L)\pi_{3}\big\}. \end{array} \end{equation} \section{Complex Connections on $\mathcal{W}_1$-manifolds} \begin{definition}[\cite{Ko-No}]\label{def-complex} \emph{A linear connection $\nabla^{\prime}$ on an almost complex manifold $(M,J)$ is said to be} almost complex \emph{if $\nabla^{\prime}J=0$.} \end{definition} We introduce an eight-parametric family of complex connections in the following \begin{theorem} On a $\mathcal{W}_1$-manifold with Norden metric there exists an eight-parametric family of complex connections $\nabla^{\prime}$ defined by \begin{equation}\label{2-1} \begin{array}{l} \nabla_x^{\prime}y = \nabla_x y + Q(x,y), \end{array} \end{equation} where the deformation tensor $Q(x,y)$ is given by \begin{equation}\label{2-2} \begin{array}{l} Q(x,y)= \frac{1}{2n}\left[\theta(Jy)x-g(x,y)J\Omega\right]\medskip\\ \hspace{0.065in}\phantom{Q(x,y)}+ \frac{1}{n}\left\{\lambda_1\theta(x)y+\lambda_2\theta(x)Jy +\lambda_3\theta(Jx)y +\lambda_4\theta(Jx)Jy\right.\medskip\\ \phantom{Q(x,y)}\hspace{0.06in}+\lambda_5\left[\theta(y)x-\theta(Jy)Jx\right] + \lambda_6\left[\theta(y)Jx+\theta(Jy)x\right]\medskip\\ \left.\hspace{0.06in}\phantom{Q(x,y)}+\lambda_7\left[g(x,y)\Omega-g(x,Jy)J\Omega\right]+\lambda_8\left[g(x,Jy)\Omega+g(x,y)J\Omega\right]\right\}, \end{array} \end{equation} $\lambda_i\in \mathbb{R}$, 
$i=1,2,...,8$. \end{theorem} \begin{proof} By (\ref{w1}), (\ref{2-1}) and (\ref{2-2}) we verify that $(\nabla^{\prime}_xJ)y = \nabla^{\prime}_xJy - J\nabla^{\prime}_xy=0$, and hence the connections $\nabla^{\prime}$ are complex for any $\lambda_i\in \mathbb{R}$, $i=1,2,...,8$. \end{proof} Let us remark that the two-parametric family of complex connections obtained for $\lambda_1=\lambda_4$, $\lambda_3=-\lambda_2$, $\lambda_5=\lambda_7=0$, $\lambda_8=-\lambda_6=\frac{1}{4}$, is studied in \cite{Teo3}. Let $T^{\prime}$ be the torsion tensor of $\nabla^{\prime}$, i.e. $T^{\prime}(x,y)=\nabla^{\prime}_xy - \nabla^{\prime}_yx - [x,y]$. Taking into account that the Levi-Civita connection $\nabla$ is symmetric, we have $T^{\prime}(x,y)=Q(x,y)-Q(y,x)$. Then by (\ref{2-2}) we obtain \begin{equation}\label{T} \begin{array}{l} T^{\prime}(x,y)=\frac{1}{n}\left\{\left(\lambda_1-\lambda_5\right)\left[\theta(x)y-\theta(y)x\right] +\left(\lambda_2-\lambda_6\right)\left[\theta(x)Jy-\theta(y)Jx\right]\right.\medskip\\ \qquad\quad\hspace{0.02in}\left.+\left(\lambda_3-\lambda_6-\frac{1}{2}\right)\left[\theta(Jx)y-\theta(Jy)x\right] +\left(\lambda_4+\lambda_5\right)\left[\theta(Jx)Jy-\theta(Jy)Jx\right] \right\}. \end{array} \end{equation} It is easy to verify the following \begin{equation} \underset{x,y,z}{\mathfrak{S}}T^{\prime}(x,y,z)=\underset{x,y,z}{\mathfrak{S}}T^{\prime}(Jx,Jy,z)=\underset{x,y,z}{\mathfrak{S}}T^{\prime}(x,y,Jz)=0, \end{equation} where $T^{\prime}(x,y,z)=g(T^{\prime}(x,y),z)$ and $\mathfrak{S}$ is the cyclic sum over the arguments $x,y,z$. Next, we obtain necessary and sufficient conditions for the complex connections $\nabla^{\prime}$ to be symmetric (i.e. $T^{\prime}=0$). \begin{theorem} The complex connections $\nabla^{\prime}$ defined by (\ref{2-1}) and (\ref{2-2}) are symmetric on a $\mathcal{W}_1$-manifold with Norden metric if and only if $\lambda_1=-\lambda_4=\lambda_5$, $\lambda_2=\lambda_3-\frac{1}{2}=\lambda_6$.
\end{theorem} Then, by putting $\lambda_1=-\lambda_4=\lambda_5=\mu_1$, $\lambda_2=\lambda_6=\lambda_3-\frac{1}{2}=\mu_2$, $\lambda_7=\mu_3$, $\lambda_8=\mu_4$ in (\ref{2-2}) we obtain a four-parametric family of complex symmetric connections $\nabla^{\prime\prime}$ on a $\mathcal{W}_1$-manifold which are defined by \begin{equation}\label{sym} \begin{array}{l} \nabla^{\prime\prime}_x y= \nabla_x y + \frac{1}{2n}\left[\theta(Jx)y+\theta(Jy)x-g(x,y)J\Omega\right]\medskip\\ \phantom{\nabla^{\prime}_x y}+\frac{1}{n}\left\{\mu_1\left[\theta(x)y+\theta(y)x-\theta(Jx)Jy-\theta(Jy)Jx\right]\right.\medskip\\ \phantom{\nabla^{\prime}_x y}+\mu_2\left[\theta(Jx)y+\theta(Jy)x+\theta(x)Jy+\theta(y)Jx\right]\medskip\\ \phantom{\nabla^{\prime}_x y}+\left.\mu_3\left[g(x,y)\Omega-g(x,Jy)J\Omega\right]+\mu_4\left[g(x,Jy)\Omega+g(x,y)J\Omega\right]\right\}. \end{array} \end{equation} The well-known Yano connection \cite{Ya1,Ya2} on a $\mathcal{W}_1$-manifold is obtained from (\ref{sym}) for $\mu_1=\mu_3=0$, $\mu_4=-\mu_2=\frac{1}{4}$. 
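For the reader's convenience, substituting these values of the parameters into (\ref{sym}) and collecting the terms yields the explicit form of the Yano connection (denoted here by $\nabla^{Y}$):
\[
\nabla^{Y}_x y=\nabla_x y+\frac{1}{4n}\left[\theta(Jx)y+\theta(Jy)x-\theta(x)Jy-\theta(y)Jx+g(x,Jy)\Omega-g(x,y)J\Omega\right].
\]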
\begin{definition}[\cite{Gan-Mih}]\label{def-nat} \emph{A linear connection $\nabla^{\prime}$ on an almost complex manifold with Norden metric $(M,J,g)$ is said to be} natural \emph{if $\nabla^{\prime}J=\nabla^{\prime}g=0$ (or equivalently, $\nabla^{\prime}g=\nabla^{\prime}\widetilde{g}=0$).} \end{definition} From (\ref{2-1}) and (\ref{2-2}) we compute the covariant derivatives of $g$ and $\widetilde{g}$ with respect to the complex connections $\nabla^{\prime}$ as follows \begin{equation}\label{2-5} \begin{array}{l} \left(\nabla^{\prime}_x g\right)(y,z)=-Q(x,y,z)-Q(x,z,y)\medskip\\ =-\frac{1}{n}\left\{ 2\left[\lambda_1\theta(x)g(y,z)+\lambda_2\theta(x)g(y,Jz)+\lambda_3\theta(Jx)g(y,z)\right.\right.\medskip \\ \left.+\lambda_4\theta(Jx)g(y,Jz)\right]+(\lambda_5+\lambda_7)\left[\theta(y)g(x,z)+\theta(z)g(x,y)\right.\medskip\\ \left.-\theta(Jy)g(x,Jz)-\theta(Jz)g(x,Jy)\right]+(\lambda_6+\lambda_8)\left[\theta(y)g(x,Jz)\right.\medskip\\ \left.+\theta(z)g(x,Jy)+\theta(Jy)g(x,z)+\theta(Jz)g(x,y)\right]\left.\right\},\medskip\\ \left(\nabla^{\prime}_x \widetilde{g}\right)(y,z)=-Q(x,y,Jz)-Q(x,Jz,y). \end{array} \end{equation} Then, by (\ref{2-5}) we get the following \begin{theorem} The complex connections $\nabla^{\prime}$ defined by (\ref{2-1}) and (\ref{2-2}) are natural on a $\mathcal{W}_1$-manifold if and only if $\lambda_1=\lambda_2=\lambda_3=\lambda_4=0$, $\lambda_7=-\lambda_5$, $\lambda_8=-\lambda_6$. \end{theorem} If we put $\lambda_8=-\lambda_6=s$, $\lambda_7=-\lambda_5=t$, $\lambda_i=0$, $i=1,2,3,4$, in (\ref{2-2}), we obtain a two-parametric family of natural connections $\nabla^{\prime\prime\prime}$ defined by \begin{equation}\label{nabla-n} \begin{array}{l} \nabla^{\prime\prime\prime}_x y = \nabla_x y +\frac{1-2s}{2n}\left[\theta(Jy)x-g(x,y)J\Omega\right] +\frac{1}{n}\left\{s\left[g(x,Jy)\Omega - \theta(y)Jx\right]\right.\medskip\\ \phantom{\nabla^{\prime}_x y}\left.+t\left[g(x,y)\Omega - g(x,Jy)J\Omega - \theta(y)x+\theta(Jy)Jx\right]\right\}. 
\end{array} \end{equation} The well-known canonical connection \cite{Gan-Mih} (or $B$-connection \cite{Gan-Gri-Mih2}) on a $\mathcal{W}_1$-manifold with Norden metric is obtained from (\ref{nabla-n}) for $s=\frac{1}{4}$, $t=0$. We give a summary of the obtained results in the following table \begin{center} \begin{tabular}{|l|c|c|} \hline Connection type & Symbol & Parameters \\ \hline $\begin{array}{l}\text{Complex}\end{array}$ & $\nabla^{\prime}$ & $\lambda_i\in\mathbb{R}$, $i=1,2,...,8$. \\ \hline $\begin{array}{l}\text{Complex}\smallskip\\ \text{symmetric}\end{array}$ & $\nabla^{\prime\prime}$ & $\begin{array}{c}\mu_i,\hspace{0.03in} i=1,2,3,4, \smallskip\\ \mu_1=\lambda_1=-\lambda_4=\lambda_5,\hspace{0.03in} \mu_2=\lambda_2=\lambda_6=\lambda_3-\frac{1}{2},\smallskip\\ \mu_3=\lambda_7,\hspace{0.03in} \mu_4=\lambda_8\end{array}$ \\ \hline $\begin{array}{l}\text{Natural}\end{array}$ & $\nabla^{\prime\prime\prime}$ & $\begin{array}{c}s,t,\smallskip\\ s=\lambda_8=-\lambda_6, \hspace{0.03in} t = \lambda_7=-\lambda_5,\smallskip\\ \lambda_i=0,\hspace{0.03in} i =1,2,3,4.\end{array}$ \\ \hline \end{tabular} \end{center} \medskip Our next aim is to study the curvature properties of the complex connections $\nabla^{\prime}$. Let us first consider the natural connection $\nabla^0$ obtained from (\ref{nabla-n}) for $s=t=0$, i.e. \begin{equation}\label{nabla-0} \nabla^{0}_x y = \nabla_x y + \frac{1}{2n}\left[\theta(Jy)x - g(x,y)J\Omega\right]. \end{equation} This connection is a semi-symmetric metric connection, i.e. a connection of the form $\nabla_x y + \omega(y)x - g(x,y)U$, where $\omega$ is a 1-form and $U$ is the corresponding vector of $\omega$, i.e. $\omega(x)=g(x,U)$. Semi-symmetric metric connections are introduced in \cite{Ha} and studied in \cite{Im,Ya3}. The form of the curvature tensor of an arbitrary connection of this type is obtained in \cite{Ya3}. The geometry of such connections on almost complex manifolds with Norden metric is considered in \cite{Si}.
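In the notation above, the connection (\ref{nabla-0}) corresponds to the choices $\omega=\frac{1}{2n}(\theta\circ J)$ and $U=\frac{1}{2n}J\Omega$; indeed,
\[
g\Big(x,\tfrac{1}{2n}J\Omega\Big)=\tfrac{1}{2n}\,g(Jx,\Omega)=\tfrac{1}{2n}\,\theta(Jx)=\omega(x),
\]
where we have used the symmetry $g(Jx,y)=g(x,Jy)$ of the associated metric $\widetilde{g}$.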
Let us denote by $R^0$ the curvature tensor of $\nabla^0$, i.e. $R^0(x,y)z=\nabla^0_x\nabla^0_y z - \nabla^0_y\nabla^0_x z - \nabla^0_{[x,y]}z$. The corresponding tensor of type (0,4) is defined by $R^0(x,y,z,u)=g(R^0(x,y)z,u)$. According to \cite{Ya3}, the following is valid. \begin{proposition}\label{2} On a $\mathcal{W}_1$-manifold with closed 1-form $\theta^{\ast}$ the K\"ahler curvature tensor $R^0$ of $\nabla^0$ defined by (\ref{nabla-0}) has the form \begin{equation}\label{R0} R^0=R - \frac{1}{2n}\psi_1(P), \end{equation} where \begin{equation}\label{R01} P(x,y)=\left(\nabla_x \theta\right)Jy + \frac{1}{2n}\theta(x)\theta(y)+\frac{\theta(\Omega)}{4n}g(x,y)+\frac{\theta(J\Omega)}{2n}g(x,Jy). \end{equation} \end{proposition} Since $W(\psi_1(S))=0$ for any symmetric (0,2)-tensor $S$, from (\ref{R0}) and (\ref{R01}) we conclude that \begin{equation}\label{WW} W(R^0)=W(R). \end{equation} Thus, the last equality implies \begin{proposition}\label{thW} Let $(M,J,g)$ be a $\mathcal{W}_1$-manifold with closed 1-form $\theta^{\ast}$, and $\nabla^0$ be the natural connection defined by (\ref{nabla-0}). Then, the Weyl tensor is invariant under the transformation $\nabla \rightarrow \nabla^0$. \end{proposition} Further in this section we study the curvature properties of the complex connections $\nabla^{\prime}$ defined by (\ref{2-1}) and (\ref{2-2}). Let us denote by $R^{\prime}$ the curvature tensors corresponding to these connections. If a linear connection $\nabla^\prime$ and the Levi-Civita connection $\nabla$ are related by an equation of the form (\ref{2-1}), then, because of $\nabla g=0$, their curvature tensors of type (0,4) satisfy \begin{equation}\label{33} \begin{array}{l} g(R^\prime(x,y)z,u) = R(x,y,z,u) + (\nabla_x Q)(y,z,u) - (\nabla_y Q)(x,z,u)\medskip\\ \phantom{g(R^\prime(x,y)z,u)} + Q(x,Q(y,z),u) - Q(y,Q(x,z),u), \end{array} \end{equation} where $Q(x,y,z) = g(Q(x,y),z)$.
Then, by (\ref{2-1}), (\ref{2-2}), (\ref{R0}), (\ref{R01}), (\ref{33}) we obtain the relation between $R^{\prime}$ and $R^0$ as follows \begin{equation}\label{h1} \begin{array}{l} R^{\prime}(x,y,z,u) = R^{0}(x,y,z,u) +g(y,z)A_1(x,u) - g(x,z)A_1(y,u)\medskip\\ + g(x,u)A_2(y,z) - g(y,u)A_2(x,z) -g(y,Jz)A_1(x,Ju)\medskip\\ + g(x,Jz)A_1(y,Ju)-g(x,Ju)A_2(y,Jz)+g(y,Ju)A_2(x,Jz)\medskip\\ +\left[\frac{\lambda_5\lambda_7 - \lambda_6\lambda_8}{n^2}\theta(\Omega) + \frac{\lambda_7 - \lambda_5 + 2(\lambda_5\lambda_8+\lambda_6\lambda_7)}{2n^2}\theta(J\Omega) \right]\{\pi_1-\pi_2\}(x,y,z,u)\medskip\\ -\left[\frac{\lambda_5\lambda_8 + \lambda_6\lambda_7}{n^2}\theta(\Omega) - \frac{\lambda_6 - \lambda_8 + 2(\lambda_5\lambda_7-\lambda_6\lambda_8)}{2n^2}\theta(J\Omega)\right]\pi_3(x,y,z,u), \end{array} \end{equation} where \begin{equation}\label{h2} \begin{array}{l} A_1(x,y) = \frac{\lambda_7}{n}\left\{\left(\nabla_x\theta\right)y+\frac{\lambda_7}{n}[\theta(x)\theta(y)-\theta(Jx)\theta(Jy)]\right\}\medskip\\ \phantom{A_1(x,y)}+\frac{\lambda_8}{n}\left\{\left(\nabla_x\theta\right)Jy + \frac{1-2\lambda_8}{2n}[\theta(x)\theta(y)-\theta(Jx)\theta(Jy)]\right\}\medskip\\ \phantom{A_1(x,y)}+\frac{\lambda_7(4\lambda_8-1)}{2n^2}[\theta(x)\theta(Jy)+\theta(Jx)\theta(y)],\bigskip\\ A_2(x,y)=-\frac{\lambda_5}{n}\left\{\left(\nabla_x\theta\right)y - \frac{\lambda_5}{n}[\theta(x)\theta(y)-\theta(Jx)\theta(Jy)]\right\}\medskip\\ \phantom{A_2(x,y)}-\frac{\lambda_6}{n}\left\{\left(\nabla_x\theta\right)Jy+\frac{1+2\lambda_6}{2n}[\theta(x)\theta(y)-\theta(Jx)\theta(Jy)]\right\}\medskip\\ \phantom{A_2(x,y)}+\frac{\lambda_5(4\lambda_6+1)}{2n^2}[\theta(x)\theta(Jy)+\theta(Jx)\theta(y)]. \end{array} \end{equation} We are interested in necessary and sufficient conditions for $R^{\prime}$ to be a K\"ahler curvature-like tensor, i.e. to satisfy (\ref{L}) and (\ref{Ka}). From (\ref{psi}), (\ref{h1}) and (\ref{h2}) it follows that $R^{\prime}$ is K\"ahlerian if and only if $A_1(x,y)=A_2(x,y)$. 
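Indeed, comparing in (\ref{h2}) the coefficients of $(\nabla_x\theta)y$ and $(\nabla_x\theta)Jy$, the condition $A_1=A_2$ forces
\[
\lambda_7=-\lambda_5,\qquad \lambda_8=-\lambda_6,
\]
and one checks directly that with these values the remaining terms in (\ref{h2}), quadratic in $\theta$, coincide identically; in particular $\frac{\lambda_7(4\lambda_8-1)}{2n^2}=\frac{\lambda_5(4\lambda_6+1)}{2n^2}$.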
Hence we obtain \begin{theorem}\label{1} Let $(M,J,g)$ be a conformal K\"ahler manifold with Norden metric, and $\nabla^{\prime}$ be the complex connection defined by (\ref{2-1}) and (\ref{2-2}). Then, $R^{\prime}$ is a K\"ahler curvature-like tensor if and only if $\lambda_7=-\lambda_5$ and $\lambda_8=-\lambda_6$. In this case, from (\ref{2-1}) and (\ref{2-2}) we obtain a six-parametric family of complex connections $\nabla^{\prime}$ whose curvature tensors $R^{\prime}$ have the form \begin{equation} \begin{array}{l}\label{Rpr} R^{\prime} = R^0 + \frac{\lambda_7}{n}\left\{\psi_1 - \psi_2\right\}(S_1) + \frac{\lambda_8}{n}\left\{\psi_1-\psi_2\right\}(S_2) \medskip\\ \phantom{R^{\prime}}+ \frac{\lambda_7(4\lambda_8-1)}{2n^2}\left\{\psi_1-\psi_2\right\}(S_3) + \frac{\lambda_7(1-2\lambda_8)\theta(J\Omega)}{n^2}\left\{\pi_1-\pi_2\right\}\medskip\\ \phantom{R^{\prime}}+\frac{2\lambda_7\lambda_8\theta(\Omega)}{n^2}\pi_3, \end{array} \end{equation} where $R^0$ is given by (\ref{R0}), (\ref{R01}), and \begin{equation}\label{111} \begin{array}{l} S_1(x,y) = \left(\nabla_x\theta\right)y + \frac{\lambda_7}{n}[\theta(x)\theta(y)-\theta(Jx)\theta(Jy)] - \frac{\lambda_7\theta(\Omega)}{2n}g(x,y) \medskip\\ \phantom{S_1(x,y)} + \frac{\lambda_7\theta(J\Omega)}{2n}g(x,Jy),\medskip\\ S_2(x,y) = \left(\nabla_x\theta\right)Jy + \frac{1-2\lambda_8}{2n}[\theta(x)\theta(y)-\theta(Jx)\theta(Jy)]\medskip\\ \phantom{S_2(x,y)}+\frac{\lambda_8\theta(\Omega)}{2n}g(x,y) + \frac{(1-\lambda_8)\theta(J\Omega)}{2n}g(x,Jy),\medskip\\ S_3(x,y) = \theta(x)\theta(Jy) + \theta(Jx)\theta(y). \end{array}\end{equation} \end{theorem} From (\ref{Rpr}), (\ref{111}) and Theorem \ref{1} we get \begin{corollary}\label{cor} Let $(M,J,g)$ be a conformal K\"ahler manifold with Norden metric and $\nabla^{\prime}$ be the eight-parametric family of complex connections defined by (\ref{2-1}) and (\ref{2-2}). Then $R^{\prime}=R^0$ if and only if $\lambda_i=0$ for $i=5,6,7,8$. 
\end{corollary} Let us remark that by putting $\lambda_i=0$ for $i=1,2,5,6,7,8$ in (\ref{2-2}) we obtain a two-parametric family of complex connections whose K\"ahler curvature tensors coincide with $R^0$ on a $\mathcal{W}_1$-manifold with closed 1-form $\theta^{\ast}$. Proposition \ref{thW} and Corollary \ref{cor} imply \begin{corollary} On a conformal K\"ahler manifold with Norden metric the Weyl tensor is invariant under the transformation of the Levi-Civita connection into any of the complex connections $\nabla^{\prime}$ defined by (\ref{2-1}) and (\ref{2-2}) for $\lambda_i=0$, $i=5,6,7,8$. \end{corollary} Since $B\left(\{\psi_1-\psi_2\}(S)\right)=0$ for any tensor $S$ which is symmetric and hybrid with respect to $J$, from Theorem \ref{1} and (\ref{psi}) it follows \begin{equation}\label{BR0} B(R^{\prime}) = B(R^0). \end{equation} In this way we have proved the following \begin{theorem}\label{thB} Let $(M,J,g)$ be a conformal K\"ahler manifold with Norden metric, $R^{\prime}$ be the curvature tensor of $\nabla^{\prime}$ defined by (\ref{2-1}) and (\ref{2-2}) for $\lambda_7=-\lambda_5$, $\lambda_8=-\lambda_6$, and $R^0$ be the curvature tensor of $\nabla^0$ given by (\ref{nabla-0}). Then the Bochner tensor is invariant under the transformations $\nabla^0\rightarrow\nabla^{\prime}$. \end{theorem} \section{Conformal transformations of complex connections} In this section we study usual conformal transformations of the complex connections $\nabla^{\prime}$ defined in the previous section. Let $(M,J,g)$ and $(M,J,\bar{g})$ be conformally equivalent almost complex manifolds with Norden metric by the transformation (\ref{conf}).
It is known that the Levi-Civita connections $\nabla$ and $\overline{\nabla}$ of $g$ and $\overline{g}$, respectively, are related as follows \begin{equation}\label{con-trans1} \overline{\nabla}_{x}y = \nabla_{x}y + \sigma(x)y + \sigma(y)x - g(x,y)\Theta, \end{equation} where $\sigma(x)=du(x)$ and $\Theta=\textrm{grad}\hspace{0.02in} u$, i.e. $\sigma(x)=g(x,\Theta)$. Let $\overline{\theta}$ be the Lie 1-form of $(M,J,\overline{g})$. Then by (\ref{1-3}) and (\ref{con-trans1}) we obtain \begin{equation}\label{theta-bar} \overline{\theta} = \theta + 2n\big(\sigma\circ J\big),\qquad\quad \overline{\Omega}=e^{-2u}\big(\Omega + 2nJ\Theta\big). \end{equation} The following is valid. \begin{lemma}\label{lemma1} Let $(M,J,g)$ be an almost complex manifold with Norden metric and $(M,J,\overline{g})$ be its conformally equivalent manifold by the transformation (\ref{conf}). Then, the curvature tensors $R$ and $\overline{R}$ of $\nabla$ and $\overline{\nabla}$, respectively, are related as follows \begin{equation}\label{Rbar} \overline{R}=e^{2u}\big\{R-\psi_{1}\big(V\big) - \pi_{1}\sigma\big(\Theta\big)\big\}, \end{equation} where $V(x,y)=\big(\nabla_{x}\sigma\big)y - \sigma(x)\sigma(y)$. \end{lemma} Let us first study the conformal group of the natural connection $\nabla^0$ given by (\ref{nabla-0}). Equalities (\ref{nabla-0}) and (\ref{con-trans1}) imply that its conformal group is defined analytically by \begin{equation}\label{nabla0-bar} \overline{\nabla}^{\hspace{0.02in}0}_{x}y = \nabla^{0}_{x}y + \sigma(x)y. \end{equation} It is known that if two linear connections are related by an equation of the form (\ref{nabla0-bar}), where $\sigma$ is a 1-form, then the curvature tensors of these connections coincide if and only if $\sigma$ is closed. Hence, the following is valid. \begin{theorem}\label{thR0} Let $(M,J,g)$ be a $\mathcal{W}_1$-manifold with closed 1-form $\theta^{\ast}$. Then the curvature tensor $R^0$ of $\nabla^0$ is conformally invariant, i.e.
\begin{equation} \overline{R}^0 = e^{2u}R^0. \end{equation} \end{theorem} Further in this section let $(M,J,g)$ be a conformal K\"ahler manifold with Norden metric. Then $(M,J,\overline{g})$ is a K\"ahler manifold and thus $\overline{\theta}=0$. From (\ref{theta-bar}) it follows $\sigma =\frac{1}{2n}(\theta \circ J)$. Then, from (\ref{2-1}) and (\ref{2-2}) we get $\overline{\nabla}^{\hspace{0.02in}\prime}=\overline{\nabla}$ and hence $\overline{R}^{\prime}=\overline{R}$ for all $\lambda_i\in\mathbb{R}$, $i=1,2,...,8$. In particular, $\overline{R}^\prime=\overline{R}^0$. Then, Theorem \ref{thB} and (\ref{BR0}) imply \begin{theorem} On a conformal K\"ahler manifold with Norden metric the Bochner curvature tensor of the complex connections $\nabla^{\prime}$ defined by (\ref{2-1}) and (\ref{2-2}) with the conditions $\lambda_7=-\lambda_5$ and $\lambda_8=-\lambda_6$ is conformally invariant under the transformation (\ref{conf}), i.e. \begin{equation} B(\overline{R}^{\prime})=e^{2u}B(R^{\prime}). \end{equation} \end{theorem} Let us remark that the conformal invariance of the Bochner tensor of the canonical connection on a conformal K\"ahler manifold with Norden metric is proved in \cite{Gan-Gri-Mih2}. From Theorem \ref{thR0} and Corollary \ref{cor} we obtain \begin{corollary} Let $(M,J,g)$ be a conformal K\"ahler manifold with Norden metric and $\nabla^{\prime}$ be a complex connection defined by (\ref{2-1}) and (\ref{2-2}). If $\lambda_i=0$ for $i=5,6,7,8$, then the curvature tensor of $\nabla^{\prime}$ is conformally invariant under the transformation (\ref{conf}). \end{corollary} \section{An Example} Let $G$ be a real connected four-dimensional Lie group, and $\mathfrak{g}$ be its corresponding Lie algebra. If $\{e_1,e_2,e_3,e_4\}$ is a basis of $\mathfrak{g}$, we equip $G$ with a left-invariant almost complex structure $J$ by \begin{equation}\label{J1} Je_1 = e_3,\qquad Je_2 = e_4,\qquad Je_3=-e_1,\qquad Je_4=-e_2.
\end{equation} We also define a left-invariant pseudo-Riemannian metric $g$ on $G$ by \begin{equation}\label{g1} \begin{array}{l} g(e_1,e_1)=g(e_2,e_2)=-g(e_3,e_3)=-g(e_4,e_4)=1,\medskip\\ g(e_i,e_j)=0,\quad i\neq j,\quad i,j=1,2,3,4. \end{array} \end{equation} Then, because of (\ref{11}), (\ref{J1}) and (\ref{g1}), $(G,J,g)$ is an almost complex manifold with Norden metric. Further, let the Lie algebra $\mathfrak{g}$ be defined by the following commutator relations \begin{equation}\label{lie1} \begin{array}{l} [e_1,e_2]=[e_3,e_4]=0,\medskip\\ \lbrack e_1,e_4]=[e_2,e_3]=\lambda (e_1 + e_4) + \mu (e_2 - e_3),\medskip\\ \lbrack e_1,e_3]=-[e_2,e_4] = \mu (e_1 + e_4) - \lambda (e_2 - e_3), \end{array} \end{equation} where $\lambda,\mu\in\mathbb{R}$. The well-known Koszul formula for the Levi-Civita connection of $g$ on $G$, i.e. the equality \begin{equation}\label{K} 2g(\nabla_{e_i} e_j,e_k) = g([e_i,e_j],e_k) + g([e_k,e_i],e_j)+g([e_k,e_j],e_i), \end{equation} and (\ref{g1}) imply the following essential non-zero components of the Levi-Civita connection: \begin{equation}\label{nabla1} \begin{array}{ll} \nabla_{e_1}e_1 = \nabla_{e_2}e_2 = \mu e_3 + \lambda e_4,\quad & \nabla_{e_3}e_3 = \nabla_{e_4}e_4 = -\lambda e_1 + \mu e_2,\medskip\\ \nabla_{e_1}e_3 = \mu (e_1 + e_4),\quad & \nabla_{e_1}e_4 = \lambda e_1 - \mu e_3,\medskip\\ \nabla_{e_2}e_3 = \mu e_1 + \lambda e_4,\quad & \nabla_{e_2}e_4 = \lambda (e_2 - e_3). \end{array} \end{equation} Then, by (\ref{F}), (\ref{Fp}) and (\ref{nabla1}) we compute the following essential non-zero components $F_{ijk}=F(e_i,e_j,e_k)$ of $F$: \begin{equation}\label{Fijk} \begin{array}{l} F_{111} = F_{422} = 2\mu,\quad F_{222}=-F_{311} = 2\lambda,\medskip\\ F_{112} = -F_{214} = F_{314} = -F_{412} = \lambda,\quad F_{212}=-F_{114}=F_{312}=-F_{414}=\mu.
\end{array} \end{equation} Having in mind (\ref{1-3}) and (\ref{Fijk}), the components $\theta_i=\theta(e_i)$ and $\theta_i^\ast=\theta^\ast(e_i)$ of the 1-forms $\theta$ and $\theta^\ast$, respectively, are: \begin{equation}\label{theta} \theta_2=\theta_3=\theta^\ast_1=-\theta^\ast_4 = 4\lambda,\qquad\theta_1=-\theta_4=-\theta^\ast_2=-\theta^\ast_3=4\mu. \end{equation} By (\ref{1-3}) and (\ref{theta}) we compute \begin{equation}\label{22} \begin{array}{c} \Omega = 4\mu(e_1+e_4) + 4\lambda(e_2 -e_3),\qquad J\Omega = 4\lambda(e_1 + e_4) - 4\mu(e_2 - e_3),\medskip\\ \theta (\Omega) = \theta(J\Omega) = 0. \end{array} \end{equation} By the characteristic condition (\ref{w1}) and equalities (\ref{Fijk}), (\ref{theta}) we prove that the manifold $(G,J,g)$ with Lie algebra $\mathfrak{g}$ defined by (\ref{lie1}) belongs to the basic class $\mathcal{W}_1$. Moreover, by (\ref{nabla1}) and (\ref{theta}) it follows that the conditions (\ref{cK}) hold and thus \begin{proposition} The manifold $(G,J,g)$ defined by (\ref{J1}), (\ref{g1}) and (\ref{lie1}) is a conformal K\"ahler manifold with Norden metric. \end{proposition} According to (\ref{nabla-0}), (\ref{g1}), (\ref{nabla1}) and (\ref{theta}) the components of the natural connection $\nabla^0$ are given by \begin{equation}\label{nabla-01} \begin{array}{ll} \nabla_{e_1}^0 e_1 = - \nabla_{e_4}^0 e_1 = \mu e_2,\qquad & \nabla_{e_2}^0 e_1 = \nabla_{e_3}^0 e_1 = \lambda e_2,\medskip\\ \nabla_{e_1}^0 e_2 = -\nabla_{e_4}^0 e_2 = -\mu e_1,\qquad & \nabla_{e_2}^0 e_2 = \nabla_{e_3}^0 e_2 = - \lambda e_1,\medskip\\ \nabla_{e_1}^0 e_3 =-\nabla_{e_4}^0 e_3 = \mu e_4,\qquad & \nabla_{e_2}^0 e_3 = \nabla_{e_3}^0 e_3 = \lambda e_4,\medskip\\ \nabla_{e_1}^0 e_4 = -\nabla_{e_4}^0 e_4 = -\mu e_3,\qquad & \nabla_{e_2}^0 e_4 = \nabla_{e_3}^0 e_4 = - \lambda e_3. \end{array} \end{equation} By (\ref{nabla-01}) we obtain $R^0=0$.
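For example, since $[e_1,e_2]=0$, by (\ref{nabla-01}) one representative component of $R^0$ is
\[
R^0(e_1,e_2)e_1=\nabla^0_{e_1}\nabla^0_{e_2}e_1-\nabla^0_{e_2}\nabla^0_{e_1}e_1
=\nabla^0_{e_1}(\lambda e_2)-\nabla^0_{e_2}(\mu e_2)=-\lambda\mu\, e_1+\lambda\mu\, e_1=0,
\]
and all the other components vanish in the same way.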
Then, by (\ref{R0}) and (\ref{R01}) the curvature tensor $R$ of $(G,J,g)$ has the form \begin{equation} R = \frac{1}{4}\psi_1(A),\qquad A(x,y) = (\nabla_x\theta)Jy + \frac{1}{4}\theta(x)\theta(y). \end{equation} Moreover, having in mind (\ref{111}), (\ref{nabla1}), (\ref{theta}) and (\ref{22}), we compute $S_1=S_2=S_3=0$. Hence, for the tensors $R^\prime$ of the complex connections $\nabla^\prime$ given by (\ref{Rpr}), we have $R^\prime = 0$. \begin{proposition} The complex connections $\nabla^\prime$ defined by (\ref{2-1}) and (\ref{2-2}) are flat on $(G,J,g)$. \end{proposition}
\section{Introduction} It is a common experience to most skyrmion practitioners that the mass tends to come out too high. Even if one uses rather sophisticated chiral lagrangians, incorporating vector and axial-vector mesons (e.g. \cite{vector}), one finds the predicted nucleon mass to lie nearly 50\% too high and the situation becomes much worse in SU(3) extensions. Most other observables, in contrast to that, seem to be predicted with a much better accuracy. This has led to the belief that the mass may have a somewhat special status. In this talk, I would like to try to convince you that this is not the case and that the problem with the mass has a very natural explanation once one tries to perform the semi-classical (which is also the 1/$N_c\ $) expansion in a systematic way. This, in fact, is not usually done in the context of the skyrmion. Following ref.\cite{anw}, one identifies collective coordinates and quantization is effected only at the level of these coordinates. If one does this for the mass, one finds besides the classical O($N_c\ $) contribution an O(1/$N_c\ $) one. This procedure gives no contribution of order $N_c^0$. This contribution is precisely what one calls the "Casimir energy" (CE) and its evaluation requires that one deals with the non-collective coordinates. \vfill \noindent{IPNO/TH 92-94} \eject What one has to do in principle can be inferred from the semi-classical soliton theory which was developed a long time ago (see e.g. \cite{coleman}). In practice, however, one seems to face the difficulty that in early work, solitons were taken to live in 1+1 dimensions with a renormalizable lagrangian, while the skyrmion lives in 3+1 dimensions and the lagrangian is not renormalizable. A safe approach to this problem is to use a lagrangian involving only chiral field degrees of freedom.
This is after all what Skyrme originally did and we nowadays have a theory, Chiral perturbation theory (ChPT)\cite{w76}\cite{gl84}, which precisely gives us such a lagrangian and furthermore tells us how to make sense out of loops. Quite remarkably, recent determinations of the values of the parameters appearing in the lagrangian of ChPT turn out to be surprisingly close to those which Skyrme guessed thirty years ago. \section{Basic ideas and formulae} Vibrational degrees of freedom for the skyrmion were first considered in ref.\cite{biedenharn}, which did not attempt, however, a complete evaluation of their role. One approximation scheme to actually calculate the CE was proposed by Schnitzer\cite{schnitzer} who found a large positive value (of the order of 500 MeV). Later on, the presence of vector mesons in the lagrangian was found (in the same approximation) to considerably reduce this effect to a negligible 50 MeV\cite{chemtob}. Concurrently, an estimate was made in ref.\cite{zahed} which makes use of the relationship between the CE and the effective action to one loop. They recognized that an ultra-violet divergence was present and they used a derivative expansion approximation of the effective action. A somewhat better approximation was proposed later on \cite{dobado} which seems nevertheless to yield a rather similar result of approximately $-200$ MeV. How come all these results are different from each other? The answer is that one must be very careful with the approximations that one makes. Most approximations will in fact kill the most important part of the effect. To show this, we will start from a formula which is exact and which relates the CE to the pion-skyrmion phase shifts. The correct size of the effect will be controlled by the fact that the phase-shifts are very large at the origin because there are six zero-modes (associated with three translations and three rotations).
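Schematically, the underlying reason is Levinson's theorem, which relates the phase-shift at threshold to the number of bound states in a given channel,
$$ \delta(E_{\rm threshold})-\delta(\infty)=n_b\,\pi, $$
the zero-modes counting here as (threshold) bound states, so that the six zero-modes force large phase-shifts near the bottom of the spectrum (up to the well-known subtleties with half-bound states).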
Any approximation of the Born type for the phase-shifts will be essentially incorrect. An approximation which respects the low-energy behaviour of the phases was proposed in ref.\cite{nous}. Let us start with a simple 1+1 dimensional situation as a warm up. Many features are exactly similar to the 3 dimensional case. Consider, for instance, the action: $$ S= {1\over 2}\int d^2x \left((\partial_\mu\phi)^2 +m^2\phi^2 -{\lambda\over 2} \phi^4\right) \eqno(1)$$ It is convenient to rescale the coordinates and the fields: $$ \phi\to{m\over \sqrt{\lambda}}\phi\qquad x\to{1\over m}x $$ so the action now looks like: $$ S={m^2\over 2\lambda} \int d^2x \left( (\partial_\mu\phi)^2 +\phi^2 - {\scriptstyle1\over\scriptstyle2}\phi^4\right) \eqno(2)$$ We see that $1/\lambda$ appears in front of the action so the semi-classical expansion is identical to the weak coupling expansion. We have exactly the same situation in three dimensions with $N_c\ $ replacing $1/\lambda$. The classical solution (the "kink") and the classical mass are easily found: $$ \phi_{class}=\tanh({x-x_0\over \sqrt{2} }),\qquad M_{class}= {2\sqrt{2}m^3\over 3\lambda} \eqno(3)$$ Next, one has to consider fluctuations around the classical solution and it is not difficult to show that the leading correction to the classical mass (i.e. of order $\lambda^0$) has the following formal expression \cite{coleman}: $$ M^{(0)} = {1\over 2}\left[ {\rm Tr}\,\big(-\partial_x^2+3\tanh^2({x\over\sqrt{2}})-1\big)^ {1\over 2}- {\rm Tr}\,\big(-\partial_x^2+2 \big)^ {1\over 2}\right] \equiv {1\over 2}\big({\rm Tr} H^ {1\over 2} -{\rm Tr} H_0^ {1\over 2}\big) \eqno(4)$$ Obviously if we rewrite the trace in terms of eigenvalues we obtain that $$ M^{(0)} =\sum\omega_n-\sum\omega_n^0\eqno(5)$$ which is why this contribution is called the Casimir energy, by analogy with the classic QED effect \cite{casimir}. The operator involved in (4) is obtained by expanding the action to second order in powers of the fluctuation.
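Let us check the operator appearing in (4). In the rescaled variables the potential reads $V(\phi)=-{1\over 2}\phi^2+{1\over 4}\phi^4$, so writing $\phi=\phi_{class}+\eta$ and keeping the part of the action quadratic in $\eta$ one finds the stability operator
$$ -\partial_x^2+V''(\phi_{class})=-\partial_x^2+3\tanh^2({x\over\sqrt{2}})-1, $$
which is precisely the operator $H$ of (4).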
The corresponding operator for the skyrmion is obtained by exactly the same procedure. A careful derivation can be found in ref.\cite{moi}. As was mentioned already, an extremely useful formula for practical purposes arises upon expressing (4) in terms of phase shifts\cite{dhn}. A simple direct derivation, valid for any space dimension, is as follows. One first uses the identities: $$ {\rm Tr} H^ {1\over 2}=2{\rm Tr}\int_M^\infty dE\, E^2\delta(E^2-H)= -{1\over i\pi}{\rm Tr}\int_M^\infty dE\,E \left({E\over E^2-H+i\epsilon} -{E\over E^2-H-i\epsilon}\right) \eqno(6)$$ where $M$ is the lower bound of the continuous spectrum (in the kink example $M=\sqrt{2}m$). Next one recognizes that: $$ {\rm Tr}\left({E\over E^2-H+i\epsilon}-{E\over E^2-H_0+i\epsilon}\right)= {1\over 2}{d\over dE}\ln\Delta^+,\eqno(7)$$ where $$ \Delta^+={\rm det}\left(1-{1\over E^2-H_0+i\epsilon}(H-H_0)\right) \eqno(8)$$ So we obtain a formula: $$ M^{(0)} =-{1\over 4i\pi}\int_M^\infty dE\, E {d\over dE}\,(\ln\Delta^+ -\ln\Delta^-) \eqno(9)$$ (This takes care of the continuous part of the spectrum. Obviously, one must also add the discrete part in the trace if there is any). Now $\Delta^+$ is nothing but the Fredholm determinant. In one space dimension it is a well-known fact from scattering theory that its phase is (minus) the phase-shift \cite{newton}: $$ \Delta^\pm=\vert\Delta\vert \exp(\mp i\delta(E)) \eqno(10)$$ so our expression (4) becomes: $$ M^{(0)} ={1\over2\pi}\int_M^\infty dE E\,\delta'(E) \eqno(11)$$ In order to generalize to higher dimensions we must return to (9). If $H-H_0$ is a radial potential for instance one can again express the phase of the Fredholm determinant in terms of the phase-shifts of the radial operators $\delta(E)\equiv-{\rm phase}(\Delta^+)=\sum(2j+1)\delta_j(E)$. The generalization to the skyrmion case is obvious: for every value of the grand-spin quantum number, J, we have three eigen-phase-shifts\cite{karliner} so we must sum over these before summing over J.
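With these conventions, formula (11) can thus be applied to the skyrmion with the total phase
$$ \delta(E)=\sum_{J}(2J+1)\sum_{i=1}^{3}\delta_{J,i}(E), $$
where $\delta_{J,i}$ denote the three eigen-phase-shifts in the grand-spin channel $J$ and $(2J+1)$ is the degeneracy of that channel.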
At this point it would seem that we are running into trouble. A look at refs.\cite{karliner} reveals that (except for $J=0$) the phase-shifts are linearly diverging functions of $E$, and one can show that after performing the $J$ sum things become worse: one ends up with a cubic divergence. Even in one dimension, in fact, the integral (11) is logarithmically divergent: the phase shift approaches its asymptotic value only like $1/E$ at large energy, so the integrand $E\,\delta'(E)$ falls off only like $1/E$. In that case, however, it is enough to remember that at order one in $\hbar$ there is an extra contribution to the energy which comes from the one-loop counterterm in the lagrangian. Once this is taken into account the divergence disappears. We will see later how one can generalize this mechanism to the skyrmion. To begin with, we must discuss the lagrangian. \section{Chiral perturbation theory and the skyrmion} In the limit where the masses of the $N_f\ $ light quarks are set to zero the QCD lagrangian is invariant under the chiral SU($N_f\ $)$\times$SU($N_f\ $) group. This invariance is spontaneously broken by the QCD vacuum down to SU($N_f\ $). The spectrum thus consists of $N_f\ $ zero-mass Goldstone bosons, and there is a mass gap of around $\Lambda=1$ GeV above which one finds meson resonances, baryons, etc. ChPT is a systematic framework for describing low-energy phenomena (with typical energy E) as an expansion in powers of E$/\Lambda$. In fact, it is a more general expansion which also involves quark masses, external fields, etc., but let us assume that these are zero for the moment. It is convenient to encode the Goldstone bosons in a unitary matrix on which the chiral group operates linearly, $U=\exp(i\vec\tau.\vec\pi)$; the most general dynamics can then be expressed with the aid of a chiral lagrangian: $$ {\cal L}(U)={\cal L}_2 +{\cal L}_4 +{\cal L}_6 +... \eqno(12)$$ where the subscripts denote the numbers of derivatives of the matrix field $U$. As shown by Weinberg\cite{w76}, if one wants to expand an amplitude to order E$^n$ with $n>2$ one must compute loops.
There are simple counting rules: for example, one loop made out of two ${\cal L}_2$ vertices contributes at order 4, one loop with one ${\cal L}_2$ vertex and one ${\cal L}_4$ vertex contributes at order 6, a two-loop amplitude with ${\cal L}_2$ vertices is also of order 6, and so on. The chiral lagrangian contains the counterterms which render the loops finite. If we are able to construct the skyrmion out of the chiral lagrangian (12) then, at least in principle, one should be able to renormalize the Casimir energy, which is a one-loop effect. Note that it is not clear that the chiral expansion applies to the soliton but, after all, one might expect the average energy of a pion ``inside'' a skyrmion to be of the order of 200 MeV, since the skyrmion size is expected to be of the order of 1 fm. So why shouldn't we try? The chiral lagrangian has so far been computed up to order four\cite{gl84} and it reads (external sources being switched off): $$ {\cal L}_{ChPT}={F^2\over 2}\vec{A}^\mu.\vec{A}_\mu +{1\over {96\pi^2}}(\bar l_1-1+\ln {M^2\over \mu^2} ) \left( \vec{A}^\mu.\vec{A}_\mu\right)^2 +{2\over {96\pi^2}}(\bar l_2-1+\ln {M^2\over \mu^2} ) \left( \vec{A}^\mu.\vec{A}_\nu\right)^2\eqno(13)$$ where $$ i\vec\tau.\vec{A}^\mu=U^\dagger{\partial^\mu} U,\qquad M\simeq m_\pi,\qquad F\simeq f_\pi. $$ We see that a scale $\mu$ appears in (13); this is because it incorporates counterterms. If one uses a regularization prescription like dimensional regularization, one gets rid of the $1/(n-4)$ pole and one is left with a scale dependence. A similar but, for our purposes, more convenient scheme is the $\zeta$ function one\cite{mckeon}, as we will see later. In order to discuss solitons we must perform an $N_c\ $ expansion, so we might start by assuming: $$ {\cal L}^{(1)}={\cal L}_{ChPT} (\mu=m_\rho) \eqno(14)$$ where the superscript designates the $N_c\ $ order. This is a meaningful assumption provided subleading terms in $N_c\ $ are very small for this particular scale value $\mu=m_\rho$.
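These counting rules are summarized by Weinberg's power-counting formula: a connected diagram with $L$ loops and $N_d$ vertices drawn from ${\cal L}_d$ contributes at chiral order $$ D=2+2L+\sum_d N_d\,(d-2) $$ so that, indeed, one loop with two ${\cal L}_2$ vertices gives $D=4$, one loop with one ${\cal L}_2$ and one ${\cal L}_4$ vertex gives $D=6$, and a two-loop diagram with only ${\cal L}_2$ vertices also gives $D=6$.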
This seems indeed to be the case, as follows from the recent work of ref.\cite{riggenbach}, who have analysed $K_{l4}$ decays and showed that this provides a quantitatively neat test of this suppression. Note that the fact that the terms of order $N_c^0$ in the chiral lagrangian are found to be small by no means implies that the correction of the same order to the skyrmion mass should also be small. However, this will turn out to be important for the accuracy of the CE determination, as we will see. Let us now rewrite the fourth order lagrangian using notations which are familiar in the skyrmion context: $$ {\cal L}^{(1)}_4={1\over 4e^2}\,(\vec{A}^\mu\wedge\vec{A}^\nu).(\vec{A}_\mu\wedge\vec{A}_\nu)+ {\gamma\over 2e^2}(\vec{A}^\mu.\vec{A}_\mu)^2 \eqno(15)$$ The first question that one might ask is whether the values of the parameters $e$ (the Skyrme parameter) and $\gamma$ that are determined from ChPT are compatible with a stable soliton solution. Remember in this context that the $\gamma$ term tends to destabilize the soliton; an upper bound for stability was found to be $\gamma < 0.12 $\cite{truong}. If we look at the 1984 values of Gasser and Leutwyler we find: $$ e=6.8\pm 4.15\qquad\qquad \gamma=-0.06\pm 0.2 $$ These numbers are not very conclusive because of the large error bars. Let us now consider the more recent determination of Riggenbach et al.\cite{riggenbach}: $$ e=7.1\pm2.30\qquad\qquad\gamma=0.03\pm0.03 $$ The error bars are considerably smaller and we see that ChPT now seems perfectly compatible with a stable skyrmion. The results are in fact strikingly similar to the numbers proposed by Skyrme 30 years ago: $$ e=6.28\phantom{\pm2.30}\qquad\qquad\gamma=0\phantom{\pm0.20} $$ Unfortunately, chiral order four is not enough for a consistent description of the skyrmion. Firstly, it is not consistent with the chiral expansion, because the virial theorem implies that the contribution of ${\cal L}_2$ is identical to that of ${\cal L}_4$ instead of being much larger.
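As a simple illustration (our bookkeeping, not from the text), one can confront the quoted $1\sigma$ ranges for $\gamma$ with the stability bound $\gamma<0.12$:

```python
BOUND = 0.12  # stability bound on gamma (see text)

determinations = [
    ("Gasser-Leutwyler 1984", -0.06, 0.20),
    ("Riggenbach et al.",      0.03, 0.03),
    ("Skyrme",                 0.00, 0.00),
]

for name, gamma, err in determinations:
    if gamma + err < BOUND:
        verdict = "stable"
    elif gamma - err >= BOUND:
        verdict = "unstable"
    else:
        verdict = "inconclusive at 1 sigma"
    print(f"{name:<22} gamma = {gamma:+.2f} +/- {err:.2f} -> {verdict}")
```

which matches the qualitative discussion above: only the 1984 error bars leave the stability question open.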
Secondly, it is not consistent with the large $N_c\ $ expansion either because, as we will soon discover, the $N_c\ $ contribution turns out to be nearly identical to the $N_c^0$ one instead, again, of being significantly larger. The situation is perhaps not desperate. In fact, I claim that a mere extension to chiral order six is sufficient to cure, apparently, all the difficulties. We need, however, a model in order to make a guess for the coefficients of the terms appearing in ${\cal L}_6$, since they have not yet been worked out from ChPT. A reasonable starting point seems to be provided by the observation\cite{ecker} that all of the 10 parameters which appear in the chiral order four lagrangian can be saturated, to a very good approximation, by the contribution of the low-lying vector, axial-vector, scalar and pseudo-scalar resonances. In particular, since the rho meson saturates the Skyrme term, it is natural to consider the omega meson as well, which contributes at sixth order: $$ {\cal L}_{6,\omega}=- {1\over 2}{\beta^2\over M^2_\omega}\, B_\mu B^\mu,\qquad B_\mu ={\epsilon_ {\mu\nu\lambda\sigma}\over 12\pi^2}(\vec{A}_\nu,\vec{A}_\lambda,\vec{A}_\sigma) \eqno(16)$$ The value of the coupling parameter $\beta$ can be estimated to be rather large, $\beta\simeq 9.3$\cite{vinhmau}, so it is tempting to assume that ${\cal L}_{6,\omega}$ is the dominant sixth order term. This is supported by estimates by Walliser\cite{walliser}, who showed that the rho-induced sixth-order term is much smaller than the omega one. One also notices that the rho and scalar contributions tend to cancel each other. If one includes the contribution (16) in the chiral lagrangian and looks for a soliton solution, the result is found to be in much better agreement with the chiral expansion than before.
Due to the repulsive effect of the omega-induced sixth order term, the profile size is much larger than before and, as a result, the contribution of ${\cal L}_2$ becomes five times larger than that of ${\cal L}_4$. We will see in the sequel that the $N_c\ $ expansion also seems to become coherent, but before we can actually evaluate the CE in order to check this, we must discuss the $\zeta$ function method of regularization, which is a very important technical ingredient in the calculation. \vfill\eject \section{$\zeta$ function regularization method} The basic idea is to introduce, instead of a sum like $\sum (\omega^2)^ {1\over 2}$, the sum $\sum (\omega^2)^{ {1\over 2}-s}$ depending on the complex parameter $s$. One can show that the resulting function is analytic in $s$ and then define the limit of interest, $s=0$, by analytic continuation. The non-trivial part is to actually perform this analytic continuation in a situation like ours, where the eigenvalues $\omega_n$ are known only numerically. As we will see, the phase-shift representation provides a simple solution to this problem. Before we turn to that, however, we must make sure that the regularization procedure that we use for the Casimir energy is the same as the one which is used to regularize Green's functions (leading to the counterterms as they appear in (13)). Now Green's functions to one loop are generated by an effective action which can be written as a trace log of a four-dimensional operator. If we call $O$ such an operator, then the corresponding $\zeta$ function is defined as\cite{ramond}: $$ \zeta_O(s)={1\over\Gamma(s)}{\rm Tr}\,\int_0^\infty d\tau\,\tau^{s-1} \exp\left(-{\tau O\over\mu^2}\right) \eqno(17)$$ The scale $\mu$ is introduced at this level in order to make $\tau$ dimensionless.
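The spirit of the method can be seen on a toy example (ours, not from the text): for an operator with eigenvalues $\omega_n=n$, the divergent mode sum is replaced by $\sum_n n^{-z}=\zeta_R(z)$, continued to $z=-1$. The continuation can be carried out explicitly with the functional equation of the Riemann $\zeta$ function:

```python
import math

def zeta_series(s, terms=200000):
    # direct Dirichlet series, convergent only for Re(s) > 1
    return sum(n**-s for n in range(1, terms + 1))

def zeta_continued(s):
    # Riemann functional equation, providing the analytic continuation:
    # zeta(s) = 2^s pi^(s-1) sin(pi s/2) Gamma(1-s) zeta(1-s)
    return (2**s * math.pi**(s - 1) * math.sin(math.pi * s / 2)
            * math.gamma(1 - s) * zeta_series(1 - s))

# the regularized value of the divergent sum 1 + 2 + 3 + ... is zeta(-1) = -1/12
print(zeta_continued(-1.0))
```

In the skyrmion problem the eigenvalues are only known numerically, so the continuation has to be performed on the phase-shift representation instead.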
The regularized form of the trace log is then given by: $$ {\rm Tr}\log(O)\equiv-\zeta '(s=0) \eqno(18)$$ We can relate the Casimir energy to a four-dimensional operator via the identity: $$ {\rm Tr}(H^ {1\over 2})=\lim_{T\to\infty}{1\over T}{\rm Tr}\log\, (-\partial_t^2 + H) \eqno(19)$$ which holds for a time-independent operator $H$. If we let $O=(-\partial_t^2 + H)$ in formula (17), a simple calculation shows that the $\zeta$ function regularization of the Casimir energy which matches the one used for the effective action is: $$ {\rm Tr}(H^ {1\over 2})\equiv -\zeta '(0)\quad{\rm with}\quad\zeta(s)=-{\mu^{2s}\Gamma (s- {1\over 2})\over\Gamma(s)\Gamma(- {1\over 2})} {\scriptstyle1\over\scriptstyle2}{\rm Tr} H^{ {1\over 2}-s} \eqno(20)$$ The same derivation as before yields the appropriate phase-shift formula: $$ {\rm Tr} H^{ {1\over 2}-s}-{\rm Tr} H_0^{ {1\over 2}-s}={1\over\pi}\int_0^\infty\,dp\,\delta '(p) (p^2 +M^2)^{ {1\over 2}-s} \eqno(21)$$ where a finite pion mass $M$ was introduced. It can be shown that the large-momentum behaviour of the phase function is of the form: $$ \delta(p)=\bar a_0 p^3 + \bar a_1 p +{\bar a_2\over p} + ... \eqno(22)$$ where the $\bar a_i$'s are numbers which are simply related to the heat kernel expansion of the operator $H$.
One can subtract and add this leading asymptotic behaviour and perform the analytic continuation in $s$ by using the well-known integrals\cite{abramowitz} $$ \int_0^\infty dp p^m (p^2+M^2)^{ {1\over 2}-s}= {1\over 2} M^{m+2-2s}{\Gamma({m+1\over2}) \Gamma(s-1-{m\over2})\over\Gamma(s- {1\over 2})} \eqno(23)$$ Finally, one obtains a finite, closed-form formula for the regularized CE: $$\eqalign{ E_{cas}(\mu)= &{1\over2\pi}\bigg\lbrace \int_0^\infty dp\,\Big[-{p\over\sqrt{p^2+M^2}} (\delta(p)-\bar a_0p^3-\bar a_1p) + {\bar a_2\over\sqrt{p^2+\mu^2}}\Big]\cr &-{3 \bar a_0\over8}M^4({3\over4}+{1\over2}\ln{\mu^2\over M^2}) +{\bar a_1\over4}M^2 (1+\ln{\mu^2\over M^2})-M\delta(0)\bigg\rbrace\cr }\eqno(24)$$ Note that it is well defined for zero as well as finite pion mass $M$. It is also clear that, by construction, only the low-energy behaviour of the phase shift is important in the integration. Now, in a way similar to the 1+1 dimensional case, we must add counterterms to (24), i.e.\ the O($N_c^0$) part of the chiral lagrangian: $$ E_{ct}(\mu)=- ({\cal L}_2^{(1)}+...+{\cal L}_n^{(1)} - {\cal L}_{ChPT} ) \eqno(25)$$ Actually, since ${\cal L}^{(1)}$ is being truncated at chiral order $n$ (in practice $n=4$ or 6), $E_{ct}$ contains terms of order O($N_c\ $) and of chiral order $n+2,\ldots,2n$. Consistency requires that these should be of the same order of magnitude as (or smaller than) the terms of lower chiral order but subleading in $N_c\ $. In this respect already, $n=6$ is more satisfactory than $n=4$. Next, when we add the two pieces: $$ M^{(0)}=E_{cas}(\mu)+E_{ct}(\mu) \eqno(26)$$ the scale dependence should disappear (at least up to O(1/$N_c\ $) terms). In practice, of course, it does not, since we do not know enough terms in ${\cal L}_{ChPT}$. One would therefore like to argue that $$ E_{ct}(\mu)\ll E_{cas} (\mu) \eqno(27)$$ This, of course, cannot hold for arbitrary values of $\mu$.
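As a sanity check on the continuation formula (23), one can compare both sides numerically at parameter values where the integral actually converges (a sketch of ours; the test points are arbitrary):

```python
from math import gamma
from scipy.integrate import quad

def lhs(m, M, s):
    # direct numerical integral, convergent for 2s - m > 2
    val, _ = quad(lambda p: p**m * (p * p + M * M) ** (0.5 - s), 0.0, float("inf"))
    return val

def rhs(m, M, s):
    # right-hand side of eq. (23)
    return (0.5 * M ** (m + 2 - 2 * s) * gamma((m + 1) / 2)
            * gamma(s - 1 - m / 2) / gamma(s - 0.5))

for m, M, s in [(1, 1.0, 3.0), (3, 2.0, 4.0), (0, 1.5, 2.0)]:
    assert abs(lhs(m, M, s) - rhs(m, M, s)) < 1e-6 * abs(rhs(m, M, s))
```

For the physical case one then continues to $s\to0$, where the integral itself diverges and only the right-hand side makes sense.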
Now, as we have seen, it follows from ref.\cite{riggenbach} (and also from ref.\cite{ecker}) that for the particular value $\mu=m_\rho$ terms of order $N_c^0$ are strongly suppressed in the chiral lagrangian (by a factor of 10 or so), so it seems a good idea for us to choose $\mu=m_\rho$. According to formula (25) this will suppress the contributions to $E_{ct}$ up to chiral order $n$. Those of order $n+2, n+4,...$ are of order $N_c\ $, so one must assume that they are suppressed because of their high chiral order. The main reason why (27) should hold, though, is that $E_{cas}(\mu)$ is enhanced because it incorporates the zero-mode contributions. In our formalism, they show up via Levinson's theorem, which forces the phase-shift at the origin to be fairly large ($=6\pi$). An approximate way to estimate the CE by singling out the zero-mode contributions was proposed recently by Holzwarth\cite{holzwarth}. Let us now illustrate these points on some examples. \section{Some results and conclusions} Let us first consider the Skyrme lagrangian ${\cal L}={\cal L}_2+{\cal L}_{4,sk}$ (see (15)) with physical values of $f_\pi$ and $m_\pi$ and \noindent a) $e=5.5$ (by analogy with ref.\cite{anw}). Following the method described above one finds: $$ M^{(0)}=-957 + (-72) =-1029\ {\rm MeV} $$ where the number in parentheses is the counterterm contribution (which is indeed small). This is to be compared with the leading $N_c\ $ contribution: $$ M^{(1)}=1263\,{\rm MeV} $$ Clearly, the 1/$N_c\ $ expansion seems to be in trouble and, furthermore, the nucleon mass is found to be much too small. In fact, better phenomenological results are expected from the Skyrme lagrangian if one takes a smaller value for $e$. Let us consider then: \noindent b) $e=4.0$ (which gives, for example, a reasonable delta-nucleon splitting of 250 MeV).
In this case, one finds: $$ M^{(0)}=-805 + (-452) =-1297\ {\rm MeV} $$ while the leading contribution is $$ M^{(1)}=1761\ {\rm MeV} $$ This is slightly better than before and one could eventually arrive at a reasonable nucleon mass, but an unsatisfactory feature now is that the counterterm contribution is rather large. This is because of the large mismatch between the value of $e$ and the one compatible with ChPT (see sec.\ (3)). In this situation one can no longer argue that the unknown contributions to $E_{ct}$ are necessarily small. The only way out of this dilemma seems to be the inclusion of higher order terms in the chiral lagrangian. Let us then add an omega-induced sixth order term and consider ${\cal L}^{(1)}= {\cal L}_2+{\cal L}_{4,sk}+{\cal L}_{6,\omega} $ with the parameters $e=7.22$ (from the meson saturation fit of \cite{ecker}) and $\beta=9.3$ from \cite{vinhmau}. The calculation gives, in that case: $$ M^{(0)} =- 604 + (+153)= -451\ {\rm MeV} $$ while the classical value is $$ M^{(1)} =1553\ {\rm MeV} $$ The striking feature is the strong reduction of the Casimir contribution. Now the large $N_c\ $ expansion looks much more reasonable than before (the correction is a factor of 3 smaller than the classical mass). We argued in sec.\ (3) that the chiral expansion was also more justified. The two things are in fact related: the reason why $E_{cas}\ $ is smaller is that the phase-shift function $\delta(p)$ drops faster as a function of the momentum, and this is because the classical profile function has a larger extension in space. Note that the counterterm contribution is now positive (because the chosen value of $e$ is slightly larger than that of ChPT) but the Casimir energy itself is always found to be negative. If we add $M^{(1)}$ and $M^{(0)}$ we find that the nucleon mass is correctly predicted to within 20\%, even though we did not attempt to fit any parameter in the chiral lagrangian.
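Collecting the numbers quoted above makes the comparison between the three fits explicit (our bookkeeping; the physical nucleon mass $M_N\simeq939$\,MeV is used for reference):

```python
# (label, M1 = leading N_c mass, M0 = Casimir + counterterm correction), in MeV
fits = [
    ("a) Skyrme, e=5.5",          1263, -1029),
    ("b) Skyrme, e=4.0",          1761, -1297),
    ("c) Skyrme + omega, e=7.22", 1553,  -451),
]
M_NUCLEON = 939.0

for label, m1, m0 in fits:
    print(f"{label:<26} |M0|/M1 = {abs(m0)/m1:.2f}   M1+M0 = {m1+m0:6.0f} MeV")

# for case c) the correction is roughly a factor 3 below the classical mass
assert abs(-451) / 1553 < 1 / 3
# and the predicted mass agrees with the nucleon mass to within 20%
assert abs((1553 - 451) - M_NUCLEON) / M_NUCLEON < 0.20
```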
In conclusion, a reasonable picture seems to emerge provided one incorporates, besides the fourth order rho-induced term, a sixth order omega-induced one. This is not really a surprise: in fact, the most successful skyrmion phenomenology seems to require that one has these resonances (and a few others) explicitly in the lagrangian. The rationale for including resonances is that this should extend the range of validity of the effective lagrangian from $E\ll\Lambda$ to $E\simeq\Lambda$. It is likely that one could extend the calculation of the CE to that case, but the value of the ``optimal scale'' should perhaps be larger. An open question at the moment is how to estimate loop corrections for other nucleon observables. In particular, it is not clear (to me) whether the zero-mode enhancement which led to a sizeable effect in the case of the mass will also operate for some other observables, and how. \nonumsection{References}
\section{Introduction} \label{sec:intro} The intracluster medium (ICM) embedded in the deep gravitational well of clusters of galaxies has a complex multi-temperature structure, with different cospatial phases ranging from $\sim10^{6}$ to above $10^{8}$\,K. It is thought to contain most of the baryonic mass of the clusters, and its density strongly increases in their cores, where the radiative cooling time is less than 1\,Gyr and therefore shorter than their age. In the absence of heating, this would imply the cooling of hundreds of solar masses of gas per year below $10^{6}$\,K \citep{Fabian1994}. The gas is expected to produce prominent emission lines from O\,{\small VI} in UV, peaking at $T\sim3\times10^{5}$\,K, as well as O\,{\small VII} ($T\sim2\times10^{6}$\,K) and Fe\,{\small XVII} ($T\sim6\times10^{6}$\,K) in X-rays, suggesting that spectroscopy is key to understanding the cooling processes in clusters of galaxies. Evidence of weak O\,{\small VI} UV lines was found by \cite{Bregman2005,Bregman2006} at levels of $30\,M_{\odot}$\,yr$^{-1}$ or lower, significantly less than the predicted $100\,M_{\odot}$\,yr$^{-1}$. Fe\,{\small XVII} emission lines have been discovered, but with luminosities much lower than expected from cooling-flow models (see e.g. \citealt{Peterson2003}). O\,{\small VII} lines were detected for the first time in a stacked spectrum of a sample of cool objects by \cite{Sanders2011} and more recently in individual elliptical galaxies by our group \citep{Pinto2014}, but in most cases their fluxes are lower than those predicted by cooling-flow models. There is an overall deficit of cool ($\lesssim0.5$\,keV or $\lesssim6\times10^{6}$\,K) gas in the cores of clusters of galaxies and nearby elliptical galaxies. Several energetic phenomena occur in the cores and in the outskirts of clusters of galaxies and isolated galaxies, such as feedback from active galactic nuclei (AGN, see e.g. \citealt{Churazov2000,McNamara2007,Fabian2012}).
Briefly, energetic AGN outflows drive turbulence in the surrounding ICM, which then dissipates and heats the ICM, balancing the cooling \citep[see e.g.][]{Zhuravleva2014}. AGN can also heat the surrounding gas via dissipation of sound waves (see e.g. \citealt{Fabian2003_waves,Fabian2005_waves}). The phenomenology can be more complex, because galactic mergers and sloshing of gas within the gravitational potential also produce high turbulence (see e.g. \citealt{Ascasibar2006}; \citealt{Lau2009}). In this work we study the coolest X-ray emitting gas in clusters and groups of galaxies and in elliptical galaxies, which is crucial for understanding the ICM cooling from $10^8$\,K down to $10^4$\,K. We use high-quality archival data and new observations taken with the high-resolution Reflection Grating Spectrometer (RGS) aboard XMM-\textit{Newton}. We search for a relationship between the cool O\,{\small VII} gas and the turbulence, for evidence of resonant scattering, and for charge exchange in the ICM where neutral gas is observed. We present the data in Sect.\,\ref{sec:data} and the spectral modeling in Sect.\,\ref{sec:spectral_modeling}. We discuss the results in Sect.\,\ref{sec:discussion} and give our conclusions in Sect.\,\ref{sec:conclusion}.
\begin{table*} \caption{XMM-\textit{Newton}/RGS observations used in this paper, extraction regions and O\,{\small VII} detection.} \label{table:log} \renewcommand{\arraystretch}{1.1} \small\addtolength{\tabcolsep}{-2pt} \scalebox{1}{% \begin{tabular}{c c c c c c c c c c} \hline Source & t\,$^{(a)}$& $d\,^{(b)}$ & W\,$^{(b)}$ & CIE\,$^{(c)}$ & $<kT>$\,$^{(c)}$ & $N_{\rm H}$\,$^{(d)}$ & $R_{(f/r) | \rm Fe}$\,$^{(e)}$ & W(O\,{\small VII})\,$^{(e)}$ & P(O\,{\small VII})\,$^{(e)}$ \\ & (ks) & (Mpc) & (') & (Nr) & (keV) & ($10^{20}\,{\rm cm}^{-2}$) & & (') & ($\sigma$) \\ \hline {{A 262}} (NGC 708) & 172.6 & 63.7 & 0.20 & 2 & $ 1.19 \pm 0.02$ & 7.15 & $1.91 \pm 0.89 $ & --- & --- \\ Centaurus ({{A 3526}}) & 152.8 & 51.2 & 0.25 & 2 & $ 1.17 \pm 0.02$ & 12.2 & $1.53 \pm 0.22 $ & 0.4 & 3.0 \\ Fornax ({{NGC 1399}}) & 123.9 & 17.8 & 0.72 & 2 & $ 1.21 \pm 0.03$ & 1.56 & $1.27 \pm 0.18 $ & --- & --- \\ Perseus ({{A 426}}) & 162.8 & 72.3 & 0.18 & 2 & $ 2.71 \pm 0.15$ & 20.7 & $4.30 \pm 2.10 $ & 0.8 & 2.7 \\ Virgo (M 87) & 129.0 & 16.6 & 0.77 & 2 & $ 2.05 \pm 0.05$ & 2.11 & $1.68 \pm 0.19 $ & --- & --- \\ {{HCG 62}} (NGC 4761) & 164.6 & 66.1 & 0.24 & 2 & $ 0.85 \pm 0.01$ & 3.76 & $1.57 \pm 0.21 $ & --- & --- \\ {{IC 1459}} & 145.4 & 24.0 & 0.53 & 2 & $ 0.59 \pm 0.04$ & 1.16 & $1.90 \pm 0.65 $ & 3.4 & 3.3 \\ {{M 49}} (NGC 4472) & 81.4 & 15.8 & 0.81 & 2 & $ 0.88 \pm 0.01$ & 1.63 & $1.70 \pm 0.20 $ & --- & --- \\ {{M 84}} (NGC 4374) & 91.5 & 16.7 & 0.77 & 2 & $ 0.83 \pm 0.07$ & 3.38 & $1.86 \pm 0.17 $ & 0.6 & 4.1 \\ {{M 86}} (NGC 4406) & 63.5 & 16.1 & 0.80 & 2 & $ 0.84 \pm 0.05$ & 2.97 & $2.12 \pm 0.29 $ & 3.4 & 5.5 \\ {{M 89}} (NGC 4552) & 29.1 & 16.0 & 0.80 & 1 & $ 0.62 \pm 0.08$ & 2.96 & $1.62 \pm 0.25 $ & 3.4 & 3.1 \\ {{NGC 507}} & 94.5 & 59.6 & 0.25$^{f}$ & 2 & $ 1.06 \pm 0.02$ & 6.38 & $1.59 \pm 0.67 $ & --- & --- \\ {{NGC 533}} & 34.7 & 61.6 & 0.25$^{f}$ & 2 & $ 0.87 \pm 0.03$ & 3.38 & $2.36 \pm 1.04 $ & --- & --- \\ {{NGC 1316}} & 165.9 & 19.3 & 0.66 & 2 & $ 0.70 \pm 0.02$ 
& 2.56 & $1.90 \pm 0.25 $ & 0.8 & 6.9 \\ {{NGC 1332}} & 63.9 & 22.9 & 0.56 & 2 & $ 0.66 \pm 0.03$ & 2.42 & $3.01 \pm 0.80 $ & --- & --- \\ {{NGC 1404}} & 29.2 & 19.2 & 0.67 & 2 & $ 0.69 \pm 0.01$ & 1.57 & $2.06 \pm 0.25 $ & 0.6 & 2.6 \\ {{NGC 3411}} & 27.1 & 79.1 & 0.25$^{f}$ & 1 & $ 0.93 \pm 0.02$ & 4.55 & $1.24 \pm 0.36 $ & --- & --- \\ {{NGC 4261}} & 134.9 & 29.9 & 0.43 & 1 & $ 0.71 \pm 0.01$ & 1.86 & $1.60 \pm 0.28 $ & --- & --- \\ {{NGC 4325}} & 21.5 & 112 & 0.25$^{f}$ & 2 & $ 0.89 \pm 0.02$ & 2.54 & $1.22 \pm 0.32 $ & --- & --- \\ {{NGC 4636}} & 102.5 & 16.0 & 0.80 & 2 & $ 0.72 \pm 0.01$ & 2.07 & $1.95 \pm 0.09 $ & 3.4 & 5.0 \\ {{NGC 4649}} & 129.8 & 16.6 & 0.77 & 1 & $ 0.90 \pm 0.01$ & 2.23 & $1.27 \pm 0.20 $ & --- & --- \\ {{NGC 5044}} & 127.1 & 35.8 & 0.36 & 2 & $ 0.89 \pm 0.01$ & 6.24 & $1.44 \pm 0.22 $ & --- & --- \\ {{NGC 5813}} & 146.8 & 29.2 & 0.44 & 2 & $ 0.68 \pm 0.01$ & 6.24 & $2.61 \pm 0.43 $ & 3.4 & 3.2 \\ {{NGC 5846}} & 194.9 & 26.9 & 0.48 & 2 & $ 0.74 \pm 0.01$ & 5.12 & $1.67 \pm 0.35 $ & 0.8 & 3.7 \\ \hline \end{tabular}} $^{(a)}$ RGS net exposure time. $^{(b)}$ Source distance (average value taken from the Ned database: https://ned.ipac.caltech.edu/) and width of the extraction region. $^{(c)}$ Number of thermal components and best-fit temperature for a single isothermal model (see Sect.~\ref{sec:gas_turbulence}). \\ $^{(d)}$ Hydrogen column density (see http://www.swift.ac.uk/analysis/nhtot/). $^{(e)}$ Fe\,{\small XVII} line ratio, width (W) of the region that maximizes the O\,{\small VII} detection and O\,{\small VII} cumulative significance (P) with ``---'' referring to significance below 99\% (see Sect.~\ref{sec:search_ovii}).\\ $^{(f)}$ For these objects we had to adopt a larger width for the extraction region to obtain enough statistics in the Fe\,{\small XVII} lines. \end{table*} \section[]{The data} \label{sec:data} The observations used in this paper are listed in Table~\ref{table:log}. 
Most objects were already included in our recent work \citep{Pinto2015}, but here we focus only on those which exhibit cool gas producing Fe\,{\small XVII} emission lines. The original catalog, also known as the CHEERS sample, consists of 44 nearby, bright clusters and groups of galaxies and elliptical galaxies with a $\gtrsim5\sigma$ detection of the O\,{\small VIII} 1s--2p line at 19\,{\AA} and with a well-represented variety of strong, weak, and non cool-core objects. In addition to the CHEERS sources exhibiting Fe\,{\small XVII} emission, here we include two cool objects: NGC\,1332 and IC\,1459. In NGC\,1332, O\,{\small VIII} was detected just below $5\sigma$; however, its Fe\,{\small XVII} emission lines are much stronger. The IC\,1459 dataset was enriched by $\sim120$\,ks of new data awarded during AO-14. In total we have 24 sources. The XMM-\textit{Newton} satellite carries two main X-ray instruments: RGS and EPIC (European Photon Imaging Camera). We have used RGS data for the spectral analysis and EPIC (MOS\,1 detector) data for imaging. The RGS spectrometers are slitless and the spectral lines are broadened by the source extent. We correct for spatial broadening through the use of EPIC/MOS\,1 surface brightness profiles. We repeat the data reduction as previously done in \citet{Pinto2015}, but with newer calibration files and software versions (available by January 2016). All the observations have been reduced with the XMM-\textit{Newton} Science Analysis System (SAS) v14.0.0. We correct for contamination from soft-proton flares with the standard procedure. The sources in our sample span a large range of distances (Table~\ref{table:log}). Therefore, we aimed to extract spectra in slices of the same physical size. Before choosing an absolute scale, we tested several extraction regions.
For the nearby objects, such as NGC\,4636, which is the nearest X-ray bright giant elliptical galaxy, we adopted a width of about 0.8' ($\sim4$\,kpc) because it provides a good coverage of the inner Fe\,{\small XVII} bright core, strengthens the Fe\,{\small XVII} lines with respect to those produced by the hotter gas phase, and maximizes the detection of the O\,{\small VII} emission lines. The spectra of all objects were then extracted in regions centered on the Fe\,{\small XVII} emission peak, with widths scaled by the ratio between the distance of each object and that of NGC\,4636. For NGC\,507, 533, 3411 and 4325 we had to adopt a slightly larger width, because it was the minimum required to obtain sufficient statistics in the Fe\,{\small XVII} lines. The spectra extracted in these regions of approximately equal physical size have been used to measure the Fe\,{\small XVII} line ratios. Finally, we have also extracted spectra in different regions, with widths up to 3.4', which is the RGS sensitive field of view, to improve the O\,{\small VII} detection. We subtracted the model background spectrum, which is created by the standard RGS pipeline and is a template background file based on the count rate in CCD\,9. The spectra were converted to SPEX\footnote{www.sron.nl/spex} format through the SPEX task \textit{trafo}. We produced MOS\,1 images in the $8-27$\,{\AA} wavelength band and extracted surface brightness profiles to model the RGS line spatial broadening with the following equation: $\Delta\lambda = 0.138 \, \Delta\theta \, {\mbox{\AA}}$, with $\Delta\theta$ expressed in arcmin (see the XMM-\textit{Newton} Users Handbook). \section{Spectral analysis} \label{sec:spectral_modeling} \subsection{Baseline model} \label{sec:baseline_model} Our analysis focuses on the $8-27$\,{\AA} first and second order RGS spectra. We perform the spectral analysis with SPEX version 3.00.00.
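To make the magnitude of this spatial broadening concrete, here is a small conversion sketch (ours, assuming $\Delta\theta$ in arcmin and first-order spectra):

```python
C_KMS = 299792.458  # speed of light in km/s

def rgs_line_broadening(dtheta_arcmin, wavelength_ang):
    """Wavelength broadening (Ang) and equivalent velocity width (km/s)
    for a source of angular extent dtheta, using dlambda = 0.138 dtheta Ang."""
    dlam = 0.138 * dtheta_arcmin
    return dlam, C_KMS * dlam / wavelength_ang

# a 0.8-arcmin extraction region, evaluated at the 15.01 Ang Fe XVII line
dlam, v = rgs_line_broadening(0.8, 15.01)
print(f"dlambda = {dlam:.3f} Ang  ->  ~{v:.0f} km/s")
```

For a 0.8' region this gives a broadening of roughly $2000$\,km\,s$^{-1}$ at the Fe\,{\small XVII} lines, an order of magnitude above typical turbulent widths (see Sect.\,\ref{sec:gas_location}).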
We scale elemental abundances to the proto-Solar abundances of \citet{Lodders09}, which are the default in SPEX, use C-statistics, and adopt $1\,\sigma$ errors. We have described the ICM emission with an isothermal plasma model of collisional ionization equilibrium (\textit{cie}). The basis for this model is given by the mekal model, but several updates have been included (see the SPEX manual). Free parameters in the fits are the emission measure $Y=\int n_{\rm e}\,n_{\rm H}\,dV$, the temperature $T$, and the abundances (N, O, Ne, Mg, and Fe). The Ni abundance was coupled to that of Fe. Most objects required two \textit{cie} components (see Table\,\ref{table:log}). Here, we coupled the abundances of the two \textit{cie} components, i.e.\ we assumed that the gas phases have the same abundances, because the spectra do not allow us to measure them separately. The \textit{cie} emission models were corrected for redshift, Galactic absorption (see Table~\ref{table:log}), and line spatial broadening through the multiplicative \textit{lpro} component, which receives as input the MOS\,1 surface brightness profile (see Sect.\,\ref{sec:data}). We do not explicitly model the cosmic X-ray background in the RGS spectra because any diffuse emission feature would be smeared out into a broad continuum-like component. For several objects, including the Perseus and Virgo clusters, we have added a further power-law emission component to account for any emission from the central AGN (see \citealt{Russell2013} and references therein). This component is not convolved with the spatial profile because it is produced by a point source. For each source, we have simultaneously fitted the spectra of the individual observations by adopting the same model, apart from the emission measures of the \textit{cie} components, which were left uncoupled to account for the different roll angles of the observations. We have successfully applied this multi-temperature model to the RGS spectra.
However, as previously shown in \citet{Pinto2015}, the model underestimates the 17\,{\AA} Fe\,{\small XVII} line peaks and overestimates their broadening for some sources, e.g. Fornax, M\,49, M\,86, NGC\,4636, and NGC\,5813. This is due to the different spatial distribution of the gas responsible for the cool Fe\,{\small XVII} emission lines and that producing most of the high-ionization Fe-L and O\,{\small VIII} lines. The Fe\,{\small XVII} gas is indeed found predominantly in the cores, showing a profile more peaked than that of the hotter gas. The spatial profiles estimated with MOS\,1 images strongly depend on the emission of the hotter gas, due to its higher emission measure, and therefore they overestimate the spatial broadening of the 15--17\,{\AA} lines. It is difficult to extract a spatial profile for these lines, because MOS\,1 has a limited spectral resolution and the images extracted in such a narrow band would lack the necessary statistics (see e.g. \citealt{Sanders2013}). The 17\,{\AA}\,/\,15\,{\AA} line ratio is also affected by resonant scattering (see e.g. \citealt{Gilfanov1987, Sanders2008}), which requires a different approach. In Sect.\,\ref{sec:gas_location} and \ref{sec:gas_turbulence} we account for the different locations of the different phases and for the Fe\,{\small XVII} (and O\,{\small VII}) resonant scattering. \subsection{Search for O\,{\small VII}} \label{sec:search_ovii} Following \citet{Pinto2014}, we have removed the {O\,{\small VII}} ion from the model and fitted two delta lines fixed at 21.6\,{\AA} and 22.1\,{\AA}, which reproduce the {O\,{\small VII}} resonance and forbidden lines, respectively. The intercombination line at 21.8\,{\AA} is generally weak or insignificant and blends with the resonance line. These lines are corrected for the redshift, the Galactic absorption, and the spatial line broadening, as done for the \textit{cie} models.
If the resonance line was comparable to or stronger than the forbidden line, we determined the {O\,{\small VII}} total significance by fixing the resonance-to-forbidden line flux ratio to $(r/f) = 1.3$, as predicted by the thermal model. Otherwise, the {O\,{\small VII}} total significance was calculated by adding the significances of the two lines in quadrature. The latter applies to Perseus, M\,89, NGC\,4636, and NGC\,5813. We applied this technique to spectra extracted in regions of different widths in order to find the one that maximizes the {O\,{\small VII}} detection. We adopt the 99\% confidence level as the threshold for the {O\,{\small VII}} detection because the objects are distributed in two subsamples with detection levels $<2.0\sigma$ and $>2.6\sigma$, with a gap in between. The results are reported in Table\,\ref{table:log} and discussed in Sect.~\ref{sec:discussion}. \subsection{The location of the cool gas} \label{sec:gas_location} It is possible to probe the extent of the cool ({O\,{\small VII}} $-$ Fe\,{\small XVII}) gas by comparing its linewidth to that of the hot ({O\,{\small VIII}} $-$ Fe\,{\small XVIII+}) gas. The dominant line-broadening effect in grating spectra is indeed produced by the spatial extent of the source (typically a few $1000$\,km\,s$^{-1}$), which is almost an order of magnitude larger than the thermal + turbulent broadening (a few $100$\,km\,s$^{-1}$, see e.g. \citealt{Pinto2015} and references therein). The turbulent and thermal broadening are not expected to be significantly different between the two phases (see e.g. \citealt{Pinto2015}). We therefore repeated the same exercise for the Fe\,{\small XVII} emission lines by removing the Fe\,{\small XVII} ion from the model and fitting four delta lines fixed at 15.01\,{\AA}, 15.26\,{\AA}, 16.78\,{\AA}, and 17.08\,{\AA}, which are the main Fe\,{\small XVII} transitions. We do not tabulate the significance of the Fe\,{\small XVII} lines because it is typically much larger than $5\sigma$.
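The quadrature combination of the two line significances can be sketched as follows. This is a minimal illustration with hypothetical single-line significances, not values from the actual fits; the 2.576 threshold is the two-sided Gaussian equivalent of the 99\% confidence level:

```python
import numpy as np

def combined_significance(sig_resonance, sig_forbidden):
    """Total significance of two independent line detections,
    added in quadrature (square root of the sum of squares)."""
    return np.hypot(sig_resonance, sig_forbidden)

# Hypothetical per-line significances for the 21.6 A resonance
# and 22.1 A forbidden O VII lines:
total = combined_significance(2.0, 2.1)
print(round(total, 2))   # 2.9
print(total > 2.576)     # True: exceeds the ~99% (2.58 sigma) threshold
```

Two individually marginal ($\sim2\sigma$) lines can therefore jointly pass the adopted 99\% detection threshold.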
The \textit{lpro} model in SPEX that corrects for the line broadening has an additional scale parameter \textit{s}, which allows the width of the spatial broadening profile to be rescaled by a free factor (see the SPEX manual). We therefore use one \textit{lpro} model to account for the spatial broadening in the \textit{cie} components that produce the high-temperature lines and another \textit{lpro} model to fit the spatial broadening of the low-temperature {O\,{\small VII}} and Fe\,{\small XVII} lines. Averaging over all objects in our sample, we find that the \textit{lpro} scale parameter of the cool gas is half of that measured for the hot gas. In Fig.~\ref{Fig:Spectra_ratios} we show the RGS spectra of three interesting sources: the Centaurus cluster, M\,84, and M\,89 (from top to bottom). Overlaid on the data are three spectral models: the baseline \textit{cie} model (thick black line), the delta-line model for the {O\,{\small VII}} and Fe\,{\small XVII} lines adopting the same spatial broadening as the \textit{cie} models (solid green line), and finally the {O\,{\small VII}} and Fe\,{\small XVII} lines with the spatial scale parameter \textit{s} free to vary (dashed red line). In order to better visualize the effect of the spatial broadening on the fit, we calculate the ratios between the best-fit delta-line models and the best-fitting \textit{cie} model and display them in the bottom panel of each figure. The color coding follows the top panel: the green line is the ratio between the {O\,{\small VII}}--Fe\,{\small XVII} delta model and the \textit{cie} components (with the same spatial broadening); the red line shows the same ratio but with a different spatial broadening. The Fe\,{\small XVII} lines appear clearly narrower than the hot-gas lines in the Centaurus cluster (A\,3526), even taking into account the slightly different thermal broadening, but there is no significant wavelength shift (in agreement with \citealt{Sanders2008}).
This suggests that the Fe\,{\small XVII} cool ($\sim6\times10^6$\,K) gas peaks in the central regions and has a smaller extent than the hot gas responsible for the {O\,{\small VIII}} line at 19\,{\AA} and the higher-ionization Fe\,{\small XX+} ($\gtrsim10^7$\,K) lines between $11-13$\,{\AA}. A similar trend is observed in Fornax, M\,49, M\,84, NGC\,4636, NGC\,5044, NGC\,5813, and Perseus. The M\,84 and M\,89 elliptical galaxies, whose spectra are dominated by the Fe\,{\small XVII} lines, show an {O\,{\small VII}} excess with respect to the two-phase \textit{cie} model. Interestingly, in M\,84 (and NGC\,5846) the {O\,{\small VII}} resonance line at 21.6\,{\AA} is in excess, while in M\,89 (and NGC\,4636) the excess is shown by the forbidden line at 22.1\,{\AA}. The quality of the spectra of the other objects is not good enough to detect {O\,{\small VII}} in excess of that already produced by the two-\textit{cie} model. A stronger forbidden line may indicate resonant scattering for both the {O\,{\small VII}} and Fe\,{\small XVII} lines, as we previously suggested in \citet{Pinto2014}. \begin{figure} \includegraphics[width=0.7\columnwidth, angle=90]{paper_ovii_fig01a.ps} \includegraphics[width=0.7\columnwidth, angle=90]{paper_ovii_fig01b.ps} \includegraphics[width=0.7\columnwidth, angle=90]{paper_ovii_fig01c.ps} \caption{From top to bottom: RGS spectra of the Centaurus cluster, M\,84, and M\,89. Three spectral models are overlaid: the 2-\textit{cie} model (thick black line), the delta-line Fe\,{\small XVII} model (thick green line), and the same model with a different spatial broadening (dashed red line). The bottom panels show the ratios between the Fe\,{\small XVII} line models and the 2-\textit{cie} model. The blue dotted lines show the $1\sigma$ uncertainties.} \label{Fig:Spectra_ratios} \end{figure} \begin{figure*} \includegraphics[width=1.5\columnwidth, angle=90, bb=65 80 530 680]{paper_ovii_fig02.ps} \caption{Fe\,{\small XVII} forbidden-to-resonance line ratio versus average temperature.
O\,{\small VII} detections are reported with red points. The dashed green line shows the best fit in log-log space for the objects below 1\,keV. The theoretical predictions from SPEX and Atomdb v3.0.2 are also shown. The objects above 1\,keV are grey-shaded because there is little Fe\,{\small XVII} at those average temperatures and most of it should be produced by a cooler phase. The small grey box shows the average Fe\,{\small XVII} ratio ($2.00\pm0.29$) of the elliptical galaxies below 1\,keV with O\,{\small VII} detected above the 99\% confidence level.} \label{Fig:Resonant_scattering} \vspace{-0.4cm} \end{figure*} \subsection{O\,{\small VII} versus turbulence} \label{sec:gas_turbulence} When turbulence is low, the resonance line can be optically thick; it is then absorbed and re-emitted in a random direction, with the line being suppressed towards the bright core and enhanced outside. This does not occur at high turbulence because of the energy shift of the transitions (see e.g. \citealt{Werner2009} and \citealt{dePlaa2012}). The forbidden lines have a smaller oscillator strength and are much less affected. It is therefore interesting to measure the Fe\,{\small XVII} line resonant scattering of the sources in our sample, which is an indicator of low turbulence, and to compare it with the O\,{\small VII} detections over the relevant temperature range. We have used the Fe\,{\small XVII} line fluxes measured in Sect.~\ref{sec:gas_location} to calculate the Fe\,{\small XVII} (f/r) line ratios for the models with a different spatial broadening between these lines and the hot gas, and quote the results in Table\,\ref{table:log}. In order to estimate an average temperature for each source, we re-fitted the RGS spectra with a single \textit{cie} component. The average temperatures estimated through these models are quoted in Table\,\ref{table:log}.
We plot the Fe\,{\small XVII} (f/r) line ratios versus the temperature in Fig.~\ref{Fig:Resonant_scattering}, with the red points showing the sources with an O\,{\small VII} detection above the 99\% confidence level. The point size scales with the average S/N ratio of the RGS spectra at 17\,{\AA}. We also show the Fe\,{\small XVII} line ratios as predicted by a thermal model without resonant scattering according to Atomdb v3.0.2 and SPEX, to visualize the strength of the resonant scattering in each source and the systematic uncertainties in the atomic data. Points below the theoretical predictions would be unphysical because it is difficult to strengthen only the resonance line in the line-of-sight towards the center of the galaxies (although charge exchange can slightly enhance the Fe\,{\small XVII} resonance line). All our Fe\,{\small XVII} (f/r) line ratios agree with the theoretical predictions or lie above the theoretical curves, indicating moderate resonant scattering and therefore low-to-mild (subsonic) turbulence. We fitted a straight line in log-log space to the objects with $T<1$\,keV and found a significant anti-correlation between the Fe\,{\small XVII} (f/r) line ratio and the average temperature (well above the $3\sigma$ confidence level, $p$-value $=0.00026$, slope $-0.79\pm0.18$; see the green dashed line in Fig.~\ref{Fig:Resonant_scattering}). This may indicate a decrease in optical depth, an increase in turbulence, or both. We caution against the comparison with the brightest cluster galaxies (BCGs), i.e. those in the A\,262, Centaurus, Fornax, Perseus, and Virgo clusters, because above 1\,keV the ICM becomes optically thin to the Fe\,{\small XVII} lines emitted by the cooler gas phases ($kT<0.9$\,keV) and therefore resonant scattering becomes insensitive to turbulence in the cores of these systems. All results are discussed in Sect.~\ref{sec:discussion}.
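Such a straight-line fit in log-log space can be sketched with numpy alone. The (kT, f/r) pairs below are illustrative stand-ins, not the measured sample values:

```python
import numpy as np

# Illustrative (kT, f/r) pairs; NOT the measured sample values.
kT = np.array([0.3, 0.4, 0.5, 0.6, 0.8, 0.9])   # average temperature, keV
fr = np.array([3.2, 2.6, 2.1, 1.9, 1.5, 1.4])   # Fe XVII (f/r) line ratio

x, y = np.log10(kT), np.log10(fr)
slope, intercept = np.polyfit(x, y, 1)          # straight line in log-log space
r = np.corrcoef(x, y)[0, 1]                     # Pearson correlation coefficient
print(f"slope = {slope:.2f}, r = {r:.2f}")
```

A negative slope with $|r|$ close to 1 would correspond to the anti-correlation described above; a full significance test (e.g. a $p$-value from the $t$-statistic) would require the sample size and scatter of the real measurements.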
\subsection{Systematic effects} \label{sec:systematics} {There are several systematic effects that may affect our results and their interpretation, such as the background subtraction, line blending, and the uncertainties in the atomic database.} {The model background spectra used throughout this work adopt long exposures of blank fields. This is a safe approach since any background contribution to the weak O\,{\small VII} or the strong Fe\,{\small XVII} lines would be smeared out into a continuum-like feature. For some bright and compact objects, such as NGC\,1316, NGC\,1404, and M\,89, we could extract a background spectrum in the outer regions of the RGS detector and match it with the model background spectrum. The spectra were comparable and no significant difference in the line ratios was found. We also tested a different continuum with a local (14.5--18.0\,{\AA}) fit using a power law and a few delta lines, obtaining larger statistical uncertainties but results consistent with the previous Fe\,{\small XVII} measurements. We tested a power-law continuum for the 19.5--22.5\,{\AA} range, obtaining similar O\,{\small VII} detections.} {We have also checked the effects of blending with other lines. The O\,{\small VII} resonance and forbidden lines are located in a rather clean spectral range, apart from the O\,{\small VI} and O\,{\small VII} intercombination lines. As mentioned in Sect.\,\ref{sec:intro}, there are only small amounts of O\,{\small VI} in these objects, as clearly shown by far-UV spectra. The stronger O\,{\small VI} line at 22.0\,{\AA} is also expected to be resolved by RGS owing to the smaller extent, and therefore smaller line broadening, of the cool O\,{\small VI-VII} phases. The O\,{\small VII} intercombination line is 5.5 times weaker than the resonance line and is also not expected to significantly affect our results. The Fe\,{\small XVII} resonance and forbidden lines are in a crowded spectral region, but they are much stronger than the neighboring lines.
We have artificially doubled the flux of the brightest neighboring lines, a change larger than their statistical uncertainties. The Fe\,{\small XVII} (f/r) line ratio was consistent with the standard measurements.} {The uncertainties in the atomic database do not affect our measurements of the line ratios, but they do affect the interpretation in terms of resonant scattering. There is a significant ($>20$\%) difference between the Fe\,{\small XVII} (f/r) line ratios predicted by AtomDB and SPEX. This means that we do not know the absolute value of the resonant scattering in our sources, which is crucial to estimate the absolute scale of the turbulence, but the relative differences between the line ratios measured in different objects should not be strongly affected.} \section{Discussion} \label{sec:discussion} In Sect.\,\ref{sec:search_ovii} we have searched for O\,{\small VII} ($\sim 2 \times 10^6$\,K) gas in a sample of 24 objects, including clusters and groups of galaxies and elliptical galaxies, with strong ($>5\sigma$) Fe\,{\small XVII} line emission. We have detected O\,{\small VII} above the 99\% confidence level in 11 sources {and shown that O\,{\small VII} is preferentially found in the cores of the sources, possibly following the distribution of the Fe\,{\small XVII} ($> 5 \times 10^6$\,K) gas. Exceptions are IC\,1459 and M\,89, where the lower count rate requires integrating photons over a larger region. For M\,86, NGC\,4636, and NGC\,5813 the O\,{\small VII} is also better detected in the wider slit, most likely owing to their more extended cool cores. In order to search for a link between cooling and turbulence, we have plotted the Fe\,{\small XVII} forbidden-to-resonance line ratio against the temperature, together with the significant O\,{\small VII} detections, in Fig.\,\ref{Fig:Resonant_scattering}. The high-quality data points show some evidence that O\,{\small VII} is mainly detected in sources with significant resonant scattering, which indicates a low level of turbulence.
Although our sample is incomplete and the resonant scattering is more sensitive at lower temperatures, our results are consistent with a picture where turbulence is heating the gas and preventing it from cooling below $\sim0.45$\,keV, where O\,{\small VII} line emission begins to be important. \subsection{O\,{\small VII} charge exchange or scattering?} \label{sec:ngc4636} At temperatures of 0.2--0.6\,keV the O\,{\small VII} resonance-to-forbidden line ratio is predicted to be between 1.25 and 1.35. We found (r/f) line ratios lower than 1.25 in the RGS spectra of NGC\,4636, M\,89, and NGC\,1404, as already shown in \citet{Pinto2014}. Our values could be due either to suppression of the resonance line via resonant scattering or to enhancement of the forbidden line by photoionization or charge exchange. The O\,{\small VII} resonance line at 21.6\,{\AA} may be subject to resonant scattering. At the temperature of $\sim0.5$\,keV, where the Fe\,{\small XVII} ionic concentration peaks, the O\,{\small VII} is optically thin and no longer self-absorbed along the line-of-sight. However, it is possible that the gas is distributed in various non-volume-filling phases at different temperatures. We multiplied the two \textit{cie} emission components by a collisionally-ionized absorption model (the \textit{hot} model in SPEX) and re-fitted the RGS spectrum of NGC\,4636. We obtained a column density of $1.24\pm0.30\times10^{20}\,{\rm cm}^{-2}$ with a temperature of $0.23\pm0.03$\,keV, which is lower than the $0.43\pm0.07$\,keV value measured for the \textit{cie} component responsible for the O\,{\small VII} emission. The presence of such cool gas is suggested by the detection of a large amount of H$\alpha$ emission in the core of NGC\,4636 \citep{Werner2014}. It is puzzling, however, that the absorbing gas is cooler than the emitting gas, since it should be located (on average) in outer regions where higher temperatures are expected, unless the cool gas is clumpy.
The astrophysical processes that strengthen the O\,{\small VII} forbidden line emission are photoionization and charge exchange. We can rule out photoionization because no bright AGN is observed in NGC\,4636. Charge exchange (CX hereafter) occurs when ions interact with neutral atoms or molecules; one or more electrons are transferred to the ion in an excited state, which then decays and emits a cascade of photons, increasing the forbidden-to-resonance ratios of the triplet transitions. This process is often observed in supernova remnants (e.g. Puppis A, \citealt{Katsuda2012}), starburst galaxies (e.g. M\,82, \citealt{Liu2011}), and colliding stellar winds (e.g. the Solar Wind, \citealt{Snowden2004}). The CX plasma code recently provided by \citet{Gu2016} is implemented in SPEX v3.00.00 (the \textit{cx} model). \citet{Gu2015} first used this code to successfully describe the unidentified 3.5\,keV feature in the lower-resolution CCD spectrum of the Perseus cluster. We re-fitted the NGC\,4636 spectrum with a new \textit{cie} (driven by the Fe\,{\small XVII-XVIII} lines) + \textit{cx} (mainly O\,{\small VII-VIII}, Ne\,{\small X}, and Mg\,{\small XI}) model, corrected for redshift and Galactic absorption, and obtain results comparable to those of the resonant-scattering (\textit{hot}) model described above. In the fit we exclude the $13.8-15.5$\,{\AA} spectral range because it contains several Fe\,{\small XVII} lines suppressed by resonant scattering, which would lead to a wrong estimate of the temperature. In Fig.\,\ref{Fig:NGC4636} we show the best fit with the contributions from the \textit{cie} and \textit{cx} components. Charge exchange provides a reasonable description of the O\,{\small VII} lines, produces significant O\,{\small VIII}, Ne\,{\small X}, and Mg\,{\small XI} emission, and accounts for $\sim10\%$ of the flux in the 0.3--2.0\,keV energy band.
\begin{figure} \includegraphics[width=0.7\columnwidth, angle=-90,bb=50 100 510 752]{paper_ovii_fig03.ps} \caption{NGC\,4636 RGS first-order spectrum with the hybrid model consisting of isothermal and charge-exchange components.} \label{Fig:NGC4636} \end{figure} The ionic temperature of the \textit{cx} component was coupled to the $\sim0.7$\,keV temperature of the \textit{cie} component. When left free to vary, it provides a better fit with $T_{\rm ion}=0.40\pm0.05$\,keV, in agreement with the 2-\textit{cie} model, which may suggest that the charge exchange is occurring between neutrals and the cooler O\,{\small VII} phase rather than the hotter gas phase associated with the Fe\,{\small XVII} lines. This may indicate that the cool O\,{\small VII} gas is a better tracer of the cold neutral phase and that the two could be somewhat cospatial, both distributed in clumps. The CX code calculates velocity-dependent rates, from which we measure a collision velocity lower than 50\,km\,s$^{-1}$ (at the 68\% level), in agreement with the low turbulence found in NGC\,4636 \citep{Werner2009}. This is the first time that a charge-exchange model is successfully applied to a high-resolution X-ray spectrum of a giant elliptical galaxy. \subsection{Resonant scattering in Perseus?} In Fig.\,\ref{Fig:Resonant_scattering} we have shown that the Perseus cluster has an unexpectedly high ($4\pm2$) Fe\,{\small XVII} (f/r) line ratio. The spectrum extracted within a larger region of width $\sim0.8'$ (see Fig.\,\ref{Fig:Perseus}) yields much smaller error bars and constrains Fe\,{\small XVII} (f/r) $\geq4$. This value is higher than that measured in any other object and remarkable when compared to the other clusters (A\,262, Centaurus, Fornax, and Virgo). The inner core of the Perseus cluster is dominated by a hot $\sim3$\,keV plasma, but it has been clearly shown to be multiphase, with the inner arcminute ($\sim20$\,kpc) having significant emission from 0.5--4\,keV gas \citep[see e.g.][]{Sanders2007}.
Below 1\,keV and in a low-turbulence regime, the 15\,{\AA} resonance line is optically thick and may therefore be subject to resonant scattering along the line-of-sight. We have therefore re-fitted the Perseus spectrum, multiplying the two thermal components by a collisionally-ionized absorption model (the \textit{hot} model) to test the suppression of the Fe\,{\small XVII} resonance line (as previously done for the O\,{\small VII} lines in NGC\,4636 in Sect.\,\ref{sec:ngc4636}). We have ignored the first-order spectra between 10 and 14\,{\AA} due to high pileup and use the second-order RGS\,1 and 2 spectra because they are not significantly affected by pileup and their statistics peak in this wavelength range. This model reasonably describes the 15--17\,{\AA} Fe\,{\small XVII} lines (see Fig.\,\ref{Fig:Perseus}) and provides a column density of $\sim2\times10^{20}\,{\rm cm}^{-2}$ and a temperature of $\sim0.6$\,keV. \citet{Fabian2015} suggested that high-resolution X-ray spectra make it possible to search for evidence of ICM absorption of the AGN continuum in NGC\,1275, the brightest cluster galaxy in Perseus, with a focus on the hard X-ray band where the Fe\,K lines dominate. We have tested the same approach in the soft RGS band by applying the \textit{hot} absorption model only to the nucleus, which was fitted with a power law; the two \textit{cie} emission-line components are then only absorbed by the Galactic neutral ISM. This AGN-only absorption model is statistically indistinguishable from the previous one, with $\Delta\chi^2$ and $\Delta$C-stat of 6 for 1948 degrees of freedom, but a column density of $\sim1.5\times10^{21}\,{\rm cm}^{-2}$ is required, in good agreement with the predictions of \citet{Fabian2015}.
{If the suppression of the 15\,{\AA} Fe\,{\small XVII} resonance line and the detection of absorption are interpreted as resonant scattering, which is a very likely scenario, then this means that the cool gas in Perseus is characterized by low turbulence.} \begin{figure} \includegraphics[width=0.7\columnwidth, angle=-90,bb=80 63 540 675]{paper_ovii_fig04.ps} \caption{Perseus RGS first-order spectrum with the multiphase thermal emission model absorbed by isothermal gas at 0.6\,keV.} \label{Fig:Perseus} \end{figure} \section{Conclusions} \label{sec:conclusion} In this work we have confirmed and extended our previous discovery of O\,{\small VII} emission lines in the spectra of elliptical galaxies as well as groups and clusters of galaxies. This is the coolest X-ray emitting intracluster gas and seems to be connected to the slightly hotter Fe\,{\small XVII} gas, being preferentially located on small (1--10\,kpc) scales. The O\,{\small VII} is often detected in objects with strong resonant scattering of photons in the Fe\,{\small XVII} lines, indicating {subsonic} turbulence. This would be consistent with a scenario where cooling is suppressed by turbulence, in agreement with models of AGN feedback, gas sloshing, and galactic mergers. {We note that a larger sample of sources, and consequently more observations, is needed to better disentangle resonant-scattering effects due to temperature and turbulence; the current sample is incomplete.} In some objects the O\,{\small VII} resonance line is weaker than the forbidden line, either due to resonant scattering or to charge-exchange processes occurring in the gas, as we have shown for NGC\,4636. The Perseus cluster shows an anomalously high Fe\,{\small XVII} forbidden-to-resonance line ratio, which can be explained by resonant scattering by cool gas in the line-of-sight under a regime of low turbulence. In two forthcoming papers (Ogorzalek et al., Pinto et al.)
we will compare the measurements of the Fe\,{\small XVII} line ratios with those predicted by theoretical models of resonant scattering that take into account the thermodynamic properties of these objects, in order to estimate the turbulence in both their cores and their outskirts. This will provide further insights into the link between cooling, turbulence, and the phenomena of AGN feedback, sloshing, and mergers occurring in clusters and groups of galaxies. \section*{Acknowledgments} This work is based on observations obtained with XMM-\textit{Newton}, an ESA science mission funded by the ESA Member States and the USA (NASA). We also acknowledge support from ERC Advanced Grant Feedback 340442 and new data from the awarded XMM-\textit{Newton} proposal ID 0760870101. Y.Y.Z. acknowledges support by the German BMWi through the Verbundforschung under grant 50OR1506. \bibliographystyle{aa}
\section{Introduction} The Helmholtz free energy is one of the most widely used thermodynamic state functions because, for a system of $N$ particles in a fixed volume $V$ at a temperature $T$, the Helmholtz free energy $F$ must be at a minimum when the system has reached equilibrium. Computing free energies is therefore important: it allows us to predict the relative stabilities of different states (e.g. phases) of a system. In thermodynamics, the free-energy difference between two states of a system is related to the reversible work required to bring the system from one state (say $A$) to the other ($B$). The work expended during an {\em irreversible} transformation from $A$ to $B$ is larger than the reversible work, and is therefore not a good measure for the free-energy change. It was therefore a great surprise when Jarzynski~\cite{jarzynski1997equilibrium,jarzynski1997nonequilibrium} showed that the free-energy difference between two systems can be related to the non-equilibrium work ($W$) required to transform one system into the other in an arbitrarily short time \begin{equation}\label{eq:JR} \exp(-\beta \Delta F) = \overline{\exp(-\beta W(t_s))}\;, \end{equation} where $\beta = \frac{1}{k_BT}$, and the bar over $\exp(-\beta W(t_s))$ denotes averaging over a ``sufficiently large'' number of independent simulations: the term ``sufficiently large'' is necessarily vague because we do not know {\em a priori} how much averaging will be needed for eqn.~\ref{eq:JR} to hold. Jarzynski's result stimulated much theoretical work, in particular by Crooks~\cite{crooks1999entropy,crooks2000path,crooks1998nonequilibrium}, who generalized Jarzynski's approach. Moreover, many experiments and simulations have been reported that validated Eqn.~\ref{eq:JR}~ \cite{collin2005verification,douarche2005experimental,toyabe2010experimental}. 
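For Gaussian-distributed work, Eqn.~\ref{eq:JR} has the closed form $\Delta F = \overline{W} - \beta\sigma_W^2/2$, which allows a quick numerical check. The following sketch uses toy numbers ($\beta = 1$; it is not part of the simulations described in this Communication) and illustrates both the exponential averaging and the inequality $\Delta F \leq \overline{W}$:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0
mean_W, sigma = 2.0, 1.0                   # toy Gaussian work distribution
W = rng.normal(mean_W, sigma, 2_000_000)   # "work" from many independent runs

# Jarzynski estimator: exponential average of the work samples.
dF_jar = -np.log(np.mean(np.exp(-beta * W))) / beta
dF_exact = mean_W - beta * sigma**2 / 2    # closed form for Gaussian work

print(round(dF_jar, 2), dF_exact)          # both close to 1.5
```

Note that $\Delta F$ lies below the mean work $\overline{W} = 2$, as required by the second law; for broader work distributions the exponential average converges much more slowly, which is the practical difficulty with Eqn.~\ref{eq:JR}.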
However, in spite of its great conceptual value, it seems that the Jarzynski method is not a more accurate or more efficient way to compute free-energy differences than the standard, reversible thermodynamic integration methods~\cite{dellago2014computing,lechner2007efficiency,oberhofer2009efficient,geiger2010optimum}, in situations where such methods can be used. Here we focus on a problem where thermodynamic integration cannot be used, namely estimating the free energy of a glass. Glasses are non-equilibrium systems that do not relax on experimentally accessible time-scales (see e.g. \cite{angell1995formation,debenedetti2001supercooled}). It is for this reason that the free energy of a glass cannot be determined by thermodynamic integration: one might even argue that the equilibrium free energy of a glass is an oxymoron. However, as has been demonstrated for instance in simulations of polydisperse glasses, \textcolor{black}{it is sometimes possible to equilibrate glassy structures numerically, using so-called ``swap'' moves~\cite{ber171}. Such an approach will only work for systems where the acceptance of such moves is sufficiently high~\cite{ADSparmar}. Here we consider the free energy of a glass for which swap moves are inefficient and the approach of ref.~\onlinecite{ber171} will not work.} In this Communication, we show that for glasses where the approach of ref.~\cite{ber171} will not work, the Jarzynski method yields much lower estimates for the free energy of a glass than the thermodynamic integration method, and thereby provides an interesting way of estimating the free energy of systems that cannot relax to equilibrium. Knowledge of the equilibrium free energy of a glass can be of practical use, for instance for estimating a lower bound to the solubility of a glass. There is, however, a problem: validating our approach against exact free energies is not possible for the widely used Kob-Andersen glass model system that we study~\cite{kob1995testing}.
Hence, as a proxy, we will test which approach yields the lower free-energy estimate, and we will also compare our results with a naive extrapolation of the free energy of a supercooled liquid. For equilibrium systems, the free-energy change of a system upon cooling from a temperature $T_H$ to a lower temperature $T_L$ can be obtained by thermodynamic integration (TI): \begin{equation}\label{eq:delf_TI} \beta_L F(T_L) - \beta_H F(T_H) = \int_{\beta_H}^{\beta_L} d\beta \langle U(\beta) \rangle \end{equation} We prepare glassy structures by quenching equilibrium liquid configurations from a temperature $T_H$ with cooling rate $C_r$, and we obtain the free energy of the glasses by computing the average work done during this process. The relation between cooling and work is discussed in the Supplementary Material (SM). We can then rewrite Eqn.~\ref{eq:delf_TI} as follows: \begin{equation}\label{eq:delf_JR} \beta_L F(T_L)= \beta_H F(T_H) - \ln \left\langle e^{-[\beta_Lf^{n}(T_L) -\beta_Hf^{n}(T_H)]}\right\rangle_{C_r} \end{equation} The difference \begin{equation}\label{Delta_f} \beta_Lf^{n}(T_L) -\beta_Hf^{n}(T_H)\equiv \Delta \left(\frac{f^{n}}{T}\right) \end{equation} denotes the non-equilibrium work required to change the state of the system within a finite switching time (or, equivalently, at a finite cooling rate) in a single cooling run. In Eq.~\ref{Delta_f}, $n$ labels the $n^{\rm th}$ cooling run. $\Delta \left(\frac{f^{n}}{T}\right)$ is evaluated by computing the potential energy $E^{n}$ as a function of the inverse temperature $\beta$: \begin{equation}\label{eq:fb_JR} \beta_Lf^{n}(T_L) = \beta_Hf^{n}(T_H) + \int_{\beta_H}^{\beta_L}d\beta\ E^{n}(\beta) \end{equation} As explained in the SM, we can recast Eq.~\ref{eq:delf_JR} as the effect of scaling the potential energy $U$ rather than the effect of changing the temperature.
That is, $\beta F(T)$ for the original temperature but with potential energy function $\lambda U$ has the same value as $\beta^\prime F(T^\prime)$ for the original potential energy function but temperature $T^\prime = T/\lambda$. In our calculations, we compute the variation of $\beta F$ as we change the potential energy at constant $\beta$ from $U$ to $\lambda U$. To estimate the free-energy difference with the Jarzynski relation (JR), we compute \begin{equation}\label{eq:FU_JR} \beta [F(T;\lambda U) - F(T;U)] = -\ln\langle e^{-\beta\int_1^\lambda d\lambda^\prime \; \overline{U}_{\lambda^\prime} }\rangle , \end{equation} \textcolor{black}{where $\langle...\rangle$ denotes averaging over all independent slow-cooling runs, whereas $\overline{U}_{\lambda^\prime}$ denotes the average of $U_{\lambda^\prime}$ during a single cooling run.}\\ For our free-energy calculations, we use a well-studied glassy system that can be prepared by slow cooling~\cite{westergren2007silico}, namely the Kob-Andersen (KA) binary Lennard-Jones model glass former~\cite{kob1995testing,sastry2001relationship,sengupta2011dependence}. We simulated $N$=256 ($N_A$=204, $N_B$=52) bi-disperse spheres, an 80-20 (A-B) mixture, interacting via $V(r) = 4\epsilon_{\alpha\beta} \left[ \left(\frac{\sigma_{\alpha\beta}}{r}\right)^{12} - \left(\frac{\sigma_{\alpha\beta}}{r}\right)^6 \right] + 4\epsilon_{\alpha\beta} \left[c_0 + c_2 \left(\frac{r}{\sigma_{\alpha\beta}}\right)^2\right]$, for $r_{\alpha\beta} < r_c$, and zero otherwise, where $r$ denotes the distance between the two particles within the cutoff distance \cite{sengupta2011dependence}. We used the standard KA parameter values $\sigma_{AA} = 1.0$, $\sigma_{AB} = 0.8$, $\sigma_{BB} = 0.88$, $r_c = 2.5\,\sigma_{\alpha\beta}$, $\epsilon_{AA} = 1.0$, $\epsilon_{AB} = 1.5$, $\epsilon_{BB} = 0.5$.
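As a minimal sketch (not the simulation code itself), the pair potential above can be written as a small Python function; the $c_0$ and $c_2$ values are the ones quoted in the text, chosen so that both the potential and the force vanish continuously at the cutoff:

```python
# Truncated Kob-Andersen pair potential with the quadratic shift
# 4*eps*(c0 + c2*(r/sigma)**2); AA-pair parameters in reduced units.
SIG, EPS = 1.0, 1.0
RC = 2.5 * SIG
C0, C2 = 0.01626656, -0.001949974   # make V and dV/dr vanish at r = RC

def v_ka(r, sig=SIG, eps=EPS, rc=RC):
    """Modified KA Lennard-Jones pair potential; zero beyond the cutoff."""
    if r >= rc:
        return 0.0
    s6 = (sig / r) ** 6
    lj = 4.0 * eps * (s6 * s6 - s6)                       # plain 12-6 term
    shift = 4.0 * eps * (C0 + C2 * (r / sig) ** 2)        # quadratic correction
    return lj + shift

# The correction makes the potential go to zero continuously at the cutoff:
print(abs(v_ka(RC - 1e-9)) < 1e-6)   # True
```

The analogous AB and BB functions would only differ in the $\sigma_{\alpha\beta}$, $\epsilon_{\alpha\beta}$ values listed above.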
$c_0 = 0.01626656$ and $c_2 = -0.001949974$ are fixed by the condition that the potential and the force go to zero continuously at the cutoff distance $r_c$. In what follows, all thermodynamic quantities are expressed in reduced units: $\sigma_{AA}$ is our unit of length, the unit of energy is $\epsilon_{AA}$, $m_{A}=m_{B}=1$ is defined as the unit of mass, and the reduced temperature is expressed in units of $\frac{\epsilon_{AA}}{k_B}$, where $k_B$ is Boltzmann's constant. Below, we report the excess free energy of the system, as the ideal-gas part can be computed analytically. We performed NVT MC simulations at different cooling rates. Starting with equilibrium liquid configurations at $T=0.5$ for $N=256$, we performed stepwise cooling runs to a final temperature of $T$=0.1, in steps of $\Delta T$ = 0.1. The ``duration'' of a single cooling step, $\Delta t$, is the number of Monte Carlo cycles that the system spends at any given temperature. We define the cooling rate $C_{\text{r}}$ as $\Delta T/\Delta t$. For instance, for $C_{\text{r}} = 10^{-6}$ we perform $\Delta t = 10^5$ MC cycles at a given temperature. Each MC cycle comprises $N$ trial displacement moves. To obtain good statistics, we performed $1000$ independent simulation runs for $N=256$ and used a cooling rate $C_r$ = $10^{-6}$. \\ \begin{figure}[h!] \includegraphics[scale=0.4]{Fex_temps_TIJR_diff.pdf} \includegraphics[scale=0.4,angle=0]{./intene_diff_KA_TI_crate_N256} \caption{\label{Fex_JRTI} {\bf(a)} The difference in the excess free energy obtained using the Jarzynski relation (JR) and the thermodynamic integration (TI) method. We show the data for $N$=256 and $C_r$ = $10^{-6}$; the number of samples is $10^3$. \textcolor{black}{Panel {\bf(b)} shows the scatter in the values of the work performed during different cooling runs. This wide scatter is typical for glassy systems, and would not be observed in systems that equilibrate on the timescale of the simulations.
It is this scatter that makes it necessary to use the Jarzynski relation, rather than thermodynamic integration (see text).}} \end{figure} For each cooling run, we use a quadratic fit form ($U = a_0+a_1 T + a_2 T^2$) to fit the $T$-dependence of the average potential energy. The values of $a_0$, $a_1$ and $a_2$ are different for each cooling run. We then use the TI expression, Eq. \ref{eq:fb_JR}, to compute the difference $\Delta \left(\frac{f^{n}}{T}\right)$ for each run and, using Eq. \ref{eq:delf_JR} and averaging over different runs, we obtain the JR estimates of $F_{\text{JR}}$, the free energy of the low-temperature glass, down to a temperature $T$=0.1. For the sake of comparison, we also use the TI method (Eq.~\ref{eq:delf_TI}) to estimate the free energies of the glasses ($F_{\text{TI}}$) at temperatures down to $T$=0.1. Note that we use exactly the same simulation data for the JR and TI estimates (see the SM for more details). In Fig. \ref{Fex_JRTI}, we show the difference in the free energy per particle in units of $k_B T$ obtained using TI and JR, for glasses, starting at $T_H=0.5$. We note that, as the system is cooled slowly, a cooling run that is started at a higher temperature will traverse all lower temperatures and will therefore, if anything, equilibrate better than a cooling run started at a lower temperature. We observe that at $T=0.1$, the TI estimate of the free energy per particle is about 0.5 $k_BT$ higher than the estimate obtained using the Jarzynski relation (see Fig. \ref{Fex_JRTI}a). \textcolor{black}{Both free-energy estimates are based on the same raw data: the only difference is how we analyse the data.\\ At low temperatures, the thermodynamic properties of the system are dominated by low-energy inherent structures. If we perform many TI runs, some runs will sample lower-energy glassy states than others.
The result of the TI procedure is an unweighted average over all these runs; in the Jarzynski approach, by contrast, the results are strongly dominated by those runs that, upon cooling, end up in low-energy inherent structures. Fig.~\ref{Fex_JRTI}b shows the difference between the work done for different cooling runs and the average work done in the TI runs. Of course, as the sampling of low-energy states is not exhaustive, in particular at lower temperatures than considered in our study, it is to be expected that even the Jarzynski relation will eventually (at sufficiently low temperatures) yield an overestimate of the free energy of the glass. In a non-glassy system, all TI runs will give the same result (apart from statistical fluctuations), and the JR and TI estimates should agree.} In section 3 of the SM, we also show that the Jarzynski method yields lower estimates of the free energy than the so-called basin volume method of ref.~\cite{vinutha2020numerical}. \begin{figure}[h!] \includegraphics[scale=0.4]{dmuex_kafluid_qcut_bv_T0p1_ebar.pdf} \caption{\label{delmu} The difference in the chemical potential of the two components computed using thermodynamic integration (TI) for the equilibrated liquid configurations at temperatures $T=1.0$ to $0.5$. The dashed line is a linear fit to the supercooled-liquid data. $\Delta \mu$ is computed using the Jarzynski relation (stars) and TI (checkered circles) for glassy configurations between $T = 0.1$ and $0.4$. The average $\Delta \mu$ computed using TI for low-$T$ glasses is close to the average $\Delta \mu$ of the liquid configurations at $T_H=0.6$ (horizontal bold line). The error bars correspond to the standard deviation of the $\Delta \mu$ values.} \end{figure} As a separate test of the method, we also compute the chemical potential difference between the components of the KA glass, using the Jarzynski relation.
In the case of dense KA liquids, computational techniques to probe the chemical potential, such as the Widom particle-insertion method or even the Widom particle-swap method~\cite{frenkel2001understanding}, will not work in practice. Rather, we used a method in which we gradually transform the interaction potential of a particle from type $B$ to type $A$~\cite{mon1985chemical} and performed thermodynamic integration to estimate the chemical potential difference $\Delta \mu$ \cite{vinutha2021computation}. For a system in equilibrium, this method can be used at any density and temperature. In our case, below $T=0.5$, the system no longer relaxes, even during the longest simulation runs. In the range $T=0.1-0.4$, we therefore use the Jarzynski relation to estimate $\Delta \mu$. For the low-$T$ glasses, we performed NVT MC simulations starting with initial configurations in one of the basins obtained by quenching liquid configurations at $T_H = 0.6$. We obtain the non-equilibrium work $W(t_s)$ needed to transform a B-type particle into an A-type particle from the thermodynamic integration of the interaction parameters \cite{vinutha2021computation}. For a glassy configuration, we pick different B-type particles and transform each into an A-type particle to obtain the average work done. We can average over different low-$T$ glass configurations and compute $\overline{\exp(-\beta W(t_s))}$. Then, from Eq. \ref{eq:JR}, we obtain an estimate of $\Delta F$ for the glass configurations. The number of samples is $97, 67, 66, 68$ for temperatures $T = 0.1, 0.2, 0.3, 0.4$, respectively. Again, Fig.~\ref{delmu} shows that the chemical potential differences estimated using the thermodynamic integration method exhibit a wide scatter, with an average value well above the extrapolated value of the supercooled liquid \cite{vinutha2021computation}. In contrast, the Jarzynski estimate matches the extrapolated value of the supercooled liquid well~\cite{note}.
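The contrast between the unweighted (TI-like) average of the per-configuration work and the exponentially weighted Jarzynski average, $\Delta\mu_{\rm JR} = -k_BT\ln\overline{\exp(-\beta W)}$, can be sketched in a few lines (function and variable names are our own illustration):

```python
import numpy as np

def delta_mu_estimates(beta, work_per_config):
    """Return (TI-like, JR) estimates of Delta mu from the same data.

    `work_per_config[i]` is the average work W(t_s) needed to turn a
    B-type particle into an A-type particle in glass configuration i
    (hypothetical input array).
    """
    w = np.asarray(work_per_config, dtype=float)
    dmu_ti = w.mean()  # unweighted average, as in thermodynamic integration
    x = -beta * w
    xmax = x.max()
    # JR estimate: -(1/beta) * ln< exp(-beta*W) >, evaluated stably
    dmu_jr = -(xmax + np.log(np.mean(np.exp(x - xmax)))) / beta
    return dmu_ti, dmu_jr
```

For scattered work values the JR estimate lies below the TI average, which is the behavior seen in Fig.~\ref{delmu}.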
\textcolor{black}{We also computed the average $\Delta \mu$ for glasses at $T=0.1$ starting with $T_H=1.0$. Preliminary results suggested that the average $\Delta \mu$ obtained using JR is independent of $T_H$. Additional simulations are required to study systematically the dependence of $F_{JR}$ on the cooling protocol used to obtain low-$T$ glasses.} \\\\ We have shown that the Jarzynski method offers a powerful tool to estimate the equilibrium properties of glasses. We illustrate this by using the non-equilibrium free energy expression due to Jarzynski to compute equilibrium free energies for glassy structures obtained using different cooling rates, and the chemical potential difference between the components of the Kob-Andersen glass. The present results are of broader interest because, to our knowledge, there are no earlier examples where the Jarzynski method massively outperforms conventional free-energy calculation methods. We expect that our work will provide a new tool to probe the physics of amorphous solids, such as gels, glasses and jammed packings, prepared using different non-equilibrium protocols. \section*{Supplementary Material} In the supplementary material (SM), we present supporting data and discussion. SM section 1 shows that cooling can be interpreted as work; section 2 presents the work distributions; section 3 compares the Jarzynski and basin volume methods. \begin{acknowledgments} We gratefully acknowledge the funding by the International Young Scientist Fellowship of the Institute of Physics (IoP), Chinese Academy of Sciences, under Grant No. 2018008. We gratefully acknowledge IoP and the University of Cambridge for computational resources and support. \end{acknowledgments} \section*{Data availability} The data that support the findings of this study are openly available in the University of Cambridge repository at https://www.repository.cam.ac.uk/.
\\ \section{Cooling as work} In the main text, we apply the Jarzynski relation to estimate the free energy change upon cooling. This may seem strange, because changing the temperature is achieved by heat transfer, rather than by work. Here we show that the free energy change upon cooling may be interpreted in terms of work. Consider a system with a potential energy function $U(x)$, where $x$ denotes the set of $dN$ coordinates. Then the configurational part of the free energy is: \begin{equation} F(T)=-k_BT\ln \int d x\; e^{-\beta U(x)} \end{equation} We wish to estimate $F(T^\prime)$ with $T^\prime=T/\lambda$. Clearly, \begin{equation}\label{eq:JR} F(T^\prime)=-k_BT^\prime\ln \int d x\; e^{-(\beta \lambda) U(x)} \end{equation} We write this as \begin{equation} \beta^\prime F(T^\prime;U)= - \ln \int d x\; e^{-(\beta \lambda) U(x)}= - \ln \int d x\; e^{-\beta (\lambda U(x))} = \beta F(T;\lambda U) \end{equation} In other words, the scaled free energy of a system at the original temperature but with potential energy function $\lambda U(x)$ has the same value as the scaled free energy $\beta^\prime F(T^\prime)$ for the original potential energy function, but at temperature $T^\prime=T/\lambda$. For the calculation, we compute the variation of $\beta F$, as we change the potential energy at constant $\beta$ from $U(x)$ to $\lambda U(x)$. This transformation can be viewed as mechanical work. In practice, we compute \begin{equation} \beta [F(T;\lambda U) - F(T;U)] = -\ln\langle e^{-\beta\int_1^\lambda d\lambda^\prime \; \overline{U}_{\lambda^\prime} }\rangle, \end{equation} \textcolor{black}{where $\langle...\rangle$ denotes averaging over all independent cooling runs, and $\overline{U}_{\lambda^\prime}$ denotes the average of $U_{\lambda^\prime}$ during a single cooling run.} \section{\textcolor{black}{Work distributions}} \begin{figure}[h!] 
\includegraphics[scale=0.4,angle=0]{./cumudistri_work_temps_N256} \includegraphics[scale=0.4,angle=0]{./cumu_dmudistr_B_T0p14} \caption{\label{work} \textcolor{black}{ {\bf (a)} Cumulative work distributions, shown for different temperatures, for the cooling-run data. {\bf(b)} Cumulative distribution of $\Delta \mu$ as a function of $T$. Each data point corresponds to the work done to transform a B-type particle into an A-type particle at that temperature.}} \end{figure} \textcolor{black}{In Fig. \ref{work}, we show the cumulative work distributions for the cooling runs at different temperatures and for the $\Delta \mu$ calculations. In Fig. \ref{work}(a), we show the difference between the work done for different cooling runs and the average work $\int d\beta\, U$. In Fig. \ref{work}(b), we show the work needed to transform different B-type particles into A-type particles for different low-$T$ glass configurations. We observe that the distributions become broader as the temperature decreases. We know that the low-energy inherent structures are dominant at low temperatures, and the exponential weighting in Eq.~\ref{eq:JR} biases the average towards the lower-energy states. Therefore, JR yields lower free-energy estimates than TI. However, it is surprising that even at $T=0.1$, which is well below the glass transition temperature $T_g \approx 0.34$ [16], the limited sampling still yields reasonable estimates of the equilibrium free energies of glasses and an average $\Delta \mu$ that matches the extrapolated supercooled value.} \section{Comparison between the Jarzynski and basin volume method} \begin{figure*}[h!] \centering \includegraphics[scale=0.35]{Fex_temps_TI_JY_SC} \includegraphics[scale=0.35]{./Fex_temps_TIBVJR_diff} \includegraphics[scale=0.35]{./cumudistri_JRBV_IS_N256_zoom} \caption{\label{JRBV} {\bf a} Comparison between the low-$T$ free energy obtained using TI, the basin volume (BV) method and JR.
We can see the difference clearly in {\bf b}. We observe that the estimates from JR are more accurate than those of the BV method. {\bf c} Cumulative distribution of the inherent structure energies ($e_{IS}$), shown for $T_H = 0.5$ and for the cooling runs at $T=0.1$.} \end{figure*} Here, we compare the Jarzynski method (JR) with the recently developed basin volume (BV) technique for estimating the equilibrium free energy of glassy materials [22]. In Fig. \ref{JRBV}(a), we compare the free energies computed using the different methods for low-$T$ glasses. We see in Fig. \ref{JRBV}(b) that the basin volume method performs better than TI, but the Jarzynski method outperforms both the BV and TI methods. In the BV method, we perform more than $900$ instantaneous quenches from the liquid configurations at $T_{\text{H}}=0.5$ using the conjugate gradient minimization method. The initial liquid configurations of the quench are obtained by Monte Carlo sampling and are therefore Boltzmann weighted. Since we know the free energy at the high temperature, $F(T_H)$, we can obtain the configurational free energy of low-temperature ($T_L$) glasses by performing a large number of thermodynamic integrations in which we cool the system, confined to a given basin, from $T_H$ to $T_L$; see Ref. 22 for more details. For the cooling runs, we use the same set of initial configurations at $T_H$. To understand the difference between the free energy estimates of the two methods, we examined the cumulative distribution of the inherent structure energies. In Fig. \ref{JRBV}(c), for $T=0.1$, $C_r = 10^{-6}$, we see that the system samples lower-energy inherent structures than the distribution at $T_H = 0.5$. The reason is that, whereas in the basin volume method the system is constrained to remain in a single basin during cooling, transitions to lower-energy basins are possible during the Jarzynski cooling protocol.
As a consequence, the Jarzynski method achieves better sampling of the low-energy inherent structures of the system. But, of course, the Jarzynski method is also approximate. Hence, we should expect the true equilibrium free energy of the glassy system to be even lower than the Jarzynski estimate. This observation also suggests that, with the Jarzynski approach, it is better to carry out many simulations on a small system rather than fewer on a large system: for TI we would not see much difference in accuracy, but the smaller the number of runs, the smaller the chance of sampling the relevant low-energy structures. Indeed, we found that, for the same computational effort, the Jarzynski method yielded significantly higher free-energy estimates for a system of $N=1000$ particles than for $N=256$. \textcolor{black}{Presumably, with sufficient sampling, the free energy estimates for larger systems would match the $N=256$ results. In view of the high computational costs, we did not attempt long simulations of the $N=1000$ system.} \begin{figure}[h!] \includegraphics[scale=0.43]{PEvsT_Ns_coolr} \includegraphics[scale=0.43]{Fex_temps_TI_JY_SC_N1k} \caption{\label{TI_1k}{\bf a} The average potential energy as a function of $T$, shown for $N=1000$, $C_r = 10^{-7}$, $T_H=1.0$. We use a quadratic fit for the data. In the inset, we show the potential energy data for $N=256$, $C_r =10^{-6}$, $T_{\text{H}}=0.5$. {\bf b} The excess free energy as a function of temperature obtained using thermodynamic integration (TI) and the Jarzynski method (JR).} \end{figure} In Fig. \ref{TI_1k}a, we show the average potential energy as a function of $T$, obtained from $100$ and $1000$ samples for $N=1000$ and $256$, respectively. We use the quadratic fit and then perform thermodynamic integration, see Eq.(2), to obtain the free energy for the cooling runs. In Fig. \ref{TI_1k}b, we also show the free energy obtained using the Jarzynski method. We report the difference in the free energies in Fig.1(b) of the paper. \end{document}
\section{Introduction} A distinguishing feature of non-equilibrium states is the presence of currents \cite{{ligget85},{schmittmann95},{schutz00}}. Fluctuations of currents often exhibit universal behavior \cite{Prahofer2000} and shed light on the nature of non-equilibrium systems. Current fluctuations in classical systems have been extensively investigated, see e.g. \cite{Ferrari1994, Derrida1998, Lebowitz1999, Johansson2000, Prahofer2000, bodineau04, harris05, Sasamoto2005a, Ferrari2006, Rakos2006a, PK} and a review \cite{Derrida2007}. The total current grows linearly with time and current fluctuations usually exhibit an algebraic growth. Quantum fluctuations in general, and spin current fluctuations in particular, are much less understood; even in the simplest systems quantum fluctuations often behave very differently from standard statistical fluctuations (see e.g. \cite{YC}). In this paper we study current fluctuations in quantum spin chains. How to impose currents in spin chains? Perhaps the simplest way is to start with a spin chain in the following inhomogeneous product state \begin{equation} \label{initial} |\cdots \uparrow\uparrow\uparrow\downarrow\downarrow\downarrow\cdots\rangle \end{equation} This choice \cite{spin99} allows one to avoid complications and arbitrariness of coupling the chain to spin reservoirs. State \eqref{initial} evolves according to the Heisenberg equations of motion. The average magnetization profile has been computed for the simplest quantum chains, e.g. for the $XX$ model where the perturbed region was found to grow ballistically \cite{spin99}. Numerical works \cite{gobert05} suggest that the growth is also ballistic for the $XXZ$ chain in the critical region (described by Hamiltonian \eqref{hami} with $|\Delta|<1$). The main goal of this paper is to study the fluctuations of spin current in quantum chains. 
After a brief description of the model in section \ref{model}, in section \ref{hidro} we present a simple derivation of the asymptotic properties of non-equilibrium states in free fermion systems with special initial conditions. In section \ref{fluct} we probe fluctuations of the current, specifically in the $XX$ spin chain. First we present a back-of-the-envelope calculation for the variance of the time-integrated current; then we establish an exact result from which we extract the long-time asymptotic behavior. We discuss the more general $XXZ$ chain in section \ref{XXZ}. A summary of our results and their relation to other work is given in section \ref{discussion}. \section{Model} \label{model} Most generally, we consider the quantum $XXZ$ Heisenberg spin chain with Hamiltonian \begin{equation} \label{hami} \mathcal{H}=-\sum_{n} \left(s^x_n s^x_{n+1} + s^y_n s^y_{n+1} + \Delta s^z_n s^z_{n+1}\right) \end{equation} Here we set the coupling constants to unity in the $x$ and $y$ directions. The coupling constant $\Delta$ in the $z$ direction is called the anisotropy parameter. The $z$ component of the total magnetization $M^z=\sum_{n=-\infty}^\infty s^z_n$ is a conserved quantity in this model, and our aim is to study the corresponding current. In the following we shall focus on the time evolution of this spin chain starting from an inhomogeneous initial state, whereby the left and right halves of the infinite chain are set to different quantum states and are joined at time zero. Such initial states provide a particularly convenient framework to study currents and their fluctuations in quantum spin chains. We mainly consider the special case $\Delta=0$, where the model reduces to free fermions. In this system, known as the $XX$ model, the time evolution can be written in a compact form, which enables us to perform exact calculations.
In particular, it is possible to evaluate the scaling limit of the magnetization profile and other physical quantities \cite{spin99,ogata02}. Corrections to this scaling behavior were considered in \cite{{karevski02},{hunyadi04}}. Interesting non-equilibrium behavior was found in disordered spin chains \cite{abreit02,platini05} and in chains at finite temperatures \cite{ogata02,ogata02b,platini07}. An alternative method has been proposed to generate stationary currents in spin chains using a Lagrange multiplier \cite{spin97,spin98,cardy00,kosov04,racz00}. Using this method, fluctuations of basic quantities have been studied in non-equilibrium steady states in \cite{eisler03}. The relaxation from a large class of initial conditions was considered in numerous studies starting from the late sixties; see \cite{niemeijer67,tjon70,berouch69,igloi00} and \cite{berim02} for a review of more recent work. Our focus, however, is on non-equilibrium states with a non-vanishing current. \section{Hydrodynamic description} \label{hidro} We begin by describing a simple method that allows one to obtain the long-time asymptotic behavior of a free fermion system by employing a continuous hydrodynamic description. This helps to avoid a lengthy exact calculation, and yet the final results are asymptotically exact. Specifically, a justification of this approach for the $XX$ model is given by exact results \cite{spin99,ogata02}, which yield the same asymptotic behavior as the hydrodynamic description discussed below. The Hamiltonian of the free fermion system can be written in the form \begin{equation} \mathcal{H}=\sum_k \epsilon(k) \eta_k^\dag \eta_k \end{equation} where $\eta^\dag_k$ and $\eta_k$ are creation and annihilation operators of fermions with momentum $k$ and $\epsilon(k)$ is the energy of an excitation with wave number $k$. In the simplest situation, the system is initially divided into two half-infinite chains, each of them being in a homogeneous pure state.
In this case, the elementary excitations can be considered initially homogeneously distributed in each half chain. At time zero, each mode starts moving with velocity $v(k)=\epsilon'(k)$. As the excitations are entirely independent, they do not interact and keep moving with their initial velocities. This argument suggests that whether an excitation is present at a space-time point $(n,t)$ depends only on the ratio $x=n/t$. Moreover, keeping $x$ fixed, a finite neighborhood of site $n$ becomes asymptotically homogeneous for $t\to\infty$. This physical picture is not exact due to the finite lattice spacing. However, we believe that the above description becomes asymptotically exact for any free fermion system in the scaling limit: $n\to\infty, t\to\infty, ~n/t=\mathrm{const}$. (See \cite{ogata02b} for a rigorous derivation of this scaling limit for the $XX$ model.) Whether an excitation is present at position $n$ at time $t$ can be decided by noting that, for $n>0$, the modes which are present were initially on the left side of the chain with $v(k)>n/t$ and on the right side of the chain with $v(k)<n/t$. A similar argument applies for $n<0$. This method can be extended to the general case when the two half-infinite chains are initially in mixed states; e.g., one can consider the situation when the two half-infinite chains are set to different temperatures \cite{ogata02b}. As an illustration, let us calculate the magnetization profile in the $XX$ model with the simplest initial condition \eqref{initial}. The spectrum of the model is $\epsilon(k)=-\cos(k)$, that is, $v(k)=\sin(k)$. Initially, all the modes are filled on the left, while the right side of the chain is in the vacuum state. At time $t$, around site $n>0$, the modes with $\sin(k)>n/t$ are filled, that is, all the modes with $k_0<k<\pi-k_0$, where $k_0=\arcsin(n/t)$. Similarly, for $n<0$, only the modes with $\sin(k)>n/t$ are filled, that is, the modes with $-\pi<k<-\pi-k_0$, and $k_0<k<\pi$.
For an illustration see Fig.~\ref{fig:XX_hydro}. As each mode carries a unit magnetization, the average $z$ magnetization can be obtained by simply integrating over the filled modes, $m(x=n/t) = -1/2 + \int_{\text{filled}} dk/(2\pi)$. This results in the well-known profile $m(x=n/t) = -\frac{1}{\pi} \arcsin(x)$ for $-1<x<1$, while the magnetization keeps its initial values outside this region. This limiting profile was obtained in \cite{spin99} by exact calculation. \begin{figure} \includegraphics{fig1} \caption{(Color online) Hydrodynamic description for the $XX$ chain started from the $|\cdots \uparrow\uparrow\uparrow\downarrow\downarrow\downarrow\cdots\rangle$ initial condition. We consider the scaling limit where $t\to\infty$ and $x=n/t=\text{const}$. The shaded region shows the elementary excitations that are present at the scaling points indexed by $x$.} \label{fig:XX_hydro} \end{figure} Other applications are given in Appendix \ref{examples}. \section{Fluctuations of the Current} \label{fluct} The local magnetization current operator for a quantum spin chain can be obtained through a continuity equation for the local magnetization \cite{spin98}. For the $XX$ model this gives \begin{equation} j_n = s^y_n s^x_{n+1} - s^x_n s^y_{n+1} \end{equation} for the current between spins $n$ and $n+1$. (We measure time in units of $\hbar$.) The time-integrated current $C_0$, i.e., the net transported magnetization up to time $t$ through the bond between spin 0 and spin 1, is a quantity that is less obvious to define for a quantum system in general. However, in the case of the setup (\ref{initial}), the integrated current $C_0$ can be expressed in a simple way \begin{equation} C_0=\sum_{n\geq 1} (s^z_n+1/2). \end{equation} The average of the integrated current $\langle C_0 \rangle$ through the central bond grows asymptotically as $\pi^{-1}t$ \cite{spin99}; alternatively, this can be seen from the hydrodynamic picture described above.
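The hydrodynamic profile and the resulting mean integrated current can be checked with a few lines of code (a sketch of the mode-filling picture above; the function names are our own):

```python
import numpy as np

def magnetization_profile(x):
    """m(x) for the XX chain started from |...up up down down...>:
    m = -(1/pi) * arcsin(x) for |x| < 1, frozen at +-1/2 outside."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= -1.0, 0.5,
                    np.where(x >= 1.0, -0.5,
                             -np.arcsin(np.clip(x, -1.0, 1.0)) / np.pi))

def mean_integrated_current(t):
    """<C_0> = sum_{n>=1} (<s_n^z> + 1/2) ~ t * int_0^1 (m(x)+1/2) dx = t/pi,
    evaluated here by a midpoint rule."""
    xs = (np.arange(20000) + 0.5) / 20000.0
    return t * np.mean(magnetization_profile(xs) + 0.5)
```

The integral $\int_0^1 (m(x)+1/2)\,dx = 1/\pi$ reproduces the asymptotic growth $\langle C_0\rangle \simeq \pi^{-1}t$ quoted above.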
The hydrodynamic approach of Sec.~\ref{hidro} does not allow one to probe fluctuations. Therefore we must return to the microscopic description. Below we shall focus on the variance of the total current $D(t)\equiv \langle C_0^2\rangle -\langle C_0\rangle^2$. Let us define the left, right and total magnetization as follows: \begin{gather}\label{def} M_L = \sum_{n\le 0} s_n^z, \qquad M_R = \sum_{n\ge 1} s_n^z, \cr M = M_L+M_R. \end{gather} The variance of the integrated current is equal to the variance of the left (right) magnetization: \begin{equation}\label{egy} D(t) = \langle M_R^2\rangle_t -\langle M_R \rangle_t^2 = \langle M_L^2\rangle_t -\langle M_L \rangle_t^2. \end{equation} Since the total magnetization is conserved and the initial state is an $M$ eigenstate, the fluctuation of $M$ remains zero for any time $t$ \begin{equation}\label{harom} \langle(M_L+M_R)^2\rangle_t -\langle M_L+M_R\rangle_t^2=0. \end{equation} By exploiting this property we can rewrite (\ref{egy}) as \begin{multline} \label{negy} D(t)= \langle M_L \rangle_t \langle M_R \rangle_t - \langle M_L M_R \rangle_t \\ = \sum_{l\leq 0, m\ge1} \left( \langle s_l^z \rangle_t \langle s_m^z \rangle_t - \langle s_l^z s_m^z \rangle_t \right). \end{multline} Before presenting an exact calculation, we provide a back-of-the-envelope derivation of our main result. The idea is to evaluate $D(t)$ by substituting the correlations in \eqref{negy} with their stationary values in the local state which builds up at the origin for $t\to\infty$ (see \cite{spin99} and our Fig.~\ref{fig:XX_hydro}). The reason is that the main contribution comes from those spins for which $(l-m)$ is not too large, and for $t\gg 1$ these ``points'' are located near the origin. This leads to \begin{equation}\label{ot} D(t) = - \sum_{n>0} n \rho^z(n), \end{equation} where \begin{equation}\label{hat} \rho^z(n) = \langle s_k^z s_{k+n}^z \rangle - \langle s_k^z \rangle \langle s_{k+n}^z \rangle.
\end{equation} In the (homogeneous) ``maximal current'' stationary state of the $XX$ model, the correlator $\rho^z(n)$ takes the same form as in the ground state \cite{spin98}, where it is given by the well-known expression: \begin{equation}\label{het} \rho^z(n) = \begin{cases} -\frac{1}{\pi^2 n^2} & n \text{ odd} \\ 0 & n \text{ even} \end{cases}. \end{equation} This gives a logarithmic divergence for $D(t)$, which one can regularize by truncating the sum in (\ref{ot}). For a finite but large $t$, the volume of the region around the origin which can be described by this maximal current state grows linearly with $t$. Hence we choose the upper limit of the sum in (\ref{ot}) to be proportional to $t$ and obtain \begin{equation}\label{nyolc} D(t) \sim - \sum_{n=1}^{\sim t} n \rho^z(n) = \frac{1}{2\pi^2}\ln(t). \end{equation} In this expression, the factor $1/2$ appears since correlations between evenly spaced sites vanish. This argument remains valid for a more general class of initial conditions, where on the left (right) half of the chain the fermions are filled up to the Fermi energy $\mu_L$ ($\mu_R$). In this case, the asymptotic state --- which builds up near the origin --- includes fermions with momenta varying from $-k_R$ to $k_L$, where $k_L$ and $k_R$ are the Fermi momenta corresponding to $\mu_L$ and $\mu_R$. (For an illustration see Fig.~\ref{fig:XX_hydro2}; more details are given in Appendix \ref{examples_XX}.) The correlation function $\rho^z(n)$ for these asymptotic states can easily be calculated \cite{spin98}. One finds that in general it behaves as $\rho^z(n) = -\frac{1}{\pi^2 n^2} \sin^2(n\varphi)$, where $\varphi=(k_L+k_R)/2$ \cite{spin98}. As the $\varphi$ dependence of the asymptotic form averages out, we conclude that the result \eqref{nyolc} is unchanged for this class of initial states. The exact evaluation of $D(t)$ is based on (\ref{negy}).
Following the strategy of \cite{spin99} we write $s_i^z$ in terms of the local fermionic creation and annihilation operators $c^\dagger, c$ as \begin{equation}\label{siz} s_n^z=c_n^\dagger c^{}_n -\frac{1}{2}. \end{equation} In the Heisenberg picture, the time dependence of these operators, under the dynamics of the $XX$ chain, has a simple form \begin{equation} \label{ct} c_n(t) = \sum_{j=-\infty}^\infty i^{j-n} J_{j-n}(t) c_j, \end{equation} where $J_{n}(t)$ are the Bessel functions. Inserting this into (\ref{negy}) one gets \begin{multline} D(t) = \sum_{l\leq 0, m\ge1} \sum_{\alpha, \beta, \gamma, \delta} i^{-\alpha+\beta-\gamma+\delta} \\ \times J_{\alpha-l}(t) J_{\beta-l}(t) J_{\gamma-m}(t) J_{\delta-m}(t) \\ \times \left( \langle c_\alpha^\dagger c_\beta \rangle \langle c_\gamma^\dagger c_\delta \rangle - \langle c_\alpha^\dagger c_\beta c_\gamma^\dagger c_\delta \rangle \right). \end{multline} The expectation value in the above formula is taken in the initial state. One finds \begin{multline}\label{expextation} \langle c_\alpha^\dagger c^{}_\beta \rangle \langle c_\gamma^\dagger c_\delta \rangle - \langle c_\alpha^\dagger c_\beta c_\gamma^\dagger c_\delta \rangle = \\ \begin{cases} - \delta_{\alpha, \delta} \delta_{\beta, \gamma} & \text{if } \alpha\uparrow, \beta\downarrow \\ 0 & \text{otherwise} \end{cases} \end{multline} for initial states which are product states of individual spins pointing either up or down (like \eqref{initial}). Here, $\alpha\uparrow$, $\beta\downarrow$ are shorthand notations for $\alpha,\beta$ with $s_\alpha=\uparrow, s_\beta=\downarrow$. 
Using the identities $J_{-k}(t)=(-1)^k J_k(t)$ and \cite{gradshteyn} \begin{equation*} \sum_{k\ge1} J_{k+p}(t) J_{k+q}(t) = t\, \frac{J_p(t)J_{q+1}(t) - J_{p+1}(t)J_q(t)}{2(p-q)} \end{equation*} for the sums over $l$ and $m$, one obtains \begin{equation} \label{DJJ} D(t)=\frac{t^2}{4} \sum_{\alpha\uparrow, \beta\downarrow} \left[ \frac{J_{\alpha-1}(t) J_{\beta}(t) - J_\alpha(t) J_{\beta-1}(t)}{\alpha-\beta}\right]^2. \end{equation} For our initial condition \eqref{initial}, this expression becomes \begin{equation} \label{Dt-exact} D(t)=\frac{t^2}{4}\sum_{l,m\geq 1}\left[\frac{ J_{l-1}(t) J_{m-1}(t) + J_l(t) J_m(t)}{l+m-1}\right]^2. \end{equation} This is an {\em exact} expression for the variance of the current, valid at any time $t\geq 0$. From this formula we have deduced the long-time asymptotic behavior \begin{equation} \label{Dt-asymp} D(t)=\frac{1}{2\pi^2}(\ln t + C) \end{equation} with the constant $C = 2.963510026\ldots$. The leading term coincides with our heuristic argument \eqref{nyolc}. The derivation of \eqref{Dt-asymp} is relegated to appendix \ref{long}. We also evaluated expression \eqref{Dt-exact} numerically and plotted it in Fig.~\ref{dtime} for $t<50$. We observe a logarithmic increase in time. The numerical estimate for the constant, $C\approx 2.9633$, is in good agreement with the exact result. In addition to this logarithmic growth, one observes oscillations with decreasing amplitude. We found that the formula \begin{equation} \label{improved_fit} D(t)=\frac{1}{2\pi^2}\left[ \ln t + C - \frac{\cos 2t }{t}\left( \ln t + C' \right) \right], \end{equation} with $C'\approx 1.95$ (and $C$ given exactly in \eqref{C}), gives a very good fit to the numerical data even for relatively short times. Since the mean current is $1/\pi$, on average one fermion crosses the origin in a time $\pi$.
Hence the $\cos 2t$ oscillations can be interpreted as a consequence of the quantum nature of the magnetization: each passing fermion causes a bump in the fluctuations. Similar arguments were used to explain oscillations in the magnetization profile of the same system \cite{hunyadi04}. \begin{figure}[htb] \includegraphics{fig2} \caption{(Color online) The variance $D(t)$ vs.\ time $t$. The red curve is the exact numerical evaluation of \eqref{Dt-exact}; the blue curve shows the result \eqref{Dt-asymp} of our asymptotic analysis. On top of the logarithmic growth we find a subleading oscillating term. Eq.~\eqref{improved_fit} gives a fit which is almost indistinguishable from the numerical data (red curve).} \label{dtime} \end{figure} \section{$XXZ$ spin chain} \label{XXZ} In a general $XXZ$ chain \eqref{hami}, the coupling is nonzero in the $z$ direction. There is no explicit solution for the time evolution in this general case. Due to the symmetry in the $XY$ plane, however, the $z$ component $M_z$ of the total magnetization is still conserved, hence the magnetization current can be studied. It has been investigated numerically by Gobert {\it et al.} \cite{gobert05}. Our main interest lies in the so-called critical region ($-1<\Delta<1$); we shall also discuss the isotropic ferromagnetic ($\Delta=1$) and anti-ferromagnetic ($\Delta=-1$) spin chains. The $XX$ model belongs to the critical region, and the behavior in the entire critical region is believed to be similar to that of the $XX$ model. In particular, the magnetization profile plausibly scales linearly with time: $m(n,t) \to \mathcal{M}(n/t)$ \cite{gobert05}. On the other hand, due to the existence of kink-like ground states \cite{alcaraz95,matsui96}, the magnetization profile is expected to become frozen when $|\Delta|>1$ \cite{gobert05}.
Algebraic scaling $m(n,t)\to \mathcal{M}(n/t^a)$ seems to emerge for the $XXX$ model ($|\Delta|=1$) in the scaling regime $n\to\infty$, $t\to\infty$ with $n/t^a$ kept finite. Numerically the exponent is $a=0.6\pm 0.1$ \cite{gobert05}, so the non-trivial part of the profile is sub-ballistic. For the $XX$ spin chain, we have obtained the correct current fluctuations in the leading order from the simple formula \eqref{ot} when the upper limit of the sum was chosen to grow linearly with time. The reason for this choice of upper limit is that the front and the whole profile ``move'' linearly with time, hence the cutoff must behave similarly. We shall use \eqref{ot} also for the $XXZ$ chain, and we shall assume that the upper bound moves linearly, namely as $vt$, in the critical region ($-1<\Delta<1$). The actual value of the `velocity' $v$ is unknown, but it does not affect the leading order term anyway. The next issue is whether one can use the equilibrium spin correlations $\rho^z(n)$ in the {\it presence} of current. For the $XX$ chain we know \cite{spin98,spin99} that current does not affect the $z$ component of the correlations significantly (it may introduce a modulation), only the $x$ and $y$ components. Here we boldly assume the same for the $XXZ$ model, at least for its large-distance behavior. Thus we use the equilibrium correlations in \eqref{ot}. The asymptotic formulae for $\rho^z(n)$ are \cite{lukyanov99,giamarchi04} \begin{equation} \label{rn} \rho^z(n)\!=\! \begin{cases} -\delta n^{-2} &0<\Delta<1\\ -[1+(-1)^n](2\pi^2)^{-1}\,n^{-2} & \Delta=0\\ (-1)^n A\, n^{-4\pi^2 \delta}-\delta n^{-2} &-1<\Delta<0\\ (-1)^n B\,n^{-1}\sqrt{\ln n} -\delta n^{-2} &\Delta=-1 \end{cases} \end{equation} where we used the shorthand notation \begin{equation} \label{delta:def} \delta=\frac{1}{4\pi \arccos(\Delta)}. \end{equation} We now insert \eqref{rn} into equation \eqref{ot}.
The amplitude $A=A(\Delta)$ has been conjectured relatively recently (see \cite{lukyanov99}), yet we do not need this result. Indeed, the leading oscillating term in $\rho^z(n)$ has the exponent $a=4\pi^2 \delta$ varying in the range $1<a<2$ when the anisotropy parameter varies in the $-1<\Delta<0$ range. Because this leading term oscillates, we form pairs and find that the oscillating terms in (\ref{ot}) yield the contribution \begin{equation} \label{An} A\, n^{-a+1}-A\, (n+1)^{-a+1}\simeq aA\,n^{-a} \end{equation} that decays faster than $n^{-1}$. Hence the oscillating term provides merely a constant contribution to the variance $D(t)$, while the sub-leading $n^{-2}$ term results in the leading logarithmically diverging contribution. Therefore \begin{equation} \label{Dt:XXZ} D(t) = \delta \ln t \end{equation} This prediction implies that the logarithmic behavior of the variance is universal. The amplitude diverges at $\Delta=1$; analytically $\delta\to (4\pi)^{-1}[2(1-\Delta)]^{-1/2}$ as $\Delta\uparrow 1$. This divergence is not very surprising since the isotropic Heisenberg ferromagnet apparently exhibits a truly different behavior. In the other extreme, $\Delta\downarrow -1$, the amplitude $\delta$ approaches the finite value $(4\pi^2)^{-1}$. However, as indicated by the last formula in \eqref{rn}, the oscillating asymptotic form is $Bn^{-1}\sqrt{\ln n}$ for the isotropic anti-ferromagnet \cite{affleck98}. A calculation similar to \eqref{An} gives $n^{-2}\sqrt{\ln n}$ after canceling the oscillations. This leads to the following quite surprising behavior: \begin{equation} \label{D:AFM} D\sim \int \frac{dn}{n}\,\sqrt{\ln n} \sim (\ln t)^{3/2} \end{equation} Note that for the isotropic anti-ferromagnet the magnetization is non-trivial in an interval that grows slower than linearly with time. (The natural guess is the diffusive $\sqrt{t}$ growth.) However, the upper limit in the integral in \eqref{D:AFM} would affect only the pre-factor.
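The limiting values of the amplitude \eqref{delta:def} quoted above are easy to check numerically; a small sketch (Python, our own illustration) verifies the $XX$ point and the two boundary limits of the critical region:

```python
import math

def delta(Delta):
    # Eq. (delta:def): delta = 1 / (4 pi arccos(Delta)), for -1 < Delta < 1.
    return 1.0 / (4.0 * math.pi * math.acos(Delta))

# XX point: delta(0) = 1/(2 pi^2), reproducing D(t) ~ ln(t) / (2 pi^2).
assert abs(delta(0.0) - 1.0 / (2.0 * math.pi ** 2)) < 1e-12

# Delta -> -1: delta approaches the finite value 1/(4 pi^2).
assert abs(delta(-0.999999) - 1.0 / (4.0 * math.pi ** 2)) < 1e-4

# Delta -> 1: divergence as (4 pi)^{-1} [2 (1 - Delta)]^{-1/2}.
Delta_near_1 = 1.0 - 1e-8
ratio = delta(Delta_near_1) * 4.0 * math.pi * math.sqrt(2.0 * (1.0 - Delta_near_1))
assert abs(ratio - 1.0) < 1e-3
```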
Thus our tentative predictions for the variance of the time integrated current are: (i) The enhanced logarithmic growth $D\sim (\ln t)^{3/2}$ for $\Delta=-1$; (ii) The universal logarithmic behavior $D(t) = \delta \ln t$ with known pre-factor \eqref{delta:def} in the critical region $-1<\Delta<1$. \section{Discussion} \label{discussion} We have studied the fluctuations of the time integrated magnetization current in the quantum $XX$ chain that evolves starting from an inhomogeneous non-stationary initial state \eqref{initial}. We have derived an exact formula \eqref{Dt-exact} for the variance $D(t)$ of the current. We have shown that the variance increases logarithmically in the long time limit \eqref{Dt-asymp}, which is consistent \cite{disagree} with numerical evaluation of the exact formula (see Fig.~\ref{dtime}). In addition to this logarithmic growth, we have observed oscillations with decreasing amplitude. We have argued that this logarithmic leading order behavior remains unchanged for a more general class of initial conditions (where the magnetization on the two half-chains is not saturated). These small logarithmic fluctuations reflect the ideal conductor nature of the integrable $XX$ quantum chain. The current simply ``slides'' through the system ballistically with no disturbance, hence the tiny fluctuations. Conversely, if an impurity is present at the origin, the variance grows linearly with time \cite{schon07}. Similarly, in stochastic particle systems \cite{Ferrari1994, Derrida1998, Lebowitz1999, Johansson2000, Prahofer2000, bodineau04} the noise -- which is intrinsically present in these models -- generates algebraic fluctuations. We have argued that current fluctuations in the inhomogeneous $XXZ$ model are also logarithmic (in the critical region). Our arguments are heuristic and a more rigorous derivation is a key challenge for future work. Intriguingly, fluctuations seem more tractable than e.g. 
the average magnetization profile in the $XXZ$ chain, which is completely unknown. For free fermion systems, current fluctuations were found to be asymptotically Gaussian \cite{schon07}, and therefore the variance provides a complete characterization of the full current statistics. Moreover, according to \cite{PreKlich2008}, this implies that the entanglement (between the left and right halves) is simply proportional to the variance of the current (the variance being $3/\pi^2$ times the entanglement entropy). In order to check this relation we compared our results for $D(t)$ to numerical results for the time dependent entanglement entropy (for the same model and initial condition) presented in Fig.~19 of \cite{gobert05}, and we found good agreement in the leading order. Little is known about higher moments of current fluctuations for interacting fermions. There is no reason to believe that they are Gaussian, and the full current statistics is very difficult to probe (both theoretically and experimentally) for interesting interacting fermion systems \cite{BB}. (Even for {\em classical} interacting particles, the derivation of the full counting statistics is usually a formidable challenge, see e.g. \cite{Derrida1998, Johansson2000, bodineau04, Sasamoto2005a, Ferrari2006, Rakos2006a, PK}.) For the $XXZ$ model, however, it may be possible to compute higher order cumulants by employing the heuristic approach which we have applied to computing the variance. \section{Acknowledgments} We thank Alexander Abanov, Deepak Dhar, Viktor Eisler, R\'obert Juh\'asz, Eduardo Novais, Pierre Pujol, Zolt\'an R\'acz and Gunter M.\ Sch\"utz for illuminating discussions. We also gratefully acknowledge financial support from NIH grant R01GM078986 (T.~A.), the Hungarian Scientific Research Fund OTKA PD-72607 (R.~A.), and Jeffrey Epstein for support of the Program for Evolutionary Dynamics at Harvard University.
\section{Introduction} \label{sec:intro} The study of subgroup lattices has a long history that began with Richard Dedekind~\cite{Dedekind:1877} and Ada Rottlaender~\cite{Rottlaender:1928}, and continued with important contributions by Reinhold Baer, {\O}ystein Ore, Michio Suzuki, Roland Schmidt, and many others (see Schmidt~\cite{Schmidt:1994}). Much of this work focuses on the problem of deducing properties of a group $G$ from assumptions about the structure of its lattice of subgroups, $\ensuremath{\operatorname{Sub}}(G)$, or, conversely, deducing lattice theoretical properties of $\ensuremath{\operatorname{Sub}}(G)$ from assumptions about $G$. Historically, less attention was paid to the local structure of the subgroup lattice of a finite group, perhaps because it seemed that very little about $G$ could be inferred from knowledge of, say, an \emph{upper interval}, $\ensuremath{\llbracket} H,G \ensuremath{\rrbracket} = \{K \mid H\leq K \leq G\}$, in the subgroup lattice of $G$. Recently, however, this topic has attracted more attention (see, e.g., \cite{Aschbacher:2009,Lucchini:1997,Basile:2001,Borner:1999,Kohler:1983,Lucchini:1994a,Palfy:1988,Palfy:1995}), mostly owing to its connection with one of the most important open problems in universal algebra, the \ac{FLRP}. This is the problem of characterizing the lattices that are (isomorphic to) congruence lattices of finite algebras (see, e.g., \cite{Berman:1970,DeMeo:thesis,Palfy:1995,Palfy:2001}). There is a remarkable theorem relating this problem to intervals in subgroup lattices of finite groups. \begin{theorem}[P\'alfy\ and Pudl\'ak~\cite{Palfy:1980}] \label{thm:P5} The following statements are equivalent: \begin{enumerate}[(A)] \item Every finite lattice is isomorphic to the congruence lattice of a finite algebra. \item Every finite lattice is isomorphic to an interval in the subgroup lattice of a finite group. 
\end{enumerate} \end{theorem} If these statements are true (resp., false), then we say the \acs{FLRP} has a positive (resp., negative) answer. Thus, if we can find a finite lattice $L$ for which it can be proved that there is no finite group $G$ with $L \cong \ensuremath{\llbracket} H,G \ensuremath{\rrbracket}$ for some $H< G$, then the \acs{FLRP} has a negative answer. In this paper we propose a new classification of group properties according to whether or not they can be deduced from the assumption that $\ensuremath{\operatorname{Sub}}(G)$ has an upper interval isomorphic to some finite lattice. We believe that discovering which group properties can (or cannot) be connected to the local structure of a subgroup lattice is itself a worthwhile endeavor, but we will also describe how this classification could provide a solution of the \acs{FLRP}. Suppose $\ensuremath{\mathcal{P}}$ is a \emph{group theoretical property}\footnote{This and other italicized terms in the introduction will be defined more formally in Section~\ref{sec:notation-definitions}.} and suppose there exists a finite lattice $L$ such that if $G$ is a finite group with $L \cong \ensuremath{\llbracket} H,G \ensuremath{\rrbracket}$ for some $H\leq G$, then $G$ has property $\ensuremath{\mathcal{P}}$. We call such a property $\ensuremath{\mathcal{P}}$ \ac{IE}. If the lattice involved is germane to the discussion, we say that $\ensuremath{\mathcal{P}}$ is \emph{interval enforceable by} $L$. An \defn{interval enforceable class of groups} is a class of groups all of which have a common interval enforceable property. Although it depends on the lattice $L$, generally speaking it is difficult to deduce very much about a group $G$ from the assumption that an upper interval in $\ensuremath{\operatorname{Sub}}(G)$ is isomorphic to $L$.
It becomes easier if, in addition to the hypothesis $L\cong\ensuremath{\llbracket} H,G \ensuremath{\rrbracket}$, we assume that the subgroup $H$ is \emph{core-free} in $G$; that is, $H$ contains no nontrivial normal subgroup of $G$. Properties of $G$ that can be deduced from these assumptions are what we call \ac{cfIE}. Extending this idea, we consider finite collections $\ensuremath{\mathscr{L}}$ of finite lattices and ask what can be proved about a group $G$ if one assumes that each $L_i\in \ensuremath{\mathscr{L}}$ is isomorphic to an upper interval $\ensuremath{\llbracket} H_i, G \ensuremath{\rrbracket}\leq \ensuremath{\operatorname{Sub}}(G)$, with each $H_i$ core-free in $G$. Clearly, if $\ensuremath{\operatorname{Sub}}(G)$ has such upper intervals, and if corresponding to each $L_i\in \ensuremath{\mathscr{L}}$ there is a property $\ensuremath{\mathcal{P}}_i$ that is \ac{cfIE} by $L_i$, then $G$ must have all of the properties $\ensuremath{\mathcal{P}}_i$. A related question is the following: Given a set $\ensuremath{\mathscr{P}}$ of \ac{cfIE} properties, is the conjunction $\ensuremath{\bigwedge} \ensuremath{\mathscr{P}}$ \ac{cfIE}? Corollary~\ref{cor:isle-prop-groups-1} answers this question affirmatively. In this paper, we will identify some group properties that are \ac{cfIE}, and others that are not. We will see that the \ac{cfIE} properties found thus far are negations of common group properties (for example, ``not solvable,'' ``not almost simple,'' ``not alternating,'' ``not symmetric''). Moreover, we prove that in these special cases the corresponding group properties (``solvable,'' ``almost simple,'' ``alternating,'' ``symmetric'') are not \ac{cfIE}. This and other considerations suggest that a group property and its negation cannot both be \ac{cfIE}. As yet, we are unable to prove this.
A related question is whether, for every group property $\ensuremath{\mathcal{P}}$, either $\ensuremath{\mathcal{P}}$ is \ac{cfIE} or $\neg \ensuremath{\mathcal{P}}$ is \ac{cfIE}. Our main result (Theorem~\ref{thm-wjd-1}) connects the foregoing ideas with the \acs{FLRP}, as follows:\\[6pt] {\it Statement (B) of Theorem~\ref{thm:P5} is equivalent to the following statement: \begin{enumerate} \item[(C)] Fix $n\geq 2$ and let $\ensuremath{\mathscr{L}} = \{L_1, \dots, L_n\}$ be any collection of finite lattices at least two of which have more than two elements. For each $i = 1, 2, \dots, n$, let $\ensuremath{\mathfrak{X}}_i$ denote the class that is core-free interval enforceable by $L_i$. Then there exists a finite group $G \in \bigcap\limits_{i=1}^n \ensuremath{\mathfrak{X}}_i$ such that for each $L_i \in \ensuremath{\mathscr{L}}$ we have $L_i\cong \ensuremath{\llbracket} H_i, G \ensuremath{\rrbracket}$ for some subgroup $H_i$ that is core-free in $G$. \end{enumerate} \begin{remark} By (C), the \acs{FLRP} would have a negative answer if we could find a collection $\ensuremath{\mathfrak{X}}_1, \dots, \ensuremath{\mathfrak{X}}_n$ of \acs{cfIE} classes such that $\bigcap\limits_{i=1}^n \ensuremath{\mathfrak{X}}_i$ is empty. \end{remark} } Core-free interval enforceable properties are related to permutation representations of groups. If $H$ is a core-free subgroup of $G$, then $G$ has a faithful permutation representation $\phi:G\hookrightarrow \ensuremath{\operatorname{Sym}}(G/H)$. Let $\<G/H, \phi(G)\>$ denote the algebra comprised of the right cosets $G/H$ acted upon by right multiplication by elements of $G$; that is, $\phi(g): Hx \mapsto Hxg$.
It is well known that the congruence lattice of this algebra (i.e., the lattice of systems of imprimitivity) is isomorphic to the interval $\ensuremath{\llbracket} H, G \ensuremath{\rrbracket}$ in the subgroup lattice of $G$.\footnote{See \cite[Lemma 4.20]{alvi:1987} or~\cite[Theorem 1.5A]{Dixon:1996}.} This puts statement (C) into perspective. If the \acs{FLRP} has a positive answer, then no matter what we take as our finite collection $\ensuremath{\mathscr{L}}$---for example, we might take $\ensuremath{\mathscr{L}}$ to be \emph{all} finite lattices with at most $N$ elements for some large $N< \omega$---we can always find a \emph{single} finite group $G$ such that every lattice in $\ensuremath{\mathscr{L}}$ is isomorphic to the interval in $\ensuremath{\operatorname{Sub}}(G)$ above a core-free subgroup. As a result, this group $G$ must admit faithful representations $G\hookrightarrow \ensuremath{\operatorname{Sym}}(G/H_i)$ with systems of imprimitivity isomorphic to $L_i$, one such representation for each distinct $L_i\in \ensuremath{\mathscr{L}}$. Moreover, the group $G$ having this property can be chosen from the class $\bigcap\limits_{i=1}^n \ensuremath{\mathfrak{X}}_i$, where $\ensuremath{\mathfrak{X}}_1, \dots, \ensuremath{\mathfrak{X}}_n$ is an arbitrary collection of \acs{cfIE} classes of groups. \section{Notation and definitions} \label{sec:notation-definitions} In this paper, \emph{all groups and lattices are finite}. We use $\ensuremath{\mathfrak{G}}$ to denote the class of all finite groups. Given a group $G$, we denote the set of subgroups of $G$ by $\ensuremath{\operatorname{Sub}}(G)$.
The algebra $\<\ensuremath{\operatorname{Sub}}(G), \ensuremath{\wedge}, \ensuremath{\vee}\>$ is a lattice where the $\ensuremath{\wedge}$ (``meet'') and $\ensuremath{\vee}$ (``join'') operations are defined for all $H$ and $K$ in $\ensuremath{\operatorname{Sub}}(G)$ by $H\ensuremath{\wedge} K = H\cap K$ and $H\ensuremath{\vee} K = \<H, K\> = $ the smallest subgroup of $G$ containing both $H$ and $K$. We will refer to the set $\ensuremath{\operatorname{Sub}}(G)$ as a lattice, without explicitly mentioning the $\ensuremath{\wedge}$ and $\ensuremath{\vee}$ operations. By $H \leq G$ (resp., $H < G$) we mean $H$ is a subgroup (resp., proper subgroup) of $G$. For $H\leq G$, the \emph{core of $H$ in $G$}, denoted by $\ensuremath{\operatorname{core}}_G(H)$, is the largest normal subgroup of $G$ contained in $H$. If $\ensuremath{\operatorname{core}}_G(H)=1$, then we say that $H$ is \emph{core-free in $G$}. For $H\leq G$, by the \defn{interval} $\ensuremath{\llbracket} H, G \ensuremath{\rrbracket}$ we mean the set $\{K \mid H\leq K \leq G\}$, which is a sublattice of $\ensuremath{\operatorname{Sub}}(G)$. With this notation, $\ensuremath{\operatorname{Sub}}(G)=\ensuremath{\llbracket} 1,G \ensuremath{\rrbracket}$. When viewing $\ensuremath{\llbracket} H,G \ensuremath{\rrbracket}$ as a sublattice of $\ensuremath{\operatorname{Sub}}(G)$, we sometimes refer to it as an \defn{upper interval}. Given a lattice $L$ and a group $G$, the expression $L \cong \ensuremath{\llbracket} H, G \ensuremath{\rrbracket}$ will mean that there exists a subgroup $H \leq G$ such that $L$ is isomorphic to the interval $\{K \mid H\leq K \leq G\}$ in the subgroup lattice of $G$. By a \defn{group theoretical class}, or \defn{class of groups}, we mean a collection $\ensuremath{\mathfrak{X}}$ of groups that is closed under isomorphism: if $G_0\in \ensuremath{\mathfrak{X}}$ and $G_1\cong G_0$, then $G_1\in \ensuremath{\mathfrak{X}}$. 
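The lattice operations and intervals defined above can be made completely concrete for a small group. The following brute-force sketch (pure Python; our own toy illustration, not part of the paper) computes $\ensuremath{\operatorname{Sub}}(S_3)$, meets, joins, and the interval above a copy of $\mathbb{Z}_2$:

```python
# Sub(G) for G = S_3, with permutations of {0,1,2} stored as tuples,
# meet = intersection, join = generated subgroup, interval [[H, G]].
from itertools import combinations, permutations

def compose(p, q):                       # (p q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

identity = (0, 1, 2)
G = set(permutations(range(3)))          # S_3, order 6

def generated(gens):
    # Close a generating set under composition (finite group, so this
    # closure automatically contains inverses).
    H = set(gens) | {identity}
    while True:
        new = {compose(a, b) for a in H for b in H} - H
        if not new:
            return frozenset(H)
        H |= new

# Enumerate Sub(G) by closing every subset (feasible since |G| = 6).
subs = {generated(S) for r in range(len(G) + 1) for S in combinations(G, r)}
assert len(subs) == 6                    # 1, three Z_2's, A_3, and S_3

def meet(H, K):
    return frozenset(H & K)              # intersection of subgroups

def join(H, K):
    return generated(H | K)              # subgroup generated by H and K

H = generated([(1, 0, 2)])               # H = <(0 1)>, a copy of Z_2
K = generated([(0, 2, 1)])               # K = <(1 2)>
assert meet(H, K) == frozenset({identity})
assert join(H, K) == frozenset(G)        # two transpositions generate S_3

# The interval [[H, G]] = {K : H <= K <= G} contains only H and G itself.
interval = [K for K in subs if H <= K]
assert sorted(len(K) for K in interval) == [2, 6]
```

Note that $H$ here is also core-free in $S_3$, so $\ensuremath{\llbracket} H, G\ensuremath{\rrbracket}$ is the two-element chain sitting above a core-free subgroup.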
A \defn{group theoretical property}, or simply \defn{property of groups}, is a property $\ensuremath{\mathcal{P}}$ such that if a group $G_0$ has property $\ensuremath{\mathcal{P}}$ and $G_1\cong G_0$, then $G_1$ has property $\ensuremath{\mathcal{P}}$.\footnote{It seems there is no single standard definition of \emph{group theoretical class}. While some authors (e.g.,~\cite{Doerk:1992}, \cite{BBE:2006}) use the same definition we use here, others (e.g.~\cite{Robinson:1996}, \cite{Rose:1978}) require that every group theoretical class contains the one element group. In the sequel we consider negations of group properties, and we would like these to qualify as group properties. Therefore, we don't require that every group theoretical class contains the one element group.} Thus if $\ensuremath{\mathfrak{X}}_{\ensuremath{\mathcal{P}}}$ denotes the collection of all groups having the group property $\ensuremath{\mathcal{P}}$, then $\ensuremath{\mathfrak{X}}_{\ensuremath{\mathcal{P}}}$ is a class of groups, and belonging to a particular class of groups is a group theoretical property. If $\ensuremath{\mathscr{K}}$ is a class of algebras (e.g., a class of groups), then we say that $\ensuremath{\mathscr{K}}$ is \emph{closed under homomorphic images} and we write $\ensuremath{\mathbf{H}}(\ensuremath{\mathscr{K}}) = \ensuremath{\mathscr{K}}$ provided $\phi(G)\in \ensuremath{\mathscr{K}}$ whenever $G\in \ensuremath{\mathscr{K}}$ and $\phi$ is a homomorphism of $G$. Let $\ensuremath{\mathfrak{L}}$ denote the class of all finite lattices, and $\ensuremath{\mathfrak{G}}$ the class of all finite groups. Let $\ensuremath{\mathcal{P}}$ be a group theoretical property and $\ensuremath{\mathfrak{X}}_\ensuremath{\mathcal{P}}$ the associated class of all groups with property $\ensuremath{\mathcal{P}}$. 
We call $\ensuremath{\mathcal{P}}$ (and $\ensuremath{\mathfrak{X}}_\ensuremath{\mathcal{P}}$) \begin{itemize} \item \acf{IE} provided \[ (\exists L\in \ensuremath{\mathfrak{L}}) \; (\forall G \in \ensuremath{\mathfrak{G}}) \; (\forall H\leq G) \; \bigl(L\cong \ensuremath{\llbracket} H,G \ensuremath{\rrbracket} \; \longrightarrow \; G \in \ensuremath{\mathfrak{X}}_\ensuremath{\mathcal{P}}\bigr) \] \item \acf{cfIE} provided \[ (\exists L\in \ensuremath{\mathfrak{L}}) \; (\forall G\in \ensuremath{\mathfrak{G}}) \; (\forall H\leq G) \; \bigl( L\cong \ensuremath{\llbracket} H,G \ensuremath{\rrbracket} \; \ensuremath{\bigwedge} \; \ensuremath{\operatorname{core}}_G(H)=1 \; \longrightarrow \; G \in \ensuremath{\mathfrak{X}}_\ensuremath{\mathcal{P}} \bigr) \] \item \acf{minIE} provided there exists $L\in \ensuremath{\mathfrak{L}}$ such that if $L\cong \ensuremath{\llbracket} H,G \ensuremath{\rrbracket}$ for some group $G\in \ensuremath{\mathfrak{G}}$ of minimal order (with respect to $L\cong \ensuremath{\llbracket} H,G \ensuremath{\rrbracket}$), then $G \in \ensuremath{\mathfrak{X}}_\ensuremath{\mathcal{P}}$. \end{itemize} In this paper we will have little to say about min-{\small IE}\ properties. Nonetheless, we include this class in our list of new definitions because properties of this type arise often (see, e.g., \cite{Lucchini:1994a}), and a primary aim of this paper is to formalize various notions of interval enforceability that we believe are useful in applications. \section{Results} Clearly, if $\ensuremath{\mathcal{P}}$ is an interval enforceable property, then it is also core-free interval enforceable. There is an easy sufficient condition under which the converse holds. Suppose $\ensuremath{\mathcal{P}}$ is a group property, let $\ensuremath{\mathfrak{X}}_{\ensuremath{\mathcal{P}}}$ denote the class of all groups with property $\ensuremath{\mathcal{P}}$, and let $\ensuremath{\mathfrak{X}}_{\ensuremath{\mathcal{P}}}^c$ denote the class of all groups that do not have property $\ensuremath{\mathcal{P}}$.
\begin{lemma} \label{lemma-wjd-2} Suppose $\ensuremath{\mathcal{P}}$ is a core-free interval enforceable property. If $\ensuremath{\mathbf{H}}(\ensuremath{\mathfrak{X}}_{\ensuremath{\mathcal{P}}}^c) = \ensuremath{\mathfrak{X}}_{\ensuremath{\mathcal{P}}}^c$, then $\ensuremath{\mathcal{P}}$ is an interval enforceable property. \end{lemma} \begin{proof} Since $\ensuremath{\mathcal{P}}$ is \acs{cfIE}, there is a lattice $L$ such that \begin{equation} \label{eq:100} L \cong \ensuremath{\llbracket} H,G \ensuremath{\rrbracket} \; \ensuremath{\bigwedge} \; \ensuremath{\operatorname{core}}_G(H)=1 \; \longrightarrow \; G\in \ensuremath{\mathfrak{X}}_\ensuremath{\mathcal{P}}. \end{equation} Under the assumption $\ensuremath{\mathbf{H}}(\ensuremath{\mathfrak{X}}_\ensuremath{\mathcal{P}}^c) = \ensuremath{\mathfrak{X}}_\ensuremath{\mathcal{P}}^c$ we prove \begin{equation} \label{eq:200} L \cong \ensuremath{\llbracket} H,G \ensuremath{\rrbracket} \; \longrightarrow \; G\in \ensuremath{\mathfrak{X}}_\ensuremath{\mathcal{P}}. \end{equation} If~(\ref{eq:200}) fails, then there is a group $G\in \ensuremath{\mathfrak{X}}_{\ensuremath{\mathcal{P}}}^c$ with $L\cong \ensuremath{\llbracket} H,G \ensuremath{\rrbracket}$. Let $N = \ensuremath{\operatorname{core}}_G(H)$. Then $L \cong \ensuremath{\llbracket} H/N,G/N \ensuremath{\rrbracket}$ and $H/N$ is core-free in $G/N$ so, by hypothesis~(\ref{eq:100}), $G/N \in \ensuremath{\mathfrak{X}}_\ensuremath{\mathcal{P}}$. But $G/N \in \ensuremath{\mathfrak{X}}_{\ensuremath{\mathcal{P}}}^c$, since $\ensuremath{\mathfrak{X}}_{\ensuremath{\mathcal{P}}}^c$ is closed under homomorphic images, a contradiction. \end{proof} In \cite{Palfy:1995}, P{\'e}ter\ P\'alfy\ gives an example of a lattice that cannot occur as an upper interval in the subgroup lattice of a finite solvable group. (We give other examples in Section~\ref{sec:parachute-lattices}.)
In his Ph.D.~thesis~\cite{Basile:2001}, Alberto Basile proves that if $G$ is an alternating or symmetric group, then there are certain lattices that cannot occur as upper intervals in $\ensuremath{\operatorname{Sub}}(G)$. Another class of lattices with this property is described by Aschbacher and Shareshian in~\cite{Aschbacher:2009}. Thus, two classes of groups that are known to be at least \acs{cfIE} are the following: \begin{itemize} \item $\ensuremath{\mathfrak{X}}_0 = \mathfrak{S}^c = $ nonsolvable finite groups; \item $\ensuremath{\mathfrak{X}}_1 =\bigl\{G\in \ensuremath{\mathfrak{G}} \mid (\forall n<\omega) \; \bigl(G \neq A_n \ensuremath{\bigwedge} G\neq S_n\bigr) \bigr\}$, \end{itemize} where $A_n$ and $S_n$ denote, respectively, the alternating and symmetric groups on $n$ letters. Note that both classes $\ensuremath{\mathfrak{X}}_0$ and $\ensuremath{\mathfrak{X}}_1$ satisfy the hypothesis of Lemma~\ref{lemma-wjd-2}. Explicitly, $\ensuremath{\mathfrak{X}}_0^c = \mathfrak{S}$, the class of solvable groups, is closed under homomorphic images, as is the class $\ensuremath{\mathfrak{X}}_1^c$ of alternating and symmetric groups. Therefore, by Lemma~\ref{lemma-wjd-2}, $\ensuremath{\mathfrak{X}}_0$ and $\ensuremath{\mathfrak{X}}_1$ are {\small IE}\ classes. By contrast, suppose there exists a finite lattice $L$ such that \[ L \cong \ensuremath{\llbracket} H, G \ensuremath{\rrbracket} \; \ensuremath{\bigwedge} \; \ensuremath{\operatorname{core}}_G(H)=1 \; \longrightarrow \; G \text{ is subdirectly irreducible.} \] Lemma~\ref{lemma-wjd-2} does not apply in this case since the class of subdirectly reducible groups is obviously not closed under homomorphic images.\footnote{Recall, for groups \defn{subdirectly irreducible} is equivalent to having a unique minimal normal subgroup.
Every algebra, in particular every group $G$, has a subdirect decomposition into subdirectly irreducibles, say, $G\hookrightarrow G/N_1 \times \cdots\times G/N_n$, so there are always subdirectly irreducible homomorphic images.} In Section \ref{sec:parachute-lattices} below we describe lattices with which we can prove that the following classes are at least \acs{cfIE}: \begin{itemize} \item $\ensuremath{\mathfrak{X}}_2 = $ the subdirectly irreducible groups; \item $\ensuremath{\mathfrak{X}}_3 = $ the groups having no nontrivial abelian normal subgroups; \item $\ensuremath{\mathfrak{X}}_4 = \{G\in \ensuremath{\mathfrak{G}} \mid C_G(M) = 1 \text{ for all } 1\neq M\ensuremath{\trianglelefteqslant} G\}$. \end{itemize} We noted above that $\ensuremath{\mathfrak{X}}_2$ fails to satisfy the hypothesis of Lemma~\ref{lemma-wjd-2}. The same can be said of $\ensuremath{\mathfrak{X}}_3$ and $\ensuremath{\mathfrak{X}}_4$. That is, $\ensuremath{\mathbf{H}}(\ensuremath{\mathfrak{X}}_i^c) \neq \ensuremath{\mathfrak{X}}_i^c$ for $i= 2, 3, 4$. To verify this, take $H\in \ensuremath{\mathfrak{X}}_i$, $K\in \ensuremath{\mathfrak{X}}_i^c$, and consider $H\times K$. In each case ($i=2, 3, 4$) we see that $H\times K$ belongs to $\ensuremath{\mathfrak{X}}_i^c$, but the homomorphic image $(H\times K)/(1\times K) \cong H$ does not. \subsection{Negations of interval enforceable properties} \label{sec:negat-interv-enforc} If a lattice $L$ is isomorphic to an interval in the subgroup lattice of a finite group, then we call $L$ \defn{group representable}. Recall, Theorem~\ref{thm:P5} says that the \acs{FLRP} has a negative answer if we can find a finite lattice that is not group representable. Suppose there exists a property $\ensuremath{\mathcal{P}}$ such that both $\ensuremath{\mathcal{P}}$ and its negation $\neg \ensuremath{\mathcal{P}}$ are interval enforceable by the lattices $L$ and $L_c$, respectively.
That is $L\cong \ensuremath{\llbracket} H,G \ensuremath{\rrbracket}$ implies $G$ has property $\ensuremath{\mathcal{P}}$, and $L_c\cong \ensuremath{\llbracket} H_c,G_c \ensuremath{\rrbracket}$ implies $G_c$ does not have property $\ensuremath{\mathcal{P}}$. Then clearly the lattice in Figure~\ref{fig:twopanelchute} could not be group representable. \begin{figure}[!h] \centering \begin{tikzpicture}[scale=0.7] \node (G) at (0,6.25) [fill,circle,inner sep=1.2pt] {}; \node (K1) at (-1.75,4) [fill,circle,inner sep=1.2pt] {}; \node (K2) at (1.75,4) [fill,circle,inner sep=1.2pt] {}; \node (H) at (0,2) [fill,circle,inner sep=1.2pt] {}; \draw (-.93,5.2) node {$L$}; \draw (.93,5.2) node {$L_c$}; \draw[semithick] (K1) to (H) to (K2) (G) to [out=197,in=85] (K1) (K1) to [out=15,in=-95] (G) (G) to [out=-15,in=95] (K2) (K2) to [out=165,in=-85] (G); \end{tikzpicture} \caption{} \label{fig:twopanelchute} \end{figure} As the next result shows, however, if a group property and its negation are interval enforceable by the lattices $L$ and $L_c$, then already at least one of these lattices is not group representable. \begin{lemma} \label{lemma:ie-prop-and-neg} If $\ensuremath{\mathcal{P}}$ is a group property that is interval enforceable by a group representable lattice, then it is not the case that $\neg \ensuremath{\mathcal{P}}$ is interval enforceable by a group representable lattice. \end{lemma} \begin{proof} Assume $\ensuremath{\mathcal{P}}$ is interval enforceable by the group representable lattice $L$, and let $H\leq G$ be groups for which $L\cong \ensuremath{\llbracket} H, G\ensuremath{\rrbracket}$. If $\neg \ensuremath{\mathcal{P}}$ is interval enforceable by the group representable lattice $L_c$, then there exist $H_c\leq G_c$ satisfying $L_c\cong \ensuremath{\llbracket} H_c, G_c\ensuremath{\rrbracket}$. Consider the group $G\times G_c$. 
This has upper intervals $L\cong \ensuremath{\llbracket} H\times G_c, G\times G_c \ensuremath{\rrbracket}$ and $L_c\cong \ensuremath{\llbracket} G\times H_c, G\times G_c \ensuremath{\rrbracket}$ and therefore, by the interval enforceability assumptions, the group $G\times G_c$ has the properties $\ensuremath{\mathcal{P}}$ and $\neg \ensuremath{\mathcal{P}}$ simultaneously, which is a contradiction. \end{proof} To take a concrete example, nonsolvability is {\small IE}. However, solvability is obviously not {\small IE}. For, if $L\cong \ensuremath{\llbracket} H, G \ensuremath{\rrbracket}$ then for any nonsolvable group $K$ we have $L\cong \ensuremath{\llbracket} H\times K, G\times K \ensuremath{\rrbracket}$, and of course $G\times K$ is nonsolvable. Note that here (and in the proof of Lemma~\ref{lemma:ie-prop-and-neg}) the group $H\times K$ at the bottom of the interval is not core-free. So a more interesting question is whether a property and its negation can both be \acs{cfIE}. Again, if such a property were found, a lattice of the form in Figure~\ref{fig:twopanelchute} would give a negative answer to the \acs{FLRP}, though this requires additional justification to address the core-free aspect (see Section \ref{sec:parachute-lattices}). This leads to the following question: If $\ensuremath{\mathcal{P}}$ is core-free interval enforceable by a group representable lattice, does it follow that $\neg \ensuremath{\mathcal{P}}$ is not core-free interval enforceable by a group representable lattice? We provide an affirmative answer in some special cases, such as when $\ensuremath{\mathcal{P}}$ means ``not solvable'' or ``not almost simple.'' Indeed, Lemma~\ref{lem:IE-must-have-wreaths} implies that the class of solvable groups, and more generally any class of groups that omits certain wreath products, cannot be core-free interval enforceable by a group representable lattice. 
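The failure of core-freeness in this product trick can be seen concretely. Below is a small brute-force sketch (our own Python illustration, with solvable stand-ins $G=S_3$, $H=\langle(0\,1)\rangle$, $K=\mathbb{Z}_2$, since only the core computation is at issue): even though $H$ is core-free in $G$, the core of $H\times K$ in $G\times K$ contains $1\times K$.

```python
# core_G(H) computed directly as the intersection of all conjugates
# g H g^{-1}.  G = S_3 acts on {0,1,2}; K = Z_2 acts on {3,4}; elements
# of G x K are realized as permutations of {0,...,4} with disjoint support.
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def pair(g, k):                          # (g, k) as one permutation of {0,...,4}
    return g + tuple(3 + k[i] for i in range(2))

S3 = list(permutations(range(3)))
Z2 = [(0, 1), (1, 0)]
e3, e2 = (0, 1, 2), (0, 1)

def core(G, H):                          # largest normal subgroup of G inside H
    C = set(H)
    for g in G:
        gi = inverse(g)
        C &= {compose(compose(g, h), gi) for h in H}
    return C

H = [e3, (1, 0, 2)]                      # H = <(0 1)>, core-free in S_3
assert core([pair(g, e2) for g in S3], {pair(h, e2) for h in H}) \
       == {pair(e3, e2)}                 # core_G(H) is trivial

GxK = [pair(g, k) for g in S3 for k in Z2]
HxK = {pair(h, k) for h in H for k in Z2}
assert core(GxK, HxK) == {pair(e3, k) for k in Z2}   # core = 1 x K != 1
```

The same computation, with $1\times K$ replaced by any normal subgroup below $H\times K$, underlies the quotient step in the proof of Lemma~\ref{lemma-wjd-2}.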
\begin{lemma} \label{lem:IE-must-have-wreaths} Suppose $\ensuremath{\mathcal{P}}$ is core-free interval enforceable by a group representable lattice. Then, for any finite nonabelian simple group $S$, there exists a wreath product group of the form $W = S\wr \bar{U}$ that has property $\ensuremath{\mathcal{P}}$. \end{lemma} \begin{proof} Let $L$ be a group representable lattice such that if $L\cong \ensuremath{\llbracket} H,G \ensuremath{\rrbracket}$ and $\ensuremath{\operatorname{core}}_G(H)=1$ then $G\in \ensuremath{\mathfrak{X}}_\ensuremath{\mathcal{P}}$. Since $L$ is group representable, there exist groups $H \leq G$ with $L \cong \ensuremath{\llbracket} H,G \ensuremath{\rrbracket}$; replacing $G$ and $H$ by $G/N$ and $H/N$, where $N = \ensuremath{\operatorname{core}}_G(H)$, we may assume $H$ is core-free in $G$, so that $G$ is a $\ensuremath{\mathcal{P}}$-group. We apply an idea of Hans Kurzweil (see~\cite{Kurzweil:1985}) twice. Fix a finite nonabelian simple group $S$. Suppose the index of $H$ in $G$ is $|G:H| = n$. Then the action of $G$ on the cosets of $H$ induces automorphisms of the group $S^n$ by permutation of coordinates. Denote this representation by $\phi: G \rightarrow \ensuremath{\operatorname{Aut}}(S^n)$, and let the image of $G$ be $\phi(G) = \bar{G} \leq \ensuremath{\operatorname{Aut}}(S^n)$. The wreath product under this action is the group \[ U:= S\wr_\phi G = S^n \rtimes_\phi G = S^n \rtimes \bar{G}, \] with multiplication given by \[ (s_1, \dots, s_n, x) (t_1, \dots, t_n, y) = (s_1 t_{x(1)}, \dots, s_n t_{x(n)}, x y), \] for $s_i, t_i \in S$ and $x, y \in \bar{G}$. (For the remainder of the proof, we suppress the semidirect product symbol and write, for example, $S^n\bar{G}$ instead of $S^n \rtimes \bar{G}$.) An illustration of the subgroup lattice of such a wreath product appears in Figure~\ref{fig:kurzweil}. Note that the interval $\ensuremath{\llbracket} D, S^n \ensuremath{\rrbracket}$, where $D$ denotes the diagonal subgroup of $S^n$, is isomorphic to $\ensuremath{\operatorname{Eq}}(n)'$, the dual of the lattice of partitions of an $n$-element set.
The dual lattice $L'$ is an upper interval of $\ensuremath{\operatorname{Sub}}(U)$, namely, $L'\cong \ensuremath{\llbracket} D\bar{G}, U \ensuremath{\rrbracket}$.\footnote{These facts, which were proved by Kurzweil in~\cite{Kurzweil:1985}, are discussed in greater detail in~\cite[Section 2.2]{DeMeo:thesis}.} \begin{figure}[!h] \begin{center} \begin{tikzpicture}[scale=.8] \node (G) at (3,3) [fill,circle,inner sep=1pt] {}; \draw (G) node [right] {$\bar{G}$}; \node (H) at (1.75,1.5) [fill,circle,inner sep=1pt] {}; \draw (H) node [left] {$\bar{H}$}; \node (Sn) at (-5,5) [fill,circle,inner sep=1pt] {}; \draw (Sn) node [left] {$S^n$}; \node (D) at (-2.5,2.5) [fill,circle,inner sep=1pt] {}; \draw (D) node [right] {$D$}; \node (DG) at (0.5,5.5) [fill,circle,inner sep=1pt] {}; \draw (DG) node [right] {$D \bar{G}$}; \node (1) at (0,0) [fill,circle,inner sep=1pt] {}; \draw (1) node [below] {$1$}; \node (SnG) at (-2,8) [fill,circle,inner sep=1pt] {}; \draw (SnG) node [above] {$U=S^n \bar{G}$}; \draw (G) to [out=190,in=80] (1) to [out=10,in=-100] (G) (Sn) to [out=0,in=90] (1) to [out=180,in=-90] (Sn) (SnG) to [out=190,in=80] (Sn) to [out=10,in=-100] (SnG) (SnG) to [out=0,in=90] (G) to [out=180,in=-90] (SnG); \draw[dotted, semithick] (G) to [out=190,in=80] (H) to [out=10,in=-100] (G) (SnG) to [out=0,in=90] (DG) to [out=180,in=-90] (SnG) (Sn) to [out=0,in=90] (D) to [out=180,in=-90] (Sn); \draw (-3.75,3.75) node {$\ensuremath{\operatorname{Eq}}(n)'$} (-.75,6.75) node {$L'$} (2.3,2.25) node {$L$}; \end{tikzpicture} \end{center} \caption{Hasse diagram illustrating some features of the subgroup lattice of the wreath product $U$.} \label{fig:kurzweil} \end{figure} It is important to note (and we prove below) that if $H$ is core-free in $G$ -- equivalently, if $\ker \phi = 1$ -- then the foregoing construction results in the subgroup $D\bar{G}$ being core-free in $U$. 
Therefore, by repeating the foregoing procedure, with $H_1 = D\bar{G}$ denoting the (core-free) subgroup of $U$ such that $L' \cong \ensuremath{\llbracket} H_1, U \ensuremath{\rrbracket}$, we find that $L = L''\cong \ensuremath{\llbracket} D_1 \bar{U}, S^m\bar{U} \ensuremath{\rrbracket}$, where $m = |U:H_1|$, and $D_1$ denotes the diagonal subgroup of $S^m$. Since $D_1\bar{U}$ is then core-free in $S^m \bar{U}$, it follows from the original hypothesis that $S^m \bar{U} = S \wr \bar{U}$ must have property $\ensuremath{\mathcal{P}}$. To complete the proof, we check that starting with a core-free subgroup $H \leq G$ in the Kurzweil construction just described results in a core-free subgroup $D \bar{G} \leq U$. Let $N = \ensuremath{\operatorname{core}}_U(D\bar{G})$. Then, for all $w=(d,\dots, d, x) \in N$ and for all $u = (t_1,\dots, t_n, g)\in U$, we have $u w u^{-1}\in N$. Fix $w=(d,\dots, d, x) \in N$. We will choose $u\in U$ so that the condition $u w u^{-1}\in N$ implies $x$ acts trivially on $\{1, \dots, n\}$. First note that if $u = (t_1,\dots, t_n, 1)$, then \begin{align*} u w u^{-1} &= (t_1,\dots, t_n, 1) (d, \dots, d, x) (t_1^{-1},\dots, t_n^{-1}, 1)\\ &=(t_1 d \,t_{x(1)}^{-1},\dots, t_n d \,t_{x(n)}^{-1}, x) \in N, \end{align*} and, since every element of $D\bar{G}$ has all of its first $n$ coordinates equal, this implies that $t_1 d\, t_{x(1)}^{-1} = t_2 d\, t_{x(2)}^{-1} =\cdots = t_n d \,t_{x(n)}^{-1}$. Suppose by way of contradiction that $x(1) = j\neq 1$. Then, since $x$ is a permutation (hence, one-to-one), $x(k) \neq j$ for each $k\in \{2, 3, \dots, n\}$. Pick one such $k$ other than $j$. (This is possible since $n = |G:H|>2$; for otherwise $H\ensuremath{\trianglelefteqslant} G$ contradicting $\ensuremath{\operatorname{core}}_G(H)=1$.) Since $u \in U$ is arbitrary, we may assume $t_1 = t_k$ and $t_{x(1)}=t_j\neq t_{x(k)}$. But this contradicts $t_1 d\, t_{x(1)}^{-1} = t_k d\, t_{x(k)}^{-1}$. Therefore, $x(1) = 1$.
The same argument shows that $x(i) = i$ for each $1\leq i\leq n$, and we see that $w=(d,\dots,d, x) \in N$ implies that $x$ acts trivially on $\{1,\dots,n\}$, so $x = 1$ in $\bar{G}$. This puts $N$ below $D$, and the only normal subgroup of $U$ that lies below $D$ is the trivial group. \end{proof} By the foregoing result we conclude that a class of groups that does not include wreath products of the form $S\wr G$, where $S$ is an arbitrary finite nonabelian simple group, is not a core-free interval enforceable class. The class of solvable groups is an example. \subsection{Dedekind's rule} \label{sec:dedekinds-rule} When $A$ and $B$ are subgroups of a group $G$, by $AB$ we mean the set $\{ a b \mid a\in A, b\in B\}$, and we write $A \ensuremath{\vee} B$ or $\<A, B\>$ to denote the subgroup of $G$ generated by $A$ and $B$. Clearly $AB \subseteq \<A,B\>$; equality holds if and only if $A$ and $B$ \emph{permute}, by which we mean $A B = B A$. We will need the following well-known result:\footnote{See~\cite[p.~122]{Rose:1978}, for example.} \begin{theorem}[Dedekind's rule] \label{lemma-dedekind} Let $G$ be a group and let $A, B$ and $C$ be subgroups of $G$ with $A\leq B$. Then, \begin{align} \label{eq:dedekind1} A(C\cap B) &= AC \cap B,\qquad \text{ and }\\ \label{eq:dedekind2} (C\cap B)A &= CA \cap B. \end{align} \end{theorem} For $A \in \ensuremath{\llbracket} H, G \ensuremath{\rrbracket}$, let $A^{\perp(H,G)}$ denote the set of complements of $A$ in the interval $\ensuremath{\llbracket} H, G\ensuremath{\rrbracket}$. That is, \[ A^{\perp(H,G)} := \{B \in \ensuremath{\llbracket} H, G \ensuremath{\rrbracket} \mid A \cap B = H, \, \<A, B\> = G\}. \] Clearly $H^{\perp(H,G)} = \{G\}$ and $G^{\perp(H,G)} = \{H\}$. Recall that an \emph{antichain} of a partially ordered set is a subset of pairwise incomparable elements.
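Dedekind's rule is also easy to confirm exhaustively in a small concrete case. The following Python sketch (a brute-force sanity check, not part of the formal development; names are ours) verifies both identities \eqref{eq:dedekind1} and \eqref{eq:dedekind2}, with $AB$ computed as a complex of elements, for all subgroup triples with $A\leq B$ in the symmetric group $S_3$.

```python
from itertools import combinations

def mul(p, q):
    # compose permutations of {0,1,2}: (p*q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

S3 = [(0, 1, 2), (1, 0, 2), (0, 2, 1), (2, 1, 0), (1, 2, 0), (2, 0, 1)]

def is_subgroup(S):
    # a nonempty subset of a finite group closed under products is a subgroup
    return all(mul(a, b) in S for a in S for b in S)

# all subgroups of S3, found by brute force over subsets
subgroups = [frozenset(s) for r in range(1, 7)
             for s in combinations(S3, r) if is_subgroup(frozenset(s))]

def setprod(X, Y):
    # the complex XY = {xy : x in X, y in Y} (a set, not always a subgroup)
    return frozenset(mul(x, y) for x in X for y in Y)

# Dedekind's rule: whenever A <= B, A(C meet B) = AC meet B, and symmetrically
for A in subgroups:
    for B in subgroups:
        if A <= B:
            for C in subgroups:
                assert setprod(A, C & B) == setprod(A, C) & B
                assert setprod(C & B, A) == setprod(C, A) & B
```

The check runs over all $6$ subgroups of $S_3$ (trivial, three of order $2$, one of order $3$, and $S_3$ itself); every instance of both identities holds, as the theorem guarantees.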
\begin{corollary} \label{cor:dedekind1} Let $A \in \ensuremath{\llbracket} H, G\ensuremath{\rrbracket}$ and let $\ensuremath{\mathcal{B}}$ be a nonempty subset of the set $A^{\perp(H,G)}$ of complements of $A$ in $\ensuremath{\llbracket} H, G \ensuremath{\rrbracket}$. If every group in $\ensuremath{\mathcal{B}}$ permutes with $A$, then $\ensuremath{\mathcal{B}}$ is an antichain. \end{corollary} \begin{proof} If $\ensuremath{\mathcal{B}}$ is a singleton, the result holds trivially. So assume $B_1$ and $B_2$ are distinct groups in $\ensuremath{\mathcal{B}}$. We prove $B_1 \nleq B_2$. Indeed, if $B_1 \leq B_2$, then Theorem~\ref{lemma-dedekind} implies \[ B_1 = B_1H = B_1(A \cap B_2) = B_1A \cap B_2 = G \cap B_2 = B_2, \] which is a contradiction. \end{proof} \subsection{Parachute lattices} \label{sec:parachute-lattices} We now prove the equivalence of statements (B) and (C) of Section~\ref{sec:intro}. \begin{theorem} \label{thm-wjd-1} The following statements are equivalent: \begin{enumerate} \item[(B)] Every finite lattice is isomorphic to an interval in the subgroup lattice of a finite group. \item[(C)] Suppose $n\geq 2$ and $\ensuremath{\mathscr{L}} = \{L_1, \dots, L_n\}$ is a set of finite lattices, at least two of which have more than two elements. For each $i = 1, 2, \dots, n$, let $\ensuremath{\mathfrak{X}}_i$ denote the class that is core-free interval enforceable by $L_i$. Then there exists a finite group $G \in \bigcap\limits_{i=1}^n \ensuremath{\mathfrak{X}}_i$ such that for each $L_i \in \ensuremath{\mathscr{L}}$ we have $L_i\cong \ensuremath{\llbracket} H_i, G \ensuremath{\rrbracket}$ for some subgroup $H_i$, where every subgroup $Y$ with $H_i \leq Y < G$ is core-free in $G$. \end{enumerate} \end{theorem} \begin{remark} By (C), the \acs{FLRP} would have a negative answer if we could find a collection $\ensuremath{\mathfrak{X}}_1, \dots, \ensuremath{\mathfrak{X}}_n$ of \acs{cfIE} classes such that $\bigcap\limits_{i=1}^n \ensuremath{\mathfrak{X}}_i$ is empty.
\end{remark} \begin{proof} Obviously (C) implies (B). Assume (B) holds and assume the hypotheses of (C). Construct a new lattice, denoted $\ensuremath{\mathscr{P}} = \ensuremath{\mathscr{P}}(L_1, \dots, L_n)$, as shown in the Hasse diagram of Figure~\ref{fig:parachute} (a), where the bottoms of the $L_i$ sublattices are atoms in $\ensuremath{\mathscr{P}}$. \begin{figure}[!h] \caption{The parachute construction.} \label{fig:parachute} \begin{center} {\scalefont{.8} \begin{tikzpicture}[scale=0.7] \node (G) at (-8,0) [fill,circle,inner sep=1.2pt] {}; \node (K) at (-11.5,-2) [fill,circle,inner sep=1.2pt] {}; \node (K1) at (-9.9,-2.8) [fill,circle,inner sep=1.2pt] {}; \node (K2) at (-8,-3.2) [fill,circle,inner sep=1.2pt] {}; \node (Kn) at (-5.2,-2.2) [fill,circle,inner sep=1.2pt] {}; \node (H) at (-8,-7) [fill,circle,inner sep=1.2pt] {}; \draw (-10,-1) node {$L_1$}; \draw (-9,-1.5) node {$L_2$}; \draw (-8,-1.6) node {$L_3$}; \draw (-6.5,-1) node {$L_n$}; \draw (-6.75,-2.8) node {$\dots$}; \draw (-8,-8.25) node {(a)}; \draw[semithick] (K) to (H) to (K1) (K2) to (H) to (Kn); \draw [semithick] (G) to [out=-140,in=0] (K) (K) to [out=55,in=185] (G) (G) to [out=-105,in=30] (K1) (K1) to [out=80,in=-140] (G) (G) to [out=-70,in=60] (K2) (K2) to [out=110,in=-110] (G) (G) to [out=-10,in=110] (Kn) (Kn) to [out=170,in=-50] (G); \node (Gr) at (1,0) [fill,circle,inner sep=1.2pt] {}; \node (Kr) at (-2.5,-2) [fill,circle,inner sep=1.2pt] {}; \node (K1r) at (-0.9,-2.8) [fill,circle,inner sep=1.2pt] {}; \node (K2r) at (1,-3.2) [fill,circle,inner sep=1.2pt] {}; \node (Knr) at (3.8,-2.2) [fill,circle,inner sep=1.2pt] {}; \node (Hr) at (1,-7) [fill,circle,inner sep=1.2pt] {}; \draw (-1,-1) node {$L_1$}; \draw (0,-1.5) node {$L_2$}; \draw (1,-1.6) node {$L_3$}; \draw (2.5,-1) node {$L_n$}; \draw (2.25,-2.8) node {$\dots$}; \draw (1,-8.25) node {(b)}; \draw (Gr) node [above] {$G$} (Kr) node [left] {$K$} (K1r) node [left] {$K_1$} (K2r) node [left] {$K_2$} (Knr) node [right] {$K_n$} (Hr)
node [right] {$H$}; \draw[semithick] (Kr) to (Hr) to (K1r) (K2r) to (Hr) to (Knr); \draw [semithick] (Gr) to [out=-140,in=0] (Kr) (Kr) to [out=55,in=185] (Gr) (Gr) to [out=-105,in=30] (K1r) (K1r) to [out=80,in=-140] (Gr) (Gr) to [out=-70,in=60] (K2r) (K2r) to [out=110,in=-110] (Gr) (Gr) to [out=-10,in=110] (Knr) (Knr) to [out=170,in=-50] (Gr); \end{tikzpicture} } \end{center} \end{figure} By (B), there exist groups $H <G$ with $\ensuremath{\mathscr{P}} \cong \ensuremath{\llbracket} H,G \ensuremath{\rrbracket}$. We can assume $H$ is a core-free subgroup of $G$. (If not, replace $G$ and $H$ with $G/N$ and $H/N$, where $N=\ensuremath{\operatorname{core}}_G(H)$.) Let $K, K_1, \dots, K_n$ be the subgroups in which $H$ is maximal and for which $L_i \cong \ensuremath{\llbracket} K_i, G \ensuremath{\rrbracket},\; 1\leq i\leq n$. (Figure~\ref{fig:parachute} (b).) We will prove that, for each $1\leq i\leq n$, every proper subgroup of $G$ that contains $K_i$ is core-free in $G$. It then follows that $G\in \ensuremath{\mathfrak{X}}_i$ for all $1\leq i \leq n$, and so $G \in \bigcap\limits_{i=1}^n \ensuremath{\mathfrak{X}}_i$. Fix $j$ and choose $Y$ such that $K_j \leq Y < G$. We will prove $Y$ is core-free. If $N = \ensuremath{\operatorname{core}}_G(Y)$ were nontrivial, then since $H$ is core-free, we would have $K_j \leq NH \leq Y$. Now, $NH$ permutes with all $X \in \ensuremath{\llbracket} H, G\ensuremath{\rrbracket}$, since for such $X$ we have $X NH = NX H = NHX$. Therefore, if $N$ is nontrivial, then the set $(NH)^{\perp(H,G)}$, the complements of $NH$ in $\ensuremath{\llbracket} H, G\ensuremath{\rrbracket}$, forms an antichain by Corollary~\ref{cor:dedekind1}. This contradicts the assumption that at least two of the lattices $L_i$ have more than two elements. \end{proof} By a \emph{parachute lattice}, denoted $\ensuremath{\mathscr{P}}(L_1, \dots, L_m)$, we mean a lattice just like the one illustrated in Figure~\ref{fig:parachute}.
We identify some special group properties that are core-free interval enforceable by a parachute lattice. \begin{lemma} \label{lemma-wjd-5} Let $\ensuremath{\mathscr{P}} = \ensuremath{\mathscr{P}}(L_1, \dots, L_n)$ with $n\geq 2$ and $|L_i|>2$ for at least two $i$, and suppose $\ensuremath{\mathscr{P}} \cong \ensuremath{\llbracket} H, G \ensuremath{\rrbracket}$ with $H$ core-free in $G$. \begin{enumerate}[(i)] \item If $1\neq N \ensuremath{\trianglelefteqslant} G$, then $NH = G$ and $C_G(N)=1$. \item $G$ is subdirectly irreducible and nonsolvable. \end{enumerate} \end{lemma} \begin{remark} If $N$ is abelian, then $N \leq C_G(N)$, so (i) implies that every nontrivial normal subgroup of $G$ is nonabelian. \end{remark} \begin{proof} (i) Assume $1\neq N \ensuremath{\trianglelefteqslant} G$. As above, we let $K_i$ denote the subgroups of $G$ corresponding to the atoms of $\ensuremath{\mathscr{P}}$, and by the same argument used to prove Theorem~\ref{thm-wjd-1}, we see that every subgroup $Y$ with $H \leq Y < G$ is core-free in $G$. Therefore, $NY=G$ for all $H \leq Y < G$. In particular, $NH=G$. To prove that $C_G(N)=1$, let $1\neq M \leq N$ be a minimal normal subgroup of $G$ contained in $N$. Since $C_G(N) \leq C_G(M)$, it suffices to prove $C_G(M)= 1$. Note that $C_G(M) \ensuremath{\trianglelefteqslant} N_G(M) =G$. If $C_G(M)$ were nontrivial, then it would follow, by what we just proved, that $C_G(M)H = G$. Consider any $H< K < G$. Then $1 < M\cap K < M$ (strictly, by Dedekind's rule). Now $M\cap K$ is normalized by $H$ and centralized (hence normalized) by $C_G(M)$. Therefore, $M\cap K \ensuremath{\trianglelefteqslant} C_G(M)H = G$, contradicting the minimality of $M$. To prove (ii) we first show that $G$ has a unique minimal normal subgroup. Let $M$ be a minimal normal subgroup of $G$ and let $N \ensuremath{\trianglelefteqslant} G$ be any normal subgroup not containing $M$. We show that $N = 1$.
Since both subgroups are normal, the commutator subgroup $[M,N]$ lies in the intersection $M\cap N$, which is trivial: $M\cap N$ is normal in $G$ and properly contained in $M$ (as $M \nleq N$), so it is trivial by the minimality of $M$. Thus, $M$ and $N$ centralize each other. In particular, $N \leq C_G(M) = 1$, by (i). Finally, since $G$ has a unique minimal normal subgroup that is nonabelian, $G$ is nonsolvable. \end{proof} Given two group-theoretical properties $\ensuremath{\mathcal{P}}_1$ and $\ensuremath{\mathcal{P}}_2$, we write $\ensuremath{\mathcal{P}}_1 \longrightarrow \ensuremath{\mathcal{P}}_2$ to denote that a group $G$ has property $\ensuremath{\mathcal{P}}_1$ only if it also has property $\ensuremath{\mathcal{P}}_2$. Thus, we clearly have \[ \ensuremath{\mathcal{P}}_1 \longrightarrow \ensuremath{\mathcal{P}}_2 \quad \Longleftrightarrow \quad \ensuremath{\mathfrak{X}}_{\ensuremath{\mathcal{P}}_1}\subseteq \ensuremath{\mathfrak{X}}_{\ensuremath{\mathcal{P}}_2}, \] where, as above, $\ensuremath{\mathfrak{X}}_{\ensuremath{\mathcal{P}}_i}$ is the class of groups having property $\ensuremath{\mathcal{P}}_i$. The conjunction $\ensuremath{\mathcal{P}}_1 \ensuremath{\wedge} \cdots \ensuremath{\wedge} \ensuremath{\mathcal{P}}_n$ corresponds to the class \[ \bigcap_{i=1}^n \ensuremath{\mathfrak{X}}_{\ensuremath{\mathcal{P}}_i} = \{G \in \ensuremath{\mathfrak{G}} \mid G \text{ has property $\ensuremath{\mathcal{P}}_i$ for all $1\leq i\leq n$} \}, \] and the following is an immediate corollary of the parachute construction: \begin{corollary} \label{cor:isle-prop-groups-1} If $\ensuremath{\mathcal{P}}_1, \dots, \ensuremath{\mathcal{P}}_n$ are cf-{\small IE}\ properties, then so is $\ensuremath{\mathcal{P}}_1 \ensuremath{\wedge} \cdots \ensuremath{\wedge} \ensuremath{\mathcal{P}}_n$.
\end{corollary} By Theorem~\ref{thm-wjd-1}, Lemma~\ref{lemma-wjd-5}, and Corollary~\ref{cor:isle-prop-groups-1}, we see that the \acs{FLRP} has a positive answer (that is, statement (B) is true) if and only if for every finite lattice $L$ there is a finite group $G$ with a subgroup $H \leq G$ satisfying all of the following: \begin{enumerate}[(i)] \item $L\cong \ensuremath{\llbracket} H, G \ensuremath{\rrbracket}$; \item $G$ is nonsolvable, nonalternating, and nonsymmetric; \item $\ensuremath{\operatorname{core}}_G(Y) = 1$ for all $H\leq Y < G$; \item $G$ has a unique minimal normal subgroup $M$, which satisfies $C_G(M) = 1$; in particular, $M$ is nonabelian and satisfies $MY = G$ for all $H\leq Y < G$. \end{enumerate}
\section{Introduction} Artificial feedforward neural networks are parametric sets of functions given as fixed compositions of units, i.e.\ elementary functions consisting of a parametrized affine map followed by a fixed activation function. They are extremely useful as sets of hypothesis functions in contemporary machine learning applications. We are interested in the geometry and combinatorics of the functions represented by networks with maxout units, which have an activation function of the form $\mathbb{R}^k\to \mathbb{R};\; (z_1,\ldots,z_k)\mapsto \max\{z_1,\ldots,z_k\}$. Maxout units were proposed in \cite{pmlr-v28-goodfellow13} generalizing the popular rectified linear units (ReLUs)~\cite{pmlr-v15-glorot11a}, which have activation function $\mathbb{R}\to \mathbb{R};\; z\mapsto \max\{0,z\}$. The corresponding functions are piecewise (affine) linear and induce subdivisions of the input space into linear regions. We will be concerned with estimating the maximum number of linear regions of the functions that can be represented by maxout networks with given architectures. The analysis of neural networks with piecewise linear activation functions based on the number of linear regions of the represented functions was proposed in \cite{pascanu2013number,NIPS2014_5422}, showing that deep networks can represent functions which have many more linear regions than any of the functions that can be represented by shallow networks with the same number of units or the same number of parameters. These kinds of results illustrate the differences in representational power and possible benefits of different network architectures. Works in this direction include \cite{6704758, telgarsky2015representation, pmlr-v49-eldan16, pmlr-v49-telgarsky16, YAROTSKY2017103, 10.5555/3298483.3298577, pmlr-v70-raghu17a} and earlier works for Boolean circuits and sum-product networks \cite{Hastad86,Hastad91,NIPS2011_8e6b42f1}.
The number of linear regions of the functions represented by networks with piecewise linear activations has sparked substantial interest in the study of neural networks, with works including \cite{NIPS2014_5422, pmlr-v49-telgarsky16, montufar2017notes, arora2018understanding, DBLP:conf/icml/SerraTR18, 8756157}. Recent works have explored approaches based on tropical geometry \cite{pmlr-v80-zhang18i, Charisopoulos2018ATA, alfarra2020on} and power diagram subdivisions~\cite{NIPS2019_9712}, while others have studied the expected number of linear regions for typical choices of the parameters in the case of ReLU networks \cite{pmlr-v97-hanin19a, NIPS2019_8328}, empirical enumeration~\cite{Serra_Ramalingam_2020}, and the relations between linear regions and the behavior of algorithms that are used to select the parameters of neural networks based on data, such as speed of convergence and implicit biases of gradient descent~\cite{Steinwart2019ASL, Zhang2020Empirical, 86441}. ReLU networks have been studied in much more detail than maxout networks. And while ReLUs are currently more popular in applications, maxout networks are an interesting generalization that enjoy similar benefits (e.g.\ linear operation, no saturation) but without some of the possible drawbacks (e.g.\ dying neurons). In this work we seek to advance the theory for maxout networks, particularly with regard to their representational power, and in doing so we develop important connections to topics in combinatorial geometry and tropical geometry. The nonlinear locus of a ReLU $x\mapsto \max\{0,z(x)\}$ with a generic affine function $z$ is a hyperplane. Hence, linear regions of functions $x\mapsto [\max\{0,z_1(x)\},\ldots, \max\{0,z_m(x)\}]^\top$ represented by a layer of $m$ ReLUs are described by hyperplane arrangements. Hyperplane arrangements have been investigated since the 19th century \cite{Steiner1826}.
In particular, Buck \cite{10.2307/2303424} showed that the numbers of regions and of bounded regions that can be obtained by slicing $n$-dimensional Euclidean space with $m$ hyperplanes are $\sum_{j=0}^n{m\choose j}$ and ${m-1\choose n}$, respectively. Moreover, a celebrated result by Zaslavsky~\cite{zaslavsky1975facing} gives a formula for the number of faces and bounded faces of hyperplane arrangements based on the poset of intersections. For a discussion of these results and other properties of hyperplane arrangements see \cite{Stanley04anintroduction}. For maxout units one obtains a more general type of arrangement. Concretely, a rank-$k$ maxout unit $x\mapsto \max\{z_1(x),\ldots, z_k(x)\}$ with generic affine functions $z_1,\ldots, z_k$ has a nonlinear locus of the form $\{x\in\mathbb{R}^n \colon z_i(x)=z_j(x)=\max\{z_1(x),\ldots, z_k(x)\}\allowbreak \text{ for some $i\neq j$}\}$. In tropical geometry this is known as a tropical hypersurface~\cite{MaclaganSturmfels15}. Hence the linear regions of the functions $x\mapsto [\max\{z_{11}(x),\ldots, z_{1k_1}(x)\},\ldots,\max\{z_{m1}(x),\ldots, z_{mk_m}(x)\}]^\top$ represented by a layer of $m$ maxout units of ranks $k_1,\ldots,k_m$ are described by arrangements of tropical hypersurfaces. We will also refer to these as maxout arrangements. The properties of such arrangements are not as well understood, except in special cases, such as tropical hyperplane arrangements \cite{Speyer08,Federico,JaggiKatzWagner}, which correspond to networks with restricted parameters, namely maxout networks whose affine maps are coordinate projections plus constants, $z_{ij}(x)=x_j + b_{ij}$. In order to obtain counting formulas and bounds for maxout networks, we will exploit a correspondence between the regions of maxout arrangements and the upper vertices of Minkowski sums of polytopes (Proposition~\ref{prop:uppervertreg}).
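Buck's counts quoted above are simple to tabulate. The short Python sketch below (function names ours) computes both formulas; for $m$ generic lines in the plane it reproduces the familiar region counts $1, 2, 4, 7, 11, \dots$, while the maxout case requires the more general Minkowski-sum machinery discussed above.

```python
from math import comb

def max_regions(m, n):
    # Buck: maximum number of regions of R^n cut out by m hyperplanes
    return sum(comb(m, j) for j in range(n + 1))

def max_bounded_regions(m, n):
    # Buck: maximum number of bounded regions (0 when m <= n)
    return comb(m - 1, n) if m >= 1 else 0
```

For example, $3$ generic lines in the plane give $7$ regions, exactly one of which (the central triangle) is bounded.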
In the special case of hyperplane arrangements (rank-$2$ maxout units), these reduce to Minkowski sums of line segments, called zonotopes \cite{z-lop-95}. Minkowski sums of polytopes are of relevance in numerous topics, including computational commutative algebra, collision detection, robot motion planning, and computer-aided design, and they have been the subject of an intensive research program over the years. In particular, the work of Gritzmann and Sturmfels~\cite{GritzmannSturmfels} showed that for sums of polytopes with at most $r$ total nonparallel edges, the maximum number of faces is attained by sums of $r$ line segments in general position. Tight expressions for the maximum number of faces of Minkowski sums of two and three full-dimensional polytopes were derived in \cite{10.1007/s00454-015-9726-6,10.1145/2462356.2462368}. Relevant to our discussion, Weibel \cite{Weibel12} obtained an expression for the number of faces of large Minkowski sums of full-dimensional polytopes in terms of the number of faces of small subsums, and tight upper bounds for the total number of vertices. Obtaining similar results for sums of polytopes of arbitrary dimensions requires significantly more complex arguments. A full solution to the so-called upper bound problem for Minkowski sums (UBPM) was obtained by Adiprasito and Sanyal \cite{KarimRaman}, giving tight upper bounds (in non-closed form) for the number of faces of any dimension, of Minkowski sums of any polytopes with given numbers of vertices. We shall take Zaslavsky's perspective (Theorem~\ref{thm:posetcounting}) to extend Weibel's result to the case of sums of polytopes of arbitrary dimensions (Theorem~\ref{thm:central-arbdims}), including the treatment of upper vertices (Theorem~\ref{thm:facessimple}) and bounds on the number of strict lower vertices (Theorem~\ref{thm:lowerbound_strict_lower}).
Combining these with an implication of Adiprasito-Sanyal's result (Proposition~\ref{thm:Mneighborly_upper_bound}), we obtain explicit tight upper bounds for the total number of vertices and for the number of upper vertices. Our results for Minkowski sums of polytopes translate to tight upper bounds on the number of linear regions of the functions represented by shallow maxout networks without and with biases (Theorem~\ref{thm:main-result}). These are the first tight results for maxout networks (except for the rank-$2$ case), closing significant gaps between the upper and lower bounds from previous works \cite{NIPS2014_5422}. Based on these results, we also derive results for deep maxout networks (Theorem~\ref{thm:deep-result}) improving previous lower and upper bounds \cite{NIPS2014_5422,DBLP:conf/icml/SerraTR18}. \smallskip This article is organized as follows. In Section~\ref{sec:perspectives} we provide definitions and different perspectives on maxout networks and their linear regions. In Section~\ref{sec:main} we present our main results on the maximum number of linear regions of maxout networks. The main analysis is conducted in Sections~\ref{sec:Weibel}, \ref{sec:Zaslavsky}, and \ref{sec:Weibel-Zaslavsky}. In Section~\ref{sec:Weibel} we present a modification of a result by Weibel to count the upper faces of Minkowski sums. In Section~\ref{sec:Zaslavsky} we present a modification of a result by Zaslavsky to count the faces of maxout arrangements. In Section~\ref{sec:Weibel-Zaslavsky} we combine these two approaches to obtain a generalization of Weibel's result to sums of polytopes of arbitrary dimensions. In Section~\ref{sec:discussion} we offer a discussion and outlook. \section{Definitions and different perspectives} \label{sec:perspectives} \subsection{Maxout networks} We consider standard feedforward fully connected maxout networks with no skip connections, called maxout networks for short.
Maxout networks were introduced in \cite{pmlr-v28-goodfellow13} as a generalization of ReLU networks. \begin{definition}[Maxout networks]\ \begin{enumerate}[leftmargin=*] \item Let $k,n\in\mathbb{N}$. A rank-$k$ maxout unit with $n$ inputs is a parametric function $$ \mathbb{R}^n\to\mathbb{R}; \quad x\mapsto \max\{ \langle A_{1}, x\rangle + b_{1} ,\ldots, \langle A_{k}, x\rangle + b_{k} \}, $$ parametrized by $\theta = (A_r,b_r)_{r=1}^k$, $A_r\in \mathbb{R}^n$, $b_r\in \mathbb{R}$, $r=1,\ldots, k$. Each affine function is called a pre-activation feature. The parameters $A_r$ and $b_r$ are called weights and biases. \item Let $m\in\mathbb{N}$ and $k_1,\ldots, k_m\in\mathbb{N}$. A layer with $n$ inputs and $m$ maxout output units of ranks $k_1,\ldots, k_m$ is a parametric function $\mathbb{R}^{n}\to\mathbb{R}^{m}$, whose $j$th output coordinate is a maxout unit of rank $k_j$, $j=1,\ldots, m$. A layer of maxout units is also called a shallow maxout network. \item Let $L\in\mathbb{N}$ and $n_0,n_1,\ldots, n_L\in\mathbb{N}$. A maxout network with $n_0$ inputs and $L$ layers of widths $n_1,\ldots, n_L$ is a parametric function $\mathbb{R}^{n_0}\to \mathbb{R}^{n_L}$ of the form $f_L \circ \cdots \circ f_1$, where $f_l$ is a function represented by a layer with $n_{l-1}$ inputs and $n_l$ maxout output units of given ranks, $l=1,\ldots, L$. A network with multiple layers is called a deep network. \item The architecture of a maxout network as described above is determined by the number of inputs $n_0$, the number of layers $L$, the number of units per layer $n_1,\ldots, n_L$, and the ranks of the maxout units in each layer $k_{l,1},\ldots, k_{l,n_l}$, $l=1,\ldots, L$. The parameter of the network, which we will denote by $\theta$, is the collection of weights and biases of all units. For a given architecture, we denote by $\mathcal{N}$ the set of functions that can be represented by the network for all possible choices of the parameter.
\end{enumerate} \end{definition} We will be concerned mostly with the analysis of shallow networks, from which we will also derive results for deep networks. We will also present results for networks without biases, in which case the affine functions $\langle A_r,x\rangle + b_r$ reduce to linear functions $\langle A_r,x\rangle$. Notice that each function represented by a maxout network is a composition of continuous piecewise (affine) linear functions and is itself a continuous piecewise linear function. When there is no risk of confusion we refer to affine linear functions simply as linear functions. \begin{definition}[Linear regions] Let $f$ be a continuous piecewise linear function with $n_0$ inputs. The nonlinear locus of $f$ is the set $V(f)\subseteq\mathbb{R}^{n_0}$ of input points $x$ at which $\nabla_x f$ is discontinuous. A linear region of $f$ is a maximal connected component of $\mathbb{R}^{n_0}\setminus V(f)$. The number of linear regions of $f$ is denoted $N(f)$. The maximum number of linear regions among all functions $f$ that can be expressed by a network $\mathcal{N}$ is denoted $N(\mathcal{N}) = \max_{f\in\mathcal{N}}N(f)$. \end{definition} \subsection{Tropical hypersurfaces} Maxout units can be regarded as tropical polynomials. We give a brief description of these notions. For more details, please see \cite[Chapter 1]{JoswigBook}. \begin{definition} \label{defn:tropLaurent} Given two real numbers $a$ and $b$, their \emph{tropical sum} is $a\oplus b = \max(a,b)$ and their \emph{tropical product} is $a\odot b = a + b$. A \emph{tropical (exponential) polynomial} is a function \[ f\colon\mathbb{R}^{n_0}\to\mathbb{R}; \quad f(x) = c_1 \odot x^{\odot\alpha_1} \oplus \dots \oplus c_k \odot x^{\odot\alpha_k}, \] where $c_1,\dots, c_k\in \mathbb{R}$, $\alpha_1,\dots, \alpha_k \in \mathbb{R}^{n_0}$, and $x^{\odot\alpha} = a_1x_1+\dots+ a_{n_0}x_{n_0}$ for $\alpha = (a_1,\dots, a_{n_0})\in\mathbb{R}^{n_0}$.
We refer to the $c_i \odot x^{\odot\alpha_i}$ as \emph{tropical monomials} and call $f$ a \emph{tropical $k$-nomial} if it is the tropical sum of $k$ distinct monomials. \end{definition} Classically, polynomials (tropical or non-tropical) only have non-negative integer exponents. However, this restriction is not needed in our discussion. A rank-$k$ maxout unit is equivalent to a tropical $k$-nomial. \begin{definition} The tropical hypersurface of a tropical polynomial $f:\mathbb{R}^{n_0}\to \mathbb{R}$ is \[\Trop(f):= \{x\in \mathbb{R}^{n_0}: c_i \odot x^{\odot\alpha_i}=c_j \odot x^{\odot\alpha_j} = f(x)\text{ for some } i\ne j\}.\] \end{definition} The complement $\mathbb{R}^{n_0}\backslash \Trop(f)$ is a union of convex polyhedral cells on which the function $f$ is linear. In particular, the nonlinear locus of a maxout unit is a tropical hypersurface. From the tropical perspective, the goal of the paper is to answer a question in tropical combinatorics, namely to bound the number of regions of an arrangement of tropical hypersurfaces. Similar questions on the combinatorics of tropical hypersurface arrangements have been studied before. However, they often focus on polynomials of bounded degree, e.g.\ degree $1$ \cite{Federico,JaggiKatzWagner} or degree $d$ \cite{Joswig_2017}, rather than a bounded number of monomials. The bounds are important for the complexity of many algorithms in tropical geometry, which often rely on an enumeration of cells in a tropical arrangement or tropical variety. \subsection{Convex conjugates and Newton polytopes} We relate the regions of a maxout network and the vertices of certain polytopes. The procedure has been explained in~\cite{pmlr-v80-zhang18i} from a tropical geometry perspective. We give a brief description that relies only on convex duality. For an introduction to convex analysis see \cite{10.2307/j.ctt14bs1ff}.
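Since a rank-$k$ maxout unit is equivalent to a tropical $k$-nomial, evaluating one amounts to a single maximum over affine forms. The following Python sketch (names ours; real exponent vectors allowed, as in Definition~\ref{defn:tropLaurent}) builds such a function from its coefficients and exponents.

```python
def tropical_polynomial(coeffs, exponents):
    # f(x) = max_i ( c_i + <alpha_i, x> ), the tropical sum of the
    # tropical monomials c_i (.) x^{(.)alpha_i}; this is exactly a
    # rank-k maxout unit with weights alpha_i and biases c_i
    def f(x):
        return max(c + sum(a * xi for a, xi in zip(alpha, x))
                   for c, alpha in zip(coeffs, exponents))
    return f
```

For instance, `tropical_polynomial([0, 0, -1], [(0,), (1,), (2,)])` represents the univariate tropical trinomial $\max\{0,\, x,\, 2x-1\}$, whose tropical hypersurface consists of the two breakpoints $x=0$ and $x=1$.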
Any continuous piecewise linear function can be expressed as the difference of two convex piecewise linear functions; see~\cite{1333237}. Hence any function $f\colon \mathbb{R}^n\to\mathbb{R}^o$ expressed by a maxout network can be written, for each output coordinate $i=1,\ldots, o$, as the difference $f_i=g_i-h_i$ of two piecewise linear convex functions $g_i$ and $h_i$. Given such a decomposition, we define a surrogate function $\bar f = g+h\colon \mathbb{R}^n\to\mathbb{R}$, where $g=\sum_{i=1}^o g_i$ and $h=\sum_{i=1}^o h_i$ are convex. This is a piecewise linear convex function and hence it can be written as \begin{equation} \bar f(x) = \max_{j=1,\ldots, M} \{\langle a_j, x\rangle + b_j\}, \quad x\in \mathbb{R}^n, \label{eq:maxoutnetworkconvexfunction} \end{equation} for a finite collection of coefficients $a_j\in \mathbb{R}^n$, $b_j\in \mathbb{R}$, $j=1,\ldots, M$. We will discuss the coefficients in more detail further below. Now, any linear region of $f$ is a union of linear regions of $\bar f$. Hence the number of linear regions of $\bar f$ is an upper bound on the number of linear regions of $f$. Moreover, any two distinct linear regions $R,Q\subseteq\mathbb{R}^n$ of $\bar f$ are also distinct linear regions of $f$ unless $\nabla g|_{R} -\nabla g|_Q = \nabla h|_{R}- \nabla h|_{Q}$, a tie which is broken, for instance, whenever $g$ is scaled independently of $h$. One can show that for generic choices of the network parameters, $f$ and $\bar f$ have the same number of linear regions. Each linear region of $\bar f$ corresponds to a (full-dimensional) neighborhood of inputs $x$ over which one of the affine functions $\langle a_j, x\rangle + b_j$ attains the maximum. A representation of $\bar f$ as in \eqref{eq:maxoutnetworkconvexfunction} may involve many redundant affine functions. One way to characterize the affine functions which attain the maximum over a neighborhood of the input space is as follows.
Consider the convex conjugate of $\bar f$, which is the convex piecewise linear function $$ {\bar f}^\ast(x^\ast) = \sup_{x\in\mathbb{R}^n} \{ \langle x,x^\ast\rangle -\bar f(x)\}, \quad x^\ast\in\mathbb{R}^n. $$ If $\bar f$ coincides with the affine function $x\mapsto \langle a, x\rangle +b$ on a neighborhood of some input point, then ${\bar f}^\ast(a) = -b$, that is, the point $(a,-b)\in\mathbb{R}^{n+1}$ lies on the graph of ${\bar f}^\ast$. We conclude that the graph of ${\bar f}^\ast$ is the convex envelope of the points $\{(a_j,-b_j)\}\subseteq\mathbb{R}^{n+1}$. The vertices of this envelope correspond to the affine functions $\langle a_j,x\rangle +b_j$ which attain the maximum over a neighborhood of the input space. An equivalent way of expressing this is as follows. \begin{definition}[Lifted Newton polytope] \label{def:Newton-polytope} For a function $\bar f(x) = \max_{j=1,\ldots, M} \{\langle a_j, x\rangle + b_j\}$, $x\in\mathbb{R}^n$, define its Newton polytope as $\operatorname{conv}\{a_j\colon j=1,\ldots, M\}\subseteq\mathbb{R}^{n}$, and its lifted Newton polytope as $P_{\bar f} =\operatorname{conv}\{(a_j,b_j)\colon j=1,\ldots, M\}\subseteq\mathbb{R}^{n+1}$. \end{definition} The upper vertices of a polytope $P\subseteq\mathbb{R}^{n+1}$ are the vertices $p$ which are `visible from above', meaning that their normal cones $\{r\in\mathbb{R}^{n+1}\colon \langle r, p-q\rangle> 0 \text{ for all } q\in P \setminus \{p\}\}$ intersect the upper halfspace $\mathbb{R}^n\times \mathbb{R}_{>0}$. \begin{proposition}[{\cite[Theorem 1.13]{JoswigBook}}] \label{prop:ftt} The linear regions of $\bar f(x) = \max_{j\in\{1,\ldots, M\}}\{\langle a_j , x\rangle + b_j\}$, $x\in \mathbb{R}^n$, correspond to the upper vertices of its lifted Newton polytope $P_{\bar f}$. \end{proposition} The duality between faces of the nonlinear locus and faces of the lifted Newton polytope goes beyond what is described in Proposition~\ref{prop:ftt}, and it is fundamental for studying the combinatorics of tropical hypersurfaces.
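To illustrate the role of redundant affine functions, the following sketch (our own; the sampling scheme is heuristic in general, but exact for this simple generic example) samples random inputs and records which pieces of a representation as in \eqref{eq:maxoutnetworkconvexfunction} ever attain the maximum; pieces that never win on an open set correspond to points of the lifted Newton polytope that are not upper vertices:

```python
import random

def active_pieces(pieces, n, samples=20000, box=10.0, seed=0):
    """Indices j whose affine piece <a_j, x> + b_j attains the max at some sample."""
    rng = random.Random(seed)
    active = set()
    for _ in range(samples):
        x = [rng.uniform(-box, box) for _ in range(n)]
        vals = [sum(ai * xi for ai, xi in zip(a, x)) + b for a, b in pieces]
        active.add(max(range(len(pieces)), key=vals.__getitem__))
    return active

# max(0, x, x - 1): the piece x - 1 is redundant, since it never attains the
# maximum on an open set, so it contributes no linear region.
pieces = [((0.0,), 0.0), ((1.0,), 0.0), ((1.0,), -1.0)]
assert active_pieces(pieces, n=1) == {0, 1}
```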
% \subsection{Minkowski sums} The lifted Newton polytopes for single layer networks have a description as Minkowski sums. Recall that the Minkowski sum of two sets $A$ and $B$ is defined as $A+B = \{a+b\colon a\in A, b\in B\}$. For a single layer of $m$ maxout units of ranks $k_1,\ldots, k_m$, % $$ \bar f(x) \!=\! \sum_{j\in[m]} \max_{r_j\in[k_j]}\{\langle w_{j,r_j}, x\rangle + b_{j,r_j}\} \!=\! \max_{r\in [k_1]\times \cdots\times[k_m]} \Big\{ \sum_{j\in[m]} \langle w_{j,r_j}, x\rangle + b_{j,r_j}\Big\} \!=\! \max_{r} \{\langle w_r , x\rangle + b_r\}, $$ where $[k]:=\{1,\ldots, k\}$, and $w_r = \sum_{j\in[m]} w_{j,r_j}$, $b_r = \sum_{j\in[m]} b_{j,r_j}$, for $r\in [k_1]\times \cdots\times[k_m]$. To see this one may use that the distributive law holds for tropical addition and multiplication % \cite[Section~1.1]{MaclaganSturmfels15}. Notice that the resulting set of coefficients is the Minkowski sum of the sets of coefficients of the individual units, $\{(w_r,b_r)\colon r\in [k_1]\times\dots\times[k_m]\} = \sum_{j\in[m]} \{(w_{j,r_j},b_{j,r_j})\colon r_j\in[k_j]\}$. Now consider the lifted Newton polytope of the % layer and the polytopes of the individual units, $ P_{[m]} = \conv\left\{(w_r,b_r)\colon r\in [k_1]\times\dots\times[k_m]\right\}$, $P_j= \conv \left\{(w_{j,r_j},b_{j,r_j})\colon r_j\in[k_j]\right\}, \; j\in[m]$. Since the Minkowski sum of convex sets is convex, we have \begin{equation*} P_{[m]} = \sum_{j\in[m]} P_{j}. \end{equation*} In turn, the polytope for a layer is obtained by taking the Minkowski sum of the polytopes of the individual units. This is all we will need in our discussion. We note that, following the arguments of \cite{pmlr-v80-zhang18i}, one can also describe how the polytopes corresponding to layers are combined to obtain a polytope for a deep network. That work focused on ReLU networks, but the same arguments extend naturally to maxout networks. We collect a few observations in the next proposition. 
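The Minkowski-sum description of the coefficient set can be checked directly in code. The sketch below (with two hypothetical units, of ranks $2$ and $3$) enumerates one preactivation feature per unit and sums the coefficients:

```python
from itertools import product

def layer_coefficients(units):
    """units: one list per maxout unit of (w, b) pairs; returns {(w_r, b_r)}."""
    coeffs = set()
    for choice in product(*units):          # one preactivation feature per unit
        w = tuple(sum(ws) for ws in zip(*(wi for wi, _ in choice)))
        b = sum(bi for _, bi in choice)
        coeffs.add((w, b))
    return coeffs

unit1 = [((0.0, 0.0), 0.0), ((2.0, 2.0), 0.0)]                     # max(0, 2x + 2y)
unit2 = [((1.0, 0.0), 1.0), ((0.0, 1.0), 1.0), ((1.0, 1.0), 0.0)]  # max(x+1, y+1, x+y)
coeffs = layer_coefficients([unit1, unit2])
assert len(coeffs) == 2 * 3          # all k1 * k2 candidate affine pieces
assert ((3.0, 3.0), 0.0) in coeffs   # (2, 2, 0) + (1, 1, 0)
```

Here the coefficient set of the layer is exactly the Minkowski sum of the units' coefficient sets, as described above; not all $k_1 k_2$ candidates need to be upper vertices of the resulting polytope.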
For maxout units of ranks $k_1,\ldots, k_m$ the polytopes $P_1,\ldots, P_m$ are arbitrary convex hulls of, respectively, $k_1,\ldots, k_m$ points in $\mathbb{R}^{n+1}$. For units without biases, the last coordinate of the coefficients is always zero, so that the polytopes are in $\mathbb{R}^n\times\{0\}$ and the upper vertices are simply the vertices. \begin{proposition} \label{prop:uppervertreg} The % linear regions of a function represented by a layer with $n$ inputs and maxout units of ranks $k_1,\ldots, k_m$ correspond to the upper vertices of a Minkowski sum of polytopes which are convex hulls of $k_1,\ldots, k_m$ points in $\mathbb{R}^{n+1}$. For a layer without biases, the % linear regions correspond to the % vertices of a Minkowski sum of polytopes which are convex hulls of $k_1,\ldots, k_m$ points in $\mathbb{R}^n$. \end{proposition} The difference between counting the vertices vs % the upper vertices of Minkowski sums of polytopes is analogous to the difference between counting the regions of a central arrangement (without biases) vs % counting the regions of a non-central arrangement (with biases). % The nonlinear locus $V(f)$ of a function $f$ represented by a layer of maxout units without biases is the normal fan of the Newton polytope. The nonlinear locus of a function with biases is the intersection of the normal fan of the lifted Newton polytope in $\mathbb{R}^{n+1}$ and the affine space $\mathbb{R}^n\times \{1\}$. The hardest part in our proofs will be to upper bound the number of regions of non-central arrangements. A result of particular importance in our analysis is the Upper Bound Theorem for Minkowski Sums by Adiprasito and Sanyal \cite[Theorem 6.11]{KarimRaman}, which shows that the maximum number % of $s$-dimensional faces % of Minkowski sums of polytopes with given numbers of vertices % is attained by sums of % so-called Minkowski neighborly families. 
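As a quick planar illustration (our own, using a standard monotone-chain convex hull): two generic segments in $\mathbb{R}^2$ attain the trivial vertex bound $2\cdot 2$ for their Minkowski sum, while three generic segments form a zonogon with only $6 < 2^3$ vertices, so the trivial product bound cannot persist once the number of summands exceeds the dimension:

```python
from itertools import product

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices of a 2D point set."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    return chain(pts)[:-1] + chain(pts[::-1])[:-1]

def minkowski_vertex_count(point_sets):
    sums = {tuple(map(sum, zip(*c))) for c in product(*point_sets)}
    return len(convex_hull(sums))

s1, s2, s3 = [(0, 0), (1, 2)], [(0, 0), (2, -1)], [(0, 0), (1, 1)]
assert minkowski_vertex_count([s1, s2]) == 4       # trivial bound 2 * 2
assert minkowski_vertex_count([s1, s2, s3]) == 6   # zonogon, below 2 ** 3
```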
From their result we derive the following proposition, which we will later use to obtain an explicit form of the upper bound % for vertices. % It will also be an ingredient in our upper bound for upper vertices. For a polytope $P$, let $f_s(P)$ denote the number of $s$-dimensional faces of $P$. \begin{proposition} \label{thm:Mneighborly_upper_bound} Let $0\leq s\leq n$. % If a Minkowski sum of polytopes $P = P_1+\cdots+P_m\subseteq\mathbb{R}^{n+1}$ has the maximum number of $s$-faces among all sums with given $f_0(P_1),\ldots, f_0(P_m)$, then $f_0(\sum_{i\in S}P_i)=\prod_{i\in S}f_0(P_i)$ for all $S\subseteq [m]$, $|S|\leq n$. % \end{proposition} Intuitively, this proposition states that if a sum of polytopes in $\mathbb{R}^n$ with given vertex counts reaches the largest possible number of vertices, then each partial sum of at most $n$ of the polytopes reaches a trivial upper bound on the number of vertices. \begin{proof} We explain how to derive the claim based on % \cite{KarimRaman}. We refer the reader to that paper for details on Minkowski neighborly families, Cayley polytopes, (relative) Cayley complexes, $h$-vectors, and the corresponding Dehn-Sommerville relations. That paper shows that a Minkowski sum $P_1+\cdots+ P_m$ of polytopes $P_i\subseteq\mathbb{R}^d$ attains the maximum number of $k$-faces, $0\leq k\leq d$, % if the family $(P_1,\ldots, P_m)$ is Minkowski neighborly. (Following their notation, here we use $k$ for the dimension of the faces). % Of particular interest is the classification of cases maximizing the number of faces of a particular dimension $k_0$. Following \cite[Theorem 6.11]{KarimRaman}, for a given $k_0$, a Minkowski sum $P_1+\cdots+P_m$ attains the maximum number of $k_0$-faces if and only if the $h$-vector of its relative Cayley complex attains maximum value at all entries $h_{k+m-1}$ with $k\leq k_0+1$. 
By the Dehn-Sommerville relations, the entries with $k+m-1>\lfloor\frac{d+m-1}{2}\rfloor$ are determined as positive linear functions of the entries with $k+m-1\leq \lfloor\frac{d+m-1}{2}\rfloor$ of the $h$-vectors for sub-families $(P_i)_{i\in U}$, $U\subseteq [m]$. Hence we only need to verify that the latter entries are maximal. By \cite[Theorem~6.11(2a)]{KarimRaman}, this is equivalent to verifying that for any $k'+m'-1 \leq \lfloor\frac{d+m'-1}{2}\rfloor$ and $U\subseteq[m]$, $|U|=m'$, the following holds. For all $S\subseteq U$, all nonfaces of the Cayley polytope $T_S$ of cardinality $k'+|S|-1$ are supported in some vertex set $V(T_R)$ with $R\subsetneq S$. One can rewrite the cases as $k'\leq \lfloor\frac{d+m'-1}{2}\rfloor - (m'-1)$ and nonfaces of cardinality $\leq \lfloor\frac{d-(m'-1)}{2}\rfloor + |S|-1$. For $m'\geq d$, the condition is trivially satisfied. Hence we only need to check cases with $m'=d-r$, $r\geq 1$, and nonfaces of cardinality $\leq \lfloor\frac{r+1}{2}\rfloor + |S|-1$, where $|S|\leq d-r$, $r\geq1$. This means that for any $S\subseteq[m]$ of cardinality $|S|\leq n$, any selection of one vertex from each $P_i$, $i\in S$, results in a vertex of the polytope $P_S=\sum_{i\in S} P_i$, and hence that $P_S$ attains the trivial upper bound on the number of vertices, $f_0(P_S) = \prod_{i\in S} f_0(P_i)$. 
\end{proof} \begin{figure} \centering \begin{tikzpicture} \node (edge) at (-1,0) {% \begin{tikzpicture}[font=\scriptsize] \useasboundingbox (-2.5,0,-2.5) -- (0.5,0,-2.5) -- (0.5,0,0.5) -- (-2.5,0,0.5) -- cycle; \draw[dotted] (0,0,0) -- ++(0,3) (-2,0,-2) -- ++(0,3); \fill[red] (0,0,0) circle (2pt) (-2,0,-2) circle (2pt); \draw[red, very thick] (0,0,0) -- (-2,0,-2); \node[anchor=south east,red] at (-2,0,-2) {$(0,0,0)$}; \node[anchor=north west,red] at (0,0,0) {$(2,2,0)$}; \end{tikzpicture} }; \node (edgeDual) at (-1,3) {% \begin{tikzpicture} \useasboundingbox (-2.5,0,-2.5) -- (0.5,0,-2.5) -- (0.5,0,0.5) -- (-2.5,0,0.5) -- cycle; \draw (-2.5,0,-2.5) -- node[anchor=south] {$\max(0,2x+2y)$} (0.5,0,-2.5) -- (0.5,0,0.5) -- (-2.5,0,0.5) -- cycle; \fill[black!20] (0,0,0) circle (2pt) (-2,0,-2) circle (2pt); \draw[black!20,very thick] (0,0,0) -- (-2,0,-2); \draw[very thick,red] (-1,0,-1) -- ++(1,0,-1) (-1,0,-1) -- ++(-1,0,1); \end{tikzpicture} }; \node (triangle) at (3.5,0) {% \begin{tikzpicture}[font=\scriptsize] \useasboundingbox (-0.75,0,-1) -- (2.25,0,-1) -- (2.25,0,2) -- (-0.75,0,2) -- cycle; \fill[blue!80] (1,1,0) circle (2pt) (0,1,1) circle (2pt) (1,0,1) circle (2pt); \draw[blue!80,fill=blue!20,very thick] (1,1,0) -- (0,1,1) -- (1,0,1) -- cycle; \draw[dotted] (1,1,0) -- ++(0,2) (0,1,1) -- ++(0,2) (1,0,1) -- ++(0,3); \node[anchor=east,blue] at (0,1,1) {$(1,0,1)$}; \node[anchor=west,blue] at (1,1,0) {$(0,1,1)$}; \node[anchor=north,blue] at (1,0,1) {$(1,1,0)$}; \end{tikzpicture} }; \node (triangleDual) at (3.5,3) {% \begin{tikzpicture} \useasboundingbox (-0.75,0,-1) -- (2.25,0,-1) -- (2.25,0,2) -- (-0.75,0,2) -- cycle; \draw (-0.75,0,-1) -- node[anchor=south] {$\max(x+1,y+1,x+y)$} (2.25,0,-1) -- (2.25,0,2) -- (-0.75,0,2) -- cycle; \fill[black!20] (1,0,0) circle (2pt) (0,0,1) circle (2pt) (1,0,1) circle (2pt); \draw[black!20,very thick, line join=bevel % ] (1,0,0) -- (0,0,1) -- (1,0,1) -- cycle; \coordinate (o) at (0.7,0,0.7); \fill[blue!80] (o) circle (2pt); \draw[very 
thick,blue!80] (o) -- ++(1,0,0) (o) -- ++(0,0,1) (o) -- ++(-1,0,-1); \end{tikzpicture} }; \node (minkowski) at (9,0) {% \begin{tikzpicture}[font=\scriptsize] \useasboundingbox (-2.5,0,-2.5) -- (2,0,-2.5) -- (2,0,2) -- (-2.5,0,2) -- cycle; % \node[anchor=west] at (1,1,0) {$(2,3,1)$}; \node[anchor=west] at (1,0,1) {$(3,3,0)$}; \node[anchor=east] at (-2,1,-1) {$(1,0,1)$}; \node[anchor=north east] at (-1,0,-1) {$(1,1,0)$}; \draw[dotted] (-2,1,-1) -- ++(0,2) (-1,1,-2) -- ++(0,2) (0,1,1) -- ++(0,2) (1,1,0) -- ++(0,2) (1,0,1) -- ++(0,3); \draw[blue!80,very thick, densely dashed] (-1,1,-2) -- (-1,0,-1); \draw[red,very thick] (-2,1,-1) -- (0,1,1) (-1,0,-1) -- (1,0,1) (-1,1,-2) -- (1,1,0); \draw[blue!80,very thick] (1,1,0) -- (0,1,1) -- (1,0,1) -- cycle (-2,1,-1) -- (-1,0,-1) (-1,1,-2) -- (-2,1,-1); % % % % \fill (1,1,0) circle (2pt) (0,1,1) circle (2pt) (1,0,1) circle (2pt) (-1,1,-2) circle (2pt) (-2,1,-1) circle (2pt); \fill[draw=black,fill=white,very thick] (-1,0,-1) circle (1.5pt); \end{tikzpicture} }; \node (minkowskiDual) at (9,3) {% \begin{tikzpicture} \draw (-2.5,0,-2.5) -- (2,0,-2.5) -- (2,0,2) -- (-2.5,0,2) -- cycle; \fill[black!20] (1,0,0) circle (2pt) (0,0,1) circle (2pt) (1,0,1) circle (2pt) (-1,0,-2) circle (2pt) (-2,0,-1) circle (2pt); \draw[black!20,very thick,line join=bevel] (1,0,0) -- (0,0,1) -- (1,0,1) -- cycle (1,0,0) -- (-1,0,-2) -- (-2,0,-1) -- (0,0,1); \coordinate (o) at (0.7,0,0.7); \fill[blue!80] (o) circle (2pt); \draw[very thick,blue!80] (o) -- ++(1,0,0) (o) -- ++(0,0,1) (o) -- ++(-2.5,0,-2.5); \draw[very thick,red] (0,0,0) -- ++(1,0,-1) (0,0,0) -- ++(-1,0,1); \end{tikzpicture} }; \node at (1.25,0) {$+$}; \node at (1.25,3) {$\cup$}; \node at (6,0) {$=$}; \node at (6,3) {$=$}; \node[font=\footnotesize,anchor=north east] (visibleText) at (9,-0.3) {not visible from above}; \draw[->] ($(visibleText.north east)+(-0.45,0)$) -- ++(0,0.4); \end{tikzpicture}\vspace{-3mm} \caption{Upper vertices of Minkowski sums and regions of maxout arrangements. 
% } \label{fig:my_label} \end{figure} \section{Number of linear regions for maxout networks} \label{sec:main} In this section we present our main results on the maximum number of linear regions of maxout networks. First we provide % general observations, then % turn to our main results on shallow networks, and derive implications for deep networks. For shallow networks, we obtain sharp bounds and provide a construction that attains them. The main analysis will be conducted in the upcoming Sections~\ref{sec:Weibel},~\ref{sec:Zaslavsky}, and~\ref{sec:Weibel-Zaslavsky}. \subsection{General observations} We begin with a simple general upper bound on the number of linear regions of the functions represented by maxout networks: \begin{proposition}[Simple upper bound on the number of regions] \label{prop:simple-upper-bound} For any maxout network $\mathcal{N}$ with a total of $m$ maxout units of ranks $k_1,\ldots, k_m\in\mathbb{N}$, $N(\mathcal{N})\leq \prod_{j=1}^m k_j$. \end{proposition} \begin{proof} Let $n_0$ be the number of inputs. Let $L$ be the number of layers and denote their widths $n_1,\ldots, n_L$. Write $k_{l,j}$ for the rank of the $j$th unit $j=1,\ldots, n_l$ in the $l$th layer $l=1,\ldots,L$. Fix the parameters $A_{l,j,r}\in\mathbb{R}^{n_{l-1}}$, $b_{l,j,r}\in \mathbb{R}$ of all preactivation features $r=1,\ldots, k_{l,j}$, of all units $j=1,\ldots, n_l$, of all layers $l=1,\ldots,L$. Then, for each input $x$, the represented function $f$ takes the form $(\bar A_{L} \cdots \bar A_{1}) x + (\sum_{l=1}^L \bar A_L \cdots \bar A_{l+1} \bar b_l)$, where each $\bar A_l \colon \mathbb{R}^{n_0}\to\mathbb{R}^{n_l\times n_{l-1}}$ and $\bar b_l\colon\mathbb{R}^{n_0}\to\mathbb{R}^{n_l}$ is a piecewise constant function of $x$ having $j$th row equal to one of the $k_{l,j}$ values $A_{l,j,1},\ldots,A_{l,j,k_{l,j}}\in\mathbb{R}^{n_{l-1}}$ and $b_{l,j,1},\ldots, b_{l,j,k_{l,j}}\in\mathbb{R}$, depending on which of the preactivation features assumes the maximum. 
% The list of preactivation features that assume the maximum for each unit is called the activation pattern of the network at the particular input. % The set of inputs with a particular activation pattern is determined by a list of linear inequalities and hence it is a convex polyhedron. % In summary, the input space is split into at most $\prod_{l=1}^L\prod_{i=1}^{n_l} k_{l,i}$ connected regions on each of which $f$ is linear. \end{proof} The following proposition states that generic perturbations of the parameters do not decrease the number of linear regions. Here, generic means up to a null set with respect to the Lebesgue measure in parameter space. This is important, as later it will allow us to obtain sharp upper bounds by considering polytopes that are in general orientation. \begin{proposition}[Generic perturbations of parameters do not decrease the number of regions]% Consider a network $\mathcal{N}$ consisting of a finite number of maxout units. Let $f_\theta\in \mathcal{N}$. Then there exists an $\epsilon=\epsilon(\theta)>0$ such that for % generic $\theta'$ with $\|\theta'-\theta\|<\epsilon$, $N(f_{\theta'})\geq N(f_{\theta})$. \end{proposition} \begin{proof} The intuition is that every linear region of $f_\theta$ contains a neighborhood of an input point $x_0$, and small perturbations of the parameter $\theta$ only cause small changes in the distance between $x_0$ and the nonlinear locus $V(f_\theta)$, so that no linear regions can `disappear' under small perturbations of the network parameters. The formal argument is based on the correspondence between regions and upper vertices of the lifted Newton polytope, Proposition~\ref{prop:ftt}. Consider the function $f_\theta$ represented by the network $\mathcal{N}$ with parameter $\theta$, and the corresponding convex function $\bar f_\theta=\max_j\{\langle a_j(\theta),x\rangle +b_j(\theta)\}$ described in~\eqref{eq:maxoutnetworkconvexfunction}. 
The lifted Newton polytope of $\bar f_\theta$ is the convex hull of points $(a_j(\theta),b_j(\theta))\in\mathbb{R}^{n+1}$ that have a continuous, in fact polynomial, parametrization in $\theta$. The statement now follows from the lower semi-continuity of the face numbers of polytopes discussed in \cite[Section~5.3]{Gruenbaum2003}. More precisely, denote by $\rho$ the Hausdorff metric which is defined as $\rho(A_1,A_2)=\inf\{\alpha>0\colon A_1\subseteq A_2 + B_\alpha, A_2\subseteq A_1 + B_\alpha \}$, where $B_\alpha$ is a radius-$\alpha$ ball around the origin. If $P$ is a bounded polytope, then there exists an $\epsilon = \epsilon(P)>0$ such that for every $P'$ with $\rho(P', P)< \epsilon$ we have $f_k(P')\geq f_k(P)$ for any $0\leq k\leq n$, where $f_k(P)$ is the number of $k$-faces of $P$. The same statement clearly applies to the upper faces of polytopes, and hence to the number of linear regions of $\bar f_\theta$. Since the linear regions of $f_\theta$ and $\bar f_\theta$ are equal for generic choices of parameters, the claim follows. % \end{proof} \subsection{Shallow networks} For shallow maxout networks, \cite{NIPS2014_5422} obtained the following bounds. The upper bound is based on embedding a maxout arrangement in a hyperplane arrangement and using well known upper bounds for that case. We add a minor improvement which was pointed out in \cite{DBLP:conf/icml/SerraTR18} (substituting $k^2$ with $k(k-1)/2$). \begin{proposition}[{\cite[Proposition 7]{NIPS2014_5422}}] \label{proposition:previous-bounds} For a network $\mathcal{N}$ with $n$ inputs and a single layer of $m$ rank-$k$ maxout units, $k^{\min\{n,m\}}\leq N(\mathcal{N})\leq \sum_{j=0}^{n}{m k (k-1)/2 \choose j}$. \end{proposition} Notice the significant gap between the lower and upper bounds in Proposition~\ref{proposition:previous-bounds}, of orders $\Omega(k^{n})$ and $O((m k^2)^n)$ in $m$ and $k$. 
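To make the gap tangible, the two bounds of Proposition~\ref{proposition:previous-bounds} are easy to evaluate numerically; the following sketch (our own, for a small hypothetical configuration) computes both:

```python
from math import comb

# Previous bounds for a shallow layer with n inputs and m rank-k maxout units:
#   k^min(n, m)  <=  N  <=  sum_{j=0}^{n} C(m k (k-1)/2, j)
def previous_lower_bound(n, m, k):
    return k ** min(n, m)

def previous_upper_bound(n, m, k):
    hyperplanes = m * k * (k - 1) // 2
    return sum(comb(hyperplanes, j) for j in range(n + 1))

# e.g. n = 2 inputs, m = 4 units of rank k = 3
assert previous_lower_bound(2, 4, 3) == 9
assert previous_upper_bound(2, 4, 3) == 1 + 12 + 66  # = 79
```

Already for this small configuration the two bounds differ by nearly an order of magnitude, which motivates the sharper analysis below.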
The construction for the lower bound can be generalized and improved, as we show in the next proposition. In Theorem~\ref{thm:main-result} we will show that this lower bound is optimal. \begin{proposition}[Lower bound for shallow maxout networks] \label{prop:singleLayerLowerBound} For a network $\mathcal{N}$ with $n$ inputs and a single layer of $m$ maxout units of ranks $k_1,\ldots,k_m$, $N(\mathcal{N})\geq\sum_{j=0}^{n} \sum_{S\in {[m]\choose j}} \prod_{i\in S}(k_i-1)$. The bound is realized if each of the maxout units has a nonlinear locus consisting of $k_i-1$ distinct parallel hyperplanes and the normals of different units are in general position. \end{proposition} \begin{proof} We count the number of regions for the special case where each maxout unit has a nonlinear locus consisting of $k_i-1$ parallel hyperplanes, $i=1,\ldots,m$. This provides a lower bound on the maximum possible number of regions. Zaslavsky's theorem \cite{zaslavsky1975facing} states that the number of regions of an arrangement $\mathcal{A}$ of affine hyperplanes in an $n$-dimensional real vector space is $r(\mathcal{A}) = (-1)^n\chi_\mathcal{A}(-1)$, where $\chi_\mathcal{A}$ is the characteristic polynomial of $\mathcal{A}$. For \emph{generic translations of hyperplanes of a linear arrangement}, Stanley \cite[pg.\ 22]{Stanley04anintroduction} shows that $\chi_\mathcal{A}(t) = \sum_\mathcal{B}(-1)^{|\mathcal{B}|} t^{n-|\mathcal{B}|}$, where $\mathcal{B}$ ranges over all subsets of hyperplanes in $\mathcal{A}$ with linearly independent normals. 
Applying these two results to an arrangement $\mathcal{A}$ in $\mathbb{R}^{n}$ consisting of $m$ sets of $k_i-1$ parallel hyperplanes, $i=1,\ldots, m$, with hyperplanes in different sets being in general position, we obtain $\chi_\mathcal{A}(t) = \sum_{j=0}^{n} (\sum_{\subsmash{S\in {[m]\choose j}}} \prod_{i\in S}(k_i-1))(-1)^j t^{n-j}$ and $r(\mathcal{A})= (-1)^n\chi_\mathcal{A}(-1) = \sum_{j=0}^{n} (\sum_{S\in {[m]\choose j}} \prod_{i\in S}(k_i-1))$. \end{proof} A similar argument can be used to obtain the following lower bound for the maximum number of regions for functions represented by maxout networks without biases. In Theorem~\ref{thm:main-result} we will show that this lower bound is also optimal. \begin{proposition}[Lower bound for shallow maxout networks without biases] \label{prop:singleLayerLowerBoundNoBias} For a network $\mathcal{N}$ with $n$ inputs and a single layer of maxout units of ranks $k_1,\ldots, k_m$ and no biases, $N(\mathcal{N})\geq {m-1\choose n-1} + \sum_{j=0}^{n-1} \sum_{S\in{[m]\choose j}}\prod_{i\in S}(k_i-1)$. \end{proposition} \begin{proof} Consider $\mathbb{R}^n$ and the hyperplane $H = \{x'\in\mathbb{R}^n\colon x'_n=1\}$. Any linear function $x'\mapsto \langle w', x' \rangle$ on $\mathbb{R}^n$ takes over $H$ the form $\langle w , x \rangle + b $, where $w' = (w,b)$ and $x'=(x,1)$. Thus, setting the weights of our layer without biases as $w_{i}'=(w_{i},b_{i})\in\mathbb{R}^n$, with $w_i\in\mathbb{R}^{n-1}$ the weights and $b_i\in\mathbb{R}$ the biases of Proposition~\ref{prop:singleLayerLowerBound} for $n-1$ inputs, where $i$ runs over all preactivation features of all units, we obtain a function whose restriction to $H$ has $\sum_{j=0}^{n-1}\sum_{S\in{[m]\choose j}}\prod_{i\in S}(k_i-1)$ linear regions. % Now we argue that this function can be constructed so that it has ${m-1\choose n-1}$ additional linear regions that are not intersected by $H$. 
By our construction, over $H$ the nonlinear locus of the $j$th unit $\max\{\langle w_{j,1},x\rangle +b_{j,1},\ldots, \langle w_{j,k_j},x\rangle +b_{j,k_j}\}$ consists of $k_j-1$ parallel hyperplanes, meaning that all $w_{j,1},\ldots, w_{j,k_j}$ are equal to some fixed $w_j$ up to scaling. Consider the hyperplane $G=\{x'\in\mathbb{R}^n\colon x'_n=-1\}$. Over $G$, any of the maxout units takes the form $\max\{\langle \alpha_1 w_{j},x\rangle -b_{j,1} ,\ldots, \langle \alpha_{k_j} w_{j},x\rangle -b_{j,k_j}\}$ and its nonlinear locus includes (actually it consists precisely of) one hyperplane $\{x\in \mathbb{R}^{n-1}\colon (\alpha_r-\alpha_s)\langle w_j ,x\rangle - (b_{j,r}-b_{j,s}) = 0\}$ for suitable $r\neq s$. Hence, over $G$ our layer has linear regions determined by $m$ affine hyperplanes. Now, an arrangement of $m$ hyperplanes in general position in ${n-1}$ dimensions has ${m-1\choose n-1}$ relatively bounded regions. None of these intersect $H$. \end{proof} Our main result determines the maximal number of linear regions for shallow maxout networks with and without biases, for any input dimension, any number of maxout units, and any ranks. It shows that the lower bounds in Propositions~\ref{prop:singleLayerLowerBound} and~\ref{prop:singleLayerLowerBoundNoBias} are sharp. \begin{theorem}[Optimal bound for shallow maxout networks] \label{thm:main-result} For a shallow network $\mathcal{N}$ with $n$ inputs and a layer of $m$ maxout units of ranks $k_1,\ldots, k_m$, we have \begin{align*} N(\mathcal{N}) &= \sum_{j=0}^{n} \sum_{S\in {[m]\choose j}} \prod_{i\in S}(k_i-1),\\ \intertext{where ${[m]\choose j}$ is the set of subsets of $[m]=\{1,\ldots, m\}$ with cardinality $j$. Here ${[m]\choose 0} = \{\emptyset\}$, empty sums are $0$, and empty products are $1$. In particular, if $k_1=\cdots=k_m =k$, we have $N(\mathcal{N}) = \sum_{j=0}^{n}{m\choose j}(k-1)^j$.
Moreover, if $\mathcal N$ is without biases, then} N(\mathcal{N}) &= {m'-1\choose n-1} + \sum_{j=0}^{n-1} \sum_{S\in {[m]\choose j}} \prod_{i\in S}(k_i-1), \end{align*} where $m'$ is the number of maxout units of rank larger than $1$. In particular, if $k_1=\cdots=k_m =k>1$, we have $N(\mathcal{N}) = {m-1\choose n-1}+\sum_{j=0}^{n-1}{m\choose j}(k-1)^j$. \end{theorem} The proof relies on several results that will be developed in Sections~\ref{sec:Weibel}, \ref{sec:Zaslavsky}, and \ref{sec:Weibel-Zaslavsky}. The most difficult part is the upper bound for the case with biases and many units ($m\geq n+1$) possibly having small ranks (allowed to be smaller than $n+2$), which also happens to be the case of highest practical interest. The main ideas are as follows. It is not difficult to adapt a result by Weibel to count upper faces of Minkowski sums (Theorem~\ref{thm:f_vectors_upper_part_Minkowski_sum}). This provides us with a formula for the number of linear regions (and other lower-dimensional features) for shallow maxout networks. However, the formula consists of an alternating sum over sub-arrangements whose maximum value is difficult to determine when some of the polytopes are not full dimensional. Studying whole polytopes instead of their upper faces allows us to leverage Adiprasito and Sanyal's Upper Bound Theorem for Minkowski Sums and its implications for small Minkowski subsums (Proposition~\ref{thm:Mneighborly_upper_bound}). To obtain an explicit formula for the maximum number of overall vertices, we generalize Weibel's formula for vertices to encompass possibly lower-dimensional polytopes (Theorem~\ref{thm:central-arbdims}) and upper vertices (Theorem~\ref{thm:facessimple}). To this end we formulate a maxout version of Zaslavsky's theorem (Theorem~\ref{thm:posetcounting}). In order to differentiate between all vertices and upper vertices, we study bounded regions of maxout arrangements (Theorem~\ref{thm:lowerbound_strict_lower}).
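Before turning to these ingredients, we note that the closed-form counts of Theorem~\ref{thm:main-result} are straightforward to evaluate; the following sketch (ours) implements both formulas and checks the stated equal-rank and few-unit specializations:

```python
from itertools import combinations
from math import comb, prod

def regions_with_biases(n, ks):
    """sum_{j<=n} sum_{|S|=j} prod_{i in S} (k_i - 1)."""
    return sum(prod(ks[i] - 1 for i in S)
               for j in range(n + 1)
               for S in combinations(range(len(ks)), j))

def regions_without_biases(n, ks):
    """C(m'-1, n-1) + sum_{j<=n-1} sum_{|S|=j} prod_{i in S} (k_i - 1)."""
    m_prime = sum(1 for k in ks if k > 1)
    return comb(m_prime - 1, n - 1) + sum(
        prod(ks[i] - 1 for i in S)
        for j in range(n)
        for S in combinations(range(len(ks)), j))

n, m, k = 2, 3, 3
assert regions_with_biases(n, [k] * m) == sum(comb(m, j) * (k - 1) ** j
                                              for j in range(n + 1))  # 19
assert regions_without_biases(n, [k] * m) == comb(m - 1, n - 1) + sum(
    comb(m, j) * (k - 1) ** j for j in range(n))                      # 9
assert regions_with_biases(2, [3, 4]) == 3 * 4   # few units: trivial bound
```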
In summary, the key results towards proving Theorem~\ref{thm:main-result} are the following: \setlist[description]{font=\normalfont} \begin{description}[leftmargin=3mm] \item[Propositions~\ref{prop:singleLayerLowerBound} and \ref{prop:singleLayerLowerBoundNoBias}:] constructive lower bounds for the maximum number of linear regions of shallow maxout networks with and without biases, improving a previous construction by Mont\'ufar et al.~\cite{NIPS2014_5422}. \item[Proposition~\ref{thm:Mneighborly_upper_bound}:] a consequence of the Upper Bound Theorem for Minkowski sums by Adiprasito-Sanyal~\cite{KarimRaman} to the number of vertices of small Minkowski subsums of polytopes attaining said upper bound. \item[Theorem~\ref{thm:posetcounting}:] a maxout version of Zaslavsky's theorem for hyperplane arrangements~\cite{zaslavsky1975facing}, which expresses the number of regions of a maxout arrangement in terms of the Euler characteristic and the M\"obius function on the intersection poset. \item[Theorems~\ref{thm:facessimple} and~\ref{thm:central-arbdims}:] a generalization of Weibel's counting formula for Minkowski sums~\cite{Weibel12}, which expresses the number of regions of simple non-central and central maxout arrangements in terms of the regions of small subarrangements (Minkowski subsums). \item[Theorem~\ref{thm:lowerbound_strict_lower}:] a lower bound on the number of bounded regions of a maxout arrangement (strict lower vertices of a Minkowski sum). This allows us to upper bound the number of upper vertices of a Minkowski sum, given an upper bound on the total number of vertices. \end{description} \begin{proof}[Proof of Theorem~\ref{thm:main-result}] For networks with biases, if $m\leq n$, we can apply the trivial upper bound given in Proposition~\ref{prop:simple-upper-bound}. 
For $m \geq n+1$, the upper bound follows from the Upper Bound Theorem for Minkowski sums of polytopes via Proposition~\ref{thm:Mneighborly_upper_bound}, together with the lower bound on the number of strict upper or lower faces given in Theorem~\ref{thm:lowerbound_strict_lower}, inserted into the counting formula given in Theorem~\ref{thm:facessimple} and reformulated via Lemma~\ref{lemma:reformulation}. The construction attaining the maximum is given in Proposition~\ref{prop:singleLayerLowerBound}. For networks without biases, if $m\leq n$, we can apply the trivial upper bound given in Proposition~\ref{prop:simple-upper-bound}. For $m\geq n+1$, the upper bound follows from Proposition~\ref{thm:Mneighborly_upper_bound} inserted into the counting formula given in Theorem~\ref{thm:central-arbdims} and reformulated via Lemma~\ref{lemma:reformulation}. The construction attaining the maximum is given in Proposition~\ref{prop:singleLayerLowerBoundNoBias}. \end{proof} We illustrate Theorem~\ref{thm:main-result} on a few examples. \begin{example}\ \begin{enumerate}[leftmargin=*] \item In the case of a single input, $n=1$, networks with biases represent functions on the real line which have at most $\sum_{i=1}^m (k_i-1)$ break points and a maximum of $1 + \sum_{i=1}^m (k_i-1)$ linear regions. Networks without biases and at least one unit of rank $\geq2$ represent functions which have at most $1$ break point and a maximum of $2$ linear regions. \item In the case of few units, $m\leq n$, networks with and without biases both have the optimal bound $\prod_{i=1}^m k_i$. To see this, use the multi-binomial theorem to evaluate the formula in our theorem. This matches the simple upper bound given in Proposition~\ref{prop:simple-upper-bound}. \item In the case of many units, $m\geq n$, the maximum number of regions is no longer exponential in $m$, but only polynomial.
For $k_1=\cdots=k_m=k$, the order in $m$ and $k$ is $\Theta((mk)^n)$ and $\Theta((mk)^{n-1})$ in the cases with and without biases. This should be compared with the previous bounds $\Omega(k^{n})$ and $O((m k^2)^n)$ from Proposition~\ref{proposition:previous-bounds} for the case with biases. \item In the case $k_1=\cdots=k_m=2$, we recover the well-known formulas for the maximum number of regions of hyperplane arrangements, $\sum_{j=0}^n{m\choose j}$, and central hyperplane arrangements, ${m-1\choose n-1}+\sum_{j=0}^{n-1}{m\choose j} = 2\sum_{j=0}^{n-1}{m-1\choose j}$. These are also the optimal bounds for shallow ReLU networks with and without biases. \end{enumerate} \end{example} We finish this part with two % corollaries. The first gives a lower bound on the number of bounded regions. \begin{corollary}[Lower bound on the number of bounded regions] \label{cor:lowervert} For a network with $n$ inputs, a layer of $m$ maxout units of ranks at least $2$, and generic parameters, the number of bounded regions is at least ${m-1\choose n}$. For networks without biases all regions are unbounded. % \end{corollary} \begin{proof} This follows from Theorem~\ref{thm:lowerbound_strict_lower}. \end{proof} By Corollary~\ref{cor:lowervert}, generic maxout arrangements with rank at least $2$ have at least as many bounded regions as generic hyperplane arrangements, which have ${m-1\choose n}$ bounded regions. This observation is non-trivial, % since the polyhedral pieces of % tropical hypersurfaces do not necessarily all intersect each other. It is easy to draw examples in $\mathbb{R}^2$ showing that, in contrast to hyperplane arrangements, maxout arrangements do not have a single generic number of bounded regions. Lower bounds for generic parameters are rare in the literature. 
A result of this kind is \cite[Corollary 8.2]{Adiprasito2017} (supplement to \cite{KarimRaman}), which shows that a Minkowski sum of $m$ polytopes in general position has at least as many vertices as a sum of $m$ line segments. The following is a simple corollary for the number of regions over an affine subspace of the input space, which we will use in the next part on deep networks. \begin{corollary}[Number of regions over an affine subspace] \label{cor:convexRegionsOfRestriction} Consider a network $\mathcal{N}$ with $n$ inputs and a layer of $m$ maxout units. Let $A$ be an affine $n_0$-space, $n_0\leq n$. Then $N (\mathcal{N}|_A) = \sum_{j=0}^{n_0}\sum_{S\in{[m]\choose j}}\prod_{i\in S}(k_i-1)$. Similarly, for a network $\mathcal{N}$ without biases and $A$ a linear $n_0$-space, $n_0\leq n$, % $N(\mathcal{N}|_A) = {m-1\choose n_0-1} + \sum_{j=0}^{n_0-1}\sum_{S\in{[m]\choose j}}\prod_{i\in S}(k_i-1)$. \end{corollary} \begin{proof} The functions represented by $\mathcal{N}$ on $A$ can be written as a layer with $n_0$ inputs. \end{proof} \subsection{Deep networks} In this subsection we derive consequences of our analysis of shallow networks for deep networks. For deep maxout networks, \cite{NIPS2014_5422} obtained the following lower bound. The main point in that work was to show that the maximum number of linear regions is exponential in the depth of the network. Upper bounds for deep networks can be obtained by multiplying upper bounds for individual layers, whereby the effective input dimension of each layer is bounded by the dimension of the image of the previous layers % \cite{montufar2017notes}. The following upper bound of this form was given in~\cite{DBLP:conf/icml/SerraTR18}, whereby we correct a minor typo (the sum runs up to $\min\{n_0,\ldots,n_{l-1}\}$ rather than $\min\{n_0,\ldots,n_{l}\}$). 
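The single-layer counts of Theorem~\ref{thm:main-result}, on which the bounds for deep networks in this part build, are explicit and easy to evaluate. The following short Python sketch (illustrative only, not part of the formal development; the function names are ours) evaluates both counts as well as the alternating Weibel-style sum from Lemma~\ref{lemma:reformulation}:

```python
from itertools import combinations
from math import comb, prod

def max_regions_biases(n, ks):
    """Maximal regions of one layer of maxout units of ranks ks on R^n
    (with biases): sum over subsets S of at most n units of prod(k_i - 1)."""
    m = len(ks)
    return sum(prod(ks[i] - 1 for i in S)
               for j in range(min(n, m) + 1)
               for S in combinations(range(m), j))

def max_regions_no_biases(n, ks):
    """Same count for a layer without biases (central arrangement)."""
    m = len(ks)
    return comb(m - 1, n - 1) + sum(prod(ks[i] - 1 for i in S)
                                    for j in range(n)
                                    for S in combinations(range(m), j))

def weibel_form(n, ks):
    """Alternating form of the count (requires m >= n + 1)."""
    m = len(ks)
    return sum((-1) ** (n - j) * comb(m - 1 - j, n - j)
               * sum(prod(ks[i] for i in S) for S in combinations(range(m), j))
               for j in range(n + 1))
```

For $k_i=2$ the first function reproduces the hyperplane-arrangement count $\sum_{j\leq n}\binom{m}{j}$, and for $m\geq n+1$ the two forms of the sum agree, as Lemma~\ref{lemma:reformulation} asserts.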
\begin{proposition}[{\cite[Theorem~9]{NIPS2014_5422} and \cite[Theorem 10]{DBLP:conf/icml/SerraTR18}}] For a rank-$k$ maxout network $\mathcal{N}$ with $n_0$ inputs and $L$ layers of width $n_0$, $N(\mathcal{N})\geq k^{L-1}k^{n_0}$. For % $L$ layers of widths $n_1,\ldots, n_L$, $N(\mathcal{N})\leq \prod_{l=1}^L(\sum_{j=0}^{e_l} {n_l k(k-1)/2\choose j })$, where $e_l=\min\{n_0,\ldots, n_{l-1}\}$. \end{proposition} We observe that there is a significant gap between the lower and upper bounds, of orders $\Omega(k^{L-1 + n_0 })$ and $O(\prod_{l=1}^L (n_l k^2)^{n_0})$ in % $n_1,\ldots, n_L$ and % $k$. We can refine the approach from \cite{NIPS2014_5422} to obtain the following lower bound of order $\Omega(\prod_{l=1}^L(n_lk)^{n_0})$ % in $n_1,\ldots, n_L$ and $k$. This not only grows exponentially with the depth $L$, but also grows with the layer widths. \begin{proposition}[Lower bound for deep maxout networks] \label{prop:deep-lower} Consider a network $\mathcal{N}$ with $n_0$ inputs and $L$ layers of $n_1,\ldots, n_L$ rank-$k$ maxout units. Let $n\leq n_0, \frac12 n_1,\ldots, \frac12n_{L-1}$. Assume $\frac{n_l}{n}$ is even (else take the largest even lower bound and discard the rest). Then $N(\mathcal{N})\geq (\prod_{l=1}^{L-1} (\frac{n_l}{n} (k-1)+1)^n) (\sum_{j=0}^n{n_L\choose j}(k-1)^j)$. For the same network but without biases, assuming $\frac{n_l-1}{n-1}$ is even, $N(\mathcal{N})\geq (\prod_{l=1}^{L-1} (\frac{n_l-1}{(n-1)} (k-1)+1)^{n-1}) (% \sum_{j=0}^{n-1}{n_L\choose j}(k-1)^j)$. \end{proposition} \begin{proof} We follow the general arguments from \cite{NIPS2014_5422} but modify the construction of the weights % to be similar to the one used in the same paper for ReLU layers. The idea is to construct a many-to-one function, which allows us to multiply the regions across layers. Consider first the case with biases. We consider the restriction of the network to inputs from a subspace of dimension $n$. 
Further, we insert a linear layer of output dimension $n$ after each layer of maxout units. In this way, the input dimension for each layer is $n$. This does not increase the representational power of the network, since a linear layer can be subsumed into the input weights and biases of the next layer, as $A_{i+1}(B_i f_i(x) + c_i)+b_{i+1} = (A_{i+1} B_i) f_i(x) + (A_{i+1}c_i +b_{i+1})$. For each layer $l=1,\ldots,L-1$, we organize the $n_l$ units into $n$ groups of even size $\frac{n_l}{n}$. By choosing the parameters of the $i$th group appropriately, we can achieve that their sum with alternating signs represents a zig-zag function over $\mathbb{R}^n$ with $\frac{n_l}{n}(k-1)$ breakpoints along the $i$th coordinate. To see this, note that along any given direction of its input, a maxout unit can represent any piecewise linear convex function with $k-1$ breakpoints. This way we can achieve that the $l$th layer maps $[0,1]^n$ to $[0,1]^n$ in a $(\frac{n_l}{n}(k-1)+1)^n$-to-one manner. The function computed up to layer $L-1$ multiplies these multiplicities. By Proposition~\ref{prop:singleLayerLowerBound}, the last layer can create $\sum_{j=0}^n{n_L\choose j}(k-1)^j$ regions over an $n$-dimensional subspace of its input space, which, by appropriate scaling, will intersect $[0,1]^n$. Each of these regions has multiplicity $\prod_{l=1}^{L-1}(\frac{n_l}{n}(k-1)+1)^n$ over the input space of the network, thus giving the indicated lower bound. Consider now the case without biases. For each layer $l=1,\ldots,L$, we choose the weights of all preactivation features of the $n_{l}$th unit as the coordinate vector $e_{n_{l-1}}\in\mathbb{R}^{n_{l-1}}$, so that $x^l_{n_l}=\max\{x^{l-1}_{n_{l-1}},\ldots, x^{l-1}_{n_{l-1}}\}=x^{l-1}_{n_{l-1}}$. We consider the restriction of the network to the subset of inputs given by the hyperplane $H = \{x^0\in\mathbb{R}^{n_{0}} \colon x^{0}_{n_{0}}=1\}$.
The number of linear regions of a function over this subset is a lower bound on its number of regions over the entire input space $\mathbb{R}^{n_0}$. Notice that, given our choice of weights, over $H$ the last unit of each layer takes the fixed value $x^0_{n_0}=x^1_{n_1}=\cdots=x^L_{n_L}=1$. As in Proposition~\ref{prop:singleLayerLowerBoundNoBias}, we choose the weights of the units $i=1,\ldots, n_l-1$ in layer $l$ as $w'_i = (w_i,b_i)\in\mathbb{R}^{n_{l-1}\times k}$, with $w_i\in\mathbb{R}^{(n_{l-1}-1)\times k}$ the weights and $b_i\in\mathbb{R}^{1\times k}$ the biases that are used above for a layer with biases and $n_{l-1}-1$ inputs. Hence, over $H$ we obtain the same many-to-one maps as above, but now with the widths replaced by $n_0-1,\ldots, n_{L-1}-1$. Finally, note that the last layer can in fact be chosen as in Proposition~\ref{prop:singleLayerLowerBoundNoBias} with $n$ inputs and $n_L$ outputs. We take the bound $\sum_{j=0}^{n-1}{n_L\choose j}(k-1)^j$ for the number of regions intersecting $\{x^{L-1}\in\mathbb{R}^{n_{L-1}}\colon x^{L-1}_{n_{L-1}}=1\}$ and ignore other regions. \end{proof} Proposition~\ref{prop:deep-lower} is given for networks where all units have the same rank $k$, but it is straightforward to formulate corresponding results for networks with units of different ranks. Also, it is not difficult to obtain minor improvements if, instead of discarding units, one keeps them with small weights, without altering the general construction. However, as we will see below, the asymptotic order is already tight. We now derive upper bounds on the number of linear regions $N(\Phi)$ of a function $\Phi:\mathbb{R}^{n_0}\to\dots\to\mathbb{R}^{n_L}$ represented by a deep neural network. Notice that each linear region of the function computed up to the $(l-1)$th layer is split by the $l$th layer into at most the number of linear regions of a shallow network with $n_{l-1}$ inputs and $n_l$ outputs.
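The layer-wise multiplication just described can be sketched in code, tracking the effective input dimension $e_l=\min\{n_0,\ldots,n_{l-1}\}$ of each layer as in the bound of \cite{DBLP:conf/icml/SerraTR18}. This is a hedged sketch with our own function names, for the case with biases and equal ranks:

```python
from itertools import combinations
from math import comb, prod

def layer_count(n_in, ks):
    """Regions one maxout layer (with biases) can create over an
    n_in-dimensional affine input space, per the single-layer formula."""
    m = len(ks)
    return sum(prod(ks[i] - 1 for i in S)
               for j in range(min(n_in, m) + 1)
               for S in combinations(range(m), j))

def deep_upper_bound(n0, widths, k):
    """Multiply per-layer counts, updating the effective dimension
    e_l = min(n0, n_1, ..., n_{l-1}) as the layers are traversed."""
    bound, e = 1, n0
    for n_l in widths:
        bound *= layer_count(e, [k] * n_l)
        e = min(e, n_l)
    return bound
```

For instance, two layers of width $3$ and rank $2$ over two inputs give the square of the single-layer count, since the effective dimension never drops below $2$.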
As pointed out in \cite{montufar2017notes}, the linear output pieces of the $(l-1)$th layer have dimension bounded above by $\min\{n_0,\ldots, n_{l-1}\}$, which allows us to slightly improve the bound based on Corollary~\ref{cor:convexRegionsOfRestriction}. Similar discussions have also appeared in \cite{DBLP:conf/icml/SerraTR18} and \cite[Theorem 6.3]{pmlr-v80-zhang18i}. We obtain the following upper bound for deep networks. When all units have rank $k$, the bound is of order $O(\prod_{l=1}^L(n_l k)^{n_0})$ in the layer widths $n_1,\ldots, n_L$ and rank~$k$, which in view of Proposition~\ref{prop:deep-lower} is tight. \begin{theorem}[Upper bound for deep maxout networks] \label{thm:deep-result} Let $\mathcal{N}$ be a network with $n_0$ inputs and $L$ layers of $n_l$ maxout units of ranks $k_{l,1},\ldots, k_{l,n_l}$, $l=1,\ldots, L$. Let $e_l=\min\{n_0,\dots, n_{l-1}\}$, $l=1,\ldots, L$. Then \begin{align*} N(\mathcal{N})&\le \prod_{l=1}^L \sum_{j=0}^{e_l}\sum_{S\in{[n_l]\choose j}}\prod_{i\in S} (k_{l,i}-1).\\ \intertext{For the same network but without biases,} N(\mathcal{N})&\le \prod_{l=1}^L \Big( {n_l-1\choose e_{l}-1} + \sum_{j=0}^{e_{l}-1}\sum_{S\in{[n_l]\choose j}}\prod_{i\in S} (k_{l,i}-1)\Big). \end{align*} Moreover, these bounds are asymptotically sharp. \end{theorem} \begin{proof} We write $\Phi^{(i,j)}:\mathbb{R}^{n_{i-1}}\to \dots\to \mathbb{R}^{n_j}$ for the function represented by the network consisting of layers $i,\ldots, j$, and write $\Phi^{(j)}$ for $\Phi^{(1,j)}$. For any $l\in[L]$ we have that $\Phi = \Phi^{(l,L)}\circ \Phi^{(l-1)}$. We consider $N_c(\Phi)$, the smallest number of convex regions that form a refinement of the linear regions of the function $\Phi$. For a positive integer $e\le n_0$, we will also need to consider \[ N_c(\Phi\mid e):= \max\{ N_c(\Phi|_\Omega): \Omega\subseteq \mathbb{R}^{n_0}\text{ is an }e\text{-dimensional affine space}\}.
\] Now, if $F$ is a layer with output dimension $d$, and $H= G\circ F$, where $G$ is a layer with a compatible number of inputs, then $ N(H) \le N_c(H) \le N_c(G\mid d)\cdot N_c(F)$. See \cite[Theorem D.3]{pmlr-v80-zhang18i} for a discussion. Hence $N(\Phi)\le N_c(\Phi^{(l,L)}\mid e_l) \cdot N_c(\Phi^{(l-1)}) = N_c(\Phi^{(l,L)})\cdot N_c(\Phi^{(l-1)})$. The bounds then follow by induction % and Corollary~\ref{cor:convexRegionsOfRestriction}, which bounds the number of regions of a layer with inputs from an affine space of given dimension. Finally, the asymptotic tightness of the bounds follows in view of Proposition~\ref{prop:deep-lower}. \end{proof} The bound is based on the observation that each of the linear regions of a network with $l-1$ layers is mapped to a polyhedron of dimension at most $e_l$ in the input space of the $l$th layer. The $l$th layer will split each of these polyhedra into at most as many regions as it can create over an affine space of dimension $e_l$. In principle, one can pursue a more refined analysis by recursively investigating the arrangement that is induced by the $l$th layer on the graph of the $(l-1)$-layer network, similar to the analysis that we conduct in Sections~\ref{sec:Zaslavsky} and \ref{sec:Weibel-Zaslavsky} for maxout arrangements. \section{Face counting formulas \`a la Weibel} \label{sec:Weibel} Weibel \cite{Weibel12} obtained a counting formula for the number of faces of large Minkowski sums of full-dimensional polytopes $P_1,\ldots, P_m\subseteq\mathbb{R}^{n+1}$, $m\geq n+1$, $\dim(P_i)=n+1$, in terms of the numbers of faces of partial sums of up to $n$ of the polytopes. In the following we give a similar formula for the number of \emph{upper} faces, which also holds when the summands have arbitrary dimensions. 
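In the plane, Minkowski sums can be computed directly by the classical edge-merging procedure, which makes the face counts of this section easy to experiment with. The following sketch is our own implementation, assuming convex polygons given as counterclockwise vertex lists with a unique lowest vertex (generic position):

```python
from math import atan2, pi

def edge_vectors(poly):
    """Edge vectors of a convex polygon given by CCW-ordered vertices."""
    n = len(poly)
    return [(poly[(i + 1) % n][0] - poly[i][0],
             poly[(i + 1) % n][1] - poly[i][1]) for i in range(n)]

def minkowski_sum_vertices(polys):
    """Vertices of the Minkowski sum of convex polygons: merge all edge
    vectors in angular order, starting from the sum of the lowest vertices
    (the classical edge-merging construction)."""
    edges = sorted((e for p in polys for e in edge_vectors(p)),
                   key=lambda v: atan2(v[1], v[0]) % (2 * pi))
    lows = [min(p, key=lambda q: (q[1], q[0])) for p in polys]
    x, y = sum(q[0] for q in lows), sum(q[1] for q in lows)
    verts = []
    for dx, dy in edges:
        verts.append((x, y))
        x, y = x + dx, y + dy
    return verts
```

For generic summands all edge directions are distinct, so the sum of two triangles is a hexagon with $k_1+k_2=6$ vertices, in line with the counts discussed in this section.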
The idea of \cite{Weibel12} is to enumerate the $s$-faces of $P=P_1+\cdots+P_m$ by inclusion-exclusion of the $s$-faces of the partial sums $P_S=\sum_{i\in S}P_i$, $S\subseteq\{1,\ldots, m\}$ with $1\leq |S|\leq n$. In order to do this, polytopes are associated with spherical complexes, and cells of the complex are assigned a witness \emph{westernmost corner}. We use similar definitions with slight modifications. \begin{definition}[Spherical complex and upper complex of a polytope] Let $P\subseteq\mathbb{R}^{n+1}$ be a polytope. To each face $F$ of $P$ we associate the cell of directions it maximizes: $C(F,P)=\{l\in \mathbb{S}^{n}\colon \langle l, x -y\rangle > 0\; \forall x\in F, y\in P\setminus F\}$. The set of all such cells is a spherical complex $\mathcal{G}(P)$ dual to $P$. The upper complex $\mathcal{G}^+(P)$ consists of the intersections of cells in $\mathcal{G}(P)$ with $(\mathbb{R}^{n}\times \mathbb{R}_{>0})$. The upper part $P^+$ of a polytope $P$ in $\mathbb{R}^{n+1}$ is the collection of faces $F$ of $P$ whose cells $C(F,P)$ intersect $(\mathbb{R}^{n}\times \mathbb{R}_{>0})$. Let $\mathbb{S}^n_{\geq0} = \mathbb{S}^{n}\cap(\mathbb{R}^{n}\times\mathbb{R}_{\geq0})$. \end{definition} \begin{definition}[General orientation] A polytope $P\subseteq\mathbb{R}^{n+1}$ is said to be in general orientation (relative to our coordinates) if none of the great circles defined by one-dimensional cells in $\mathcal{G}(P)$ contains a standard unit vector. A family of polytopes $P_1,\dots,P_m\subseteq\mathbb{R}^{n+1}$ is in general orientation if each $P_i$ is in general orientation and for all $S\subseteq [m]$ and any $C_i\in \mathcal{G}(P_i)$, $i\in S$, the intersection $\bigcap_{i\in S} C_i$ is either empty or has codimension at least $\min\{\sum_i \operatorname{codim}(C_i),n\}$.
\end{definition} For $i=n+1,n-1,n-3,\ldots$, let $U^{i}$ be the $i$-dimensional subspace $\operatorname{span}\{e_{n+1-i+1},\ldots,e_{n+1}\}$ if $i\geq 1$ and just the zero space $\{0\}$ if $i\leq 0$. Given a fixed $U^i$, we write $\hat e_{k} = e_{n+1-i+k}$ for $1\leq k\leq i$, so that $\hat e_{1},\dots,\hat e_{i}$ is a basis of $U^i$. \begin{definition}[Direction west]\label{def:directionWest} Let $i>1$. At every point in $\mathbb{S}^n\cap U^i \cong \mathbb{S}^{i-1}$ and not in $\mathbb{S}^{n}\cap U^{i-2} \cong \mathbb{S}^{i-3}$ define the direction west as the direction of increasing $\theta_1$ in the coordinate system given by $\mathbb{S}^1 = \{\sin(\theta_1)\hat e_{1} + \cos(\theta_1) \hat e_{2} \colon \theta_1\in[0,2\pi)\}$ and $\mathbb{S}^k = \{\sin(\theta_k) \mathbb{S}^{k-1} + \cos(\theta_k)\hat e_{k+1}\colon \theta_{k}\in[0,\pi]\}$ for $k=2,3,\ldots,i-1$. Here, $\hat e_{k}=e_{n+1-i+k}$. \end{definition} \begin{figure} \centering \begin{tikzpicture} % % \draw (0,-3.925) -- ++(0,-0.25); \draw (0,-4.1) ellipse (10mm and 4mm); \node[anchor=north] (westernmostCorners) at (0,0) {\includegraphics[width=4cm]{illustrations/illustrateWesternmostCorners.pdf}}; \node at (0,-4.5) {$<$}; \node[anchor=south,font=\footnotesize] at (0,-4.9) {west}; \node[anchor=north west,xshift=1mm,yshift=5mm] at (westernmostCorners.north west) {(a)}; \fill (0,-0.325) circle (1.5pt); % \draw (0,-0.325) -- ++(0,0.75) node[anchor=north west,font=\footnotesize] {$U^1=\text{span}\{e_3\}$}; \node[anchor=north east,xshift=-1mm,yshift=-1mm] at (westernmostCorners.north east) {$\mathbb S^2_{\phantom{\geq 0}}$}; \fill[black!70] (0,-3.9) circle (1.5pt); \fill (5,-3.9) circle (1.5pt); \node[anchor=north] (fullSphere) at (5,0) {\includegraphics[width=4cm]{illustrations/illustrateHalfSphere1.pdf}}; \node[anchor=north east,xshift=-1mm,yshift=-1mm] at (fullSphere.north east) {$\mathbb S^2_{\phantom{\geq 0}}$}; \node[anchor=north west,xshift=1mm,yshift=5mm] at (fullSphere.north west) {(b)}; \fill
(5,-0.325) circle (1.5pt); \fill[black!70] (5,-3.9) circle (1.5pt); \node[anchor=north] (halfSphere) at (10,0) {\includegraphics[clip=true, trim=0cm 2.6cm 0cm 0cm,width=4cm]{illustrations/illustrateHalfSphere2.pdf}}; % \node[anchor=north east,xshift=-1mm,yshift=-1mm] at (halfSphere.north east) {$\mathbb S^2_{\geq 0}$}; \node[anchor=north west,xshift=1mm,yshift=5mm] at (halfSphere.north west) {(c)}; \fill (10,-0.325) circle (1.5pt); \node[anchor=north west,xshift=10mm,yshift=0mm,font=\footnotesize,text width=50mm] (fullSphereText) at (fullSphere.south west) {blue cell has no\\ westernmost point}; \draw[->] ($(fullSphereText.north west)+(0.3,0)$) -- ++(0,0.4); \node[anchor=north west,xshift=2mm,yshift=-2mm,font=\footnotesize] (halfSphereText) at (halfSphere.south west) {westernmost corner of blue cell}; \draw[->] ($(halfSphereText.north west)+(0.9,0)$) -- ++(0,0.4); \end{tikzpicture}\vspace{-3mm} \caption{(a) Shown is $\mathcal G(P_1+P_2+P_3)$, where $P_1$, $P_2$, $P_3$ have $2$, $3$, $3$ vertices respectively, and the westernmost corners of its $0$-, $1$-, and $2$-cells (westernmost corners of $2$-cells are colored by their support); (b) Cells of $\mathcal G(P)$, $P$ lower-dimensional, may not have westernmost points, but (c) cells of $\mathcal G^+(P)$ do.} \label{fig:westernmostCorners} \end{figure} \begin{definition}[Westernmost point and westernmost corner] We define a westernmost point of a cell $C\subseteq\mathbb{S}^n$ as follows: Let $U^i$ be the smallest subspace in the sequence $U^{n+1},U^{n-1},\ldots$ which has a nonempty intersection with $C$. If $i>1$, a westernmost point of $C$ is a local optimizer of the direction west in the closure of $C\cap U^i$. If $i=1$, then $C\cap U^i\subseteq\{\pm e_{n+1}\}$. For $C\cap U^i=\{\pm e_{n+1}\}$ the westernmost point of $C$ is $e_{n+1}$, otherwise it is the single point in the intersection. 
Finally, we define a westernmost corner of a cell as the intersection of its closure with a small ball around a westernmost point. \end{definition} The definition is illustrated in Figure~\ref{fig:westernmostCorners} (a). Example~\ref{example:westmostPoints} illustrates the existence and non-existence of westernmost corners. Lemma~\ref{lemma:existence} will state sufficient conditions for existence and uniqueness. \begin{example}\label{example:westmostPoints}\ \begin{enumerate}[leftmargin=*] \item Consider the upper sphere $C=\mathbb{S}^n_{\geq0}$. If $n$ is even, then the smallest subspace among $U^{n+1},U^{n-1},\ldots,U^{1}$ which intersects $C$ is $U^1$. Hence the westernmost point of $C$ is the north pole $C\cap U^1 = \{(0,\ldots, 0,1)\}$. If $n$ is odd, then the smallest subspace which intersects $C$ is $U^2$. Hence the westernmost point of $C$ is the optimizer of $\theta_1$ over the half-circle $\mathbb{S}^n_{\geq0}\cap U^2\cong \mathbb{S}^1_{\geq0}$, which is $\{(0,\ldots,0,1,0)\}$. \item If $P$ is not full dimensional, % not every cell of $\mathcal G(P)$ needs to have a westernmost point: Consider $P=\operatorname{conv}\{v,-v\}\subseteq \mathbb{R}^3$ for a generic $v\in\mathbb{S}^2$; see Figure~\ref{fig:westernmostCorners} (b). Then $\mathcal G(P)$ consists of % two open half-spheres and a great circle. Each of the half-spheres intersects $U^1$ at a single point, which is their westernmost point. The great circle % does not intersect $U^1$ and has no local optimum for the direction west. Hence it % has no westernmost point. Note however that each % cell of $\mathcal G^+(P)$ has a unique westernmost point; see Figure~\ref{fig:westernmostCorners}~(c). \item If a cell is not in general orientation relative to the $U^i$, then it can have multiple westernmost points: Consider a segment of a great circle in $\mathbb{S}^2$ passing through north and south poles. If it does not contain either pole, then any of its points is a westernmost point. 
\end{enumerate} \end{example} \begin{lemma}\label{lemma:existence} Let $P\subseteq \mathbb{R}^{n+1}$ be a polytope. If $P$ is in general orientation (relative to the coordinate system), then each cell of $\mathcal G^+(P)$ has a unique westernmost corner. Additionally, if $P$ is full-dimensional, then each cell of $\mathcal G(P)$ has a unique westernmost corner. \end{lemma} \begin{proof} Note that cells of $\mathcal G^+(P)$ and, for $P$ full-dimensional, of $\mathcal G(P)$ are spherically convex in the sense that any shortest arc between two points is inside the set. Combined with $P$ being in general orientation, \cite[Lemma 6]{Weibel12} shows that every cell has a westernmost point and % corner. % Since $P$ is in general orientation, each cell has a single westernmost point. \end{proof} \begin{definition}[Support] Let $P_1,\ldots,P_m\subseteq\mathbb{R}^{n+1}$ be a family of polytopes. The support of a point $w\in\mathbb{S}^n$ is defined as \[ \Supp_{P_1,\dots,P_m}(w)\coloneqq\{ i\in [m]\colon w \in C_i \text{ for some } C_i\in \mathcal{G}(P_i) \text{ with }\operatorname{co-dim}(C_i)\geq 1\}\subseteq [m]. \] In particular, the support of a generic point is $\emptyset$. The support $\Supp_{P_1,\dots,P_m}(W)$ of a westernmost corner $W$ of a cell % is defined to be the support of its westernmost point. \end{definition} The following lemma points out that for polytopes in general orientation, westernmost corners that appear in one sub-complex also appear in any larger sub-complex. \begin{lemma}\label{lemma:sequence} Let $P_1,\ldots, P_m$ be a family of polytopes in general orientation, and let $W$ be a westernmost corner of an $s$-cell of $\mathcal G(P_{[m]})$. Then $W$ is the westernmost corner of an $s$-cell of $\mathcal G(P_S)$ if and only if $\Supp_{P_1,\ldots, P_m}(W)\subseteq S$. 
% % \end{lemma} \begin{proof} Note that $\mathcal G(P_S)$ coincides with $\mathcal G(P_{[m]})$ locally around $W$ if and only if its support $\Supp_{P_1,\ldots, P_m}(W)$ is contained in $S$. Thus the claim is an immediate consequence of \cite[Lemma~7]{Weibel12}, which states that for spherically convex cells on $\mathbb S^n$ in general orientation westernmost points are % the local optima of the direction west. % % % \end{proof} We will use the following lemma in order to enumerate the westernmost points of Minkowski sums based on the above observation. Notice that ${m-r \choose j-r}$ is the number of subsets of $[m]$ of cardinality $j$ which contain some particular subset of $[m]$ of cardinality $r$. \begin{lemma}\label{lemma:inclusion-exclusion} For any integers $0\leq r\leq n<m$, we have $\sum_{j=0}^n (-1)^{n-j} {m-1-j\choose n-j} {m-r \choose j-r} = 1$. In particular, for any function $\xi_{(\cdot)}:2^{[m]}\rightarrow\mathbb{R}$ with $\sum_{S\in\binom{[m]}{j}} \xi_{S}={m-r \choose j-r}$ for all $0\leq j\leq n$, % \[ \sum_{j=0}^n (-1)^{n-j} {m-1-j\choose n-j} \sum_{S\in\binom{[m]}{j}} \xi_{S}= 1. \] \end{lemma} \begin{proof} The proof follows by induction over $m\geq n+1$, using for $m=n+1$ the fact that $\sum_{j=0}^{n+1} (-1)^j{n+1-r\choose j-r}=0$ and hence $\sum_{j=0}^n (-1)^{n-j}{n+1-r\choose j-r}=1$ for any $0\leq r<n+1$. \end{proof} We obtain the following linear relation between the number of upper $s$-faces of a Minkowski sum of $m$ polytopes and the number of upper $s$-faces of subsums of at most $n$ of the polytopes. This is a version of Weibel's theorem \cite[Theorem~1]{Weibel12} for the case of upper faces. Whereas that result is for sums of full-dimensional polytopes, our statement is valid for any dimensions. 
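The identity of Lemma~\ref{lemma:inclusion-exclusion} is elementary to verify by machine. The following sketch (with binomial coefficients of negative lower index set to zero, matching the convention ${m-r\choose j-r}=0$ for $j<r$ used in the lemma) checks it over a range of parameters:

```python
from math import comb

def binom(a, b):
    """Binomial coefficient with the convention C(a, b) = 0 for b < 0."""
    return comb(a, b) if 0 <= b <= a else 0

def alternating_sum(m, n, r):
    """Left-hand side of the identity in the inclusion-exclusion lemma:
    sum_{j=0}^{n} (-1)^{n-j} C(m-1-j, n-j) C(m-r, j-r)."""
    return sum((-1) ** (n - j) * binom(m - 1 - j, n - j) * binom(m - r, j - r)
               for j in range(n + 1))
```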
\begin{theorem}[Number of upper faces of Minkowski sums] \label{thm:f_vectors_upper_part_Minkowski_sum} Let $P_1,\ldots, P_{m}$ be any positive-dimensional polytopes in $\mathbb{R}^{n+1}$ in general orientation, $m\geq n+1$, and $P=P_1+\cdots+P_m$. Then for the number of $s$-faces of the upper part we have $$ f_s(P^+) = \sum_{j=0}^{n} (-1)^{n-j} {m-1-j \choose n-j} \sum_{S\in{[m]\choose j}} f_s(P_S^+) , \quad s=0,\ldots, n, $$ where $P_S = (\sum_{i\in S}P_i)$ for any nonempty $S\subseteq[m]$, and $P_\emptyset=\{0\}$. \end{theorem} \begin{proof} Consider the complex $\mathcal G^+(P)$, and recall that $s$-dimensional upper faces of $P$ correspond to $(n-s)$-dimensional cells of $\mathcal G^+(P)$. Let $W_1,\dots,W_N$ be the westernmost corners of $(n-s)$-cells of $\mathcal G^+(P)$ and let $I_1,\dots,I_N\subseteq [m]$ denote their supports. Note that $0\leq |I_i| \leq n$ for all $i=1,\dots,N$. Let $w_s(P_S^+)$ denote the number of westernmost corners of $(n-s)$-cells of $P_S^+$, so that $w_s(P_S^+)=f_s(P_S^+)$. Writing $f_s(P^+) = N = \sum_{i=1}^N 1$, we then obtain \begin{align*} &f_s(P^+)\overset{Lem.}{\underset{\ref{lemma:inclusion-exclusion}}{=}}\sum_{i=1}^N \sum_{j=0}^n (-1)^{n-j} \binom{m-1-j}{n-j}\!\!\!\! \sum_{S\in {[m] \choose j}}\!\!\! \mathbb{1}_{I_i\subseteq S} = \sum_{j=0}^n (-1)^{n-j} \binom{m-1-j}{n-j}\!\!\!\! \sum_{S\in {[m] \choose j}} \sum_{i=1}^N \mathbb{1}_{I_i\subseteq S} \\ &\overset{Lem.}{\underset{\ref{lemma:sequence}}{=}} \sum_{j=0}^n (-1)^{n-j} \binom{m-1-j}{n-j} \!\!\sum_{S\in {[m] \choose j}} w_s(P_S^+) = \sum_{j=0}^n (-1)^{n-j} \binom{m-1-j}{n-j} \!\!\sum_{S\in {[m] \choose j}} f_s(P_S^+). \qedhere \end{align*} \end{proof} One naturally wonders if the proof of Theorem~\ref{thm:f_vectors_upper_part_Minkowski_sum} can be extended to count the faces of the entire polytope, generalizing Weibel's result to sums of polytopes of arbitrary dimensions. Lemma~\ref{lemma:existence} does not cover that case.
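As a quick sanity check of Theorem~\ref{thm:f_vectors_upper_part_Minkowski_sum} in the smallest interesting case, consider $m$ generic segments in $\mathbb{R}^2$ (so $n=1$): a partial sum of $j$ segments is a zonotope with $j+1$ upper vertices (a single point for $j=0$), and the alternating sum should return the known total of $m+1$ upper vertices of the full sum. A short sketch, with the sub-sum counts hard-coded from this observation:

```python
from math import comb

def upper_vertices_via_theorem(m):
    """Right-hand side of the upper-face formula for s = 0, n = 1, where a
    partial sum of j generic segments in R^2 has j + 1 upper vertices."""
    return sum((-1) ** (1 - j) * comb(m - 1 - j, 1 - j) * comb(m, j) * (j + 1)
               for j in range(2))
```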
We will present an alternative approach in Section~\ref{sec:Weibel-Zaslavsky}. Weibel \cite[Theorem~3]{Weibel12} also shows, for sums of full-dimensional polytopes, that the number of vertices is maximized when the partial sums attain the trivial upper bound. The same arguments transfer to the case of upper vertices, and one can show the following corollary. In the following we consider families of polytopes $P_i$ satisfying $f_0(P_i)= k_i$, $i=1,\ldots, m$. \begin{corollary}[Upper bound for upper vertices of sums of full-dimensional polytopes] Let $m\geq n+1$ and $k_1,\ldots, k_m\geq n+2$. Then $f_0(P^+) \leq \sum_{j=0}^n (-1)^{n-j}{m-1-j\choose n-j}\sum_{S\in{[m]\choose j}}\prod_{i\in S} k_i$. \end{corollary} By comparison, Weibel's upper bound for the total number of vertices is $f_0(P) \leq \binom{m-1}{n} +$ \linebreak $\sum_{j=0}^n (-1)^{n-j}{m-1-j\choose n-j}\sum_{S\in{[m]\choose j}}\prod_{i\in S} k_i$. Unfortunately, his argument does not extend to the case where some of the $k_i$ are small, neither for all vertices nor for upper vertices. To address that case, we will instead use the upper bound theorem by Adiprasito-Sanyal~\cite{KarimRaman}. Their result implies Proposition~\ref{thm:Mneighborly_upper_bound}, which states that if a Minkowski sum maximizes the number of vertices, then the partial sums attain the trivial upper bound $f_0(P_S)=\prod_{i\in S}f_0(P_i)$. The remaining question is whether maximizing the number of vertices $f_0(P)$ also entails maximizing the number of upper vertices $f_0(P^+)$, and whether the partial sums will also attain the trivial upper bound for upper vertices. In Section~\ref{sec:Weibel-Zaslavsky} we will show that this is indeed the case. We find it useful to rewrite the alternating sum as follows. \begin{lemma} \label{lemma:reformulation} Let $m\ge n+1\geq 1$ and $k_1,\ldots, k_m \geq 2$.
% Then \[\sum_{j=0}^{n} (-1)^{n-j} {m-1-j \choose n-j} \sum_{S\in{[m]\choose j}}\prod_{i\in S} k_i = \sum_{j=0}^n\sum_{S\in{[m]\choose j}}\prod_{i\in S}(k_i-1).\] \end{lemma} \begin{proof} We prove the % equality by viewing both sides as polynomials in the variables $k_i$ and examining the coefficient % of each monomial. % Fix a subset $S\subseteq [m]$ of size $j$. The coefficient % for the monomial $k_S=\prod_{i\in S}k_i$ on the left-hand side of the equation is $(-1)^{n-j}\binom{m-1-j}{n-j}$. On the right-hand side, the term $k_S$ appears with sign equal to $(-1)^{|T|-|S|}$ for each $T\supseteq S$. The coefficient % on the right-hand side is therefore $ \sum_{T\supseteq S} (-1)^{|T|-j}=\sum_{i = 0}^{n-j} (-1)^i \cdot \binom{m-j}{i}$. The statement now follows % from the following observation, which is obtained by induction on $n$: If $m \geq n+1\geq 1$, then $\sum_{i=0}^{n} (-1)^i \binom{m}{i} = (-1)^{n}\binom{m-1}{n}$. \end{proof} \section{Face counting formulas \`a la Zaslavsky} \label{sec:Zaslavsky} Zaslavsky \cite{zaslavsky1975facing} proved a theorem expressing the number of regions defined by a hyperplane arrangement in terms of the characteristic polynomial, a function obtained from the intersection poset of the arrangement. In the following we derive a similar type of result for the case of maxout arrangements. Hyperplanes are special in that their intersections are affine spaces and can be discussed in terms of linear independence relations. In contrast, for maxout arrangements the intersections involve linear equations and also linear inequalities. In turn, the elements of the poset have a more complex topology that needs to be accounted for. In general the poset also has a more complex structure even if the arrangement is in general position. 
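As a warm-up for the poset computations that follow, the M\"obius recursion and Zaslavsky's classical formula can be evaluated directly on a hand-coded intersection poset. The sketch below (our own encoding, illustrative only) does this for three generic lines in $\mathbb{R}^2$, which have $7$ regions:

```python
def mobius_from_zero(elements, leq, zero):
    """mu(0, x) for all x in a finite poset, via the defining recursion.
    leq(s, t) must return True iff s <= t in the poset."""
    order = sorted(elements, key=lambda x: sum(leq(s, x) for s in elements))
    mu = {}
    for x in order:
        mu[x] = 1 if x == zero else -sum(mu[s] for s in elements
                                         if leq(s, x) and s != x)
    return mu

# intersection poset of three generic lines in R^2 (reverse inclusion):
# R^2 below each line L_i, each line below the two points it contains
below = {'R2': set(), 'L1': {'R2'}, 'L2': {'R2'}, 'L3': {'R2'},
         'P12': {'R2', 'L1', 'L2'}, 'P13': {'R2', 'L1', 'L3'},
         'P23': {'R2', 'L2', 'L3'}}
dim = {'R2': 2, 'L1': 1, 'L2': 1, 'L3': 1, 'P12': 0, 'P13': 0, 'P23': 0}

mu = mobius_from_zero(list(below), lambda s, t: s == t or s in below[t], 'R2')
# Zaslavsky: r(A) = (-1)^n chi(-1) = (-1)^n sum_x mu(x) (-1)^{dim x}
regions = (-1) ** 2 * sum(mu[x] * (-1) ** dim[x] for x in below)
```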
\begin{definition}[Maxout arrangement] For a collection of $m$ maxout units % $z_i(x) =$\linebreak $\max\{A_{i,1}(x),\ldots, A_{i,k_i}(x)\}$, $x\in\mathbb{R}^n$, $i=1,\ldots, m$, we define the maxout arrangement $\mathcal{A}=\{H_{ab}^i\colon \{a,b\}\in {[k_i]\choose 2}, i\in[m], \operatorname{co-dim}(H^i_{ab})=1\}$ in $\mathbb{R}^n$ as the collection of nonempty co-dimension $1$ indecision boundaries between pairs of preactivation features, called atoms, \begin{equation} H_{ab}^i = \left\{x\in\mathbb{R}^n\colon A_{i,a}(x) = A_{i,b}(x) =\max_{c\in[k_i]}\{A_{i,c}(x)\} \right\}. \label{eq:maxoutarrangementpieces} \end{equation} We call the arrangement central if the affine functions $A_{i,a}$ of each unit are linear. We let $L(\mathcal{A})$ denote the set of all possible nonempty sets obtained by intersecting subsets of elements in $\mathcal{A}$, including $\mathbb{R}^n$ as the empty set intersection. The set $L(\mathcal{A})$ is partially ordered by reverse inclusion, so that for any $s,t\in L(\mathcal{A})$ we have $s\geq t$ if and only if $s\subseteq t$. The smallest element, i.e.\ the $\hat 0$ in this poset, is $\mathbb{R}^n$. For a given arrangement $\mathcal{A}$, we denote by $r(\mathcal{A})$ the number of connected components of $\mathbb{R}^n\setminus \cup_{H\in\mathcal{A}}H$, called the regions of $\mathcal{A}$. \end{definition} Note that rank-$1$ units have no indecision boundaries and can be ignored. In the following we will therefore assume without loss of generality that $k_1,\ldots, k_m\geq 2$. 
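For a single input dimension the atoms \eqref{eq:maxoutarrangementpieces} are points of $\mathbb{R}$ and can be enumerated directly. The following sketch (units encoded as lists of (slope, intercept) pairs, a representation we choose for illustration) counts the regions of the resulting arrangement:

```python
from itertools import combinations

def regions_1d(units, tol=1e-9):
    """Regions of a maxout arrangement on R (n = 1). Each unit is a list of
    (slope, intercept) pairs; an atom is a point where two preactivation
    features tie while attaining the unit's maximum."""
    atoms = set()
    for feats in units:
        for (a1, b1), (a2, b2) in combinations(feats, 2):
            if abs(a1 - a2) < tol:
                continue  # parallel features: no tie point
            x = (b2 - b1) / (a1 - a2)
            if a1 * x + b1 >= max(a * x + b for a, b in feats) - tol:
                atoms.add(round(x, 6))
    # the regions are the connected components of R minus the atoms
    return len(atoms) + 1
```

For generic parameters this recovers $1+\sum_i(k_i-1)$ regions, matching the single-input count discussed after Theorem~\ref{thm:main-result}.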
\begin{figure} \centering \begin{tabularx}{115mm}{m{5cm}m{65mm}} \begin{tikzpicture}[every node/.style={black,above right, inner sep=1pt}] \path[fill=blue!10] (-1.25,-1.25) rectangle (1.25cm,1.25cm); \draw[name path=line11, double=black, white, thick] (0,0) -- (1.25,.75) node [right] {$H^1_{12}$}; \draw[name path=line12, double=black, white, thick] (0,0) -- (1.25,-1) node [right] {$H^1_{23}$}; \draw[name path=line13, double=black, white, thick] (0,0) -- (-1.25,0) node [left] {$H^1_{13}$}; \draw[name path=line21, double=blue!80, white, thick] (.25,-1) -- (-1.25,1.25) node [above] {\textcolor{blue!80}{$H^2_{12}$}}; \draw[name path=line22, double=blue!80, white, thick] (.25,-1) -- (.75,1.25) node [above] {\textcolor{blue!80}{$H^2_{23}$}}; \draw[name path=line23, double=blue!80, white, thick] (.25,-1) -- (.3,-1.25) node [below] {\textcolor{blue!80}{$H^2_{13}$}}; \fill[name intersections={of=line11 and line12,total=\t}, draw=white, thick] {(intersection-1) circle (1.5pt) node {}}; \fill[name intersections={of=line21 and line22,total=\t}, blue!80, draw=white, thick] {(intersection-1) circle (1.5pt) node {}}; \foreach \i in {1,2,3}{ \foreach \j in {1,2,3}{ \fill[name intersections={of={line2\i} and {line1\j}, total=\t}, red!80, draw=white, thick][] \ifnum\t=0 {}; \else \foreach \s in {1,...,\t}{(intersection-\s) circle (1.5pt) node {} } ; \fi } } \end{tikzpicture} & \begin{tikzpicture}[inner sep=1pt] \node (zero) at (0,-1) {$\mathbb{R}^2$}; \node (H112) at (-.5,0) {$H^1_{12}$}; \node (H113) at (-1.5,0) {$H^1_{13}$}; \node (H123) at (-2.5,0) {$H^1_{23}$}; \node (H212) at (2.5,0) {\textcolor{blue!80}{$H^2_{12}$}}; \node (H213) at (1.5,0) {\textcolor{blue!80}{$H^2_{13}$}}; \node (H223) at (.5,0) {\textcolor{blue!80}{$H^2_{23}$}}; \node (H10) at (-2,1) {\textcolor{black}{$\bullet$}}; % \node (H20) at (2,1) {\textcolor{blue!80}{$\bullet$}}; % \node (H113-H212) at (1,1) {\textcolor{red}{$\bullet$}};% \node (H123-H223) at (-1,1) {\textcolor{red}{$\bullet$}};% \node (H112-H223) 
at (0,1) {\textcolor{red}{$\bullet$}};% \draw (zero) -- (H112) -- (H10) -- (H113) -- (zero) -- (H123) -- (H10); \draw (zero) -- (H212) -- (H20) -- (H213) -- (zero) -- (H223) -- (H20); \draw (H113) -- (H113-H212) -- (H212); \draw (H123) -- (H123-H223) -- (H223); \draw (H112) -- (H112-H223) -- (H223); \end{tikzpicture} \end{tabularx}\vspace{-3mm} \caption{Shown is an arrangement of two maxout units of ranks $k_1=k_2=3$ on $\mathbb{R}^2$ along with its intersection poset discussed in Example~\ref{ex:0} and \ref{ex:1}. } \label{fig:posetcounting} \end{figure} \begin{example}\label{ex:0} Figure~\ref{fig:posetcounting} shows a maxout arrangement of two rank-$3$ maxout units and their intersection poset. The black indecision boundaries arise from $z_1(x,y)=\max\{2y,x+y+1,2\}$, while the blue indecision boundaries arise from $z_2(x,y)=\max\{0,3x+2y,5x+y\}$. \end{example} Recall that the M\"obius function of a poset $L$ with partial order $\leq$ is defined by $\mu_L(s,s) = 1$ for $s\in L$, $\mu_L(s,u) = -\sum_{s\leq t< u} \mu_L(s,t)$ for $s<u\in L$, and $\mu_L(s,u) = 0$ for $s\not\leq u$. If the poset has a minimal element $\hat 0$, one also defines $\mu_{L}(x):=\mu_{L}(\hat 0,x)$. The M\"obius inversion formula for locally finite posets states % the following % equivalence for functions $g$ and $h$ on the poset \cite{Rota}: \begin{align*} g(t) = \sum_{s\leq t} h(s) \quad \forall t\in L \quad\text{if and only if}\quad h(t) = \sum_{s\leq t} g(s) \mu_L(s,t) \quad \forall t\in L. \end{align*} Further, recall that if a space $X$ is suitably decomposed into cells with $f_s$ cells of dimension $s$, then its Euler characteristic is defined as $\psi(X) := f_0 - f_1 + f_2 \mp \cdots$, whereby we follow the notation of \cite{Stanley04anintroduction}. Concretely, a polyhedral decomposition or a CW complex decomposition with finitely many pieces are suitable for computing the Euler characteristic; see \cite[Chapter 4 Proposition 2.2]{1998tame}. 
The Euler characteristic is independent of the specific decomposition. A (closed) face of the arrangement $\mathcal{A}$ is a set of the form $\emptyset\neq F = \overline{R}\cap x$, where $\overline{R}$ is the closure of a connected component $R$ of $X\setminus\cup_{H\in\mathcal{A}}H$, and $x\in L(\mathcal{A})$. Denote the set of faces of $\mathcal{A}$ by $\mathcal{F}(\mathcal{A})$. The faces of an arrangement $\mathcal{A}$ in $X$ form a decomposition $X = \sqcup_{F\in \mathcal{F}(\mathcal{A})}\operatorname{relint}(F)$. \begin{definition}[Proper arrangement] \label{def:properArrangement} We call an arrangement $\mathcal{A}$ in a space $X$ proper if the set of faces $\mathcal{F}(\mathcal{A})$ decomposes $X$ suitably for computing the Euler characteristic. \end{definition} \begin{lemma}\label{lem:properArrangements} A maxout arrangement in $\mathbb{R}^n$ is always proper. A central maxout arrangement in $\mathbb{R}^{n+1}$ restricted to $\mathbb{S}^n$ is proper whenever the associated Newton polytope in $\mathbb{R}^{n+1}$ has dimension $n+1$. \end{lemma} \begin{proof} The maxout arrangement creates a polyhedral decomposition of $\mathbb{R}^n$. For a central arrangement with full-dimensional polytope, the spherical cell complex is equivalent to the boundary of the dual polytope, which is a CW complex. \end{proof} We obtain the following counting formula for the number of regions of an arrangement. \begin{theorem}[Number of faces of maxout arrangements] \label{thm:posetcounting} Consider a proper arrangement $\mathcal{A}$ in a space $X$ (e.g.\ $\mathbb{R}^n$ or $\mathbb{S}^n$). Then the number of regions defined by $\mathcal{A}$ on $X$ satisfies $$ r(\mathcal{A}) = (-1)^{\dim(X)} \sum_{y\in L(\mathcal{A})} \psi(y) \mu_{L(\mathcal{A})}(y).
$$ Moreover, writing $L(\mathcal{A})/x = \{y\in L(\mathcal{A}) \colon y\geq x\}$, the number of $s$-faces satisfies $$ f_s(\mathcal{A}) = \sum_{\substack{x\in L(\mathcal{A})\\\dim(x)=s}}(-1)^{\dim(x)}\sum_{y\in L(\mathcal{A})/x} \psi(y)\mu_{L(\mathcal{A})/x}(y) , \quad s=0,\ldots, \dim(X)-1. $$ \end{theorem} This result is an instance of what Zaslavsky calls a fundamental theorem of dissection theory \cite[Theorem~1.2]{ZASLAVSKY1977267}. We note that if $\mathcal{A}$ is an arrangement of hyperplanes in $\mathbb{R}^n$, then each $y\in L(\mathcal{A})$ is an affine subspace with Euler characteristic $\psi(y) = (-1)^{\dim(y)}$ and the statement of Theorem~\ref{thm:posetcounting} corresponds to Zaslavsky's result \cite[Theorem~A]{zaslavsky1975facing}, which states that for hyperplane arrangements $r(\mathcal{A}) =(-1)^n \sum_{x\in L(\mathcal{A})}(-1)^{\dim(x)} \mu_{L(\mathcal{A})}(x) =(-1)^n \chi_{L(\mathcal{A})}(-1)$. Here, the characteristic polynomial of $L(\mathcal{A})$ is defined as $\chi_{L(\mathcal{A})}(t) := \sum_{x\in L(\mathcal{A})} \mu_{L(\mathcal{A})}(x) t^{\dim(x)}$, $t\in\mathbb{R}$. \begin{proof}[Proof of Theorem~\ref{thm:posetcounting}] The proof follows the Eulerian method, i.e.\ the arguments of Zaslavsky's quick proofs in \cite{zaslavsky1975facing}. Consider an arrangement $\mathcal{A}$ in a space $X$. For any $y\in L(\mathcal{A})$, denote the arrangement induced by $\mathcal{A}$ on $y$ as $\mathcal{A}^y := \{y\cap H\neq \emptyset \colon H\in \mathcal{A}, H\not\supseteq y\}$. Now, every $k$-face of $\mathcal{A}$ is the closure of a region of exactly one $\mathcal{A}^y$, $y\in L(\mathcal{A})$, $\dim(y)=k$. Hence we have \begin{equation} f_k(\mathcal{A}) = \sum_{\substack{y\in L(\mathcal{A})\\ \dim(y)=k}}r(\mathcal{A}^y). \label{eq:kfaces} \end{equation} Consequently, if $\mathcal{A}$ is proper, then we have $\psi(X) = f_0(\mathcal{A}) - f_1(\mathcal{A}) \pm \cdots = \sum_{y\in L(\mathcal{A})} (-1)^{\dim(y)} r(\mathcal{A}^y)$.
We have an analogous expression for any $x\in L(\mathcal{A})$ in place of $X$. Note that if $\mathcal{A}$ is proper, then so is $\mathcal{A}^y$ for any $y\in L(\mathcal{A})$. Hence, $$ \psi(x) = \sum_{\substack{y\in L(\mathcal{A})\\ y\geq x}} (-1)^{\dim(y)} r(\mathcal{A}^y)\quad \forall x\in L(\mathcal{A}). $$ The M\"obius inversion formula then gives $$ (-1)^{\dim(x)} r(\mathcal{A}^x) = \sum_{\substack{y\in L(\mathcal{A})\\ y\geq x}} \psi(y) \mu_{L(\mathcal{A})}(x,y) \quad \forall x\in L(\mathcal{A}). $$ Substituting $x=X$ gives $(-1)^{\dim(X)} r(\mathcal{A}) = \sum_{y\in L(\mathcal{A})} \psi(y) \mu_{L(\mathcal{A})}(y)$. This completes the proof of the first statement. The second statement follows from \eqref{eq:kfaces} by applying the first statement to each element of the intersection poset of dimension $s$. \end{proof} \begin{example} \label{ex:1} We illustrate Theorem~\ref{thm:posetcounting}. We consider the arrangement of two maxout units of ranks $k_1=k_2=3$ in $\mathbb{R}^n$, $n=2$, shown in Figure~\ref{fig:posetcounting}. In this example $\mu(\mathbb{R}^2)=1$, $\mu(H^i_{ab})=-1$, $\mu(\bullet)=2$, $\mu(\textcolor{blue!80}{\bullet})=2$, $\mu(\textcolor{red}{\bullet})=1$, and $\psi(\mathbb{R}^2)=(-1)^2=1$, $\psi(H^i_{ab})=1-1 = 0$, $\psi(\bullet)= \psi(\textcolor{blue!80}{\bullet})= \psi(\textcolor{red}{\bullet})=(-1)^0=1$.
By Theorem~\ref{thm:posetcounting}, the number of regions and $1$-faces are \begin{align*} r(\mathcal{A}) & = \sum_{y\in L(\mathcal{A})} \psi(y)\mu_{L(\mathcal{A})}(y) = 1 + 0 (-1\!-\!1\!-\!1\!-\!1\!-\!1\!-\!1) + (-1)^0 (2\!+\!2\!+\!1\!+\!1\!+\!1) = 8 ,\\ f_1(\mathcal{A}) & = \sum_{\substack{x\in L(\mathcal{A})\\ \dim(x)=1}}(-1)^{\dim(x)}\sum_{y\in L(\mathcal{A})/x} \psi(y)\mu_{L(\mathcal{A})/x}(y)\\ & =(-1)^1(\underbracket[0.5pt]{(0\!-\!1 \!-\!1)}_{x=H^1_{12}} + \underbracket[0.5pt]{(0\!-\!1\!-\!1)}_{x=H^1_{13}} + \underbracket[0.5pt]{(0\!-\!1\!-\!1)}_{x=H^1_{23}} + \underbracket[0.5pt]{(0\!-\!1\!-\!1)}_{x=H^2_{12}} + \underbracket[0.5pt]{(0\!-\!1)}_{x=H^2_{13}} + \underbracket[0.5pt]{(0\!-\!1\!-\!1\!-\!1)}_{x=H^2_{23}}) = 12 . \end{align*} \end{example} For a general maxout arrangement, the Euler characteristic of the elements of the intersection poset may depend not only on the dimension. Besides the case of hyperplane arrangements, we note another special case where it depends only on the dimension. For central arrangements of full-dimensional maxout units in $\mathbb{R}^{n+1}$ (i.e.\ each of the polytopes $P_1,\ldots, P_m$ has dimension $n+1$), each $y\in L(\mathcal{A})\setminus\{\mathbb{R}^{n+1}, \{0\}\}$ is an unbounded pointed cone with $\psi(y)=0$ (this can be deduced from the fact that any bounded convex polytope has Euler characteristic $1$). Hence, noting that $\psi(\mathbb{R}^{n+1})=(-1)^{n+1}$ and $\psi(\{0\})=1$, we may express the number of regions of such an $\mathcal{A}$ in terms of the characteristic polynomial $\chi_{L(\mathcal{A})}(t) = \sum_{y\in L(\mathcal{A})} \mu_{L(\mathcal{A})}(y) t^{\dim(y)}$ as follows. \begin{proposition} For a central arrangement $\mathcal{A}$ of full-dimensional maxout units in $\mathbb{R}^{n+1}$, $r(\mathcal{A}) = 1 + (-1)^{n+1} \chi_{L(\mathcal{A})}(0)$. 
\end{proposition} \section{Region counting by regions of sub-arrangements} \label{sec:Weibel-Zaslavsky} Our objective in this section is to write the number of regions $r(\mathcal{A})$ of an arrangement $\mathcal{A}$ in terms of the numbers of regions of small sub-arrangements. To this end we will focus on simple arrangements, which arise from maxout units with generic parameters, and combine the results from Sections~\ref{sec:Weibel} and \ref{sec:Zaslavsky}. As before, we will assume that each maxout unit has rank at least two. To connect the upper vertices with all vertices of Minkowski sums of polytopes, we consider both non-central and central arrangements. \begin{definition}[Simple arrangements] Let $\mathcal{A}=\{H^{i}_{ab}\}_{i\in [m],\{a,b\}}$ be a maxout arrangement. For any $S\subseteq[m]$, write $\mathcal{A}_S=\{H^{i}_{ab}\}_{i\in S,\{a,b\}}$ for the sub-arrangement of atoms of units $i\in S$. A maxout arrangement $\mathcal{A}$ is simple, or in general position, if the intersection of any $j$ atoms of different units is either empty or has co-dimension $j$. In the case of a central arrangement, all atoms contain the origin $0$. A central maxout arrangement $\mathcal{A}$ is simple if the intersection of any $j$ atoms of different units is either the origin or has co-dimension $j$. If $\mathcal{A}$ is simple, we define the support of an element $y=\cap_{l=1}^r H^{i_l}_{a_l b_l}\in L(\mathcal A)$, with $y\neq 0$ if $\mathcal{A}$ is central, as the set $\{i_1,\dots,i_r\}\subseteq [m]$. Note that while $y$ may not have a unique representation as an intersection of atoms, the support is well-defined if $\mathcal{A}$ is simple and $y\neq 0$ if $\mathcal{A}$ is central.
\end{definition} \subsection{Simple non-central arrangements} Simple non-central arrangements have the convenient property that an element $y\in L(\mathcal{A})$ of the intersection poset is contained in the intersection poset $L(\mathcal A_S)$ of a sub-arrangement $\mathcal{A}_S$ if and only if the support of $y$ is contained in $S$. This mimics the behavior of westernmost corners in Lemma~\ref{lemma:sequence}, but without the need for a coordinate system and local optimization. We obtain a formula similar to Theorem~\ref{thm:f_vectors_upper_part_Minkowski_sum} for upper vertices with a short proof based on Theorem~\ref{thm:posetcounting}. \begin{theorem}[Number of regions of a simple non-central arrangement] \label{thm:facessimple} Let $\mathcal{A}$ be a simple arrangement of $m\geq n+1$ maxout units of ranks $k_1,\ldots, k_m\geq 2$ in $\mathbb{R}^n$. Then \begin{align*} r(\mathcal{A}) = \sum_{j=0}^{n} (-1)^{n-j}{m-1-j\choose n-j} \sum_{S\in {[m]\choose j}} r(\mathcal{A}_S). \end{align*} \end{theorem} \begin{proof}[Proof of Theorem~\ref{thm:facessimple}] If $\mathcal{A}$ is simple, then any element in $L(\mathcal{A})$ can be written as an intersection of at most $n$ atoms. Moreover, any $y\in L(\mathcal{A})$ is included in $L(\mathcal A_S)$ if and only if its support is contained in $S$.
Hence, using Lemma~\ref{lemma:inclusion-exclusion} and the fact that $\mu_{L(\mathcal{A})}(y) = \mu_{L(\mathcal{A}_S)}(y)$ for any $y\in L(\mathcal{A}_S)$, we can rewrite the expression from Theorem~\ref{thm:posetcounting} as follows: \allowdisplaybreaks \begin{align}\label{eq:nonCentralRegions} r(\mathcal{A}) &\textoverset[O]{Thm.~\ref{thm:posetcounting}}{=} (-1)^n \sum_{y\in L(\mathcal{A})}\psi(y)\mu_{L(\mathcal{A})}(y)\nonumber \\ &\textoverset[O]{Lem.~\ref{lemma:inclusion-exclusion}}{=} (-1)^n \sum_{y\in L(\mathcal{A})}\psi(y)\mu_{L(\mathcal{A})}(y) \sum_{j=0}^n (-1)^{n-j} \binom{m-1-j}{n-j} \sum_{S\in {[m] \choose j}} \mathbb{1}_{y\in L(\mathcal{A}_S)}\nonumber \\ &\textoverset[O]{}{=} (-1)^n \sum_{j=0}^{n} (-1)^{n-j}{m-1-j\choose n-j} \sum_{S\in {[m]\choose j}} \sum_{y\in L(\mathcal{A}_S)} \psi(y)\mu_{L(\mathcal{A}_S)}(y) \\ &\textoverset[O]{Thm.~\ref{thm:posetcounting}}{=} \sum_{j=0}^{n} (-1)^{n-j}{m-1-j\choose n-j} \sum_{S\in {[m]\choose j}} r(\mathcal{A}_S).\nonumber \qedhere \end{align} \end{proof} \subsection{Simple central arrangements} Simple central arrangements come with the added challenge that the origin $\{0\}= \hat 1\in L(\mathcal{A})$ may be contained in several posets $L(\mathcal{A}_S)$ with $S\subseteq [m]$, $S\neq \emptyset$, while not being contained in the poset corresponding to the intersection of these $S$. See Figure~\ref{fig:originInCentralArrangements}. Consequently, there is no obvious way to define a support for $\{0\}$ that is consistent with Lemma~\ref{lemma:sequence}, which in turn makes it difficult to apply Lemma~\ref{lemma:inclusion-exclusion}. We thus need to treat $\{0\}$ separately. We consider central arrangements in $\mathbb{R}^{n+1}$, corresponding to the ambient space of the lifted Newton polytopes of maxout networks with input space~$\mathbb{R}^n$.
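The key combinatorial ingredient in such proofs is Lemma~\ref{lemma:inclusion-exclusion}, whose statement is not reproduced in this section. From the way it is applied here and again in the proof of Theorem~\ref{thm:lowerbound_strict_lower}, it amounts to the identity $\sum_{j=q}^{n}(-1)^{n-j}\binom{m-1-j}{n-j}\binom{m-q}{j-q}=1$ for $0\leq q\leq n$ and $m\geq n+1$, where $q$ plays the role of the support size. The following Python sketch checks this identity numerically; treat the formula as our reading of the lemma rather than its verbatim statement.

```python
from math import comb

def ie_sum(m, n, q):
    """Alternating sum over sub-arrangement sizes j, as used after
    Lemma lemma:inclusion-exclusion in the region-counting proofs."""
    return sum((-1) ** (n - j) * comb(m - 1 - j, n - j) * comb(m - q, j - q)
               for j in range(q, n + 1))

# The weights act as an indicator: the sum collapses to 1 whenever
# 0 <= q <= n and m >= n + 1.
for m in range(2, 9):
    for n in range(1, m):
        for q in range(n + 1):
            assert ie_sum(m, n, q) == 1
```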
\begin{figure} \centering \begin{tikzpicture}[font=\footnotesize] \node (A12) at (0,0) {% \begin{tikzpicture} \useasboundingbox (-1,-1) rectangle (1,1); \draw[very thick,red] (-1,0) -- (1,0); \draw[very thick,blue!80] (0,-1) -- (0,1); \node[anchor=north west, font=\footnotesize] at (-1.1,1) {$\mathcal{A}_{\{1,2\}}$}; \end{tikzpicture} }; \node[anchor=north west,xshift=-2.5mm] (A12Text) at (A12.south west) {$L(\mathcal{A}_{\{1,2\}})=\{\mathbb{R}^2,\textcolor{red}{\mathbb{R}\cdot e_1},\textcolor{blue!80}{\mathbb{R}\cdot e_2},\{0\}\}$}; \node (A13) at (5,0) {% \begin{tikzpicture} \useasboundingbox (-1,-1) rectangle (1,1); \draw[very thick,red] (-1,0) -- (1,0); \draw[very thick,violet] (-0.75,-0.75) -- (0.75,0.75); \node[anchor=north west, font=\footnotesize] at (-1.1,1) {$\mathcal{A}_{\{1,3\}}$}; \end{tikzpicture} }; \node[anchor=north west,xshift=-2.5mm] (A13Text) at (A13.south west) {$L(\mathcal{A}_{\{1,3\}})=\{\mathbb{R}^2,\textcolor{red}{\mathbb{R}\cdot e_1},\textcolor{violet}{\mathbb{R}\cdot (e_1+e_2)},\{0\}\}$}; \node (A1) at (11,0) {% \begin{tikzpicture} \useasboundingbox (-1,-1) rectangle (1,1); \draw[very thick,red] (-1,0) -- (1,0); \node[anchor=north west, font=\footnotesize] at (-1.1,1) {$\mathcal{A}_{\{1\}}$}; \end{tikzpicture} }; \node[anchor=north west,xshift=-2.5mm] (A1Text) at (A1.south west) {$L(\mathcal{A}_{\{1\}})=\{\mathbb{R}^2,\textcolor{red}{\mathbb{R}\cdot e_1}\}$}; % \end{tikzpicture}\vspace{-3mm} \caption{The origin $\{0\}$ is contained in both the intersection poset of $\mathcal{A}_{\{1,2\}}$ and of $\mathcal{A}_{\{1,3\}}$, but not in the intersection poset of $\mathcal{A}_{\{1,2\}\cap\{1,3\}}$. } \label{fig:originInCentralArrangements} \end{figure} \bigskip \paragraph{Special case} First we present a simple approach to enumerate the regions of a central arrangement in the special case that the $\mathcal{A}_{\{i\}}$ are proper. 
We consider the arrangement $\mathcal{A}\cap\mathbb{S}^n$, which for simplicity of notation we will still denote by $\mathcal{A}$. For a central arrangement in $\mathbb{R}^{n+1}$, the number of regions it defines in $\mathbb{R}^{n+1}$ is equal to the number of regions it defines on $\mathbb{S}^n$. Conveniently, $\{0\}$ no longer appears in the posets over $\mathbb{S}^n$. For a central arrangement of $m\geq n+1$ units of ranks $k_1,\ldots, k_m\geq 2$, as before in~\eqref{eq:nonCentralRegions}, \begin{align} r(\mathcal{A}) = (-1)^n\!\!\sum_{y\in L(\mathcal{A})}\!\!\!\psi(y)\mu_{L(\mathcal{A})}(y) = & (-1)^n \sum_{j=0}^{n} (-1)^{n-j}{m-1-j\choose n-j} \!\!\sum_{S\in {[m]\choose j}}\sum_{y\in L(\mathcal{A}_S)}\!\!\!\psi(y)\mu_{L(\mathcal{A}_S)}(y) . \label{eq:centralsphere} \end{align} Under the additional assumption that the arrangement $\mathcal{A}_S$ on $\mathbb{S}^n$ is proper for all $S\neq\emptyset$ (e.g.\ the lifted Newton polytopes of all maxout units are full-dimensional), we can use Theorem~\ref{thm:posetcounting} to rewrite \eqref{eq:centralsphere}~as \begin{align} r(\mathcal{A}) = & (-1)^n{m-1\choose n} \psi(\mathbb{S}^n) + \sum_{j=1}^{n} (-1)^{n-j}{m-1-j\choose n-j} \sum_{S\in {[m]\choose j}} r(\mathcal{A}_S)\nonumber\\ = & \psi(\mathbb{S}^n) + \sum_{j=1}^{n} (-1)^{n-j}{m-1-j\choose n-j} \sum_{S\in {[m]\choose j}} ( r(\mathcal{A}_S) - \psi(\mathbb{S}^n) ), \label{eq:centralsphererecoverWeibel} \end{align} in which we also use $\sum_{y\in L(\mathcal{A}_\emptyset)}\psi(y)\mu_{L(\mathcal{A}_\emptyset)}(y) =\psi(\mathbb{S}^n)$ and $\sum_{j=1}^n (-1)^{n-j}{m-1-j\choose n-j}{m\choose j} = 1 - (-1)^n{m-1\choose n}$, which follows from Lemma~\ref{lemma:inclusion-exclusion}. Since $\psi(\mathbb{S}^n)=0$ if $n$ is odd and $\psi(\mathbb{S}^n)=2$ if $n$ is even, \eqref{eq:centralsphererecoverWeibel} recovers the $k=0$ case of \cite[Theorem~1]{Weibel12}.
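The auxiliary binomial identity $\sum_{j=1}^n (-1)^{n-j}\binom{m-1-j}{n-j}\binom{m}{j} = 1 - (-1)^n\binom{m-1}{n}$ invoked in the last step is easy to confirm numerically; the following small Python check is illustrative only.

```python
from math import comb

def lhs(m, n):
    """Left-hand side of the identity used to pass from eq:centralsphere
    to eq:centralsphererecoverWeibel."""
    return sum((-1) ** (n - j) * comb(m - 1 - j, n - j) * comb(m, j)
               for j in range(1, n + 1))

# Check the identity for a range of m >= n + 1.
for m in range(2, 10):
    for n in range(1, m):
        assert lhs(m, n) == 1 - (-1) ** n * comb(m - 1, n)
```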
\bigskip \paragraph{General case} Next, we present an approach to handle the origin directly in $\mathbb{R}^{n+1}$, which will allow us to address the general case where the lifted Newton polytopes of the maxout units need not be full-dimensional. Our goal is to prove the following theorem: \begin{theorem}[Number of regions of a simple central arrangement] \label{thm:central-arbdims} For a central simple arrangement $\mathcal{A}$ of $m$ maxout units of ranks $k_1,\ldots, k_m\geq 2$ in $\mathbb{R}^{n+1}$, $m\geq n+1$, \[ r(\mathcal{A})= {m-1 \choose n} + \sum_{j=0}^{n} (-1)^{n-j}{m-1-j\choose n-j} \sum_{S\in {[m]\choose j}} r(\mathcal{A}_S). \] \end{theorem} To prove Theorem~\ref{thm:central-arbdims}, we will use Theorem~\ref{thm:posetcounting} to write $r(\mathcal{A})$ as a sum of terms $\psi(y)\mu_{L(\mathcal{A})}(y)$ over $y\in L(\mathcal{A})$. For the elements $y\in L(\mathcal{A})\setminus\{0\}$ we can use similar arguments as before. To handle $\{0\}=\hat 1\in L(\mathcal{A})$ we will use the Cross-cut Theorem: \begin{theorem}[Cross-cut Theorem {\cite{Rota}}]\label{thm:cross-cut} Let $L$ be a finite lattice. Let $X$ be a subset of $L$ such that $\hat 0\not\in X$ and such that if $y\in L$, $y\neq \hat 0$, then some $x\in X$ satisfies $x\leq y$. Let $N_k$ be the number of $k$-element subsets of $X$ with join $\hat 1$. Then $\mu_L(\hat 0,\hat 1) = N_0 -N_1 + N_2 \mp \cdots$. \end{theorem} In the following we will evaluate this formula for the case of the intersection poset $L(\mathcal{A})$ of a simple central arrangement, and split the expression into terms corresponding to sub-arrangements $\mathcal{A}_S$ with $0\leq |S|\leq n$. We will use the following definitions. \begin{definition} Let $\mathcal{A}$ be a simple central arrangement of $m$ maxout units. For $S\subseteq[m]$ of cardinality $|S|\leq n$, let $N_k^S$ denote the number of $k$-element subsets of $\mathcal{A}_S$ with join $\{0\}$.
For $k>n$, let $N_k^\ast$ denote the number of $k$-element subsets of $\mathcal{A}$ which contain atoms of at least $n+1$ units. Note that their join is necessarily $\{0\}$ as $\mathcal{A}$ is simple. \end{definition} First we note that the terms involving more than $n$ units can be grouped as follows. \begin{lemma}\label{lem:central-3} Let $\mathcal{A}$ be a simple central arrangement of $m$ maxout units in $\mathbb{R}^{n+1}$, and let $M\coloneqq |\mathcal{A}|$ be the number of its atoms. Then $\sum_{k=n+1}^M (-1)^k N_k^\ast = - (-1)^n {m-1 \choose n}$. \end{lemma} \begin{proof} The statement follows from reformulations obtained by splitting $k$-element subsets of $\mathcal{A}$ with atoms from $\mathcal{A}_{\{i\}}$, $i\in S$, into a disjoint union of non-empty $J_i\subseteq \mathcal{A}_{\{i\}}$, $i\in S$: \begin{align*} &\sum_{r=n+1}^M (-1)^r N_r^\ast = \sum_{r=n+1}^m \sum_{S\in{[m]\choose r}} \sum_{ \substack{ \emptyset\neq J_i\subseteq \mathcal{A}_{\{i\}}\\ \text{for all }i\in S}} \!\!(-1)^{\sum_{i\in S}|J_i|} = \sum_{r=n+1}^m \sum_{S\in{[m]\choose r}} \prod_{i\in S} \sum_{j_i=1}^{|\mathcal{A}_{\{i\}}|}{|\mathcal{A}_{\{i\}}|\choose j_i} (-1)^{j_i} \\ &= \sum_{r=n+1}^m \sum_{S\in{[m]\choose r}} \prod_{i\in S} \left(-1\right) = \sum_{r=n+1}^m {m \choose r} (-1)^{r} = -\sum_{r=0}^n { m \choose r} (-1)^{r} = - (-1)^n {m-1 \choose n}. \qedhere \end{align*} \end{proof} Now using Theorem~\ref{thm:cross-cut} and Lemma~\ref{lem:central-3}, we obtain the following description of $\mu_{L(\mathcal{A})}(\{0\})$. \begin{lemma} \label{lem:central} Let $\mathcal{A}$ be a simple central arrangement of $m$ maxout units in $\mathbb{R}^{n+1}$. Then $$ \mu_{L(\mathcal{A})}(\{0\}) = - (-1)^n {m-1 \choose n} + \sum_{j=0}^n (-1)^{n-j} {m-1-j\choose n-j} \sum_{S\in{[m]\choose j}} \mu_{L(\mathcal{A}_S)}(\{0\}) \mathds{1}_{\{0\}\in L(\mathcal{A}_S)}. $$ Here $\mathds{1}_{\{0\}\in L(\mathcal{A}_S)}$ takes value $1$ if $\{0\}\in L(\mathcal{A}_S)$ and $0$ otherwise.
\end{lemma} \begin{proof} Let $M\coloneqq |\mathcal{A}|$ be the number of atoms of $\mathcal{A}$, and for a subset $A\subseteq \mathcal{A}$ let $\Supp(A)\subseteq[m]$ denote the minimal $S\subseteq[m]$ with $A\subseteq \mathcal{A}_S$. With Theorem~\ref{thm:cross-cut} we can decompose $\mu_{L(\mathcal{A})}(\{0\}) = \mu_{L(\mathcal{A})}(\hat0,\hat1)$ as \allowdisplaybreaks \begin{align*} &\mu_{L(\mathcal{A})}(\{0\}) \textoverset[O]{Thm.~\ref{thm:cross-cut}}{=} \sum_{k=0}^M (-1)^k N_k =\bigg[\sum_{k=n+1}^M (-1)^k N_k^\ast\bigg] + \bigg[\sum_{k=0}^M (-1)^k \sum_{\substack{A\in\binom{\mathcal A}{k}\\ \text{join}(A)=\{0\} \\ |\Supp(A)|\leq n}} 1\bigg] \\ &\textoverset[O]{Lem.~\ref{lemma:inclusion-exclusion}}{=} \bigg[\sum_{k=n+1}^M (-1)^k N_k^\ast\bigg] + \bigg[\sum_{k=0}^M (-1)^k \!\!\!\sum_{\substack{A\in\binom{\mathcal A}{k}\\ \text{join}(A)=\{0\} \\ |\Supp(A)|\leq n}} \sum_{j=0}^n (-1)^{n-j}\binom{m-1-j}{n-j}\sum_{S\in\binom{[m]}{j}}\mathbb{1}_{\Supp(A)\subseteq S}\bigg] \\ &\textoverset[O]{}{=} \bigg[\sum_{k=n+1}^M (-1)^k N_k^\ast\bigg] + \bigg[ \sum_{j=0}^n (-1)^{n-j}\binom{m-1-j}{n-j}\sum_{S\in\binom{[m]}{j}}\sum_{k=0}^M (-1)^k \!\!\!\sum_{\substack{A\in\binom{\mathcal A}{k}\\ \text{join}(A)=\{0\} \\ |\Supp(A)|\leq n}} \mathbb{1}_{\Supp(A)\subseteq S}\bigg] \\ &\textoverset[O]{}{=} \bigg[\sum_{k=n+1}^M (-1)^k N_k^\ast\bigg] + \bigg[\sum_{j=0}^n (-1)^{n-j} {m-1-j\choose n-j} \sum_{S\in{[m]\choose j}}\sum_{k=0}^M (-1)^k N_k^S\bigg]\\ &\textoverset[O]{}{=} \bigg[\sum_{k=n+1}^M (-1)^k N_k^\ast\bigg] + \bigg[\sum_{j=0}^n (-1)^{n-j} {m-1-j\choose n-j} \sum_{S\in{[m]\choose j}} \mu_{L(\mathcal{A}_S)}(\{0\}) \mathds{1}_{\{0\}\in L(\mathcal{A}_S)}\bigg]\\ &\textoverset[O]{Lem.~\ref{lem:central-3}}{=} - (-1)^n {m-1 \choose n} + \bigg[\sum_{j=0}^n (-1)^{n-j} {m-1-j\choose n-j} \sum_{S\in{[m]\choose j}} \mu_{L(\mathcal{A}_S)}(\{0\}) \mathds{1}_{\{0\}\in L(\mathcal{A}_S)}\bigg].
\qedhere \end{align*} \end{proof} We now have all the supporting results we need to prove Theorem~\ref{thm:central-arbdims}. \begin{proof}[Proof of Theorem~\ref{thm:central-arbdims}] As in the proof of Theorem~\ref{thm:facessimple}, note that if $\mathcal{A}$ is simple, then any element in $L(\mathcal{A})\setminus\{0\}$ can be written as an intersection of at most $n$ atoms. Moreover, any $y\in L(\mathcal{A})\setminus\{0\}$ is contained in $L(\mathcal{A}_S)$ if and only if its support is contained in $S$. This allows us to apply Lemma~\ref{lemma:inclusion-exclusion} as in the proof of Theorem~\ref{thm:facessimple} for $y\in L(\mathcal{A})\setminus\{0\}$. Hence \allowdisplaybreaks \begin{align*} &r(\mathcal{A}) \textoverset[O]{Thm.~\ref{thm:posetcounting}}{=} (-1)^{n+1} \psi(\{0\})\mu_{L(\mathcal{A})}(\{0\}) + (-1)^{n+1} \sum_{y\in L(\mathcal{A})\setminus\{0\}}\psi(y)\mu_{L(\mathcal{A})}(y)\nonumber \\ &\textoverset[O]{Lem.~\ref{lemma:inclusion-exclusion}}{=} (-1)^{n+1} \mu_{L(\mathcal{A})}(\{0\}) + (-1)^{n+1} \sum_{j=0}^{n} (-1)^{n-j}{m-1-j\choose n-j} \!\!\!\sum_{S\in {[m]\choose j}} \sum_{y\in L(\mathcal{A}_S)\setminus\{0\}}\!\!\!\!\! \psi(y)\mu_{L(\mathcal{A}_S)}(y) \\ &\textoverset[O]{Lem.~\ref{lem:central}}{=} {m-1 \choose n} + (-1)^{n+1} \sum_{j=0}^{n} (-1)^{n-j}{m-1-j\choose n-j} \sum_{S\in {[m]\choose j}} \Big(\mu_{L(\mathcal{A}_S)}(\{0\}) \mathds{1}_{\{0\}\in L(\mathcal{A}_S)} \\ &\hspace{98mm} + \sum_{y\in L(\mathcal{A}_S)\setminus\{0\}} \psi(y)\mu_{L(\mathcal{A}_S)}(y) \Big) \\ &\textoverset[O]{}{=} {m-1 \choose n} + (-1)^{n+1} \sum_{j=0}^{n} (-1)^{n-j}{m-1-j\choose n-j} \sum_{S\in {[m]\choose j}} \sum_{y\in L(\mathcal{A}_S)} \psi(y)\mu_{L(\mathcal{A}_S)}(y) \\ &\textoverset[O]{Thm.~\ref{thm:posetcounting}}{=} {m-1 \choose n} + \sum_{j=0}^{n} (-1)^{n-j}{m-1-j\choose n-j} \sum_{S\in {[m]\choose j}} r(\mathcal{A}_S). \qedhere \end{align*} \end{proof} Recall that units of rank $1$ have an empty arrangement and can be ignored.
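For concreteness, the counting formula of Theorem~\ref{thm:central-arbdims} can be evaluated mechanically once the region counts of the small sub-arrangements are known. The following Python sketch is ours (not part of the paper); the sub-arrangement counts it uses as inputs are taken from Example~\ref{ex:central-arbdims}.

```python
from math import comb
from itertools import combinations

def regions_central(m, n, r_sub):
    """Evaluate r(A) = C(m-1,n) + sum_j (-1)^(n-j) C(m-1-j,n-j) sum_S r(A_S)
    for a simple central arrangement in R^(n+1).  r_sub maps a frozenset S
    (with 1 <= |S| <= n) to r(A_S); missing sets, in particular S = {},
    default to the empty arrangement with a single region."""
    total = comb(m - 1, n)
    for j in range(n + 1):
        coeff = (-1) ** (n - j) * comb(m - 1 - j, n - j)
        for S in combinations(range(1, m + 1), j):
            total += coeff * r_sub.get(frozenset(S), 1)
    return total

# m = 2 central units of ranks 3 and 2 in R^2 (n = 1), Example ex:central-arbdims (1).
assert regions_central(2, 1, {frozenset({1}): 3, frozenset({2}): 2}) == 5

# m = 3 central units of ranks 3, 3, 2 in R^3 (n = 2), Example ex:central-arbdims (2).
r_sub = {frozenset({1}): 3, frozenset({2}): 3, frozenset({3}): 2,
         frozenset({1, 2}): 9, frozenset({1, 3}): 6, frozenset({2, 3}): 6}
assert regions_central(3, 2, r_sub) == 15
```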
Theorem~\ref{thm:central-arbdims} generalizes \eqref{eq:centralsphererecoverWeibel} and Weibel's result \cite[Theorem~1 for $k=0$]{Weibel12} by removing the requirement that $k_1,\ldots, k_m\geq n+2$. We illustrate Theorem~\ref{thm:central-arbdims} in the next example. \begin{example} \label{ex:central-arbdims} \ \begin{enumerate}[leftmargin=*] \item Consider the arrangement of $m=2$ maxout units of ranks $k_1=3$, $k_2=2$ in $\mathbb{R}^d$, $d=n+1=2$, shown in the left part of Figure~\ref{fig:maxoutarrangementcentral}. In this example $\mu(\mathbb{R}^2)=1$, $\mu(H^i_{ab})=-1$, $\mu(\textcolor{red}{\bullet})=3$. By Theorem~\ref{thm:posetcounting}, $r(\mathcal{A}) = \sum_{y\in L(\mathcal{A})} \psi(y)\mu_{L(\mathcal{A})}(y) = (-1)^2 1 + 0 (-1-1-1) + (-1)^1(-1) + (-1)^0 3 = 5$. By Theorem~\ref{thm:central-arbdims}, this can be written as $r(\mathcal{A}) = \psi(\mathbb{S}^n){m-1\choose n} + \sum_{j=1}^{n} (-1)^{n-j}{m-1-j\choose n-j} \sum_{S\in {[m]\choose j}} r(\mathcal{A}_S) = 0 + (-1)^0{0\choose 0}(3+2) = 5$. \item Consider now the arrangement of $m=3$ maxout units of ranks $k_1=3$, $k_2=3$, $k_3=2$ in $\mathbb{R}^d$, $d=n+1=3$, shown in the right part of Figure~\ref{fig:maxoutarrangementcentral}. By Theorem~\ref{thm:central-arbdims}, the number of regions is $r(\mathcal{A}) = 2{3-1\choose 2} + (-1)^{2-1}{3-1-1\choose 2-1}(3+3+2) + (-1)^{2-2}{3-1-2\choose 2-2}(9+6+6) = 15$.
\end{enumerate} \end{example} \begin{figure} \centering \begin{tabularx}{10cm}{m{5cm}m{5cm}} \begin{tikzpicture}[every node/.style={black,above right, inner sep=1pt}] \path[fill=blue!10] (-1.25,-1.25) rectangle (1.25cm,1.25cm); \draw[name path=line11, double=black, white, thick] (0,0) -- (1.25,.75) node [right] {$H^1_{12}$}; \draw[name path=line12, double=black, white, thick] (0,0) -- (1.25,-1) node [right] {$H^1_{23}$}; \draw[name path=line13, double=black, white, thick] (0,0) -- (-1.25,0) node [left] {$H^1_{13}$}; \draw[name path=line21, double=blue!80, white, thick] (0,-1.25) -- (0,1.25) node [above] {\textcolor{blue!80}{$H^2_{12}$}}; \fill[name intersections={of=line11 and line12,total=\t}, draw=white, thick] {(intersection-1) circle (1.5pt) node {}}; \foreach \i in {1}{ \foreach \j in {1,2,3}{ \fill[name intersections={of={line2\i} and {line1\j}, total=\t}, red!80, draw=white, thick][] \ifnum\t=0 {}; \else \foreach \s in {1,...,\t}{(intersection-\s) circle (1.5pt) node {} } ; \fi } } \end{tikzpicture} & \begin{tikzpicture} \node at (0,0) {\includegraphics[width=4cm]{illustrations/illustrate_complex2.pdf}}; \node at (1.75,1.25) {\textcolor{red}{$\mathcal{A}_1$}}; \node at (2.125,0) {\textcolor{blue}{$\mathcal{A}_2$}}; \node at (-1.75,-1.25) {\textcolor{green!70!black}{$\mathcal{A}_3$}}; \end{tikzpicture} \end{tabularx} \vspace{-3mm} \caption{% Shown are two central maxout arrangements, one in $\mathbb{R}^2$ and one in $\mathbb{R}^3$ (for clarity we show only the intersection with $\mathbb{S}^2$), discussed in Example~\ref{ex:central-arbdims}. } \label{fig:maxoutarrangementcentral} \end{figure} \subsection{Strictly upper vertices or bounded regions} We now have counting formulas for the number of upper vertices and the total number of vertices of a Minkowski sum of polytopes.
Given a sharp upper bound on the total number of vertices, we can obtain a sharp upper bound on the number of upper vertices if we also have an appropriate lower bound on the number of strict lower vertices. In this subsection we derive such a lower bound. The strict upper vertices of a Minkowski sum correspond to the bounded regions that a central maxout arrangement defines on a hyperplane that does not intersect the origin, which are the regions that do not intersect the negated hyperplane. These are also equivalent to the bounded regions of a non-central arrangement in one dimension lower. The case of central hyperplane arrangements was studied in~\cite[Theorem~3.2]{Greene1983ONTI}, showing that, in that case, the induced arrangement over a general hyperplane has $\mu(\hat 0, \hat 1)$ relatively bounded regions. A particular challenge in the case of maxout arrangements is that the atoms need not be symmetric. This means that the number of bounded regions a maxout arrangement defines over a hyperplane may depend on the particular choice of the hyperplane. We obtain a lower bound that is independent of this choice. \begin{theorem}[Lower bound on the number of bounded regions]\label{thm:lowerbound_strict_lower} Let $\mathcal{A}$ be a simple central maxout arrangement in $\mathbb{R}^{n+1}$. Let $g=\{x\in \mathbb{R}^{n+1}\colon \langle x,w\rangle =1\}$ be an affine hyperplane that does not contain the origin. Let $\mathcal{A}^g=\{g\cap H \neq \emptyset \colon H\in \mathcal{A}, H\not\supseteq g\}$. Then the number of regions of $\mathcal{A}$ which do not intersect $g$, i.e.\ regions consisting of points with $\langle x,w\rangle\leq 0$, satisfies \begin{align*} r(\mathcal{A}) - r(\mathcal{A}^g) \geq & {m-1\choose n}.
\end{align*} \end{theorem} \begin{proof} By Theorem~\ref{thm:central-arbdims} and Theorem~\ref{thm:facessimple}, we have $$ r(\mathcal{A}) - r(\mathcal{A}^g) = {m-1\choose n} + \sum_{j=0}^n(-1)^{n-j}{m-1-j\choose n-j}\sum_{S\in{[m]\choose j}}(r(\mathcal{A}_S) - r(\mathcal{A}_S^g) ). $$ It remains to show that the second summand is non-negative. We define the support of a region $R$ of $\mathcal{A}_S$ resp.\ $\mathcal{A}^g_S$ to be the minimal $Q\subseteq [m]$ such that $R$ is a region of $\mathcal{A}_Q$ resp.\ $\mathcal{A}_Q^g$. Let $s(\mathcal{A}_Q)$ resp.\ $s(\mathcal{A}_Q^g)$ be the number of regions of $\mathcal{A}_Q$ resp.\ $\mathcal{A}^g_Q$ with support $Q$, so that $r(\mathcal{A}_S)=\sum_{Q\subseteq S} s(\mathcal{A}_Q)$ and $r(\mathcal{A}_S^{g})=\sum_{Q\subseteq S} s(\mathcal{A}_Q^{g})$. Then \allowdisplaybreaks \begin{align*} & \sum_{j=0}^n(-1)^{n-j}{m-1-j\choose n-j}\sum_{S\in{[m]\choose j}}(r(\mathcal{A}_S) - r(\mathcal{A}_S^g)) \\ & \textoverset[O]{}{=} \sum_{j=0}^n(-1)^{n-j}{m-1-j\choose n-j}\sum_{S\in{[m]\choose j}} \sum_{Q\subseteq S} (s(\mathcal{A}_Q) - s(\mathcal{A}_Q^g)) \\ & \textoverset[O]{}{=} \sum_{j=0}^n(-1)^{n-j}{m-1-j\choose n-j}\sum_{\substack{Q\subseteq [m]\\ |Q|\leq j}} \binom{m-|Q|}{j-|Q|} (s(\mathcal{A}_Q) - s(\mathcal{A}_Q^g)) \\ & \textoverset[O]{}{=} \sum_{\substack{Q\subseteq [m]\\ |Q|\leq n}} (s(\mathcal{A}_Q) - s(\mathcal{A}_Q^g)) \sum_{j=0}^n(-1)^{n-j}{m-1-j\choose n-j} \binom{m-|Q|}{j-|Q|} \\ & \textoverset[O]{Lem.~\ref{lemma:inclusion-exclusion}}{=} \sum_{\substack{Q\subseteq [m]\\ |Q|\leq n}} (s(\mathcal{A}_Q) - s(\mathcal{A}_Q^g)). \end{align*} Finally, note that each summand in the final expression is non-negative since the regions of a sub-arrangement $\mathcal{A}_Q$ intersecting $g$ form a subset of the regions of $\mathcal{A}_Q$, and the same holds true for regions that are not contained in a smaller arrangement $\mathcal{A}_{Q'}$, $Q'\subsetneq Q$.
\end{proof} \section{Discussion and outlook} \label{sec:discussion}\vspace{-2mm} We presented sharp explicit upper bounds for the number of linear regions of the functions that can be represented by shallow maxout networks with and without biases. These results can be regarded as upper bound theorems for tropical arrangements or upper bound theorems for the number of vertices and upper vertices of Minkowski sums of polytopes with given numbers of vertices. As a direct application of our sharp bounds for shallow maxout networks, we obtained asymptotically tight bounds for deep maxout networks. These results substantially improve previous lower and upper bounds. We presented counting formulas for the number of faces of maxout arrangements in terms of the intersection poset. In the case of simple arrangements or Minkowski sums of polytopes in general orientations, we obtained formulas in terms of sub-arrangements or Minkowski subsums. We also presented a lower bound on the number of strict lower vertices of Minkowski sums of polytopes in general orientations, which correspond to the bounded regions of maxout arrangements. Our discussion connects the theoretical analysis of artificial neural networks, tropical geometry, and geometric combinatorics. The results that we have presented can serve as the basis for addressing several other problems: \begin{itemize}[leftmargin=*] \item One possible extension of the results presented here is explicit formulas for the maximum number of faces of any dimension, as well as for the number of bounded faces. Explicit formulas for lower-dimensional faces are of particular importance for the combinatorial complexity of tropical varieties~\cite{Joswig_2017}, which are intersections of tropical hypersurfaces, and consequently also for the complexity of many algorithms in tropical geometry. \item Further refining the bounds for deep networks is also an interesting endeavor for future work.
Even the case of ReLU networks is still the subject of intense investigation. Another interesting avenue is the explicit number of faces for specific families of non-simple arrangements, for example those that one might obtain in convolutional networks or graph and simplicial networks, which have been recently studied in the ReLU case \cite{xiong2020number,bodnar2021weisfeiler}. \item An interesting open problem is the estimation of the expected number of faces for a given probability distribution over the parameters of shallow and deep maxout networks. The case of ReLU networks was recently studied in \cite{pmlr-v97-hanin19a,NIPS2019_8328}. In shallow ReLU networks, any generic parameter gives rise to the maximum number of regions (the number of regions of a generic hyperplane arrangement or equivalently the number of upper vertices of a zonotope). In contrast, for shallow maxout networks, generic choices of parameters can result in different numbers of regions. The expected number of regions of a single maxout unit with Gaussian weights corresponds to the number of (upper) vertices of a Gaussian polytope, which has been studied in the literature \cite{HMR04}. However, for a maxout layer one would need to consider Minkowski sums of random polytopes, which to our knowledge have not yet been studied. \item A further aspect of interest is the development of parameter initialization strategies that would allow for faster optimization or better algorithmic biases when training maxout networks. We have presented ways to select parameters that lead to the maximum number of regions for shallow maxout networks and to an asymptotically maximal number of regions for deep maxout networks. Other properties of the initialization that can be considered include the normalization of the activation values across layers and the distribution of linear regions over the space of inputs.
Related aspects for the case of ReLU networks have been studied in \cite{he2015delving,NEURIPS2018_d81f9c1b,Steinwart2019ASL,86441,Zhang2020Empirical}. \end{itemize} \vspace{-4mm} \subsection*{Acknowledgment} We are grateful to both Raman Sanyal and especially Karim Adiprasito for discussing their Upper Bound Theorem for Minkowski Sums with us. Guido Mont\'ufar has been supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant n\textsuperscript{o}~757983). Yue Ren has been supported by UK Research and Innovation (UKRI) under the Future Leaders Fellowship programme (grant n\textsuperscript{o}~MR/S034463/1). \vspace{-2mm} \bibliographystyle{plain}
\section{Introduction and Preliminaries}\label{intro} Consider the linear discrete ill-posed problem \begin{equation} \min\limits_{x\in \mathbb{R}^{n}}\|Ax-b\| \mbox{\,\ or \ $Ax=b$,} \ \ \ A\in \mathbb{R}^{m\times n}, \label{eq1} \ b\in \mathbb{R}^{m}, \end{equation} where the norm $\|\cdot\|$ is the 2-norm of a vector or matrix, and $A$ is extremely ill conditioned with its singular values decaying to zero without a noticeable gap. We simply assume that $m\geq n$, since the results in this paper hold for both the $m\geq n$ and $m\leq n$ cases. \eqref{eq1} arises from many applications, e.g., from the discretization of the first kind Fredholm integral equation \begin{equation}\label{eq2} Kx=(Kx)(t)=\int_{\Omega} k(s,t)x(t)dt=g(s)=g,\ s\in \Omega \subset\mathbb{R}^q, \end{equation} where the kernel $k(s,t)\in L^2({\Omega\times\Omega})$ and $g(s)$ are known functions, while $x(t)$ is the unknown function to be sought. Applications include image deblurring, signal processing, geophysics, computerized tomography, heat propagation, biomedical and optical imaging, groundwater modeling, and many others \cite{aster,engl93,engl00,hansen10,kaipio,kern,kirsch,natterer,vogel02}. The right-hand side $b=b_{true}+e$ is assumed to be contaminated by a Gaussian white noise $e$, caused by measurement, modeling or discretization errors, where $b_{true}$ is noise-free and $\|e\|<\|b_{true}\|$. Because of the presence of noise $e$ and the extreme ill-conditioning of $A$, the naive solution $x_{naive}=A^{\dagger}b$ of \eqref{eq1} generally bears no relation to the true solution $x_{true}=A^{\dagger}b_{true}$, where $\dagger$ denotes the Moore-Penrose inverse of a matrix. Therefore, we must use regularization to extract as good an approximation to $x_{true}$ as possible.
For a Gaussian white noise $e$, throughout the paper, we always assume that $b_{true}$ satisfies the discrete Picard condition $\|A^{\dagger}b_{true}\|\leq C$ with some constant $C$ for $\|A^{\dagger}\|$ arbitrarily large \cite{aster,gazzola15,hansen90,hansen90b,hansen98,hansen10,kern}. Without loss of generality, assume that $Ax_{true}=b_{true}$. Then a dominating regularization approach is to solve the problem \begin{equation}\label{posed} \min\limits_{x\in \mathbb{R}^{n}}\|Lx\| \ \ \mbox{subject to}\ \ \|Ax-b\|\leq \tau\|e\| \end{equation} with $\tau$ slightly larger than $1$ \cite{hansen98,hansen10}, where $L$ is a regularization matrix whose suitable choice is based on a priori information on $x_{true}$. In this paper, we are concerned with the case $L=I$ in \eqref{posed}, which corresponds to a 2-norm filtering regularization problem. Let \begin{equation}\label{eqsvd} A=U\left(\begin{array}{c} \Sigma \\ \mathbf{0} \end{array}\right) V^{T} \end{equation} be the singular value decomposition (SVD) of $A$, where $U = (u_1,u_2,\ldots,u_m)\in\mathbb{R}^{m\times m}$ and $V = (v_1,v_2,\ldots,v_n)\in\mathbb{R}^{n\times n}$ are orthogonal, $\Sigma = {\rm diag} (\sigma_1,\sigma_2,\ldots,\sigma_n)\in\mathbb{R}^{n\times n}$ with the singular values $\sigma_1>\sigma_2 >\cdots >\sigma_n>0$ assumed to be simple, the superscript $T$ denotes the transpose of a matrix or vector, and $\mathbf{0}$ denotes a zero matrix. With \eqref{eqsvd}, we have \begin{equation}\label{eq4} x_{naive}=\sum\limits_{i=1}^{n}\frac{u_i^{T}b}{\sigma_i}v_i = \sum\limits_{i=1}^{n}\frac{u_i^{T}b_{true}}{\sigma_i}v_i + \sum\limits_{i=1}^{n}\frac{u_i^{T}e}{\sigma_i}v_i =x_{true}+\sum\limits_{i=1}^{n}\frac{u_i^{T}e}{\sigma_i}v_i \end{equation} and $\|x_{true}\|=\|A^{\dagger}b_{true}\|= \left(\sum_{i=1}^n\frac{|u_i^Tb_{true}|^2}{\sigma_i^2}\right)^{1/2}$.
The discrete Picard condition means that, on average, the Fourier coefficient $|u_i^{T}b_{true}|$ decays faster than $\sigma_i$, which results in the following popular model that is used throughout Hansen's books \cite{hansen98,hansen10} and the references therein as well as \cite{jia18a,jia18b}: \begin{equation}\label{picard} | u_i^T b_{true}|=\sigma_i^{1+\beta},\ \ \beta>0,\ i=1,2,\ldots,n, \end{equation} where $\beta$ is a model parameter that controls the decay rate of $| u_i^T b_{true}|$. The covariance matrix of the Gaussian white noise $e$ is $\eta^2 I$, and the expected values $\mathcal{E}(\|e\|^2)=m \eta^2$ and $\mathcal{E}(|u_i^Te|^2)=\eta^2,\,i=1,2,\ldots,n$, so that $\|e\|\approx \sqrt{m}\eta$ and $|u_i^Te|\approx \eta,\ i=1,2,\ldots,n$. \eqref{eq4} and \eqref{picard} show that, for large singular values, $|{u_i^{T}b_{true}}|/{\sigma_i}$ is dominant relative to $|u_i^{T}e|/{\sigma_i}$. Once $| u_i^T b_{true}| \leq | u_i^T e|$ from some $i$ onwards, the noise $e$ dominates $| u_i^T b|$, and the terms $\frac{| u_i^T b|}{\sigma_i}\approx \frac{|u_i^{T}e|}{\sigma_i}$ overwhelm $x_{true}$ for small singular values and must be dampened. Therefore, the transition point $k_0$ is such that \begin{equation}\label{picard1} | u_{k_0}^T b|\approx | u_{k_0}^T b_{true}|> | u_{k_0}^T e|\approx \eta, \ | u_{k_0+1}^T b| \approx | u_{k_0+1}^Te| \approx \eta; \end{equation} see \cite[p.42, 98]{hansen10} and \cite[p.70-1]{hansen98}.
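To make the model \eqref{picard} and the transition point \eqref{picard1} concrete, the following small script (our own illustration, not part of the analysis; all sizes and parameter values are ad hoc choices) builds a synthetic severely ill-posed problem with $\sigma_i=\rho^{-i}$ and $|u_i^Tb_{true}|=\sigma_i^{1+\beta}$, and locates $k_0$ as the last index at which $|u_i^Tb_{true}|$ still exceeds the noise level $\eta$:

```python
# Illustration only: a synthetic severely ill-posed problem following the
# model |u_i^T b_true| = sigma_i^{1+beta} of (picard), with sigma_i = rho^{-i}.
import numpy as np

rng = np.random.default_rng(0)
m = n = 64
rho, beta, eta = 1.3, 0.5, 1e-6

# Random orthogonal U, V via QR; prescribed singular values.
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = rho ** -np.arange(1, n + 1)
A = U[:, :n] @ np.diag(sigma) @ V.T

b_true = U[:, :n] @ sigma ** (1 + beta)   # so that u_i^T b_true = sigma_i^{1+beta}
b = b_true + eta * rng.standard_normal(m)

# Transition index k0 of (picard1): last i with |u_i^T b_true| > eta.
coeffs_true = np.abs(U[:, :n].T @ b_true)
k0 = int(np.max(np.nonzero(coeffs_true > eta)[0])) + 1
print("k0 =", k0)
```

With these parameters the crossover $\sigma_{k}^{1+\beta}\approx\eta$ happens around $k=35$; changing $\rho$, $\beta$ or $\eta$ moves $k_0$ accordingly.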
The truncated SVD (TSVD) method \cite{hansen90,hansen98,hansen10} is a reliable and commonly used method for solving \eqref{posed} of small to modest size, and it solves a sequence of problems \begin{equation}\label{tsvd} \min\|x\| \ \ \mbox{subject to}\ \ \|A_kx-b\|=\min \end{equation} starting with $k=1$ onwards, where $A_k=U_k\Sigma_k V_k^T$ is a best rank $k$ approximation to $A$ with respect to the 2-norm with $U_k=(u_1,\ldots,u_k)$, $V_k=(v_1,\ldots,v_k)$ and $\Sigma_k= {\rm diag}(\sigma_1,\ldots,\sigma_k)$; it holds that $\|A-A_k\|=\sigma_{k+1}$ \cite[p.12]{bjorck96}, and $ x_{k}^{tsvd}=A_k^{\dagger}b $ solves \eqref{tsvd}, called the TSVD regularized solution. For Gaussian white noise $e$, it is known from \cite[p.70-1]{hansen98} and \cite[p.71,86-8,95]{hansen10} that $x_{k_0}^{tsvd}$ is the 2-norm filtering best TSVD regularized solution of \eqref{eq1}, i.e., $x_{k_0}^{tsvd}$ has the minimal 2-norm error $\|x_{true}-x_{k_0}^{tsvd}\|=\min_{k=1,2,\ldots,n}\|x_{true}-x_k^{tsvd}\|$. The index $k$ plays the role of the regularization parameter in the TSVD method. It has been observed and justified that $x_{k_0}^{tsvd}$ is essentially a 2-norm filtering best possible solution of \eqref{eq1}; see \cite{hansen90b}, \cite[p.109-11]{hansen98}, \cite[Sections 4.2 and 4.4]{hansen10} and \cite{varah79}. We refer to \cite{jia18a} for general elaborations. As a result, we can take $x_{k_0}^{tsvd}$ as the standard reference when assessing the regularization ability of a 2-norm filtering regularization method. For large $A$, the TSVD method is generally prohibitively expensive, and only iterative regularization methods are appealing. Krylov iterative solvers have formed a major class of methods \cite{aster,engl00,gilyazov,hanke95,hansen98,hansen10,kirsch}.
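The behavior of the TSVD error as a function of the truncation index can be illustrated numerically (our own sketch on a synthetic problem of the form \eqref{picard}; the sizes and parameters below are arbitrary choices): the error $\|x_{true}-x_k^{tsvd}\|$ first decreases and then grows sharply once the noise-dominated terms $u_i^Te/\sigma_i$ enter.

```python
# Sketch (our construction): TSVD solutions x_k = A_k^dagger b on a synthetic
# severely ill-posed problem; the error ||x_true - x_k|| is typically smallest
# near the transition index k0.
import numpy as np

rng = np.random.default_rng(1)
m = n = 64
rho, beta, eta = 1.3, 0.5, 1e-6

U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = rho ** -np.arange(1, n + 1)
A = U @ np.diag(sigma) @ V.T
x_true = V @ sigma ** beta                  # A x_true = b_true
b = A @ x_true + eta * rng.standard_normal(m)

c = U.T @ b                                  # Fourier coefficients u_i^T b
errors = []
for k in range(1, n + 1):
    x_k = V[:, :k] @ (c[:k] / sigma[:k])     # x_k^{tsvd} = A_k^dagger b
    errors.append(np.linalg.norm(x_true - x_k))
k_best = int(np.argmin(errors)) + 1
print("best truncation index:", k_best)
```

The best index typically lands near the transition point of \eqref{picard1}, while the full-rank (naive) solution $k=n$ is far worse.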
Specifically, the CGLS method \cite{golub89,hestenes} and its mathematically equivalent LSQR method \cite{paige82}, the CGME method \cite{bjorck96,bjorck15,craig,hanke95,hanke01} and the LSMR method \cite{bjorck15,chung15,fong} have been commonly used. These methods are deterministic 2-norm filtering regularization methods, have general regularizing effects, and exhibit semi-convergence \cite[p.89]{natterer}; see also \cite[p.314]{bjorck96}, \cite[p.733]{bjorck15}, \cite[p.135]{hansen98} and \cite[p.110]{hansen10}: The iterates first converge to $x_{true}$; then the noise $e$ starts to deteriorate the iterates, so that they begin to diverge from $x_{true}$ and instead converge to $x_{naive}$. The iteration number plays the role of the regularization parameter in iterative regularization methods. The behavior of ill-posed problems and solvers depends on the decay rate of the $\sigma_j$. Hofmann \cite{hofmann86} has characterized the degree of ill-posedness of \eqref{eq1} as follows: If $\sigma_j=\mathcal{O}(\rho^{-j})$ with $\rho>1$, $j=1,2,\ldots,n$, then \eqref{eq1} is severely ill-posed; if $\sigma_j=\mathcal{O}(j^{-\alpha})$, then \eqref{eq1} is mildly or moderately ill-posed for $\frac{1}{2}<\alpha\le1$ or $\alpha>1$, respectively. This definition has been widely used \cite{aster,engl00,hansen98,hansen10}. The requirement $\alpha>\frac{1}{2}$ does not appear in \cite{hofmann86} and is explicitly added in \cite{huangjia,jia18a}; it is always met for a linear compact operator equation \cite{hanke93,hansen98}. Hanke and Hansen \cite{hanke93} point out that a strict proof of the regularizing properties of conjugate gradients is extremely difficult; see also \cite{hansen07}. The regularizing effects of CGLS, LSQR and CGME have been intensively studied; see, e.g., \cite{aster,eicke,firro97,gilyazov,hanke95,hanke01,hansen98,hansen10,hps16, hps09,huangjia,jia18a,jia18b,paige06,vorst90}. It has long been known (cf.
\cite{hanke93,hansen98,hansen07,hansen10}) that if the singular values of the projection matrices involved in LSQR, called the Ritz values, approximate the large singular values of $A$ in natural order, then LSQR has the same regularization ability as the TSVD method, that is, the two methods can compute 2-norm filtering best regularized solutions with the same accuracy. As we will see clearly, the same results hold for CGME and LSMR when the singular values of the projection matrices approximate the large singular values of $A$ and $A^TA$ in natural order, respectively. If a 2-norm filtering regularized solution of \eqref{eq1} is as accurate as $x_{k_0}^{tsvd}$, it is called a 2-norm filtering best possible regularized solution. If the 2-norm filtering regularized solution obtained by a regularization method at semi-convergence is such a best possible one, then the solver is said to have the {\em full} regularization. Otherwise, the solver has only the {\em partial} regularization. This definition is introduced in \cite{huangjia,jia18a}. In terms of it, a fundamental question posed in \cite{huangjia,jia18a} is: {\em Do CGLS, LSQR, CGME and LSMR have the full or partial regularization for severely, moderately and mildly ill-posed problems?} Actually, this question has been receiving high attention for CGLS and LSQR. For the case that the singular values $\sigma_i$ are simple, the author in \cite{jia18a} has given accurate estimates for the 2-norm distances between the underlying $k$ dimensional Krylov subspace and the $k$ dimensional dominant right singular subspace $span\{V_k\}$ of $A$ for severely, moderately and mildly ill-posed problems.
On the basis of \cite{jia18a}, the author in \cite{jia18b} has proved that, for LSQR, the $k$ Ritz values converge to the $k$ large singular values of $A$ in natural order and Lanczos bidiagonalization always generates a near best rank $k$ approximation until $k=k_0$ for severely and moderately ill-posed problems with suitable $\rho>1$ and $\alpha>1$, meaning that LSQR and CGLS have the full regularization. However, if such desired properties fail to hold, it has been theoretically unknown whether LSQR has the full or partial regularization. Nevertheless, numerical experiments on many ill-posed problems have demonstrated that LSQR always has the full regularization \cite{jia18a,jia18b}. In this paper, we analyze the regularization of CGME and LSMR under the assumption that all the singular values $\sigma_i$ are simple. We establish a number of results, and prove that the regularization ability of CGME is generally inferior to that of LSQR, that is, the 2-norm filtering best regularized solutions obtained by CGME at semi-convergence are generally less accurate than those obtained by LSQR. Specifically, we derive the filtered SVD expansion of CGME iterates, by which we prove that the semi-convergence of CGME always occurs no later than that of LSQR and can be much earlier than the latter. In the meantime, we show how to extract a rank $k$ approximation from the rank $k+1$ approximation to $A$ generated in CGME at iteration $k$, which is as accurate as the rank $k$ approximation in LSQR. Exploiting such a rank $k$ approximation, we propose a modified CGME (MCGME) method whose regularization ability is shown to be very comparable to that of LSQR. For LSMR, we present a number of results and prove that its regularization ability is as good as that of LSQR and the two methods compute the 2-norm filtering best regularized solutions with essentially the same accuracy. We also show that the semi-convergence of LSMR always occurs no sooner than that of LSQR.
As a windfall, making use of our analysis approach for CGME, we improve a fundamental bound, Theorem 9.3 presented in Halko {\em et al.} \cite{halko11}, for the accuracy of the truncated rank $k$ approximation to $A$ generated by randomized algorithms, which have formed a highly active research topic and have been used in numerous disciplines over the years. As remarked by Halko {\em et al.} in \cite{halko11} (cf. Remark 9.1 there), their bound appears ``{\em conservative, but a complete theoretical understanding lacks.}'' Our new bounds for the approximation accuracy are not only unconditionally sharper than theirs but also reveal how the truncation step damages the accuracy of the rank $k$ approximation. The paper is organized as follows. In Section~\ref{methods}, we review LSQR, CGME and LSMR. In Section~\ref{lsqr}, we briefly state some results on LSQR in \cite{jia18a,jia18b} and take LSQR as the reference to assess the regularization ability of CGME and LSMR. In Section~\ref{cgme}, we derive a number of regularization properties of CGME and propose the MCGME method. In Section~\ref{randomappro}, we consider the accuracy of the truncated rank $k$ randomized approximation \cite{halko11} and present sharper bounds. In Section~\ref{lsmr}, we study the regularization ability of LSMR. In Section~\ref{numer}, we report numerical experiments to confirm our theory. We conclude the paper in Section~\ref{conclu}. Throughout the paper, we denote by $\mathcal{K}_{k}(C, w)= span\{w,Cw,\ldots,C^{k-1}w\}$ the $k$ dimensional Krylov subspace generated by the matrix $\mathit{C}$ and the vector $\mathit{w}$, and by the bold letter $\mathbf{0}$ the zero matrix with orders clear from the context.
\section{The LSQR, CGME and LSMR algorithms}\label{methods} These three algorithms are all based on the Lanczos bidiagonalization process, which computes two orthonormal bases $\{q_1,q_2,\dots,q_k\}$ and $\{p_1,p_2,\dots,p_{k+1}\}$ of $\mathcal{K}_{k}(A^{T}A,A^{T}b)$ and $\mathcal{K}_{k+1}(A A^{T},b)$ for $k=1,2,\ldots,n$, respectively. We describe the process as Algorithm 1. {\bf Algorithm 1: \ $k$-step Lanczos bidiagonalization process} \begin{remunerate} \item Take $ p_1=b/\|b\| \in \mathbb{R}^{m}$, and define $\beta_1{q_0}=\mathbf{0}$. \item For $j=1,2,\ldots,k$ \begin{romannum} \item $r = A^{T}p_j - \beta_j{q_{j-1}}$ \item $\alpha_j = \|r\|;q_j = r/\alpha_j$ \item $ z = Aq_j - \alpha_j{p_{j}}$ \item $\beta_{j+1} = \|z\|;p_{j+1} = z/\beta_{j+1}.$ \end{romannum} \end{remunerate} Algorithm 1 can be written in the matrix form \begin{align} AQ_k&=P_{k+1}B_k,\label{eqmform1}\\ A^{T}P_{k+1}&=Q_{k}B_k^T+\alpha_{k+1}q_{k+1}(e_{k+1}^{(k+1)})^{T},\label{eqmform2} \end{align} where $e_{k+1}^{(k+1)}$ denotes the $(k+1)$-th canonical basis vector of $\mathbb{R}^{k+1}$, $P_{k+1}=(p_1,p_2,\ldots,p_{k+1})$, $Q_k=(q_1,q_2,\ldots,q_k)$ and \begin{equation}\label{bk} B_k = \left(\begin{array}{cccc} \alpha_1 & & &\\ \beta_2 & \alpha_2 & &\\ & \beta_3 &\ddots & \\& & \ddots & \alpha_{k} \\ & & & \beta_{k+1} \end{array}\right)\in \mathbb{R}^{(k+1)\times k}. \end{equation} It is known from \eqref{eqmform1} that \begin{equation}\label{Bk} B_k=P_{k+1}^TAQ_k. \end{equation} Algorithm 1 cannot break down before step $n$ when $\sigma_i,\ i=1,2,\ldots,n$, are simple since $b$ is supposed to have nonzero components in the directions of $u_i,\ i=1,2,\ldots,n$. The singular values $\theta_i^{(k)},\ i=1,2,\ldots,k$ of $B_k$, called the Ritz values of $A$ with respect to the left and right subspaces $span\{P_{k+1}\}$ and $span\{Q_k\}$, are all simple. Write $\mathcal{V}_k^R=\mathcal{K}_k(A^TA,A^Tb)$ and $\beta_1=\|b\|$. 
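For concreteness, Algorithm 1 can be transcribed directly into floating-point code. The sketch below (our own; reorthogonalization is omitted, which is harmless for the few steps run here) checks the relation \eqref{eqmform1}, $AQ_k=P_{k+1}B_k$, and the orthonormality of $P_{k+1}$ on a random matrix:

```python
# A direct transcription of Algorithm 1 (Lanczos bidiagonalization),
# verifying A Q_k = P_{k+1} B_k numerically on a random dense matrix.
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 30, 20, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

P = [b / np.linalg.norm(b)]                 # p_1 = b / ||b||
Q, alphas, betas = [], [], []
q_prev, beta = np.zeros(n), 0.0             # beta_1 q_0 = 0
for j in range(k):
    r = A.T @ P[-1] - beta * q_prev
    alpha = np.linalg.norm(r); q = r / alpha
    z = A @ q - alpha * P[-1]
    beta = np.linalg.norm(z); p = z / beta
    Q.append(q); P.append(p)
    alphas.append(alpha); betas.append(beta)
    q_prev = q

Qk, Pk1 = np.column_stack(Q), np.column_stack(P)
Bk = np.zeros((k + 1, k))                   # lower bidiagonal B_k of (bk)
for j in range(k):
    Bk[j, j] = alphas[j]                    # alpha_{j+1}
    Bk[j + 1, j] = betas[j]                 # beta_{j+2}
print(np.allclose(A @ Qk, Pk1 @ Bk))        # relation (eqmform1)
```

The relation holds to machine precision because each column identity $Aq_j=\alpha_j p_j+\beta_{j+1}p_{j+1}$ is enforced constructively by the recurrence.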
At iteration $k$, LSQR \cite{paige82} solves $$ \|Ax_k^{lsqr}-b\|=\min_{x\in \mathcal{V}_k^R} \|Ax-b\| $$ for the iterate \begin{equation}\label{yk} x_k^{lsqr}=Q_ky_k^{lsqr} \ \ \mbox{with}\ \ y_k^{lsqr}=\arg\min\limits_{y\in \mathbb{R}^{k}}\|B_ky-\beta_1 e_1^{(k+1)}\| =\beta_1 B_k^{\dagger} e_1^{(k+1)}, \end{equation} where $e_1^{(k+1)}$ is the first canonical basis vector of $\mathbb{R}^{k+1}$, and $\|Ax_k^{lsqr}-b\|=\|B_ky_k^{lsqr}-\beta_1 e_1^{(k+1)}\|$ decreases monotonically with respect to $k$. CGME \cite{bjorck15,hanke95,hanke01,hps16,hps09} is the CG method implicitly applied to $\min\|AA^Ty-b\|$ or $AA^Ty=b$ with $x=A^Ty$, and it solves the problem $$ \|x_{naive}-x_k^{cgme}\|=\min_{x\in \mathcal{V}_k^R}\|x_{naive}-x\| $$ for the iterate $x_k^{cgme}$. The error norm $\|x_{naive}-x_k^{cgme}\|$ decreases monotonically with respect to $k$. Let $\bar{B}_k\in \mathbb{R}^{k\times k}$ be the matrix consisting of the first $k$ rows of $B_k$, i.e., \begin{equation}\label{bbar} \bar{B}_k=P_k^TAQ_k. \end{equation} Then the CGME iterate \begin{equation}\label{ykcgme} x_k^{cgme}=Q_ky_k^{cgme} \ \ \mbox{with} \ \ y_k^{cgme}=\beta_1 \bar{B}_k^{-1}e_1^{(k)} \end{equation} and $\|Ax_k^{cgme}-b\|=\beta_{k+1}|(e_k^{(k)})^T y_k^{cgme}|$ with $e_k^{(k)}$ the $k$-th canonical basis vector of $\mathbb{R}^{k}$. LSMR \cite{bjorck15,fong} is mathematically equivalent to MINRES \cite{paige75} applied to the normal equation $A^TAx=A^Tb$ of \eqref{eq1}, and it solves $$ \|A^T(b-A x_k^{lsmr})\|=\min_{x\in \mathcal{V}_k^R}\|A^T(b-A x)\| $$ for the iterate $x_k^{lsmr}$. The residual norm $\|A^T(b-Ax_k^{lsmr})\|$ of the normal equation decreases monotonically with respect to $k$, and the iterate \begin{equation}\label{yklsmr} x_k^{lsmr}=Q_ky_k^{lsmr} \ \ \mbox{with}\ \ y_k^{lsmr}=\arg\min\limits_{y\in \mathbb{R}^{k}}\|(B_k^TB_k,\alpha_{k+1} \beta_{k+1}e_k^{(k)})^Ty-\alpha_1\beta_1 e_1^{(k+1)}\|.
\end{equation} \section{Some results on LSQR in \cite{jia18a,jia18b}} \label{lsqr} From $\beta_1 e_1^{(k+1)}=P_{k+1}^T b$ and \eqref{yk} we have \begin{equation}\label{xk} x_k^{lsqr}=Q_k B_k^{\dagger} P_{k+1}^Tb, \end{equation} which is the minimum 2-norm solution to the problem that perturbs $A$ in \eqref{eq1} to its rank $k$ approximation $P_{k+1}B_k Q_k^T$. Recall that $\|A-A_k\|=\sigma_{k+1}$. Analogous to \eqref{tsvd}, LSQR now solves a sequence of problems \begin{equation}\label{lsqrreg} \min\|x\| \ \ \mbox{ subject to }\ \ \|P_{k+1}B_kQ_k^Tx-b\|=\min \end{equation} for $x_k^{lsqr}$ starting with $k=1$ onwards, where $A$ in \eqref{eq1} is replaced by a rank $k$ approximation $P_{k+1}B_kQ_k^T$ of it. Therefore, if $P_{k+1}B_k Q_k^T$ is a near best rank $k$ approximation to $A$ with an approximation accuracy close to $\sigma_{k+1}$ and the singular values $\theta_i^{(k)},\ i=1,2,\ldots,k$ of $B_k$ approximate the $k$ large $\sigma_i$ in natural order for $k=1,2,\ldots,k_0$, then LSQR has the same regularization ability as the TSVD method and thus has the full regularization. See \cite{jia18a} for more elaborations. The analysis on the TSVD method and the Tikhonov regularization method \cite{hansen98,hansen10} shows that the core requirement on a regularization method is to acquire the $k_0$ dominant SVD components of $A$ and meanwhile suppress the remaining $n-k_0$ SVD components. Therefore, the more accurate the rank $k$ approximation to $A$ is and the better the $k$ non-zero singular values of the projection matrix approximate some of the $k_0$ large singular values of $A$, the better regularization ability the method has, so that the best regularized solution obtained by it is more accurate. Define \begin{equation}\label{gammak} \gamma_k^{lsqr} = \|A-P_{k+1}B_kQ_k^T\|, \end{equation} which measures the accuracy of the rank $k$ approximation $P_{k+1}B_kQ_k^T$ to $A$ involved in LSQR.
Since the best rank $k$ approximation $A_k$ satisfies $\|A-A_k\|=\sigma_{k+1}$, we have $$ \gamma_k^{lsqr}\geq \sigma_{k+1}. $$ The author in \cite{jia18b} introduces the definition of a near best rank $k$ approximation to $A$: For LSQR, $P_{k+1}B_kQ_k^T$ is called a near best rank $k$ approximation to $A$ if $\gamma_k^{lsqr}$ is closer to $\sigma_{k+1}$ than to $\sigma_k$: \begin{equation}\label{near} \sigma_{k+1}\leq \gamma_k^{lsqr}<\frac{\sigma_k+\sigma_{k+1}}{2}. \end{equation} Based on the accurate estimates established in \cite{jia18a} for the 2-norm distances between the underlying Krylov subspace $\mathcal{V}_k^R$ and the $k$ dimensional dominant right singular subspace $span\{V_k\}$ for severely, moderately and mildly ill-posed problems, the author \cite{jia18b} has derived accurate estimates for $\gamma_k^{lsqr}$ and a number of approximation properties of $\theta_i^{(k)},\ i=1,2,\ldots,k$ for the three kinds of ill-posed problems. The results have shown that, for severely and moderately ill-posed problems with suitable $\rho>1$ and $\alpha>1$ and for $k=1,2,\ldots,k_0$, $P_{k+1}B_kQ_k^T$ must be a near best rank $k$ approximation to $A$, and the $k$ Ritz values $\theta_i^{(k)}$ approximate the large singular values $\sigma_i$ of $A$ in natural order. This means that LSQR has the full regularization for these two kinds of problems with suitable $\rho>1$ and $\alpha>1$. However, for moderately ill-posed problems with $\alpha>1$ not large enough and for mildly ill-posed problems, $P_{k+1}B_kQ_k^T$ is generally not a near best rank $k$ approximation, and the $k$ Ritz values $\theta_i^{(k)}$ do not approximate the large singular values of $A$ in natural order for some $k\leq k^*$.
In particular, the author \cite[Theorem 5.1]{jia18b} has proved the following three results: \begin{align} \gamma_k^{lsqr}&=\|G_k\| \label{gk} \end{align} with \begin{align}\label{gk1} G_k&=\left(\begin{array}{cccc} \alpha_{k+1} & & & \\ \beta_{k+2}& \alpha_{k+2} & &\\ & \beta_{k+3} &\ddots & \\& & \ddots & \alpha_{n} \\ & & & \beta_{n+1} \end{array}\right)\in \mathbb{R}^{(n-k+1)\times (n-k)}, \end{align} \begin{align} \alpha_{k+1}&<\gamma_k^{lsqr},\ \beta_{k+2}<\gamma_k^{lsqr},\ k=1,2,\ldots,n-1,\label{alphagamma}\\ \gamma_{k+1}^{lsqr}&<\gamma_k^{lsqr},\ k=1,2,\ldots,n-2.\label{monto} \end{align} This notation and these results will be used later. \section{The regularization of CGME}\label{cgme} Note that $P_k^Tb=\beta_1e_1^{(k)}$. We obtain \begin{equation}\label{cgmesolution} x_k^{cgme}=Q_k\bar{B}_k^{-1}P_k^Tb. \end{equation} Therefore, analogous to \eqref{tsvd} and \eqref{lsqrreg}, CGME solves a sequence of problems \begin{equation}\label{cgmereg} \min\|x\| \ \ \mbox{ subject to }\ \ \|P_k\bar{B}_kQ_k^T x-b\|=\min \end{equation} for the regularized solution $x_k^{cgme}$ starting with $k=1$ onwards, where $A$ in \eqref{eq1} is replaced by a rank $k$ approximation $P_k\bar{B}_kQ_k^T$ of it. Just as for LSQR, if $P_k\bar{B}_kQ_k^T$ is a near best rank $k$ approximation to $A$ and the $k$ singular values of $\bar{B}_k$ approximate the large ones of $A$ in natural order for $k=1,2,\ldots,k_0$, then CGME has the full regularization. By \eqref{eqmform1}, \eqref{eqmform2} and \eqref{Bk}, the rank $k$ approximation involved in LSQR is \begin{equation}\label{lsqrappr} P_{k+1}B_kQ_k^T=AQ_kQ_k^T. \end{equation} By \eqref{gammak}, we have $ \gamma_k^{lsqr}=\|A(I-Q_kQ_k^T)\|. $ For CGME, by \eqref{eqmform2} and \eqref{bbar}, we obtain \begin{align} P_{k+1}P_{k+1}^TA &= P_{k+1}(B_kQ_k^T+\alpha_{k+1}e_{k+1}^{(k+1)}q_{k+1}^T)\notag\\ &=P_{k+1}(B_k, \alpha_{k+1}e_{k+1}^{(k+1)})Q_{k+1}^T \notag\\ &=P_{k+1}\bar{B}_{k+1}Q_{k+1}^T.
\label{leftbidiag} \end{align} Therefore, $x_k^{cgme}$ is the solution to \eqref{cgmereg} in which the rank $k$ approximation to $A$ is $P_k\bar{B}_kQ_k^T=P_kP_k^TA$, whose approximation accuracy is \begin{equation}\label{cgmeacc} \gamma_k^{cgme}=\|A-P_k\bar{B}_kQ_k^T\|=\|(I-P_kP_k^T)A\|. \end{equation} \begin{theorem}\label{cgmeappr} For the rank $k$ approximations $P_kP_k^TA=P_k\bar{B}_kQ_k^T$ to $A$, $k=1,2,\ldots,n-1$, with the definition $\gamma_0^{lsqr}=\|A\|$ we have \begin{eqnarray} &\gamma_k^{lsqr}<\gamma_k^{cgme}< \gamma_{k-1}^{lsqr}&, \label{cgmelowup} \\ &\gamma_{k+1}^{cgme}< \gamma_k^{cgme}.& \label{mono} \end{eqnarray} \end{theorem} {\em Proof. } We give two proofs of the upper bound in \eqref{cgmelowup}. The first is as follows. Since $P_{k+1}P_{k+1}^T(I-P_{k+1}P_{k+1}^T)=\mathbf{0}$, from \eqref{eqmform2} we obtain \begin{align} (\gamma_k^{lsqr})^2 &=\|A-P_{k+1}B_kQ_k^T\|^2 \notag\\ &=\|P_{k+1}P_{k+1}^TA-P_{k+1}B_kQ_k^T+(I-P_{k+1}P_{k+1}^T)A\|^2\notag\\ &=\max_{\|y\|=1}\|\left( (P_{k+1}P_{k+1}^TA-P_{k+1}B_kQ_k^T)+(I-P_{k+1}P_{k+1}^T)A\right)y\|^2\notag\\ &=\max_{\|y\|=1} \|P_{k+1}P_{k+1}^T(P_{k+1}P_{k+1}^TA- P_{k+1}B_kQ_k^T)y+(I-P_{k+1}P_{k+1}^T)Ay\|^2 \notag\\ &=\max_{\|y\|=1}\left(\|P_{k+1}P_{k+1}^T(P_{k+1}P_{k+1}^TA- P_{k+1}B_kQ_k^T)y\|^2+\|(I-P_{k+1}P_{k+1}^T)Ay\|^2\right)\notag\\ &=\max_{\|y\|=1}\left(\|P_{k+1}(P_{k+1}^TA-B_kQ_k^T)y\|^2+ \|(I-P_{k+1}P_{k+1}^T)Ay\|^2\right)\notag\\ &=\max_{\|y\|=1}\left(\|(P_{k+1}^TA-B_kQ_k^T)y\|^2+ \|(I-P_{k+1}P_{k+1}^T)Ay\|^2\right)\notag\\ &=\max_{\|y\|=1}\left(\alpha_{k+1}^2|(e_{k+1}^{(k+1)})^Ty|^2+ \|(I-P_{k+1}P_{k+1}^T)Ay\|^2\right)\notag\\ &> \max_{\|y\|=1} \|(I-P_{k+1}P_{k+1}^T)Ay\|^2 \notag\\ &=\|(I-P_{k+1}P_{k+1}^T)A\|^2=(\gamma_{k+1}^{cgme})^2,\notag \end{align} which is the upper bound in \eqref{cgmelowup} by replacing the index $k+1$ with $k$. 
Taking $k=n$ in \eqref{Bk} and augmenting $P_{n+1}$ such that $P=(P_{n+1},\widehat{P})\in \mathbb{R}^{m\times m}$ is orthogonal, we have \begin{equation}\label{fulllb} P^TAQ_n=\left(\begin{array}{c} B_n\\ \mathbf{0} \end{array} \right), \end{equation} where all the entries $\alpha_i$ and $\beta_{i+1}$, $i=1,2,\ldots,n$, of $B_n$ are positive, and $Q_n\in \mathbb{R}^{n\times n}$ is orthogonal. Then by the orthogonal invariance of the 2-norm we obtain \begin{equation}\label{cgmeapp} \gamma_k^{cgme}=\|A-P_k\bar{B}_kQ_k^T\|=\|P^T(A-P_k\bar{B}_kQ_k^T)Q_n\| =\|(\beta_{k+1}e_1,G_k)\| \end{equation} with $G_k$ defined by \eqref{gk1}. It is straightforward to justify that the singular values of $G_k\in \mathbb{R}^{(n-k+1)\times (n-k)}$ {\em strictly} interlace those of $(\beta_{k+1}e_1,G_k)\in \mathbb{R}^{(n-k+1)\times (n-k+1)}$ by noting that $(\beta_{k+1}e_1,G_k)^T (\beta_{k+1}e_1,G_k)$ is an {\em unreduced} symmetric tridiagonal matrix, from which and $\|G_k\|=\gamma_k^{lsqr}$ the lower bound of \eqref{cgmelowup} follows. Based on \eqref{cgmeapp}, we can also give the second proof of the upper bound in \eqref{cgmelowup}. Observe from \eqref{gk1} that $(\beta_{k+1}e_1,G_k)$ is the matrix deleting the first row of $G_{k-1}$. Applying the strict interlacing property of singular values to $(\beta_{k+1}e_1,G_k)$ and $G_{k-1}$, we obtain $\gamma_{k-1}^{lsqr}=\|G_{k-1}\|>\|(\beta_{k+1}e_1,G_k)\|=\gamma_k^{cgme}$, which yields the upper bound of \eqref{cgmelowup}. From \eqref{cgmeapp}, notice that $(\beta_{k+2}e_1,G_{k+1})$ is the matrix deleting the first row of $(\beta_{k+1}e_1,G_k)$ and the first column, which is {\em zero}, of the resulting matrix. Applying the strict interlacing property of singular values to $(\beta_{k+2}e_1,G_{k+1})$ and $(\beta_{k+1}e_1,G_k)$ establishes \eqref{mono}. \qquad\endproof \eqref{cgmelowup} indicates that $P_kP_k^TA=P_k\bar{B}_kQ_k^T$ is definitely a less accurate rank $k$ approximation to $A$ than $AQ_kQ_k^T=P_{k+1}B_kQ_k^T$ in LSQR.
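Theorem~\ref{cgmeappr} is easy to spot-check numerically. The sketch below (our own, on a random dense matrix; Lanczos bidiagonalization with full reorthogonalization for stability) computes $\gamma_k^{lsqr}=\|A(I-Q_kQ_k^T)\|$ and $\gamma_k^{cgme}=\|(I-P_kP_k^T)A\|$ and verifies the strict ordering $\gamma_k^{lsqr}<\gamma_k^{cgme}<\gamma_{k-1}^{lsqr}$ of \eqref{cgmelowup}:

```python
# Numerical spot-check (ours) of the strict bounds in (cgmelowup) on a
# random matrix: gamma_k^{lsqr} < gamma_k^{cgme} < gamma_{k-1}^{lsqr}.
import numpy as np

rng = np.random.default_rng(3)
m, n = 40, 25
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

P = [b / np.linalg.norm(b)]
Q = []
beta, q_prev = 0.0, np.zeros(n)
gam_lsqr = [np.linalg.norm(A, 2)]          # gamma_0^{lsqr} = ||A||
gam_cgme = []
for j in range(8):
    r = A.T @ P[-1] - beta * q_prev
    for q in Q: r -= (q @ r) * q           # full reorthogonalization
    alpha = np.linalg.norm(r); q_new = r / alpha; Q.append(q_new)
    z = A @ q_new - alpha * P[-1]
    for p in P: z -= (p @ z) * p
    beta = np.linalg.norm(z); P.append(z / beta)
    q_prev = q_new
    Qk = np.column_stack(Q); Pk = np.column_stack(P[:-1])
    gam_lsqr.append(np.linalg.norm(A - A @ Qk @ Qk.T, 2))   # LSQR accuracy
    gam_cgme.append(np.linalg.norm(A - Pk @ Pk.T @ A, 2))   # CGME accuracy

ok = all(gam_lsqr[k] < gam_cgme[k - 1] < gam_lsqr[k - 1] for k in range(1, 9))
print(ok)
```

Here `gam_cgme[k-1]` stores $\gamma_k^{cgme}$, so the chained comparison checks exactly the two strict inequalities of \eqref{cgmelowup} for $k=1,\ldots,8$.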
\eqref{mono} shows the strict monotonic decreasing property of $\gamma_k^{cgme}$. Moreover, keep in mind that $\gamma_k^{lsqr}\geq \sigma_{k+1}$. Then a combination of it and the results in Section~\ref{lsqr} indicates that, unlike $P_{k+1}B_kQ_k^T$ in LSQR, there is no guarantee that $P_k\bar{B}_kQ_k^T$ is a near best rank $k$ approximation to $A$ even for severely and moderately ill-posed problems, because $\gamma_k^{cgme}$ simply lies between $\gamma_k^{lsqr}$ and $\gamma_{k-1}^{lsqr}$ and there do not exist any sufficient conditions on $\rho>1$ and $\alpha>1$ that enforce $\gamma_k^{cgme}$ to be closer to $\gamma_k^{lsqr}$, let alone closer to $\sigma_{k+1}$. Therefore, based on the accuracy of the rank $k$ approximations in CGME and LSQR, we come to the conclusion that the regularization ability of CGME cannot be superior and is generally inferior to that of LSQR. Furthermore, since there is no guarantee that $P_k\bar{B}_kQ_k^T$ is a near best rank $k$ approximation for severely and moderately ill-posed problems with suitable $\rho>1$ and $\alpha>1$, CGME may not have the full regularization for these two kinds of problems. In the following we investigate the approximation behavior of the $k$ singular values $\bar{\theta}_i^{(k)}$ of $\bar{B}_k,\ k=1,2,\ldots,n$. Before proceeding, it is necessary to have a closer look at Algorithm 1 and distinguish some subtleties when $A$ is rectangular, i.e., $m>n$, and square, i.e., $m=n$, respectively. Keep in mind that Algorithm 1 does not break down before step $n$. For the rectangular case $m>n$, Algorithm 1 is exactly what is presented there, all the $\alpha_k$ and $\beta_{k+1}$ are positive, $k=1,2,\ldots,n$, and we generate $P_{n+1}$ and $Q_n$ at step $n$ and $\alpha_{n+1}=\beta_{n+2}=0$. As a consequence, by definition \eqref{leftbidiag}, we have \begin{equation}\label{bn1} \bar{B}_{n+1}=(B_n,\alpha_{n+1}e_{n+1}^{(n+1)})=(B_n,\mathbf{0}). 
\end{equation} It is known from \eqref{fulllb} that the singular values of $B_n$ are identical to the singular values $\sigma_i,\ i=1,2,\ldots,n$ of $A$. Therefore, the $n+1$ singular values of $\bar{B}_{n+1}$ are $\sigma_i,\,i=1,2,\ldots,n$ and zero. For the square case $m=n$, however, we must have $\beta_{n+1}=0$, that is, the last row of $B_n$ is zero; otherwise, we would obtain an $n\times (n+1)$ orthonormal matrix $P_{n+1}$, which is impossible since $P_n$ is already an orthogonal matrix. After Algorithm 1 is run to completion, we have $$ \bar{B}_n=P_n^TAQ_n, $$ whose singular values $\bar{\theta}_i^{(n)}=\sigma_i,\,i=1,2,\ldots,n$. By the definition \eqref{leftbidiag} of $\bar{B}_k$, from \eqref{eqmform2} and the above description, for both the rectangular and square cases we obtain \begin{equation}\label{aat} P_k^TAA^TP_k=\bar{B}_k \bar{B}_k^T,\,k=1,2,\ldots,n^*, \end{equation} with $n^*=n+1$ for $m>n$ and $n^*=n$ for $m=n$, which are unreduced symmetric tridiagonal matrices. For $m=n$, the eigenvalues of $AA^T$ are just $\sigma_i^2,\ i=1,2,\ldots,n$, all of which are simple and positive; for $m>n$, the eigenvalues of $AA^T$ are $\sigma_i^2,\, i=1,2,\ldots,n$ plus $m-n$ zeros, denoted by $\sigma_{n+1}^2=\cdots=\sigma_m^2=0$ for our later use. Therefore, by the definition of $n^*$, the eigenvalues of $\bar{B}_{n^*}\bar{B}_{n^*}^T$ are $\sigma_i^2,\ i=1,2,\ldots,n^*$. Notice that $\bar{B}_k \bar{B}_k^T$ is nothing but the projection matrix of $AA^T$ onto the $k$ dimensional Krylov subspace $\mathcal{K}_k(AA^T,b)$. More precisely, $\bar{B}_k \bar{B}_k^T$ is generated by the $k$-step symmetric Lanczos tridiagonalization process applied to $AA^T$ starting with $p_1=b/\|b\|$, and the eigenvalues of $\bar{B}_k \bar{B}_k^T$ generally approximate extreme eigenvalues of $AA^T$; see, e.g., \cite{bjorck96,bjorck15,parlett} for details. 
In particular, the smallest eigenvalue $(\bar{\theta}_k^{(k)})^2$ of $\bar{B}_k \bar{B}_k^T$ generally converges to the smallest eigenvalue $\sigma_{n^*}^2$ of $AA^T$ as $k$ increases, which is $\sigma_{n+1}^2=0$ for $m>n$ and $\sigma_n^2>0$ for $m=n$. In contrast, the smallest singular value of $B_k$ satisfies $\theta_k^{(k)}> \sigma_n$ unconditionally until $\theta_n^{(n)}=\sigma_n$. We next give a number of close relationships between $\bar{\theta}_i^{(k)}$ and $\theta_i^{(k)}$ as well as between them and the singular values $\sigma_i$ of $A$, which are crucial for comparing the regularizing effects of CGME with those of LSQR. \begin{theorem}\label{interlace} Denote by $\bar{\theta}_i^{(k)}$ and $\theta_i^{(k)},\ i=1,2,\ldots,k$ the singular values of $\bar{B}_k$ and $B_k$, respectively, labeled in decreasing order. Then \begin{align} \theta_1^{(k)}&>\bar{\theta}_1^{(k)}>\theta_2^{(k)}> \bar{\theta}_2^{(k)}> \cdots >\theta_k^{(k)}> \bar{\theta}_k^{(k)}, \ k=1,2,\ldots,n-1. \label{secondinter} \end{align} Moreover, \begin{align} \sigma_n&<\bar{\theta}_k^{(k)}<\theta_k^{(k)}<\sigma_k,\ k=1,2,\ldots,n-1 \label{thetak} \end{align} for $m=n$ and \begin{align} \sigma_n&<\theta_k^{(k)}<\sigma_k,\ k=1,2,\ldots,n-1,\label{thetak1} \\ 0&<\bar{\theta}_k^{(k)}<\theta_k^{(k)}<\sigma_k,\ k=1,2,\ldots,n-1 \label{thetak2} \end{align} for $m>n$. \end{theorem} {\em Proof.} Observe that $\bar{B}_k$ consists of the first $k$ rows of $B_k$ and all the $\alpha_k$ and $\beta_{k+1}$ are positive for $k=1,2,\ldots,n-1$. Applying the strict interlacing property of singular values to $\bar{B}_k$ and $B_k$, we obtain \eqref{secondinter}. Note that, for $A$ both rectangular and square, we have $\theta_i^{(n)}=\sigma_i,\ i=1,2,\ldots,n$. Since $B_k$ is obtained by taking the first $k$ columns of $B_n$ and deleting the last $n-k$ zero rows of the resulting matrix, applying the strict interlacing property of singular values to $B_k$ and $B_n$ (cf.
\cite[p.198, Corollary 4.4]{stewartsun}), for $k=1,2,\ldots,n-1$ we have \begin{equation}\label{interbar} \sigma_{n-k+i}<\theta_i^{(k)}<\sigma_i,\ i=1,2,\ldots,k. \end{equation} Observe that $\bar{B}_k\bar{B}_k^T,\, k=1,2,\ldots,n-1,$ are the $k\times k$ leading principal matrices of $\bar{B}_{n^*}\bar{B}_{n^*}^T$, whose eigenvalues are $\sigma_i^2,\ i=1,2,\ldots,n^*$, and they are unreduced symmetric tridiagonal matrices. Applying the strict interlacing property of eigenvalues to $\bar{B}_k \bar{B}_k^T$ and $\bar{B}_{n^*}\bar{B}_{n^*}^T$, for $k=1,2,\ldots,n-1$ we obtain $$ \sigma_{n^*-k+i}^2<(\bar{\theta}_i^{(k)})^2<\sigma_i^2,\ i=1,2,\ldots,k, $$ from which, together with the definition of $n^*$, it follows that $$ \sigma_n<\bar{\theta}_k^{(k)}<\sigma_k $$ for $m=n$ and $$ 0=\sigma_{n+1}<\bar{\theta}_k^{(k)}<\sigma_k $$ for $m>n$. The above, together with \eqref{interbar} and \eqref{secondinter}, yields \eqref{thetak}--\eqref{thetak2}. \qquad\endproof From Section~\ref{lsqr}, \eqref{thetak} and \eqref{thetak2} indicate that, unlike the $k$ singular values $\theta_i^{(k)}$ of $B_k$, which have been proved to interlace the first $k+1$ large ones of $A$ and to approximate the first $k$ ones in natural order for severely or moderately ill-posed problems with suitable $\rho>1$ or $\alpha>1$ \cite{jia18b}, the lower bound for $\bar{\theta}_k^{(k)}$ is simply $\sigma_n$ for $m=n$ and zero for $m>n$, and no better lower bound for it exists. This implies that $\bar{\theta}_k^{(k)}$ may be much smaller than $\sigma_{k+1}$ and can be as small as $\sigma_n$ for $m=n$ and arbitrarily small for $m>n$, independent of $\rho$ or $\alpha$.
In other words, the size of $\rho$ or $\alpha$ has no intrinsic effects on the size of $\bar{\theta}_k^{(k)}$, and one cannot make $\bar{\theta}_k^{(k)}$ lie between $\sigma_{k+1}$ and $\sigma_k$ by choosing $\rho$ or $\alpha$; that is, the regularizing effects of CGME have intrinsic indeterminacy for severely and moderately ill-posed problems, independent of the size of $\rho$ and $\alpha$. Therefore, CGME may or may not have the full regularization for these two kinds of problems. On the other hand, even if the $\bar{\theta}_i^{(k)}$ approximate the first $k$ large singular values $\sigma_i$ in natural order, they are less accurate than the $k$ singular values $\theta_i^{(k)}$ of $B_k$ because of \eqref{thetak} and \eqref{thetak2}. Consequently, since the $\theta_i^{(k)}$ are always correspondingly larger than the $\bar{\theta}_i^{(k)}$, the regularization ability of CGME cannot be superior and is generally inferior to that of LSQR. A final note is that, unlike for $m=n$, CGME may be at risk for $m>n$ since $\bar{\theta}_k^{(k)}$ converges to {\em zero} rather than $\sigma_n$ as $k$ increases and can be arbitrarily small, so that the projected problem $\bar{B}_ky_k^{cgme}=\beta_1 e_1^{(k)}$ may even be worse conditioned than \eqref{eq1} and $\|x_k^{cgme}\| =\|y_k^{cgme}\|$ may grow unboundedly with $k$ and exceed $\|x_{naive}\|$ for a given \eqref{eq1}. In what follows we establish more results on the regularization of CGME and gain more insight into it. It is known (see, e.g., \cite[p.146]{hansen98}) that the LSQR iterate $x_k^{lsqr}$ takes the following filtered SVD expansion: \begin{equation}\label{eqfilter2} x_k^{lsqr}=\sum\limits_{i=1}^nf_i^{(k,lsqr)}\frac{u_i^{T}b}{\sigma_i}v_i,\ k=1,2,\ldots,n, \end{equation} where the filters \begin{equation}\label{filterlsqr} f_i^{(k,lsqr)}=1-\prod\limits_{j=1}^k\frac{(\theta_j^{(k)})^2-\sigma_i^2} {(\theta_j^{(k)})^2},\ i=1,2,\ldots,n.
\end{equation} These results have been extensively used to study the regularizing effects of LSQR; see, e.g., \cite{hansen98,hansen07,jia18a}. We now prove that the CGME iterate $x_k^{cgme}$ also takes a filtered SVD expansion similar to \eqref{eqfilter2} and \eqref{filterlsqr}, but its proof is much more involved than that of \eqref{eqfilter2} and \eqref{filterlsqr}. \begin{theorem}\label{theocgme} The CGME iterate $x_k^{cgme}$ has the filtered SVD expansion \begin{equation}\label{cgmeexpr} x_k^{cgme}=\sum\limits_{i=1}^nf_i^{(k,cgme)}\frac{u_i^{T}b}{\sigma_i}v_i,\ k=1,2,\ldots,n, \end{equation} where the filters \begin{equation}\label{filter} f_i^{(k,cgme)}=1-\prod\limits_{j=1}^k\frac{(\bar{\theta}_j^{(k)})^2-\sigma_i^2} {(\bar{\theta}_j^{(k)})^2},\ i=1,2,\ldots,n. \end{equation} \end{theorem} {\em Proof.} Let $y_{naive}=(AA^T)^{\dagger}b$ be the minimal 2-norm solution to $\min_{y} \|AA^Ty-b\|$. Recall Algorithm 1. For this minimization problem, starting with $y_0^{cgme}=\mathbf{0}$, at iteration $k$ the CG method extracts $y_k^{cgme}$ from the $k$ dimensional Krylov subspace $ \mathcal{K}_k(AA^T,b)=span\{P_k\}. $ It is well known from, e.g., \cite{meurant}, that the residual of $y_k^{cgme}$ is \begin{equation}\label{yksol} b-AA^T y_k^{cgme}=r_k(AA^T) b, \end{equation} where $r_k(\lambda)$ is the $k$-th residual, or Ritz, polynomial with the normalization $r_k(0)=1$, whose $k$ roots are the Ritz values $(\bar{\theta}_j^{(k)})^2$ of $AA^T$ with respect to the subspace $span\{P_k\}$; see \eqref{aat}. Therefore, we have \begin{equation}\label{rkex} r_k(\sigma_i^2)=\prod_{j=1}^k \frac{(\bar{\theta}_j^{(k)})^2-\sigma_i^2} {(\bar{\theta}_j^{(k)})^2},\ i=1,2,\ldots,n. \end{equation} From the full SVD \eqref{eqsvd} of $A$, write $U=(U_n,U_{\perp})$. Then we have $A=U_n\Sigma V^T$, the compact SVD of $A$. It is straightforward to see that $$ AA^T(AA^T)^{\dagger}=(AA^T)^{\dagger}AA^T=U_nU_n^T.
$$ Therefore, by $y_{naive}=(AA^T)^{\dagger}b$, premultiplying both sides of \eqref{yksol} by $(AA^T)^{\dagger}$ yields \begin{align*} y_{naive}-U_nU_n^Ty_k^{cgme}&=(AA^T)^{\dagger}r_k(AA^T) b\\ &=r_k(AA^T)(AA^T)^{\dagger} b=r_k(AA^T) y_{naive}, \end{align*} from which it follows that \begin{equation}\label{ykex} U_nU_n^Ty_k^{cgme}=(I-r_k(AA^T))y_{naive}. \end{equation} By the SVD \eqref{eqsvd} of $A$, we have $$ y_{naive}=(AA^T)^{\dagger}b=\sum_{i=1}^n\frac{u_i^Tb}{\sigma_i^2}u_i. $$ Hence for $ k=1,2,\ldots,n$ from \eqref{rkex} and \eqref{ykex} we obtain \begin{align} U_nU_n^T y_k^{cgme}&=\sum\limits_{i=1}^n (1-r_k(\sigma_i^2))\frac{u_i^{T}b} {\sigma_i^2}u_i \notag\\ &=\sum\limits_{i=1}^nf_i^{(k,cgme)}\frac{u_i^{T}b}{\sigma_i^2}u_i \label{cgmeexp} \end{align} with $f_i^{(k,cgme)}$ defined by \eqref{filter}. In terms of $x_k^{cgme}=A^T y_k^{cgme}$ and $A=U_n\Sigma V^T$, premultiplying both sides of the above relation by $A^T$ and exploiting $U_n^TU_n=I$, we have $$ x_k^{cgme}=A^T y_k^{cgme}=V\Sigma U_n^T y_k^{cgme} =V\Sigma U_n^T U_n U_n^T y_k^{cgme}=A^T U_nU_n^T y_k^{cgme}. $$ Then making use of this relation, $A^T u_i=\sigma_i v_i$ and \eqref{cgmeexp}, we obtain \eqref{cgmeexpr}. \qquad\endproof Based on Theorems~\ref{interlace}--\ref{theocgme}, we can prove the following important result. \begin{theorem}\label{semi} Let $k_{cgme}^*$ and $k_{lsqr}^*$ be the iterations at which the semi-convergence of CGME and LSQR occurs, respectively, and $k_0$ the transition point of the TSVD method. Then \begin{equation}\label{semicgme} k_{cgme}^*\leq k_{lsqr}^*\leq k_0, \end{equation} that is, the semi-convergence of CGME always occurs no later than that of LSQR and the TSVD method. \end{theorem} {\em Proof}. The result $k_{lsqr}^*\leq k_0$ has been proved in \cite[Theorem 3.1]{jia18a}. We first prove that $k_{cgme}^*\leq k_0$.
Recall the best TSVD solution $$ x_{k_0}^{tsvd}=A_{k_0}^{\dagger}b=\sum_{i=1}^{k_0}\frac{u_i^Tb}{\sigma_i}v_i $$ and the fact that a 2-norm filtering best possible solution must capture the $k_0$ dominant SVD components of $A$ and suppress the $n-k_0$ small SVD components of $A$. For CGME, from \eqref{thetak} and \eqref{thetak2} we have $ \bar{\theta}_k^{(k)}<\sigma_k$. Therefore, at iteration $k_0+1$ we must have $\bar{\theta}_{k_0+1}^{(k_0+1)}< \sigma_{k_0+1}$. If the $\bar{\theta}_i^{(k)}$ approximate the large $\sigma_i$ in natural order for $k=1,2,\ldots,k_0$, then by \eqref{filter} we have $f_i^{(k,cgme)}\rightarrow 1$ for $i=1,2,\ldots,k$ and $f_i^{(k,cgme)}\rightarrow 0$ for $i=k+1,\ldots,n$. On the other hand, by \eqref{filter} we have $f_{k_0+1}^{(k_0+1,cgme)}=\mathcal{O}(1)$. Compared with the best TSVD solution, by \eqref{cgmeexpr} the above shows that the CGME iterate $x_k^{cgme}$ captures the $k$ dominant SVD components of $A$ and filters out the $n-k$ small ones. As a result, $x_k^{cgme}$ improves until iteration $k_0$, and the semi-convergence of CGME occurs at iteration $k_{cgme}^*=k_0$. If the $\bar{\theta}_j^{(k)}$ do not converge to the large singular values of $A$ in natural order and $\bar{\theta}_k^{(k)}<\sigma_{k_0+1}$ for some iteration $k\leq k_0$ for the first time, then $x_k^{cgme}$ is already deteriorated by the noise $e$ before iteration $k$: Suppose that $\sigma_{j^*}<\bar{\theta}_k^{(k)}<\sigma_{k_0+1}$ with $j^*>k_0+1$ the smallest such integer.
Then we can easily justify from \eqref{filter} that $f_i^{(k,cgme)}\in (0,1)$ and tends to zero monotonically for $i=j^*,j^*+1,\ldots,n$, but $$ \prod\limits_{j=1}^k\frac{(\bar{\theta}_j^{(k)})^2-\sigma_i^2} {(\bar{\theta}_j^{(k)})^2}=\frac{(\bar{\theta}_k^{(k)})^2-\sigma_i^2} {(\bar{\theta}_k^{(k)})^2}\prod\limits_{j=1}^{k-1} \frac{(\bar{\theta}_j^{(k)})^2-\sigma_i^2}{(\bar{\theta}_j^{(k)})^2}\leq 0, \ i=k_0+1,\ldots,j^*-1 $$ since the first factor is non-positive and the second factor is positive by noting that $\bar{\theta}_j^{(k)}>\sigma_i$, $j=1,2,\ldots,k-1$ for $i=k_0+1,\ldots,j^*-1$. As a result, $f_i^{(k,cgme)}\geq 1$ for $i=k_0+1,\ldots, j^*-1$, showing that $x_k^{cgme}$ has been deteriorated by the noise $e$ and the semi-convergence of CGME has occurred at some iteration $k^*_{cgme}<k_0$. Finally, we prove $k_{cgme}^*\leq k_{lsqr}^*$. Notice that $\bar{\theta}_k^{(k)}<\theta_k^{(k)}$ means that the first iteration $k$ such that $\bar{\theta}_k^{(k)}<\sigma_{k_0+1}$ for CGME is no larger than the one such that $\theta_k^{(k)}<\sigma_{k_0+1}$ for LSQR. Therefore, applying to \eqref{eqfilter2}--\eqref{filterlsqr} a proof similar to that for the semi-convergence of CGME, it follows directly that the semi-convergence of CGME occurs no later than that of LSQR, i.e., $k_{cgme}^*\leq k_{lsqr}^*$. \qquad\endproof It is seen from the above proof that, due to $\bar{\theta}_k^{(k)}<\theta_k^{(k)}$, the semi-convergence of CGME can occur much earlier than that of LSQR. We can, {\em informally}, deduce more features of CGME. By definition, the optimality of CGME means that \begin{equation}\label{cgmelsqr} \|x_{naive}-x_k^{cgme}\|\leq \|x_{naive}-x_k^{lsqr}\| \end{equation} holds unconditionally for $k=1,2,\ldots,n$.
Since $x_k^{cgme}$ and $x_k^{lsqr}$ approach $x_{true}$ until iterations $k_{cgme}^*$ and $k_{lsqr}^*$ at which the semi-convergence of CGME and LSQR occurs, respectively, it follows that, for $k\leq k_{cgme}^*$ and $k\leq k_{lsqr}^*$, $\|x_{true}-x_k^{cgme}\|$ and $\|x_{true}-x_k^{lsqr}\|$ are negligible relative to $\|x_{naive}-x_{true}\|$, which is supposed to be very large in the context of discrete ill-posed problems. As a consequence, we have \begin{eqnarray} \|x_{naive}-x_k^{cgme}\|&=&\|x_{naive}-x_{true}+x_{true}-x_k^{cgme}\| \nonumber \\ &\approx& \|x_{naive}-x_{true}\|+\|x_{true}-x_k^{cgme}\|,\label{naive1}\\ \|x_{naive}-x_k^{lsqr}\|&=&\|x_{naive}-x_{true}+x_{true}-x_k^{lsqr}\| \nonumber\\ &\approx& \|x_{naive}-x_{true}\| +\|x_{true}-x_k^{lsqr}\|. \label{naive2} \end{eqnarray} Since the first terms in the right-hand sides of \eqref{naive1} and \eqref{naive2} are the same constant, a combination of \eqref{cgmelsqr} with \eqref{naive1} and \eqref{naive2} shows that \begin{equation}\label{accurcomp} \|x_{true}-x_k^{cgme}\|\leq \|x_{true}-x_k^{lsqr}\| \end{equation} generally holds until $k=k_{cgme}^*$. That is, $x_k^{cgme}$ should be {\em at least} as accurate as $x_k^{lsqr}$ until the semi-convergence of CGME occurs. Then for $k>k_{cgme}^*$, according to Theorem~\ref{semi}, $x_k^{lsqr}$ continues approximating $x_{true}$ as $k$ increases until iteration $k=k_{lsqr}^*$, at which LSQR ultimately computes a more accurate approximation $x_{k_{lsqr}^*}^{lsqr}$ to $x_{true}$ than $x_{k_{cgme}^*}^{cgme}$. We now present further findings. Observe that after Lanczos bidiagonalization is run for $k$ steps, we have already obtained $\bar{B}_{k+1}$, $P_{k+1}$ and $Q_{k+1}$, but LSQR and CGME exploit only $B_k, Q_k$ and $\bar{B}_k, Q_k$, respectively.
Since $\alpha_{k+1}>0$ for $k\leq n-1$, applying the strict interlacing property of singular values to $B_k$ and $\bar{B}_{k+1}$, we have \begin{equation}\label{bkbarbk} \bar{\theta}_1^{(k+1)}>\theta_1^{(k)}>\bar{\theta}_2^{(k+1)}>\cdots> \bar{\theta}_k^{(k+1)}>\theta_k^{(k)}>\bar{\theta}_{k+1}^{(k+1)}, \ k=1,2,\ldots,n-1. \end{equation} Note from \eqref{thetak2} that $\bar{\theta}_i^{(k+1)}<\sigma_i,\ i=1,2,\ldots,k+1$. Combining \eqref{bkbarbk} with \eqref{thetak2}, we see that, as approximations to the first $k$ large singular values $\sigma_i$ of $A$, although the $k$ singular values $\bar{\theta}_i^{(k)}$ of $\bar{B}_k$ are less accurate than the singular values $\theta_i^{(k)}$ of $B_k$, the first $k$ singular values $\bar{\theta}_i^{(k+1)}$ of $\bar{B}_{k+1}$ are correspondingly more accurate than the $\theta_i^{(k)}$. Based on the above property and \eqref{leftbidiag}, we next show how to extract a best possible rank $k$ approximation to $A$ from the available rank $k+1$ matrix $P_{k+1}\bar{B}_{k+1}Q_{k+1}^T=P_{k+1}P_{k+1}^TA$ generated by Algorithm 1. \begin{theorem}\label{approx} Let $\bar{C}_k$ be the best rank $k$ approximation to $\bar{B}_{k+1}$ with respect to the 2-norm. Then for $k=1,2,\ldots,n-1$ we have \begin{align} \|A-P_{k+1}\bar{C}_kQ_{k+1}^T\|&\leq \sigma_{k+1}+\gamma_{k+1}^{cgme},\label{lowrank}\\ \|A-P_{k+1}\bar{C}_kQ_{k+1}^T\|&\leq \bar{\theta}_{k+1}^{(k+1)}+\gamma_{k+1}^{cgme}, \label{better} \end{align} where $\bar{\theta}_{k+1}^{(k+1)}$ is the smallest singular value of $\bar{B}_{k+1}$ and $\gamma_{k+1}^{cgme}$ is defined by \eqref{cgmeacc}. \end{theorem} {\em Proof}. Write $A-P_{k+1}\bar{C}_kQ_{k+1}^T=A-P_{k+1}\bar{B}_{k+1}Q_{k+1}^T+ P_{k+1}(\bar{B}_{k+1}-\bar{C}_k)Q_{k+1}^T$.
Then exploiting \eqref{leftbidiag}, we obtain \begin{align} \|A-P_{k+1}\bar{C}_kQ_{k+1}^T\| & \leq \|A-P_{k+1}\bar{B}_{k+1}Q_{k+1}^T\|+ \|P_{k+1}(\bar{B}_{k+1}-\bar{C}_k)Q_{k+1}^T\| \label{barbk}\\ &= \|A-P_{k+1}\bar{B}_{k+1}Q_{k+1}^T\|+ \|P_{k+1}P_{k+1}^TA-P_{k+1}\bar{C}_kQ_{k+1}^T\|. \label{decom2} \end{align} By the definition of $\bar{C}_k$ and \eqref{leftbidiag}, it is easily justified that $P_{k+1}\bar{C}_kQ_{k+1}^T$ is the best rank $k$ approximation to $P_{k+1}\bar{B}_{k+1}Q_{k+1}^T= P_{k+1}P_{k+1}^TA$ in the 2-norm as $P_{k+1}$ and $Q_{k+1}$ are column orthonormal. Keep in mind that $A_k$ is the best rank $k$ approximation to $A$. Since $P_{k+1}P_{k+1}^TA_k$ is a rank $k$ approximation to $P_{k+1}P_{k+1}^TA$, we obtain \begin{align*} \|P_{k+1}P_{k+1}^TA-P_{k+1}\bar{C}_kQ_{k+1}^T\| &\leq \|P_{k+1}P_{k+1}^TA -P_{k+1}P_{k+1}^TA_k\| \\ &=\|P_{k+1}P_{k+1}^T(A-A_k)\|\\ &\leq \|A-A_k\|=\sigma_{k+1}. \end{align*} Note that the first term in the right-hand side of \eqref{decom2} is just $\gamma_{k+1}^{cgme}$. Therefore, it follows from \eqref{decom2} that \eqref{lowrank} holds. Since $P_{k+1}$ and $Q_{k+1}$ are column orthonormal and $\bar{C}_k$ is the best rank $k$ approximation to $\bar{B}_{k+1}$, by the orthogonal invariance of the 2-norm we obtain $$ \|P_{k+1}(\bar{B}_{k+1}-\bar{C}_k)Q_{k+1}^T\|=\|\bar{B}_{k+1}-\bar{C}_k\| =\bar{\theta}_{k+1}^{(k+1)}, $$ which, together with \eqref{barbk}, yields \eqref{better}. \qquad\endproof The bound \eqref{better} is always smaller than the bound \eqref{lowrank} because of $\bar{\theta}_{k+1}^{(k+1)}<\sigma_{k+1}$ from \eqref{thetak} and \eqref{thetak2}. Indeed, the bound \eqref{lowrank} can be conservative since we have enlarged $\|P_{k+1}(\bar{B}_{k+1}-\bar{C}_k)Q_{k+1}^T\|$ twice to obtain its bound $\sigma_{k+1}$, which might be a considerable overestimate.
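As a numerical illustration of the two bounds in Theorem~\ref{approx}, the following NumPy sketch forms orthonormal bases of $\mathcal{K}_{k+1}(AA^T,b)$ and $\mathcal{K}_{k+1}(A^TA,A^Tb)$ by QR of explicitly built Krylov matrices, which span the same subspaces as those from Algorithm 1 in exact arithmetic (this shortcut, the synthetic test matrix and the noise level are illustrative assumptions, not how one would implement the method in practice).

```python
import numpy as np

def krylov_onb(M, v, ell):
    # orthonormal basis of K_ell(M, v) = span{v, Mv, ..., M^{ell-1}v} via QR
    K = np.empty((v.size, ell))
    K[:, 0] = v
    for j in range(1, ell):
        K[:, j] = M @ K[:, j - 1]
    Qb, _ = np.linalg.qr(K)
    return Qb

rng = np.random.default_rng(1)
m, n, k = 50, 30, 6
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 0.75 ** np.arange(n)
A = (U[:, :n] * sigma) @ V.T
b = A @ rng.standard_normal(n) + 1e-4 * rng.standard_normal(m)

P = krylov_onb(A @ A.T, b, k + 1)            # spans the same space as P_{k+1}
Q = krylov_onb(A.T @ A, A.T @ b, k + 1)      # spans the same space as Q_{k+1}
Bbar = P.T @ A @ Q                           # \bar{B}_{k+1} up to sign choices

Ub, sb, Vbt = np.linalg.svd(Bbar)
Ck = (Ub[:, :k] * sb[:k]) @ Vbt[:k]          # best rank-k approximation \bar{C}_k
gamma = np.linalg.norm(A - P @ Bbar @ Q.T, 2)   # gamma_{k+1}^{cgme}
lhs = np.linalg.norm(A - P @ Ck @ Q.T, 2)
# compare against the (better) and (lowrank) bounds
print(lhs, sb[k] + gamma, sigma[k] + gamma)
```

Here `sb[k]` plays the role of $\bar{\theta}_{k+1}^{(k+1)}$, and one observes that it stays below $\sigma_{k+1}$, in line with \eqref{thetak2}.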
Moreover, as we have explained previously, \eqref{thetak} and \eqref{thetak2} show that $\bar{\theta}_{k+1}^{(k+1)}>\sigma_n$ may approach $\sigma_n$ for $m=n$ and $\bar{\theta}_{k+1}^{(k+1)}>0$ can be arbitrarily close to zero for $m>n$. By definition \eqref{gammak} of $\gamma_k^{lsqr}$, since $\gamma_{k+1}^{cgme}<\gamma_k^{lsqr}$ (cf. the upper bound of \eqref{cgmelowup}), $\gamma_k^{lsqr}\geq \sigma_{k+1}>\bar{\theta}_{k+1}^{(k+1)}$ and $\|A-P_{k+1}\bar{C}_kQ_{k+1}^T\|\geq\sigma_{k+1}$, the right-hand side of \eqref{better} satisfies $$ \sigma_{k+1}\leq \bar{\theta}_{k+1}^{(k+1)}+\gamma_{k+1}^{cgme}<2\gamma_k^{lsqr}. $$ Therefore, $\bar{\theta}_{k+1}^{(k+1)}+\gamma_{k+1}^{cgme}$ is as small as and can even be smaller than $\gamma_k^{lsqr}$, meaning that $P_{k+1}\bar{C}_kQ_{k+1}^T$ is as accurate as the rank $k$ approximation $P_{k+1}B_kQ_k^T$ in LSQR. Define $Q_{n+1}=(Q_n,\mathbf{0})\in \mathbb{R}^{n\times (n+1)}$, and note from \eqref{bn1} that $\bar{B}_{n+1}=(B_n,\mathbf{0})$. Recall that the singular values of $\bar{B}_{n+1}$ and $B_n$ are $\bar{\theta}_i^{(n+1)}, \ i=1,2,\ldots,n+1$ and $\theta_i^{(n)},\ i=1,2,\ldots,n$, respectively, and $\bar{\theta}_i^{(n+1)}=\theta_i^{(n)}=\sigma_i,\ i=1,2,\ldots,n$ and $\bar{\theta}_{n+1}^{(n+1)}=0$. From \eqref{fulllb} and the definition of $\bar{C}_n$, since $\bar{B}_{n+1}$ is of rank $n$, we have $$ \bar{C}_n=\bar{B}_{n+1} $$ and $$ A=P_{n+1}B_nQ_n^T=P_{n+1}\bar{B}_{n+1}Q_{n+1}^T=P_{n+1}\bar{C}_nQ_{n+1}^T.
$$ Based on Theorem~\ref{approx} and the analysis following it, just as done in CGME and LSQR, we can replace $A$ in \eqref{eq1} by the rank $k$ approximation $P_{k+1}\bar{C}_kQ_{k+1}^T$ and propose a modified CGME (MCGME) method that solves \begin{equation}\label{mcgmereg} \min\|x\| \ \ \mbox{ subject to }\ \ \|P_{k+1}\bar{C}_kQ_{k+1}^T x-b\|=\min \end{equation} for the regularized solution $x_k^{mcgme}=Q_{k+1}y_k^{mcgme}$ of \eqref{eq1} with \begin{equation}\label{ykmcgme} y_k^{mcgme}= \bar{C}_k^{\dagger}P_{k+1}^Tb=\beta_1 \bar{C}_k^{\dagger}e_1^{(k+1)} \end{equation} starting with $k=1$ onwards. MCGME is expected to have the same regularization ability as LSQR because (i) the $k$ nonzero singular values $\bar{\theta}_i^{(k+1)}$ of $\bar{C}_k$ are more accurate than the $k$ singular values $\theta_i^{(k)}$ of $B_k$ as approximations to the first $k$ singular values of $A$ and (ii) $P_{k+1}\bar{C}_kQ_{k+1}^T$ is a rank $k$ approximation which is as accurate as $P_{k+1}B_kQ_k^T$ in LSQR. Regarding implementations, we comment that the singular values and the left and right singular vectors of $\bar{C}_k^{\dagger}$ are already available when $\bar{C}_k$ is extracted from the SVD of $\bar{B}_{k+1}$, whose computational cost is $\mathcal{O}(k^3)$ flops. As a result, by \eqref{ykmcgme} we can compute $y_k^{mcgme}$ at a cost of $\mathcal{O}(k^2)$ flops. A difference from CGME and LSQR is that MCGME seeks $x_k^{mcgme}$ in the $k+1$ dimensional Krylov subspace $\mathcal{K}_{k+1}(A^TA,A^Tb)$ rather than in $\mathcal{K}_k(A^TA,A^Tb)$. Numerical experiments will justify that MCGME has regularizing effects very comparable to those of LSQR and can obtain the best regularized solutions with accuracy very similar to those by LSQR. We will not consider the by-product MCGME method further in this paper. $\bar{C}_k$ may have some other potential applications.
For example, when we are required to compute several largest singular triplets of a large-scale matrix $A$, we can use the nonzero singular values of $\bar{C}_k$ to replace those of $B_k$ as more accurate approximations to the largest singular values of $A$ in Lanczos bidiagonalization type algorithms \cite{jia03}. In such a way, exploiting the SVD of $\bar{C}_k$, we can also compute more accurate approximate left and right singular vectors of $A$ simultaneously. A development of such modified algorithms is beyond the scope of this paper. \section{The accuracy of truncated rank $k$ approximate SVDs by randomized algorithms}\label{randomappro} In this section, we deviate from the context of Krylov solvers. Using the analysis approach of the last section, we consider the accuracy of a truncated rank $k$ SVD approximation to $A$ constructed by standard randomized algorithms and their improved variants \cite{halko11}. This topic has been intensively investigated in recent years; see the survey paper \cite{halko11} and the references therein. Algorithm 2 is one of the two basic randomized algorithms from \cite{halko11} for computing an approximate SVD and extracting a truncated rank $k$ approximate SVD from it. A minor difference from the other sections in this paper is that we drop the restrictions that the singular values of $A$ are simple and $m\geq n$, that is, the singular values of $A$ are $\sigma_1\geq \sigma_2\geq\cdots \geq \sigma_{\min\{m,n\}}$. {\bf Algorithm 2: Randomized approximate SVD of $A$} \begin{itemize} \item Input: Given $A\in \mathbb{R}^{m\times n}$, a target rank $k$, and an oversampling parameter $p$ satisfying $\ell=k+p\leq \min\{m,n\}$. \item Output: A truncated rank $k$ approximate SVD $A_{(k)}$ of $A$. \end{itemize} {\sf Stage A} \begin{enumerate} \item Draw an $n\times \ell$ Gaussian random matrix $\Omega$. \item Form the $m \times \ell$ matrix $Y = A\Omega$. \item Compute the compact QR factorization $Y = PR$.
\end{enumerate} {\sf Stage B} \begin{enumerate} \item Form $B=P^TA$. \item Compute the compact SVD of the $\ell\times n$ matrix $B$: $B=\widetilde{U}\widetilde{\Sigma}\widetilde{V}^T$. \item Set $\widehat{U}=P\widetilde{U}$. Compute a rank $\ell$ SVD approximation $PP^TA=\widehat{U}\widetilde{\Sigma}\widetilde{V}^T$ to $A$. \item Let $B_{(k)}=\widetilde{U}_k\widetilde{\Sigma}_{(k)}\widetilde{V}_k^T$ be the best rank $k$ approximation to $B$ with the diagonal $\widetilde{\Sigma}_{(k)}$ being the first $k$ diagonals of $\widetilde{\Sigma}$, and $\widetilde{U}_k$ and $\widetilde{V}_k$ the first $k$ columns of $\widetilde{U}$ and $\widetilde{V}$, respectively. Form a truncated rank $k$ SVD approximation $A_{(k)}=PB_{(k)}= \widehat{U}_k\widetilde{\Sigma}_{(k)}\widetilde{V}_k^T$ to $A$ with $\widehat{U}_k=P\widetilde{U}_k$. \end{enumerate} For the approximation accuracy of $A_{(k)}$ to $A$, Halko {\em et al.} \cite{halko11} establish a fundamental result (cf. Theorem 9.3 there): \begin{equation}\label{random} \|A-A_{(k)}\|\leq \sigma_{k+1}+\|(I-PP^T)A\|. \end{equation} Assume that the oversampling parameter $p\geq 4$. Making use of probability theory, in terms of $\sigma_{k+1}$, Halko {\em et al.} \cite{halko11} have established a number of bounds for $\|(I-PP^T)A\|$; see, e.g., Theorems 10.5--10.8 and Corollaries 10.9--10.10 there. However, concerning \eqref{random}, they point out in Remark 9.1 that {\em ``In the randomized setting, the truncation step appears to be less damaging than the error bound of Theorem 9.3 suggests, but we currently lack a complete theoretical understanding of its behavior.''} That is to say, the first term $\sigma_{k+1}$ in \eqref{random} is generally conservative and an overestimate. Motivated by the proof of \eqref{better} in Theorem~\ref{approx}, we can improve \eqref{random} substantially and reveal why \eqref{random} is an overestimate.
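Algorithm 2 takes only a few lines to express; the NumPy sketch below (the synthetic test matrix with geometrically decaying singular values is an illustrative assumption) forms $A_{(k)}$ and records the quantities entering \eqref{random}, so the two terms of the bound can be inspected directly.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k, p = 120, 80, 10, 5
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 0.7 ** np.arange(n)
A = (U[:, :n] * sigma) @ V.T                  # test matrix, singular values sigma

# Stage A: range finder
Omega = rng.standard_normal((n, k + p))       # Gaussian random matrix
P, _ = np.linalg.qr(A @ Omega)                # m x (k+p), orthonormal columns

# Stage B: SVD of the small matrix B = P^T A, then truncate to rank k
B = P.T @ A
Ut, st, Vt = np.linalg.svd(B, full_matrices=False)
Ak = ((P @ Ut[:, :k]) * st[:k]) @ Vt[:k]      # truncated rank-k approximation A_(k)

err = np.linalg.norm(A - Ak, 2)               # ||A - A_(k)||
tail = np.linalg.norm(A - P @ P.T @ A, 2)     # ||(I - P P^T) A||
print(err, st[k], sigma[k], tail)
```

In runs of this kind one can observe that the truncation contribution $\|B-B_{(k)}\|=\widetilde{\sigma}_{k+1}$ (here `st[k]`) does not exceed $\sigma_{k+1}$.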
Let \begin{equation}\label{ab} \widetilde{\sigma}_1\geq\widetilde{\sigma}_2\geq \cdots\geq \widetilde{\sigma}_{k+p} \end{equation} be the singular values of $B=P^TA$ defined in Algorithm 2. It is clear from Algorithm 2 that $$ P^TAA^TP=BB^T $$ is a $(k+p)\times (k+p)$ symmetric matrix, which is the projection matrix of $AA^T$ onto the subspace $span\{P\}$ with respect to the orthonormal basis $\{p_i\}_{i=1}^{k+p}$ with $P=(p_1,p_2,\ldots,p_{k+p})$, whose eigenvalues are $\widetilde{\sigma}_i^2,\ i=1,2,\ldots,k+p$. Keep in mind that the eigenvalues of $AA^T$ are $\sigma_i^2,\ i=1,2,\ldots,\min\{m,n\}$ and $m-\min\{m,n\}$ zeros, denoted by $\sigma_{\min\{m,n\}+1}^2=\cdots=\sigma_m^2=0$ for later use. \begin{theorem}\label{improve} For $A\in \mathbb{R}^{m\times n}$, let $P$ and $A_{(k)}$ be defined as in Algorithm 2, and $\widetilde{\sigma}_{k+1}$ defined as in \eqref{ab}. Then \begin{equation}\label{betterbound} \|A-A_{(k)}\|\leq \widetilde{\sigma}_{k+1}+\|(I-PP^T)A\| \end{equation} with \begin{equation}\label{interrand} \sigma_{m-p+1}\leq \widetilde{\sigma}_{k+1}\leq \sigma_{k+1}. \end{equation} \end{theorem} {\em Proof.} Since $P$ is orthonormal, the eigenvalues of $BB^T$ interlace those of $AA^T$, so that the singular values satisfy (cf. \cite[p.198, Corollary 4.4]{stewartsun}) $$ \sigma_{m-(k+p)+i}\leq \widetilde{\sigma}_i\leq \sigma_i,\ i=1,2,\ldots,k+p, $$ from which \eqref{interrand} follows. From Algorithm 2, we can write \begin{eqnarray*} A-A_{(k)}&=&A-PP^TA+PP^TA-A_{(k)} \\ &=&A-PP^TA+ P\widetilde{U}\widetilde{\Sigma}\widetilde{V}^T-P\widetilde{U}_k\widetilde{\Sigma}_{(k)} \widetilde{V}_k^T\\ &=& (I-PP^T)A+P(\widetilde{U}\widetilde{\Sigma}\widetilde{V}^T- \widetilde{U}_k\widetilde{\Sigma}_{(k)} \widetilde{V}_k^T).
\end{eqnarray*} Since $B_{(k)}$ is the best rank $k$ approximation to $B$, by the column orthonormality of $P$ we obtain \begin{eqnarray*} \|A-A_{(k)}\|&\leq &\|(I-PP^T)A\|+\|P(\widetilde{U}\widetilde{\Sigma}\widetilde{V}^T- \widetilde{U}_k\widetilde{\Sigma}_{(k)} \widetilde{V}_k^T)\|\\ &=&\|(I-PP^T)A\|+\|\widetilde{U}\widetilde{\Sigma}\widetilde{V}^T- \widetilde{U}_k\widetilde{\Sigma}_{(k)} \widetilde{V}_k^T\|\\ &=& \|(I-PP^T)A\|+\|B-B_{(k)}\| \\ &=&\|(I-PP^T)A\|+\widetilde{\sigma}_{k+1}, \end{eqnarray*} which proves \eqref{betterbound}. \qquad\endproof \begin{remark} This theorem indicates that $\widetilde{\sigma}_{k+1}$ never exceeds $\sigma_{k+1}$ and, for $m, n$ large and $k+p$ small, it may be much smaller than $\sigma_{k+1}$. Specifically, $\widetilde{\sigma}_{k+1}$ can be as small as $\sigma_{m-p+1}$. For $m>n$, whenever $m-p+1>n$, we have $\sigma_{m-p+1}=0$. Consequently, the bound \eqref{betterbound} is unconditionally superior to the bound \eqref{random} and is sharper than the latter when $\widetilde{\sigma}_{k+1}<\sigma_{k+1}$. On the other hand, however, note that $\sigma_{k+1}\leq \|A-A_{(k)}\|$. Therefore, if $\|(I-PP^T)A\|<\sigma_{k+1}$, we must have $\widetilde{\sigma}_{k+1}\approx\sigma_{k+1}$, that is, $\widetilde{\sigma}_{k+1}$ dominates the bound \eqref{betterbound}. Summarizing the above, in response to Remark 9.1 in \cite{halko11}, we conclude that the truncation step does damage the approximation accuracy of the truncated rank $k$ approximation when $\|(I-PP^T)A\|<\sigma_{k+1}$ and it is less damaging when $\|(I-PP^T)A\|\geq \sigma_{k+1}$. \end{remark} As we have seen, the column space of $P$ constructed by Algorithm 2 aims to capture the $(k+p)$-dimensional dominant left singular subspace of $A$. A variant of it is to capture the $(k+p)$-dimensional dominant right singular subspace of $A$.
Mathematically, it amounts to applying Algorithm 2 to $A^T$ and computing a truncated rank $k$ SVD approximation $A_{(k)}$ to $A$ in a similar way. We call this variant Algorithm 3, for which \eqref{random} now becomes \begin{equation}\label{randomv} \|A-A_{(k)}\|\leq \sigma_{k+1}+\|A(I-PP^T)\| \end{equation} with the orthonormal $P\in \mathbb{R}^{n\times (k+p)}$. Note that the eigenvalues of $A^TA$ are $\sigma_i^2,\ i=1,2,\ldots,\min\{m,n\}$ and $n-\min\{m,n\}$ zeros, denoted by $\sigma_{\min\{m,n\}+1}^2=\cdots=\sigma_n^2=0$. Since the eigenvalues of $(AP)^TAP$ interlace those of $A^TA$, using the same proof approach as that of Theorem~\ref{improve}, we can establish the following result. \begin{theorem}\label{improvev} For $A\in \mathbb{R}^{m\times n}$, let $P$ and $A_{(k)}$ be defined as in Algorithm 3, and $ \widetilde{\sigma}_1\geq\widetilde{\sigma}_2\geq \cdots\geq \widetilde{\sigma}_{k+p} $ be the singular values of $AP$. Then \begin{equation}\label{betterboundv} \|A-A_{(k)}\|\leq \widetilde{\sigma}_{k+1}+\|A(I-PP^T)\| \end{equation} with \begin{equation}\label{interrandv} \sigma_{n-p+1}\leq \widetilde{\sigma}_{k+1}\leq \sigma_{k+1}. \end{equation} \end{theorem} We comment that, in the case $m<n$, whenever $n-p+1>m$, we have $\sigma_{n-p+1}=0$, and consequently the bound \eqref{betterboundv} is unconditionally superior to and can be substantially sharper than the bound \eqref{randomv} for $m,n$ large and $k+p$ small. \begin{remark} If the singular values $\sigma_i$ of $A$ are all simple, then by the strict interlacing properties of eigenvalues, the singular values of $B$ in Algorithms~2--3 are all simple too, and the lower and upper bounds in \eqref{interrand} and \eqref{interrandv} are strict, i.e., $\widetilde{\sigma}_{k+1}<\sigma_{k+1}$.
\end{remark} \begin{remark} \eqref{random}, \eqref{randomv} and Theorems~\ref{improve}--\ref{improvev} hold for all the truncated rank $k$ SVD approximations generated by the enhanced variants of Algorithms 2--3 in \cite{halko11}, where the only difference among the variants is the way in which $P$ is generated. More generally, Theorems~\ref{improve}--\ref{improvev} are true for arbitrarily given orthonormal $P\in \mathbb{R}^{m\times (k+p)}$ and $P\in \mathbb{R}^{n\times (k+p)}$ with $k+p\leq \min\{m,n\}$, respectively. \end{remark} \section{The regularization of LSMR}\label{lsmr} From Algorithm 1 we obtain \begin{equation}\label{matrixlsmr} Q_{k+1}^TA^TAQ_k=(B_k^TB_k,\alpha_{k+1}\beta_{k+1}e_k^{(k)})^T. \end{equation} Therefore, from \eqref{yklsmr}, noting that $Q_{k+1}^TA^Tb=\alpha_1\beta_1 e_1^{(k+1)}$, we have \begin{equation}\label{lsmrsolution} x_k^{lsmr}=Q_k(Q_{k+1}^TA^TAQ_k)^{\dagger}Q_{k+1}^TA^Tb, \end{equation} which means that LSMR solves the problem \begin{equation}\label{lsmrrank} \min\|x\| \ \ \mbox{ subject to }\ \ \|Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^Tx-A^Tb\|=\min \end{equation} for the regularized solution $x_k^{lsmr}$ starting with $k=1$ onwards. In the meantime, it is direct to justify that the TSVD solution $x_k^{tsvd}$ solves the problem \begin{equation}\label{tsvdreg} \min\|x\| \ \ \mbox{ subject to }\ \ \|A_k^TA_kx-A^Tb\|=\min \end{equation} starting with $k=1$ onwards. Therefore, \eqref{lsmrrank} and \eqref{tsvdreg} deal with the normal equation $A^TAx=A^T b$ of \eqref{eq1} by replacing $A^T A$ with its rank $k$ approximations $Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^T$ and $A_k^TA_k$, respectively. In view of \eqref{lsmrrank} and \eqref{tsvdreg}, we need to accurately estimate the approximation accuracy $\|A^TA-Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^T\|$ and investigate how the singular values of $Q_{k+1}^TA^TAQ_k$ approximate the $k$ large singular values $\sigma_i^2,\ i=1,2,\ldots,k$ of $A^TA$.
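The identity \eqref{matrixlsmr} and the equivalence of \eqref{lsmrsolution} with minimizing $\|A^TAx-A^Tb\|$ over $span\{Q_k\}$ can be checked numerically. The following NumPy sketch uses Lanczos bidiagonalization with full reorthogonalization on random test data (the Gaussian test matrix and right-hand side are illustrative assumptions).

```python
import numpy as np

def golub_kahan(A, b, steps):
    # Golub-Kahan (Lanczos) bidiagonalization with full reorthogonalization:
    # A Q = P B with P (m x (steps+1)), Q (n x steps), B lower bidiagonal
    m, n = A.shape
    P = np.zeros((m, steps + 1)); Q = np.zeros((n, steps))
    B = np.zeros((steps + 1, steps))
    P[:, 0] = b / np.linalg.norm(b)
    for j in range(steps):
        q = A.T @ P[:, j] - (B[j, j - 1] * Q[:, j - 1] if j > 0 else 0.0)
        q -= Q[:, :j] @ (Q[:, :j].T @ q)
        B[j, j] = np.linalg.norm(q); Q[:, j] = q / B[j, j]
        p = A @ Q[:, j] - B[j, j] * P[:, j]
        p -= P[:, :j + 1] @ (P[:, :j + 1].T @ p)
        B[j + 1, j] = np.linalg.norm(p); P[:, j + 1] = p / B[j + 1, j]
    return P, Q, B

rng = np.random.default_rng(3)
m, n, k = 40, 25, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

P, Q, B = golub_kahan(A, b, k + 1)            # gives Q_{k+1} and B_{k+1}
Bk = B[:k + 1, :k]                            # B_k
alpha_k1, beta_k1 = B[k, k], B[k, k - 1]      # alpha_{k+1}, beta_{k+1}
M = Q.T @ (A.T @ A) @ Q[:, :k]                # Q_{k+1}^T A^T A Q_k
target = np.vstack([Bk.T @ Bk,
                    alpha_k1 * beta_k1 * np.eye(k)[-1:]])  # last row: alpha*beta*e_k^T
err_struct = np.linalg.norm(M - target)       # checks (matrixlsmr)

# LSMR iterate via (lsmrsolution): x_k = Q_k M^+ Q_{k+1}^T A^T b ...
x_formula = Q[:, :k] @ (np.linalg.pinv(M) @ (Q.T @ (A.T @ b)))
# ... equals the minimizer of ||A^T A x - A^T b|| over span{Q_k}
z = np.linalg.lstsq(A.T @ A @ Q[:, :k], A.T @ b, rcond=None)[0]
err_x = np.linalg.norm(x_formula - Q[:, :k] @ z)
print(err_struct, err_x)
```

The equivalence rests on $A^TAQ_k$ and $A^Tb$ both lying in $span\{Q_{k+1}\}$, so projecting the residual onto that subspace loses nothing.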
We are concerned with some intrinsic relationships between the regularizing effects of LSMR and those of LSQR and compare the regularization abilities of the two methods. By \eqref{xk}, \eqref{eqmform1}, \eqref{eqmform2}, \eqref{Bk} and $P_{k+1}P_{k+1}^Tb=b$, the LSQR iterate \begin{align*} x_k^{lsqr}&=Q_kB_k^{\dagger}P_{k+1}^Tb=Q_k(B_k^TB_k)^{-1}B_k^TP_{k+1}^Tb\\ &=Q_k(Q_k^TA^TAQ_k)^{-1}Q_k^TA^TP_{k+1}P_{k+1}^Tb \\ &=Q_k(Q_k^TA^TAQ_k)^{-1}Q_k^TA^Tb, \end{align*} which is the solution to the problem \begin{equation}\label{lsqrrank} \min\|x\| \ \ \mbox{ subject to }\ \ \|Q_kQ_k^TA^TAQ_kQ_k^T x-A^Tb\|=\min \end{equation} that replaces $A^TA$ by its rank $k$ approximation $Q_kQ_k^TA^TAQ_kQ_k^T =Q_kB_k^TB_kQ_k^T$ in the normal equation $A^T Ax=A^T b$. In this sense, the accuracy of this rank $k$ approximation is measured in terms of $\|A^TA-Q_kQ_k^TA^TAQ_kQ_k^T\|$ for LSQR. Firstly, we present the following result, which compares the accuracy of the two rank $k$ approximations involved in LSMR and LSQR in the sense of solving the normal equation $A^TA x=A^T b$. \begin{theorem}\label{lsqrmr} For the rank $k$ approximations to $A^TA$ in \eqref{lsmrrank} and \eqref{lsqrrank}, $k=1,2,\ldots,n-1$, we have \begin{align}\label{lsqrmrest} \|A^TA-Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^T\|& < \|A^TA-Q_kQ_k^TA^TAQ_kQ_k^T\|.
\end{align} \end{theorem} {\em Proof.} For the orthogonal matrix $Q_n$ generated by Algorithm 1, noticing that $\alpha_{n+1}=0$, from \eqref{eqmform1} and \eqref{eqmform2} we obtain $Q_n^TA^TAQ_n=B_n^TB_n$ and \begin{align} \|A^TA-Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^T\|&= \|Q_n^T(A^TA-Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^T)Q_n\| \nonumber\\ &=\|B_n^TB_n-(I,\mathbf{0})^T(B_k^TB_k,\alpha_{k+1} \beta_{k+1}e_k)^T(I,\mathbf{0})\| \nonumber \\ &=\|F_k\|, \ k=1,2,\ldots,n-1, \label{fklsmr} \end{align} where \begin{align}\label{fk} F_k&=\left(\begin{array}{ccccc} \alpha_{k+1}\beta_{k+1} & & & &\\ \alpha_{k+1}^2+\beta_{k+2}^2 &\alpha_{k+2}\beta_{k+2} &&&\\ \alpha_{k+2}\beta_{k+2} &\alpha_{k+2}^2+\beta_{k+3}^2&\ddots & &\\ & \alpha_{k+3}\beta_{k+3} & \ddots& & \\ & & &\alpha_{n-1}\beta_{n-1}& \\ & & \ddots& \alpha_{n-1}^2+\beta_n^2 &\alpha_n\beta_n\\ & & &\alpha_n\beta_n&\alpha_n^2+\beta_{n+1}^2\\ \end{array} \right)\\ &=\left(\begin{array}{c} \alpha_{k+1}\beta_{k+1}(e_1^{(n-k)})^T\\ G_k^TG_k \end{array} \right)\in \mathbb{R}^{(n-k+1)\times (n-k)} \label{fkgk} \end{align} is the matrix obtained by deleting the $(k+1)\times k$ leading principal matrix of the symmetric tridiagonal matrix $B_n^TB_n$ and the first $k-1$ zero rows and $k$ zero columns of the resulting matrix, where $G_k$ is defined by \eqref{gk1} and $e_1^{(n-k)}$ is the first canonical vector of $\mathbb{R}^{n-k}$. On the other hand, it is straightforward to verify that \begin{align} \|A^TA-Q_kQ_k^TA^TAQ_kQ_k^T\|&= \|Q_n^T(A^TA-Q_kQ_k^TA^TAQ_kQ_k^T)Q_n\| \nonumber\\ &=\|B_n^TB_n-(I,\mathbf{0})^TB_k^TB_k(I,\mathbf{0})\| \nonumber\\ &=\|F_k^{\prime}\|, \label{fklsqr} \end{align} where $F_k^{\prime}=\left(\alpha_{k+1}\beta_{k+1} e_2^{(n-k+1)},F_k\right)\in \mathbb{R}^{(n-k+1)\times (n-k+1)}$ with $e_2^{(n-k+1)}$ being the second canonical vector of $\mathbb{R}^{n-k+1}$.
From \eqref{fk} and \eqref{fkgk}, we obtain \begin{align} F_k^{\prime} (F_k^{\prime})^T&= (\alpha_{k+1}\beta_{k+1} e_2^{(n-k+1)},F_k) (\alpha_{k+1}\beta_{k+1} e_2^{(n-k+1)}, F_k)^T \nonumber\\ &= F_kF_k^T+\alpha_{k+1}^2\beta_{k+1}^2 e_2^{(n-k+1)}(e_2^{(n-k+1)})^T. \label{fkprime} \end{align} Since $G_k^TG_k$ is unreduced symmetric tridiagonal, its eigenvalues are all simple. Observe from \eqref{fkgk} that \begin{equation}\label{fkfk} F_k^TF_k=(G_k^TG_k)^2+\alpha_{k+1}^2\beta_{k+1}^2 e_1^{(n-k)} (e_1^{(n-k)})^T, \ k=1,2,\ldots,n-1. \end{equation} Therefore, we know from \cite[p.218]{demmel} that the eigenvalues of $F_k^TF_k$ strictly interlace those of $(G_k^T G_k)^2$ and are all simple. Furthermore, we see from \eqref{gk1} that $G_k$ is of full column rank, which means that the eigenvalues of $F_k^TF_k$ are all {\em positive}. Note that the eigenvalues of $F_kF_k^T$ are those of $F_k^TF_k$ and zero. As a result, the eigenvalues of $F_kF_k^T$ are all simple. According to \cite[p.218]{demmel}, we know from \eqref{fkprime} that the eigenvalues of $F_k^{\prime} (F_k^{\prime})^T$ strictly interlace those of $F_kF_k^T$. Therefore, we obtain $$ \|F_k^{\prime}\|^2=\|F_k^{\prime}(F_k^{\prime})^T\|>\|F_kF_k^T\|=\|F_k\|^2, $$ which, from \eqref{fklsmr} and \eqref{fklsqr}, establishes \eqref{lsqrmrest}. \qquad\endproof This theorem indicates that, as far as solving $A^TAx=A^Tb$ is concerned, the rank $k$ approximation in LSMR is more accurate than that in LSQR. Recall that \eqref{gammak} measures the quality of the rank $k$ approximation involved in LSQR for the regularization problem \eqref{lsqrreg}. We now estimate the approximation accuracy of $Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^T$ to $A^TA$ in terms of $(\gamma_k^{lsqr})^2$. \begin{theorem}\label{aprod} For $k=1,2,3,\ldots,n-1$, let $\gamma_k^{lsqr}$ be defined as \eqref{gammak}. 
For $k=2,3,\ldots,n-1$ we have \begin{equation}\label{aproderror} (\gamma_k^{lsqr})^2< \|A^TA-Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^T\|\leq \sqrt{1+m_k(\gamma_{k-1}^{lsqr}/\gamma_k^{lsqr})^2}(\gamma_k^{lsqr})^2 \end{equation} with $0< m_k<1$ and $\gamma_0^{lsqr}=\|A\|$. For $k=1,2,\ldots,n-2$, the strict monotonic decreasing property holds: \begin{equation}\label{monolsmr} \|A^TA-Q_{k+2}Q_{k+2}^TA^TAQ_{k+1}Q_{k+1}^T\|< \|A^TA-Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^T\|. \end{equation} \end{theorem} {\em Proof}. Combining \eqref{fkgk} with \eqref{gk} and \eqref{alphagamma}, for $k=2,3,\ldots,n-1$ we obtain from \cite[p.98]{wilkinson} and \cite[p.218]{demmel} that \begin{equation}\label{fgk} \|F_k\|^2=\|G_k\|^4+m^{\prime}_k\alpha_{k+1}^2\beta_{k+1}^2\leq (\gamma_k^{lsqr})^4+m_k(\gamma_{k-1}^{lsqr}\gamma_k^{lsqr})^2 \end{equation} with $0< m^{\prime}_k\leq 1$ and $0< m_k<m^{\prime}_k$, from which the lower and upper bounds in \eqref{aproderror} follow directly. For $k=1$, the equality in \eqref{fgk} is still true. From \eqref{alphagamma}, we have $\alpha_2<\gamma_1^{lsqr},\ \beta_2<\|A\|=\gamma_0^{lsqr}$. Therefore, we obtain $$ (\gamma_1^{lsqr})^4<\|F_1\|^2=\|G_1\|^4+m_1^{\prime} \alpha_2^2\beta_2^2 \leq (\gamma_1^{lsqr})^4+m_1(\gamma_0^{lsqr}\gamma_1^{lsqr})^2, $$ from which it follows that \eqref{aproderror} holds for $k=1$. From \eqref{fk}, we see that $F_{k+1}$ is the matrix obtained by first deleting the first column and row of $F_k$ and then deleting the first zero column and row of the resulting matrix. Therefore, applying the interlacing property of singular values to $F_{k+1}$ and $F_k$ yields $$ \|F_{k+1}\|\leq \|F_k\|. $$ We next prove that the above "$\leq$" is the strict "$<$". Since $B_n^TB_n=Q_n^TA^TAQ_n$ is an unreduced symmetric tridiagonal matrix, its singular values $\sigma_i^2,\ i=1,2,\ldots,n$ are simple. Observe that $F_k$ is the matrix obtained by deleting the first $k$ columns of $B_n^TB_n$ and the first $k$ zero rows of the resulting matrix. Consequently, the singular values $\zeta_i^{(k)},\, i=1,2,\ldots,n-k$ of $F_k$ strictly interlace the simple singular values $\sigma_i^2,\ i=1,2,\ldots,n$ of $B_n^TB_n$ and are thus simple for $k=1,2,\ldots,n-1$. Moreover, the singular values of $F_{k+1}$ strictly interlace those of $F_k$, which means that $\zeta_1^{(k+1)}<\zeta_1^{(k)}$, i.e., $\|F_{k+1}\|<\|F_k\|$, which proves \eqref{monolsmr}. \qquad\endproof \begin{remark} According to the results and analysis in \cite{jia18b}, we have $\gamma_{k-1}^{lsqr}/\gamma_k^{lsqr}\sim \rho$ for severely ill-posed problems, and $\gamma_{k-1}^{lsqr}/\gamma_k^{lsqr}\sim (k/(k-1))^{\alpha}$ for moderately and mildly ill-posed problems. Therefore, the lower and upper bounds of \eqref{aproderror} indicate that $\|A^T A-Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^T\|\sim (\gamma_k^{lsqr})^2$. \end{remark} Finally, let us investigate the relationship between the singular values of the rank $k$ approximation matrices in LSMR and LSQR. From \eqref{matrixlsmr} and \eqref{Bk}, we know that they are the singular values of $(B_k^TB_k,\alpha_{k+1}\beta_{k+1}e_k^{(k)})^T$ and $B_k^TB_k$, respectively. \begin{theorem}\label{lsmrlsqr} Let $(\widetilde{\theta}_1^{(k)})^2> (\widetilde{\theta}_2^{(k)})^2> \cdots>(\widetilde{\theta}_k^{(k)})^2$ be the singular values of $(B_k^TB_k,\alpha_{k+1}\beta_{k+1}e_k^{(k)})^T$. Then for $i=1,2,\ldots,k$ we have \begin{align} \theta_i^{(k)}&<\widetilde{\theta}_i^{(k)}<\sigma_i, \label{lsmrqr}\\ (\widetilde{\theta}_i^{(k)})^2&<(\theta_i^{(k)})^2+\gamma_k^{lsqr} \gamma_{k-1}^{lsqr}. \label{lsmrvalue} \end{align} \end{theorem} {\em Proof.} Observe that $(B_k^TB_k,\alpha_{k+1}\beta_{k+1}e_k^{(k)})^T$ is the matrix obtained by taking the first $k$ columns of $B_n^TB_n$ and deleting the last $n-k-1$ zero rows of the resulting matrix.
As a result, since $\sigma_i,\ i=1,2,\ldots,n$, are simple, the singular values $(\widetilde{\theta}_i^{(k)})^2$ of $(B_k^TB_k,\alpha_{k+1}\beta_{k+1}e_k^{(k)})^T$ strictly interlace the singular values $\sigma_i^2$ of $B_n^TB_n$: $$ \sigma_{n-k+i}^2< (\widetilde{\theta}_i^{(k)})^2 <\sigma_i^2,\ i=1,2,\ldots,k $$ and are simple, which gives the upper bound in \eqref{lsmrqr}. Note that $(B_k^TB_k,\alpha_{k+1}\beta_{k+1}e_k^{(k)})^T(B_k^TB_k,\alpha_{k+1}\beta_{k+1}e_k^{(k)})$ has the $k+1$ eigenvalues $(\widetilde{\theta}_i^{(k)})^4$ and zero, and $(B_k^TB_k)^T(B_k^TB_k)= (B_k^TB_k)^2$ is its $k\times k$ leading principal submatrix and has $k$ simple eigenvalues $(\theta_i^{(k)})^4$. Therefore, $(\theta_i^{(k)})^4$ strictly interlace $(\widetilde{\theta}_i^{(k)})^4$ and zero, which proves the lower bound of \eqref{lsmrqr}. On the other hand, we have $$ (B_k^TB_k,\alpha_{k+1}\beta_{k+1}e_k^{(k)})(B_k^TB_k,\alpha_{k+1} \beta_{k+1}e_k^{(k)})^T = (B_k^TB_k)^2+\alpha_{k+1}^2\beta_{k+1}^2e_k^{(k)}(e_k^{(k)})^T. $$ Recall from \eqref{gk1} that $\alpha_{k+1}<\gamma_k^{lsqr}$ and $\beta_{k+1}<\gamma_{k-1}^{lsqr}$. By standard perturbation theory, we obtain $$ (\widetilde{\theta}_i^{(k)})^4-(\theta_i^{(k)})^4\leq \alpha_{k+1}^2\beta_{k+1}^2 <(\gamma_k^{lsqr}\gamma_{k-1}^{lsqr})^2, \ i=1,2,\ldots,k, $$ from which it follows that \eqref{lsmrvalue} holds. \qquad\endproof \begin{remark}\label{semilsmr} \eqref{lsmrqr} indicates that $\widetilde{\theta}_i^{(k)},\ i=1,2,\ldots,k$ approximate the first $k$ large singular values $\sigma_i$ more accurately than $\theta_i^{(k)}$ do. Particularly, since $\theta_k^{(k)}<\widetilde{\theta}_k^{(k)}$, the first iteration step $k$ such that $\widetilde{\theta}_{k}^{(k)}<\sigma_{k_0+1}$ must be no smaller than the $k$ such that $\theta_{k}^{(k)}<\sigma_{k_0+1}$. A combination of this and the previous analysis on the semi-convergence of CGME and LSQR implies that the semi-convergence of LSMR must occur no sooner than that of LSQR.
On the other hand, \eqref{lsmrvalue} shows that, as an approximation to $\sigma_i$, $\widetilde{\theta}_i^{(k)}$ cannot exceed $\theta_i^{(k)}$ by much, which, together with \eqref{lsmrqr}, implies that $\widetilde{\theta}_i^{(k)}$ and $\theta_i^{(k)}$ are intertwined and $\widetilde{\theta}_i^{(k)}$ cannot be considerably more accurate than $\theta_i^{(k)}$ as approximations to the large singular values of $A$ for $i=1,2,\ldots,k$. \end{remark} \begin{remark} A combination of Theorem~\ref{lsqrmr} and the above two remarks shows that the regularizing effects of LSMR are not inferior to those of LSQR and the best regularized solutions by LSMR are at least as accurate as those by LSQR, that is, LSMR has the same regularization ability as LSQR. Particularly, from the results on LSQR in Section \ref{lsqr}, we conclude that LSMR has the full regularization for severely or moderately ill-posed problems with suitable $\rho>1$ or $\alpha>1$. \end{remark} A final note is that Huang and Jia \cite{huangjia17} have derived the eigendecomposition, i.e., equivalent SVD, filtered expansion of the MINRES iterates for $Ax=b$ with $A$ symmetric; see Theorem~3.1 there. The result can be directly adapted to the LSMR iterates $x_k^{lsmr}$ by keeping in mind that LSMR is mathematically equivalent to MINRES applied to the specific symmetric positive definite linear system $A^TAx=A^Tb$. \section{Numerical experiments}\label{numer} All the computations are carried out in Matlab R2017b on an Intel Core i7-4790K processor with 4.00 GHz CPU and 16 GB RAM with the machine precision $\epsilon_{\rm mach}= 2.22\times10^{-16}$ under the Microsoft Windows 8 64-bit system. We have tested LSQR, CGME, LSMR and MCGME on almost all the 1D and 2D problems from \cite{berisha,hansen07,hansen12} and have observed similar phenomena. To limit the length of the paper, we list only some of them in Table~\ref{tab1}, where each problem takes its default parameter(s).
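Independently of the large test problems, the error-norm comparison of Theorem~\ref{lsqrmr} can be checked directly on a small dense example. The sketch below (our own illustration with arbitrary random data; full reorthogonalization, as used throughout the experiments) forms both rank $k$ approximations to $A^TA$ explicitly and compares their spectral-norm errors:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, kmax = 30, 20, 8
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Lanczos bidiagonalization with full reorthogonalization (as in Algorithm 1)
P = [b / np.linalg.norm(b)]
v = A.T @ P[0]
alphas = [np.linalg.norm(v)]
Q = [v / alphas[0]]
for j in range(kmax):
    u = A @ Q[j] - alphas[j] * P[j]
    for p_i in P:
        u -= (p_i @ u) * p_i
    P.append(u / np.linalg.norm(u))
    v = A.T @ P[j + 1] - np.linalg.norm(u) * Q[j]
    for q_i in Q:
        v -= (q_i @ v) * q_i
    alphas.append(np.linalg.norm(v))
    Q.append(v / np.linalg.norm(v))

AtA = A.T @ A
spec = lambda X: np.linalg.norm(X, 2)   # spectral norm
for k in range(1, kmax):
    Qk = np.column_stack(Q[:k])
    Qk1 = np.column_stack(Q[:k + 1])
    err_lsmr = spec(AtA - Qk1 @ Qk1.T @ AtA @ Qk @ Qk.T)  # approximation in (lsmrrank)
    err_lsqr = spec(AtA - Qk @ Qk.T @ AtA @ Qk @ Qk.T)    # approximation in (lsqrrank)
    assert err_lsmr < err_lsqr          # strict inequality of Theorem (lsqrmr)
```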
We mention that the relatively easy 1D problems are all from \cite{hansen07,hansen12}, where {\sf shaw}, {\sf gravity} and {\sf baart} are severely ill-posed and {\sf phillips}, {\sf heat} and {\sf deriv2} are moderately ill-posed. The 2D problems {\sf blur}, {\sf fanbeamtomo} and {\sf seismictomo} are also from \cite{hansen07,hansen12}, and the other 2D problems are from \cite{berisha}. We notice that for {\sf blur} and {\sf fanbeamtomo}, although the orders $m$ and $n$ are already tens of thousands, their condition numbers $\sigma_1/\sigma_n$ are only 31.5 and 2472, respectively, so that, intuitively, they do not satisfy the definition of a discrete ill-posed problem, whose singular values decay to and cluster at zero so that the ratio $\sigma_1/\sigma_n$ is very large. For each test problem, we compute $b_{true}=Ax_{true}$ and add a Gaussian white noise $e$ with zero mean to $b_{true}$ by prescribing the relative noise level \begin{equation}\label{noiselevel} \varepsilon=\frac{\|e\|}{\|b_{true}\|}.
\end{equation} \graphicspath{{figurecgme/}} \begin{table}[h] \centering \caption{The description of the test problems.} \begin{tabular}{lll} \hline Problem & Description & Size of $m,\ n$ \\ \hline {\sf shaw} &1D image restoration model & $m=n=5000$\\ {\sf gravity} &1D gravity surveying problem & $m=n=5000$\\ {\sf baart} & 1D image deblurring & $m=n=5000$\\ {\sf phillips} &Phillips' famous test problem & $m=n=5000$ \\ {\sf heat} & Inverse heat problem & $m=n=5000 $\\ {\sf deriv2} &Computation of second derivative &$m=n=10000$\\ {\sf AtmosphericBlur10} & Spatially Invariant Atmospheric &$m=n=65536$\\ & Turbulence Blur&\\ {\sf AtmosphericBlur30} & Spatially Invariant Atmospheric &$m=n=65536$\\ & Turbulence Blur&\\ {\sf GaussianBlur420} & Spatially Invariant Gaussian Blur &$m=n=65536$\\ {\sf GaussianBlur422} & Spatially Invariant Gaussian Blur &$m=n=65536$\\ {\sf VariantGaussianBlur1} &Spatially Variant Gaussian Blur & $m=n=99856$ \\ {\sf VariantGaussianBlur2} &Spatially Variant Gaussian Blur & $m=n=99856$ \\ {\sf VariantMotionBlur\_large} &Spatially Variant Motion Blur & $m=n=65536$ \\ {\sf VariantMotionBlur\_medium} &Spatially Variant Motion Blur & $m=n=65536$ \\ {\sf blur} &2D image restoration & $m=n=22500$ \\ {\sf fanbeamtomo} &2D fan-beam tomography problem & $61200\times 14400$ \\ {\sf seismictomo} & 2D seismic tomography & $20000\times 10000$\\ \hline \end{tabular} \label{tab1} \end{table} We use the code {\sf lsqr\_b.m} of \cite{hansen07}, where reorthogonalization is exploited during Lanczos bidiagonalization in order to maintain the numerical orthogonality of $P_{k+1}$ and $Q_k$. We have written the Matlab codes of CGME, LSMR and MCGME based on the same Lanczos bidiagonalization process used in {\sf lsqr\_b.m}. For all the 1D problems and the 2D {\sf seismictomo}, we report the results for $\varepsilon=10^{-3}$; for all the 2D problems except {\sf blur} and {\sf fanbeamtomo}, we report the results for $\varepsilon=5\times 10^{-3}$.
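Generating the noise $e$ with a prescribed relative level \eqref{noiselevel} amounts to scaling a Gaussian vector; a minimal NumPy sketch (with an arbitrary stand-in for $b_{true}$):

```python
import numpy as np

rng = np.random.default_rng(2024)
b_true = rng.standard_normal(1000)      # stands in for A @ x_true
epsilon = 1e-3                          # prescribed relative noise level

e = rng.standard_normal(b_true.size)    # Gaussian white noise, zero mean
e *= epsilon * np.linalg.norm(b_true) / np.linalg.norm(e)
b = b_true + e                          # noisy right-hand side

rel_level = np.linalg.norm(e) / np.linalg.norm(b_true)  # equals epsilon
```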
For several other $\varepsilon\in [10^{-3},5\times 10^{-2}]$, we have the same findings. For {\sf blur} and {\sf fanbeamtomo}, however, we will observe some fundamental distinctions between the convergence features for $\varepsilon$ lying in this practical interval. Figures~\ref{fig1}--\ref{fig6} depict the convergence processes of LSQR, CGME, LSMR and MCGME, and we give some key details, including the iteration $k^*$ at which the semi-convergence of an algorithm occurs and the relative error of the best regularized solution obtained by each algorithm, which is defined by $$ \frac{\|x_{k^*}^{lsqr}-x_{true}\|}{\|x_{true}\|} $$ for LSQR. Similar relative errors are defined for CGME, LSMR and MCGME with the superscript ``$lsqr$'' replaced by ``$cgme$'', ``$lsmr$'' and ``$mcgme$'', respectively. In addition, as a reference standard for the solution accuracy, we depict the semi-convergence process of the TSVD method for {\sf blur} and {\sf seismictomo}, and report the relative errors of the best TSVD regularized solutions $x_{k_0}^{tsvd}$ with $k_0$ the transition point at which the semi-convergence of TSVD occurs. For the other nine larger 2D problems, we cannot compute the SVDs of the matrices due to lack of memory on our computer. We mention that for the first six 1D test problems we have found that the best regularized solutions obtained by the TSVD method have the same accuracy as those by LSQR, where $k_0$ is very small relative to $n$ and correspondingly $k^*\leq k_0$ in all cases. We omit the results on the 1D problems obtained by the TSVD method.
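The semi-convergence of TSVD described above is easy to reproduce on a small synthetic example. The sketch below (our own toy construction with prescribed singular values $\sigma_i=2^{-i}$, not one of the test problems of Table~\ref{tab1}) computes the relative errors of all TSVD solutions and locates the transition point $k_0$:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 32
# Synthetic severely ill-posed problem with singular values 2^{-i}
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 2.0 ** -np.arange(n)
A = U @ (sigma[:, None] * V.T)
x_true = V @ np.sqrt(sigma)        # mildly decaying SVD coefficients
b_true = A @ x_true
e = rng.standard_normal(n)
e *= 1e-3 * np.linalg.norm(b_true) / np.linalg.norm(e)
b = b_true + e

# TSVD regularized solutions x_k^{tsvd} and their relative errors
coef = (U.T @ b) / sigma
errs = [np.linalg.norm(V[:, :k] @ coef[:k] - x_true) / np.linalg.norm(x_true)
        for k in range(1, n + 1)]
k0 = int(np.argmin(errs)) + 1      # transition point of the semi-convergence
```

The error first decreases as more SVD components of the signal are captured and then grows sharply once the amplified noise components $(u_i^Te)/\sigma_i$ dominate, so the minimizer $k_0$ lies strictly in the interior.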
\begin{figure} \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=6.0cm,height=4.5cm]{shaw5000four.eps}} \centerline{(a)} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=6.0cm,height=4.5cm]{gravityfour.eps}} \centerline{(b)} \end{minipage} \vfill \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=6.0cm,height=4.5cm]{baart5000four.eps}} \centerline{(c)} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=6.0cm,height=4.5cm]{phillipsfour.eps}} \centerline{(d)} \end{minipage} \vfill \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=6.0cm,height=4.5cm]{heatfour.eps}} \centerline{(e)} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=6.0cm,height=4.5cm]{deriv2four.eps}} \centerline{(f)} \end{minipage} \caption{1D problems with the relative noise level $\varepsilon=10^{-3}$.} \label{fig7} \end{figure} \begin{figure} \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=6.0cm,height=4.5cm]{atmosphericblur10a.eps}} \centerline{(a)} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=6.0cm,height=4.5cm]{atmosphericblur30a.eps}} \centerline{(b)} \end{minipage} \caption{{\rm (a)}: {\sf AtmosphericBlur10} and {\rm (b)}: {\sf AtmosphericBlur30} with $\varepsilon=5\times 10^{-3}$.} \label{fig1} \end{figure} \begin{figure} \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=6.0cm,height=4.5cm]{gaussianblur420a2.eps}} \centerline{(a)} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=6.0cm,height=4.5cm]{gaussianblur422a2.eps}} \centerline{(b)} \end{minipage} \caption{{\rm (a)}: {\sf GaussianBlur420} and {\rm (b)}: {\sf GaussianBlur422} with $\varepsilon=5\times 10^{-3}$}\label{fig2} \end{figure} \begin{figure} \begin{minipage}{0.48\linewidth} 
\centerline{\includegraphics[width=6.0cm,height=4.5cm]{variantmotionlargea.eps}} \centerline{(a)} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=6.0cm,height=4.5cm]{variantmotionmediuma.eps}} \centerline{(b)} \end{minipage} \caption{{\rm (a)}: {\sf VariantMotionBlur\_large} and {\rm (b)}: {\sf VariantMotionBlur\_medium} with $\varepsilon=5\times 10^{-3}$.} \label{fig3} \end{figure} \begin{figure} \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=6.0cm,height=4.5cm]{blur150four.eps}} \centerline{(a)} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=6.0cm,height=4.5cm]{blur150tsvd.eps}} \centerline{(b)} \end{minipage} \caption{{\sf blur}.} \label{fig4} \end{figure} \begin{figure} \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=6.0cm,height=4.5cm]{fanbeamtomofour0001.eps}} \centerline{(a)} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=6.0cm,height=4.5cm]{fanbeamtomofour005.eps}} \centerline{(b)} \end{minipage} \caption{{\sf fanbeamtomo} with $\varepsilon=10^{-3}$ and $5\times 10^{-2}$.} \label{fig5} \end{figure} \begin{figure} \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=6.0cm,height=4.5cm]{seismictomofour0001.eps}} \centerline{(a)} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=6.0cm,height=4.5cm]{seismictomotsvd0001.eps}} \centerline{(b)} \end{minipage} \caption{{\sf seismictomo} with $\varepsilon=10^{-3}$.} \label{fig6} \end{figure} We now comment on the figures and the related details in order. Firstly, for all the problems in Table~\ref{tab1}, the semi-convergence of CGME occurs earlier than that of LSQR and can be much earlier. This confirms Theorem~\ref{semi}. The much earlier semi-convergence of CGME indicates that $\bar{\theta}_k^{(k)}<\sigma_{k_0+1}$ occurs much earlier for CGME than $\theta_k^{(k)}<\sigma_{k_0+1}$ for LSQR.
Secondly, for all the problems, the best regularized solutions $x_{k^*}^{cgme}$ are correspondingly considerably less accurate than $x_{k^*}^{lsqr}$, except for {\sf blur} in Figure~\ref{fig4}, where the best regularized solution by CGME is almost as accurate as those by LSQR, LSMR and MCGME. For all the 1D problems but {\sf baart} and the 2D problem {\sf fanbeamtomo} with $\varepsilon=10^{-3}$, the relative errors of the best regularized solutions by CGME are twice to five times larger than those by the other three methods, indicating that the regularization ability of CGME is considerably inferior to that of the other three methods, given that the relative errors by LSQR, LSMR and MCGME themselves are only roughly $0.01\sim 0.1$; see Figures~\ref{fig7} (a) and \ref{fig5} (a). These results confirm Theorems~\ref{cgmeappr}--\ref{interlace} and the analysis on them. We will make more comments on Figure~\ref{fig4} later. Thirdly, for each of the problems, by a careful observation and comparison, we have found that $x_{k}^{cgme}$ is at least as accurate as, and often more accurate than, $x_{k}^{lsqr}$ until the semi-convergence of CGME occurs, after which LSQR continues improving its iterates until the occurrence of its own semi-convergence, as is clearly seen from Figures~\ref{fig7}--\ref{fig6}. These results justify our arguments on \eqref{accurcomp}. Fourthly, for each of the 2D problems, the best regularized solution $x_{k^*}^{lsmr}$ is at least as accurate as $x_{k^*}^{lsqr}$, and the semi-convergence of LSMR always occurs no sooner, and actually later, than that of LSQR. We notice, however, that the relative error of $x_{k^*}^{lsmr}$ is only slightly smaller than that of $x_{k^*}^{lsqr}$, so that there is little difference between them. For all the 1D problems, the semi-convergence of LSMR and LSQR occurs exactly at the same iterations, and the best regularized solutions obtained by them have the same accuracy.
These results confirm Remark~\ref{semilsmr} and justify that LSMR has the same regularization ability as LSQR. Fifthly, for each of the test problems, MCGME improves CGME substantially. As a matter of fact, for the 1D problems, the best regularized solutions by MCGME have the same accuracy as those by LSQR and LSMR; for the 2D problems, the best regularized solutions $x_{k^*}^{mcgme}$ are almost as accurate as $x_{k^*}^{lsqr}$ and $x_{k^*}^{lsmr}$. Sixthly, as we have stated, {\sf blur} and {\sf fanbeamtomo} are quite well conditioned. With the relatively small $\varepsilon=10^{-3}$, we observe from Figures~\ref{fig4}--\ref{fig5} that there is no semi-convergence phenomenon for LSQR, LSMR and MCGME as well as the TSVD method. This means that $e$ does not play a part in regularization and these methods solve these two problems as if they were ordinary linear systems. Furthermore, it is clear from the figures that the relative errors of the regularized solutions obtained by LSQR, LSMR and MCGME stabilize after 30 iterations for {\sf blur} and 80 iterations for {\sf fanbeamtomo}, respectively. Figures~\ref{fig4} (a) and \ref{fig5} (a) seem to indicate that CGME has no semi-convergence phenomenon for the {\em square} {\sf blur} and the given $\varepsilon$ but has one for the {\em rectangular} {\sf fanbeamtomo}. However, this semi-convergence is deceptive and is not caused by the noise $e$: for the rectangular {\sf fanbeamtomo}, \eqref{thetak2}, its proof and the analysis on it state that the smallest singular value $\bar{\theta}_k^{(k)}$ of $\bar{B}_k$ can be arbitrarily small and approaches zero as $k$ increases. As we have elaborated, $(\bar{\theta}_k^{(k)})^2$ approaches the eigenvalue zero of $AA^T$ as $k$ increases.
As a result, the projected problem $\bar{B}_k y_k^{cgme}=\beta e_1^{(k)}$ involved in CGME can become even worse conditioned than \eqref{eq1} itself as $k$ increases for $A$ rectangular, causing $\|x_k^{cgme}\|$, which equals $\|Q_k y_k^{cgme}\|=\|y_k^{cgme}\|$, and the relative error $\frac{\|x_k^{cgme}-x_{true}\|}{\|x_{true}\|}$ to tend to infinity as $k$ increases. This can also be seen from \eqref{filter}, where we can easily check that $|f_k^{(k,cgme)}|\rightarrow \infty$ as $k$ increases since $\sigma_k$ remains bounded away from zero but $\bar{\theta}_k^{(k)}\rightarrow 0$ as $k$ increases. In contrast, the smallest singular values of the projection matrices are always bounded from below by either $\sigma_n$ for LSQR (cf. \eqref{thetak1}) and MCGME (cf. \eqref{bkbarbk}) or $\sigma_n^2$ for LSMR (cf. \eqref{lsmrqr}), no matter whether $A$ is rectangular or square. This is why CGME exhibits a seeming semi-convergence phenomenon for rectangular $A$ while the other solvers do not. In the meantime, we see that the best regularized solution by CGME is substantially less accurate than those by the other three algorithms for {\sf fanbeamtomo}. For the square {\sf blur} with $\varepsilon=10^{-3}$, we see that the four Krylov solvers and the TSVD method do not exhibit semi-convergence and compute solutions with very comparable accuracy. These results and this analysis tell us that CGME is definitely not a good choice when $A$ is rectangular. Seventhly, if the relative noise level $\varepsilon$ is increased to $\varepsilon=0.05$, the semi-convergence of LSQR, LSMR and MCGME occurs for {\sf fanbeamtomo}, as is seen from Figure~\ref{fig5}. We have also observed the semi-convergence of the four algorithms and the TSVD method for {\sf blur} with $\varepsilon=0.05$. We find that the best regularized solutions by LSQR, LSMR and MCGME have very comparable accuracy but CGME computes a less accurate best regularized solution. We omit the corresponding figure.
For the test problems, we have also observed that the semi-convergence of the TSVD method occurs much later than that of the four Krylov solvers, i.e., $k^*\ll k_0$. \section{Conclusions}\label{conclu} For a general large-scale ill-posed problem \eqref{eq1}, only iterative solvers are computationally viable. Among them, the Krylov solvers LSQR, CGLS, CGME and LSMR have been commonly used. In terms of the accuracy of the rank $k$ approximation to $A$ in LSQR, in this paper we have derived accurate estimates for the accuracy of the rank $k$ approximations to $A$ and $A^TA$ that are involved in CGME and LSMR, respectively. We have made detailed analyses of the approximation behavior of the singular values of the projection matrices associated with CGME and LSMR. In the meantime, we have derived the filtered SVD expansion of the CGME regularized iterates. In conclusion, we have shown that the regularization ability of CGME is generally inferior to that of LSQR and that the semi-convergence of CGME occurs no later than that of LSQR. We have extracted a best possible rank $k$ approximation to $A$ from the rank $(k+1)$ approximation $P_{k+1}P_{k+1}^TA$, and have shown why such an approximation is as accurate as the rank $k$ approximation in LSQR. Based on this analysis, as a by-product, we have proposed a modified CGME (MCGME) method that improves CGME substantially and has the same regularization ability as LSQR. We have substantially improved a fundamental result, Theorem 9.3 in \cite{halko11}, which gives a bound for the approximation accuracy of the truncated rank $k$ SVD approximation to $A$ generated by randomized algorithms but has lacked a complete explanation of its considerable overestimation. Our new bounds are unconditionally superior to theirs and reveal how the truncation step affects the accuracy of the truncated rank $k$ approximation to $A$. In the meantime, we have proved that LSMR has the same regularization ability as LSQR and that the semi-convergence of LSMR occurs no sooner than that of LSQR.
Particularly, we have shown that LSMR has the full regularization for severely and moderately ill-posed problems with suitable $\rho>1$ and $\alpha>1$. We have carried out detailed numerical experiments to confirm our regularization results on CGME and LSMR. We have also numerically demonstrated that the best regularized solutions by MCGME are very comparable to those by LSQR.
\section*{Introduction} Evaluating the expressive power of logical formalisms is an important task in theoretical computer science. It has many applications in numerous fields such as complexity theory, verification or databases. In the latter case, it often amounts to determining how difficult it is to compute a query written in a given language. In this vein, determining which fragments of first-order logic define tractable query languages has received much attention. It is well known that, over an arbitrary signature, computing a first-order query can be done in time polynomial in the size of the structure (and even in logarithmic space and $AC^0$). However, the exponent of this polynomial depends heavily on the formula size (more precisely, on the number of variables). Nevertheless, for particular kinds of structures or formulas the complexity bound can be substantially improved. In~\cite{Seese-96}, it is proved that checking whether a given first-order sentence $\varphi$ is true (i.e., the Boolean query or model-checking problem) in a structure ${\cal S}$ all of whose relations are of bounded degree can be done in linear time in the size of ${\cal S}$. The method used to prove this result relies on old model-theoretic techniques (see~\cite{Hanf-65}). It is perfectly constructive but hardly implementable. Later, still using such techniques, several other tractability results have been shown for the complexity of the model-checking of first-order formulas over structures or formulas that admit nice (tree) decomposition properties (see~\cite{FlumFG-02}). In this paper, a \textit{bounded degree structure} is either a relational structure all of whose relations are of bounded degree or a functional structure involving bijective functions only. The main goal of this paper is to revisit the complexity of the evaluation problem of not necessarily Boolean first-order queries over structures of bounded degree. We regard query evaluation as a \textit{dynamical process}.
Instead of considering the cost of the evaluation globally, we measure the delay between consecutive tuples, i.e., query problems are viewed as enumeration problems. This latter kind of problem appears widely in many areas of computer science (see for example~\cite{EiterG-95,EiterGM-03,BorosGKM-00,KavvadiasSS-00,Goldberg-94} or~\cite{JohnsonYP-88} for basic complexity notions on enumeration). However, to our knowledge, the relation to query evaluation has not been investigated so far. We prove that any query on bounded degree structures is $\textsc{Constant-Delay$_{lin}$}$, i.e., can be computed by an algorithm that consists of two separate parts: a precomputation step whose time complexity is linear in the size of the structure, followed by an enumeration phase that outputs all the solution tuples one by one with a constant (i.e., depending on the size of the formula only) delay between two successive tuples. Seen as a global process, this implies that queries on bounded degree structures can be evaluated in total time $O(f(|\varphi|).(|{\cal S}|+|\varphi({\cal S})|))$ and space $O(f(|\varphi|).|{\cal S}|)$ where $|{\cal S}|$ is the size of the structure ${\cal S}$, $|\varphi|$ is that of the formula $\varphi$, $|\varphi({\cal S})|$ is the size of the result $\varphi({\cal S})$ of the query and $f$ is some function. As a corollary, it implies that the time complexity of the model-checking problem is $O(f(|\varphi|).|{\cal S}|)$, thus providing an alternative proof of the result of~\cite{Seese-96}. A particularity of the main method used in this paper is that it does not rely on model-theoretic techniques, as previous results of the same kind do (see, for example,~\cite{Seese-96} or~\cite{Lindell-04} for a generalization to least-fixed point formulas). Instead, we develop a quantifier elimination method suitable for bijective unary functions and apply it to obtain our complexity bound. An advantage of this method is that it is effective and easily implementable.
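To make the notion concrete, here is a toy Python sketch of ours (not part of the paper's formalism) of a $\textsc{Constant-Delay$_{lin}$}$ enumeration for the atomic query $E(x,y)$ over a directed graph of bounded degree: the precomputation is linear in the size of the structure, after which each successive tuple is produced with delay bounded by the maximal degree, a constant.

```python
def constant_delay_edges(n, edges):
    """Enumerate the tuples satisfying E(x, y), with linear-time
    precomputation and constant delay between consecutive outputs
    (assuming the degree of E is bounded by a constant)."""
    # Precomputation: linear in the size of the structure.
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    # Precompute the vertices with at least one outgoing edge, so the
    # enumeration never scans isolated vertices between two outputs.
    support = [u for u in range(n) if adj[u]]
    # Enumeration: the work between two yields is O(max degree) = O(1).
    for u in support:
        for v in adj[u]:
            yield (u, v)

out = list(constant_delay_edges(5, [(0, 1), (2, 0), (2, 3)]))
```

Skipping isolated vertices during precomputation is what keeps the delay constant: scanning them between outputs would make the delay linear in the size of the domain in the worst case.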
Another advantage is that our paper is completely self-contained. Besides, the $\textsc{Constant-Delay$_{lin}$}$ class is an interesting notion by itself and is, to our knowledge, a new complexity class for enumeration problems: as proved for linear time complexity (the class $\mbox{\rm DLIN}$ studied in~\cite{GrandjeanS-02}), it can be shown that $\textsc{Constant-Delay$_{lin}$}$ is a robust class and is in some sense the minimal robust complexity class of enumeration problems. The paper is organized as follows. First, basic definitions are given in Section~\ref{definitions}. In particular, in Subsection~\ref{definition constant delay}, we recall definitions about enumeration problems, introduce the notion of constant delay computation and prove some basic properties of it. In Section~\ref{First-order queries on bijective structures}, the quantifier elimination method is introduced and applied to the evaluation problem of first-order formulas over functional structures all of whose functions are bijective. In Section~\ref{SEC degre borne}, using classical logical interpretation techniques, the latter problem is reduced in linear time to the first-order query problem over structures of bounded degree, thus providing the same bound for it. Finally, in Subsection~\ref{Complexity of subgraphs problems}, consequences for the complexity of the subgraph (resp. induced subgraph) isomorphism problem are given. \section{Definitions}~\label{definitions} \subsection{Logical definitions and query problems} We assume the reader is familiar with the basic notions of first-order logic. A \textit{signature} $\sg$ is a finite set of relational and functional symbols of given arities ($0$-ary function symbols are constant symbols). The arity of $\sg$ is the maximal arity of its symbols. The set $\sg$ is called \textit{unary functional} if all its symbols are of arity bounded by one.
A (finite) $\sigma\mbox{-structure}$ consists of a domain $D$ together with an interpretation of each symbol of $\sg$ over $D$ (the same notation is used here for each signature symbol and its interpretation). In this paper, we will distinguish between two kinds of signatures, on which semantic restrictions on their possible interpretations are imposed: \begin{itemize} \item Either $\sg$ is made of constant symbols, monadic (i.e., unary) relation symbols and unary function symbols whose interpretations are taken among bijective functions (i.e., permutations) only, \item Or $\sg$ contains relation symbols only, whose degrees are bounded by some given constant (detailed definitions about bounded degree relations are postponed to Section~\ref{SEC degre borne}). \end{itemize} Structures defined by either of these semantic restrictions will be called \textit{bounded degree structures}. In what follows, we make precise the notions and problems concerning first-order logic over bijective structures. \begin{definition} Let $\sg = \{ \tu c, \tu U, \su{f}{k} \}$ be a signature consisting of constant symbols $c_i\in \tu c$, of monadic predicates $U_i\in \tu U$ and of unary function symbols $f_i$, $i=1,\dots,k$. A {\em bijective $\sigma\mbox{-structure}$} is a $\sigma\mbox{-structure}$ ${\cal S}$ of the form ${\cal S} = \st{D}{\tu c, \tu U, \su{f}{k}}$ where each $f_i$ is a permutation on domain $D$. \end{definition} One of the main results of this paper provides a quantifier elimination method over bijective structures. As is usual for this kind of result, the elimination will be done in a richer language. The following definition is required. \begin{definition} A {\em bijective term} $\t (x)$ is of the form $f_1^{\eps_1}\dots f_l^{\eps_l}(x)$ where $l\geq 0$, $x$ is a variable and each $f_i^{\eps_i}$ is either the function symbol $f_i$ or its inverse $f_i^{-1}$. The term $\t^{-1}(x)$ denotes the inverse of the term $\t(x)$.
A {\em bijective atomic formula} is of one of the following four forms, where $\t (x)$ and $\t_1 (x)$ are bijective terms: \begin{itemize} \item either a \textit{bijective equality} $\t (x) = \t_1 (y)$, \item or $\t (x) = c$ where $c$ is a constant symbol, \item or $U(\t (x))$ where $U$ is a monadic predicate, \item or a \textit{cardinality statement} $\exk{k}{x}\Psi(x)$ where the quantifier $\exk{k}{x}$ is interpreted as ``there exist at least $k$ values of $x$ such that'' and $\Psi$ is a Boolean combination of bijective atoms $\alpha (x)$ over the variable $x$ only. \end{itemize} \end{definition} As the inverse of each function symbol can be used, each bijective equality $\t (x) = \t_1 (y)$ can be rephrased as $\t_2 (x) = y$ where $\t_2 (x)=\t_1^{-1}\t(x)$. A {\em bijective literal} is a bijective atomic formula or its negation. \begin{definition} The set $\mathbf{FO_{Bij}}$ of \textit{bijective first-order formulas} is the set of first-order formulas built over bijective atomic formulas of some unary signature $\sg$. \end{definition} Let $\barre{t}=(t_1,\dots,t_k)$ be a $k$-tuple of variables and $\varphi(\barre{t})$ and $\varphi'(\barre{t})$ be two $\sg$-formulas with free variables $\barre{t}$. Formulas $\varphi(\barre{t})$ and $\varphi'(\barre{t})$ are \textit{equivalent} if for all $\sigma\mbox{-structures}$ ${\cal S}$ and all tuples $\barre{a}$ of elements of the domain with $|\barre{a}|=|\barre{t}|$ it holds that: \[ ({\cal S}, \barre{a}) \models \varphi(\barre{t}) \mbox{ iff } ({\cal S}, \barre{a}) \models \varphi'(\barre{t}). \] In this paper, query problems are considered for specific classes of first-order formulas (and structures). One of the specific problems under consideration here is the following.
\medskip \noindent $\query{\mathbf{FO_{Bij}}}$\\ \noindent \textbf{Input:} a unary functional signature $\sg$, a bijective $\sg$-structure ${\cal S}$ and a first-order bijective $\sg$-formula $\varphi(\tu x)$ with $k$ free variables $\tu x = (x_1,\dots,x_k)$\\ \noindent \textit{Parameter:} $\varphi$ \\ \noindent \textbf{Output:} $\varphi({\cal S}) = \{\tu a \in D^k : ({\cal S}, \tu a) \models \varphi(\tu x)\}$. \medskip The Boolean query problem (the subproblem where $k=0$) is often called a model-checking problem. It will be denoted by $\modelchecking{\mathbf{FO_{Bij}}}$ here. As suggested by the formulation of the query problem, we are interested in its parameterized complexity and the complexity results given here consider the size of the query formula $\varphi$ as the parameter (see~\cite{DowneyF-99}). \subsection{Model of computation and measure of time} The model of computation used in this paper is the Random Access Machine (RAM) with uniform cost measure (see~\cite{AhoHU-74, GrandjeanS-02, GrandjeanO-04, FlumFG-02}). As query problems are the main subject of this paper, instances of problems always consist of two kinds of objects: first-order structures and first-order formulas. The \textit{size} $|I|$ of an object $I$ is the number of registers used to store $I$ in the RAM. If $E$ is the set $[n]$, $|E|=card(E)=n$. If $R\subseteq D^k$ is a $k$-ary relation over domain $D$, with $|D|=card(D)$, then $|R|=k.card(R)$: all the tuples $(x_1,\dots,x_k)$ for which $R(x_1,\dots,x_k)$ holds must be stored, each in a separate $k$-tuple of registers. Similarly, if $f$ is a unary function from $D$ to $D$, all values $f(x)$ must be stored and $|f|=|D|$. If $\varphi$ is a first-order formula, $|\varphi|$ is the number of occurrences of variables, relation or function symbols and syntactic symbols: $\exists, \forall, \wedge, \vee, \neg, =, "(", ")", ","$. For example, if $\varphi \equiv \exists x \exists y \ R(x,y) \wedge \neg (x = y)$ then $|\varphi|=17$. 
\bigskip All the problems we consider in this paper are parameterized problems: they take as input a list of objects made of a $\sigma$-structure ${\cal S}$ and a formula $\varphi$, and produce as output the result $\varphi({\cal S})$ of the query. Since, in practice, the structure ${\cal S}$ is much larger than the formula $\varphi$, i.e., $|{\cal S}|\gg|\varphi|$, the latter, $|\varphi|$, is considered here as the parameter. A problem \textbf{P} is said to be computable in time $f(|\varphi|).T(|{\cal S}|,|\varphi({\cal S})|)$ for some function $f: N \rightarrow R^+$ if there exists a RAM that computes \textbf{P} in time (i.e., the number of instructions performed) bounded by $f(|\varphi|).T(|{\cal S}|,|\varphi({\cal S})|)$ using space (i.e., addresses and register contents) also bounded by $f(|\varphi|).T(|{\cal S}|,|\varphi({\cal S})|)$. The notation $O_{\varphi}(T(|{\cal S}|,|\varphi({\cal S})|))$ is used when one does not want to specify the function $f$. It is also assumed that the function $T$ is at least linear and at most polynomial, i.e., $T(n,p) = \Omega (n+p)$ and $T(n,p) = (n+p)^{O(1)}$. To give an example and to relate our complexity measure to the logarithmic cost measure, in case $T$ is linear, i.e., $T(n,p)=n+p$, the number of bits manipulated by the RAM is indeed linear in the number of bits needed to encode the input and the output. \subsection{Enumeration algorithms and constant delay computation}~\label{definition constant delay} In this section, $A$ is a binary predicate. Enumeration problems will be defined by reference to such a predicate. \begin{definition} Given a binary relation $A$, the enumeration function $\enumpb{A}$ associated to $A$ is defined as follows. For each input $x$: \[ \enumpb{A(x)} = \{y \ : \ A(x,y) \mbox{ holds } \} \] \end{definition} \begin{remark} Query problems may evidently be seen as enumeration problems.
The input $x$ is made of the structure ${\cal S}$ and the formula $\varphi (\tu x)$, a witness $y$ is a tuple $\tu a$, and evaluating the predicate $A$ amounts to checking whether $({\cal S},\tu a) \models \varphi(\tu x)$. \end{remark} One may consider the delay between two consecutive solutions as an important point in the complexity of enumeration problems. In~\cite{JohnsonYP-88}, several complexity measures for enumeration have been defined. One of the most interesting is that of \textit{polynomial delay} algorithms. An algorithm ${\cal A}$ is said to run within a \textit{polynomial delay} if there is no more than a (fixed) polynomial delay between two consecutive solutions it outputs (and no more than a polynomial delay to output the first solution and between the last solution and the end of the algorithm). \textit{Polynomial delay} is often considered as the right notion of feasibility for enumeration problems. In this paper, we introduce a much stronger complexity measure that forces a \textit{constant delay} between outputs. \begin{definition} An enumeration problem $\enumpb{A}$ is \textit{constant delay with linear precomputation}, which is written $\enumpb{A}\in \textsc{Constant-Delay$_{lin}$}$, if there exists a RAM algorithm ${\cal A}$ which, for any input $x$, enumerates all the elements of the set $\enumpb{A(x)}$ with a constant delay, i.e., that satisfies the following properties. \begin{enumerate} \item ${\cal A}$ uses linear input space, i.e., space $O(|x|)$ \item ${\cal A}$ can be decomposed into the two following successive steps \begin{enumerate} \item $\precompAlgo{{\cal A}}$ which runs some precomputations in time $O(|x|)$, and \item $\enumAlgo{{\cal A}}$ which outputs all solutions within a delay bounded by some constant $\delayAlgo{{\cal A}}$. This delay applies between two consecutive solutions and after the last one.
\end{enumerate} \end{enumerate} \end{definition} Allowing polynomial time precomputations (and polynomial space) instead of linear time, one may define a larger class called $\textsc{Constant-Delay$_{poly}$}$. \begin{remark} As proved for the linear time class $\mbox{\rm DLIN}$ (see~\cite{GrandjeanS-02}), it can be shown that the enumeration complexity class $\textsc{Constant-Delay$_{lin}$}$ is robust, i.e., is not modified if the set of allowed operations and statements of the RAMs is changed in many ways. This is because linear time (and linear space) precomputations give the ability to precompute the tables of new allowed operations. \end{remark} The following result is immediate; it evaluates the total time cost of any constant delay algorithm. \begin{lemma}~\label{LEM total time} Let $\enumpb{A}$ be an enumeration problem belonging to $\textsc{Constant-Delay$_{lin}$}$. Then, for any input $x$, the set $\enumpb{A(x)}$ can be computed in $O(|x| + |\enumpb{A(x)}|)$ total time, i.e., in time linear in $|Input|+|Output|$, and in linear input space $O(|x|)$. \end{lemma} \begin{remark} In the query problem we consider, the size of $\varphi$ is considered as a parameter. Then, $|x|=|{\cal S}|$ and the constant delay depends on $|\varphi|$ only. \end{remark} The two lemmas below give basic properties of constant delay computations. \begin{lemma}~\label{LEM linear time = constant delay} An enumeration problem $\enumpb{A}$ computable in linear time $O(|x|)$ for any input $x$ belongs to $\textsc{Constant-Delay$_{lin}$}$. \end{lemma} \begin{proof} For any input $x$, one only has to compute the set $\enumpb{A(x)}$, sort it and eliminate the possible multiple occurrences of solutions. These steps can be viewed as the precomputation part of the algorithm, running in time $O(|x|)$. Then, one enumerates one by one the solutions of the sorted list. This is obviously a constant delay process.
\end{proof} \begin{lemma}~\label{LEM disjoint union} Let $\enumpb{A}$ and $\enumpb{B}$ be two disjoint enumeration problems, i.e., such that, for any input $x$, $\enumpb{A(x)} \cap \enumpb{B(x)} = \emptyset$. Let $\enumpb{(A\cup B)}$ be the union of these two enumeration problems defined by, for any $x$: \[ \enumpb{(A\cup B)(x)} = \{y \ : \ A(x,y) \mbox{ or } B(x,y) \mbox{ holds }\}. \] If $\enumpb{A}$ and $\enumpb{B}$ belong to $\textsc{Constant-Delay$_{lin}$}$ then the problem $\enumpb{A\cup B}$ also belongs to $\textsc{Constant-Delay$_{lin}$}$. \end{lemma} \begin{proof} Due to the disjointness of the two solution sets for any input, the proof is immediate. Given ${\cal A}$ and ${\cal B}$ the algorithms for problems $\enumpb{A}$ and $\enumpb{B}$, the following algorithm correctly solves the problem $\enumpb{A\cup B}$. \begin{algorithm} \caption{Constant delay algorithm for $\enumpb{A\cup B}$} \begin{algorithmic}[1] \State \textbf{Input:} $x$ \State $\precompAlgo{{\cal A}}$; $\precompAlgo{{\cal B}}$ \State $\enumAlgo{{\cal A}}$; $\enumAlgo{{\cal B}}$ \end{algorithmic} \end{algorithm} Obviously, the delay is bounded by the maximum of $\delayAlgo{{\cal A}}$ and $\delayAlgo{{\cal B}}$. \end{proof} \begin{remark} Note that the disjointness condition in the Lemma above is not always necessary. In case there exist a total ordering $\leq$ and constant delay enumeration algorithms for $\enumpb{A}$ and $\enumpb{B}$ that enumerate solutions with respect to this unique ordering $\leq$, then it is easily seen that $\enumpb{A\cup B}$ also belongs to $\textsc{Constant-Delay$_{lin}$}$ even if the problems are not disjoint. \end{remark} \section{First-order queries on bijective structures}~\label{First-order queries on bijective structures} \subsection{Quantifier elimination on bijective structures} The key result of this paper is a quantifier elimination method for $\mathbf{FO_{Bij}}$ formulas.
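Before turning to the elimination procedure, the constant-delay combinators of Lemma~\ref{LEM disjoint union} and the remark following it can be made concrete. In the sketch below, each enumeration phase is modelled as a Python generator; this generator-based modelling, and the function names, are illustrative assumptions, not part of the formal RAM model of the text.

```python
from typing import Iterator

def enum_union(enum_a: Iterator, enum_b: Iterator) -> Iterator:
    """Union of two *disjoint* enumerations, as in Lemma [disjoint union]:
    exhaust the first enumeration phase, then the second; the delay is the
    maximum of the two delays."""
    yield from enum_a
    yield from enum_b

def enum_merge(enum_a: Iterator, enum_b: Iterator) -> Iterator:
    """Ordered merge from the remark: if both enumerations follow a common
    total order, duplicates can be skipped with constant extra work per
    step, so disjointness is not needed.  Assumes solutions are never
    None (None is used as an end-of-enumeration sentinel)."""
    a, b = next(enum_a, None), next(enum_b, None)
    while a is not None or b is not None:
        if b is None or (a is not None and a < b):
            yield a
            a = next(enum_a, None)
        elif a is None or b < a:
            yield b
            b = next(enum_b, None)
        else:  # a == b: output the common solution once, advance both
            yield a
            a, b = next(enum_a, None), next(enum_b, None)

print(list(enum_union(iter([1, 2]), iter([5, 7]))))        # [1, 2, 5, 7]
print(list(enum_merge(iter([1, 3, 5]), iter([2, 3, 6]))))  # [1, 2, 3, 5, 6]
```

Both combinators do constant work per output, so a constant delay for the input enumerations yields a constant delay for the combined one.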
\begin{theorem}[quantifier elimination for $\mathbf{FO_{Bij}}$]~\label{TH elimination quantificateur} Each bijective \textit{first-order} formula is equivalent to a Boolean combination of bijective \textit{atomic} formulas. More precisely, let $\varphi(\barre{t})\in \mathbf{FO_{Bij}}$ be a formula with free variables $\barre{t}$; then, there exists a Boolean combination of bijective \textit{atomic} formulas $\varphi'(\barre{t})$ over the same free variables $\barre{t}$ equivalent to $\varphi(\barre{t})$. In the special case where $\varphi$ is closed (i.e., has no free variables), $\varphi$ is equivalent to a Boolean combination of cardinality statements. \end{theorem} \begin{proof} As $\forall x \varphi \equiv \neg (\ex x \neg \varphi)$, we only have to consider the elimination of existentially quantified variables. W.l.o.g., we consider formulas in disjunctive normal form and, as the existential quantifier commutes with disjunction, we may consider the case of the elimination of a single existentially quantified variable $y$ in a formula of the form: \begin{equation} \varphi(\barre{x})\equiv \ex y \ (\alpha_1 \wedge \dots \wedge \alpha_r) \end{equation} \noindent where each $\alpha_i$ is a bijective literal among variables $\barre{x}$ and $y$. Literals depending on $\barre{x}$ only and cardinality statements need not be considered since they do not involve $y$, so $\varphi(\barre{x})$ may be assumed to be of the following form: \begin{equation} \varphi(\barre{x})\equiv \ex y \ [\psi(y) \wedge y =_{\eps_1}\t_1(x_{i_1}) \wedge \dots \wedge y =_{\eps_k}\t_k(x_{i_k}) ] \end{equation} \noindent where each $y =_{\eps_j}\t_j(x_{i_j})$ with $\eps_{j}=\pm 1$ is $y = \t_j(x_{i_j})$ if $\eps_{j}= 1$ or $y \neq \t_j(x_{i_j})$ if $\eps_{j}= -1$. To eliminate the quantified variable $y$, two cases may occur. Suppose first that there is at least one index $j$ such that $\eps_{j}= 1$.
In this case, the equality $y = \t_j(x_{i_j})$ is used to replace each occurrence of $y$ in the formula by the term $\t_j(x_{i_j})$. The process results in a new formula $\varphi'(\barre{x})$ without variable $y$. The second possibility leads to a more complicated replacement scheme. Suppose that for every $j$, $\eps_j = -1$. Then, \begin{equation}~\label{EQU cas 2} \varphi(\barre{x})\equiv \ex y \ [\psi(y) \wedge \bigwedge_{j\leq k} y\neq \t_j(x_{j}) ] \end{equation} (For simplicity of notation, but w.l.o.g., we have assumed that $i_j =j$ for $j=1,\dots,k$.) The basic idea is now the following: suppose $h\leq k$ is the number of distinct values among the $k$ terms $\t_j(x_{j})$ such that $\psi(\t_j(x_{j}))$ is true; then, formula $\varphi(\barre{x})$ is true if and only if the number of $y$ such that $\psi(y)$ holds is strictly greater than $h$ (i.e., $\exk{h+1}{y}\psi(y)$ is true). Introducing (new) cardinality statements in the formula, $\varphi(\barre{x})$ can be equivalently rephrased as the following Boolean combination of bijective atomic formulas: \begin{equation}~\label{EQU cas 2 bis} \varphi(\barre{x})\equiv \begin{array}[t]{l} {\displaystyle \bigvee_{h=0}^k \bigvee_{P\subseteq [k], Q\subseteq P, |Q|=h}}\\ \left[{\displaystyle \bigwedge_{j\in Q} \psi(\t_j(x_{j})) \wedge \bigwedge_{i\in P}\bigvee_{j\in Q} \t_i(x_{i})= \t_j(x_{j}) \wedge \bigwedge_{j\in [k]\setminus P} \neg \psi(\t_j(x_{j})) } \wedge \exk{h+1}{y}\psi(y) \right] \\ \end{array} \end{equation} \noindent where $[k]=\{1,\dots,k\}$. More generally, starting from a prenex bijective first-order formula $\varphi(\barre{t})$ with free variables $\barre{t}$, one eliminates all quantified variables from the innermost to the outermost one. This results in an equivalent Boolean combination of bijective \textit{atomic} formulas over $\barre{t}$.
In the case where $\varphi$ has no free variables (i.e., $\barre{t}$ is empty), it is easily seen that the elimination process results in a Boolean combination of cardinality statements (note that, of course, $\exists x \varphi(x) \equiv \exk{1}{x} \varphi(x)$). \end{proof} One interesting consequence of Theorem~\ref{TH elimination quantificateur} is the following result. \begin{corollary}[Seese~\cite{Seese-96}]~\label{COR MC fobij} The problem $\modelchecking{\mathbf{FO_{Bij}}}$ is decidable in time $O_{\varphi}(|{\cal S}|)$. \end{corollary} \begin{proof} Let $\Phi$ be the input sentence. From Theorem~\ref{TH elimination quantificateur}, we know that there exists a Boolean combination of cardinality statements over the same signature $\sg$ equivalent to $\Phi$. Given a formula $\exk{k}{x}\Psi(x)$, one can test whether a given $\sigma\mbox{-structure}$ ${\cal S}$ satisfies ${\cal S} \models \exk{k}{x}\Psi(x)$ in time $O_{\Psi}(|{\cal S}|)$: it suffices to enumerate all the elements $a$ of the domain, test whether $({\cal S}, a) \models \Psi(x)$ in constant time and count those for which the answer is positive. If this number is greater than or equal to $k$ then $ \exk{k}{x}\Psi(x)$ is true in ${\cal S}$. The final answer for $\Phi$ is given by the Boolean combination of the answers for each cardinality statement. \end{proof} \subsubsection{Considerations on an efficient implementation of the algorithm} Compared to the method of~\cite{Seese-96}, the proofs given in this paper are constructive and easily implementable. However, due to the case of Formula~\ref{EQU cas 2} in Theorem~\ref{TH elimination quantificateur}, which leads to the equivalent Formula~\ref{EQU cas 2 bis}, the whole process runs in $O_{\varphi}(|{\cal S}|)=O(f(|\varphi|).|{\cal S}|)$ for some function $f$ that may be a tower of exponentials. It can be shown that $f$ depends heavily on the number of variables and of quantifier alternations of the formula.
However, the growth of the function $f$ can be substantially reduced when there are few quantifier alternations. In what follows, we revisit the method of the proof of Theorem~\ref{TH elimination quantificateur} to prove a slightly different result in a specific case. We focus on formulas with existentially quantified variables only and show that the model-checking problem for such formulas can be solved efficiently. An $\mathbf{FO_{Bij}}$ formula is in $\Sigma_1\!-\!\mathbf{FO_{Bij}}$ if it is of the form: \[ \exists \tu y \ \varphi \] \noindent where $\varphi$ is quantifier-free and in disjunctive normal form (DNF). \begin{corollary} The model-checking problem for $\Sigma_1\!-\!\mathbf{FO_{Bij}}$ formulas can be solved in time $O(|\varphi|^{d}.|{\cal S}|)$ where $d$ is the number of distinct variables of $\varphi$. \end{corollary} \begin{proof} The result obviously holds for $d=1$. So, assume $d>1$. For the same reason as in Theorem~\ref{TH elimination quantificateur}, we may consider any formula of the form: \begin{equation} \varphi(\barre{x})\equiv \ex y \ (\alpha_1 \wedge \dots \wedge \alpha_r) \end{equation} \noindent where each $\alpha_i$ is a bijective literal~\footnote{In this proof, bijective literals do not involve cardinality statements.} with variables among $\barre{x}$ and $y$. For the sake of completeness, we also consider literals not containing $y$. Then, $\varphi(\barre{x})$ is of the form: \begin{equation} \varphi(\barre{x})\equiv \ex y \ [\psi(y) \wedge y =_{\eps_1}\t_1(x_{i_1}) \wedge \dots \wedge y =_{\eps_k}\t_k(x_{i_k}) \wedge \gamma(\tu x)] \end{equation} \noindent with the same notation $\eps_{j}$ as in the proof of Theorem~\ref{TH elimination quantificateur}, where $\gamma(\tu x)$ involves variables of $\tu x$ only. Again, if $\eps_{j}=1$ for some $j$, then all the occurrences of $y$ are replaced by $\t_j(x_{i_j})$ and $\varphi(\barre{x})$ is equivalent to a conjunction of literals without variable $y$.
Suppose now that $\eps_{j}=-1$ for all $j\leq k$. Let $A=\{a\in D : ({\cal S},a)\models \psi(y)\}$. Since $\psi(y)$ is quantifier-free, $A$ can be computed in time $O(|\psi|.|{\cal S}|)$. Two cases need to be considered now. If $|A|>k$, since there are at most $k$ different values $\t_j(x_{j})$ for $j=1,\dots,k$, then the conjunction $\ex y [\psi(y) \wedge y \neq \t_1(x_{i_1}) \wedge \dots \wedge y \neq \t_k(x_{i_k})]$ is always true and $\varphi(\tu x)$ is simply equivalent to $\gamma(\tu x)$. If $|A|\leq k$, let $A= \{a_1,\dots,a_h\}$, with $h\leq k$. Formula $\varphi(\barre{x})$ is replaced by the equivalent formula below over the richer signature $\sg\cup\{a_1,\dots,a_h\}$: \[ \bigvee_{i\leq h} ( \bigwedge_{j\leq k} a_i\neq \t_j(x_{i_j}) \wedge \gamma(\tu x) ) \] In all cases, the formula obtained is also in DNF. Time $O(|\varphi|.|{\cal S}|)$ is needed to eliminate variable $y$ and the new formula is of size bounded by $O(k.|\varphi|)$, i.e., less than $O(|\varphi|^2)$. The elimination of all the $d$ existentially quantified variables but the last can be pursued from this new formula (without need for normalisation). In the worst case (where all literals are of the form $ x_i \neq \t_1(x_{j})$), the process will result in a disjunction of less than $|\varphi|^{d-1}$ conjunctions of at most $|\varphi|$ literals. \end{proof} \subsection{Constant delay algorithm for first-order queries on bijective structures} We are now ready to state the main result of this section. \begin{theorem}~\label{TH bijective query} The problem $\query{\mathbf{FO_{Bij}}}\in \textsc{Constant-Delay$_{lin}$}$. In particular, from Lemma~\ref{LEM total time}, it can be computed in time $O_{\varphi}(|{\cal S}| + |\varphi({\cal S})|)$ and space $O_{\varphi}(|{\cal S}|)$. \end{theorem} Before proving Theorem~\ref{TH bijective query}, we establish the following lemma.
\begin{lemma}~\label{LEM query output} Let ${\cal S}$ be a bijective structure and $\Psi$ be a conjunction of bijective literals. Computing the query ${\cal S} \mapsto \Psi({\cal S})$ can be done in $\textsc{Constant-Delay$_{lin}$}$. \end{lemma} \begin{proof} The result is proved by induction on $k$, the number of free variables of $\Psi(\tu x)$ where $\tu x = (x_1,\dots,x_k)$. We even allow $\Psi$ to make use of explicit constants from the domain $D$ of ${\cal S}$. For the case $k=1$, it is evident that the one variable query $Q = \{a\in D: ({\cal S}, a) \models \Psi(x) \}$ can be evaluated in time $O_{\Psi}(|D|)= O_{\Psi}(|{\cal S}|)$ and hence, by Lemma~\ref{LEM linear time = constant delay}, is in $\textsc{Constant-Delay$_{lin}$}$. Suppose now that the result holds for $k$ ($k\geq 1$); we prove it for $k+1$. Let us consider the query: \[ Q = \{(\tu a,b) \in D^{k+1} : ({\cal S}, \tu a, b) \models \Psi(\tu x,y)\} \] \noindent where the conjunction of bijective literals $\Psi$ is over variables $\tu x =(x_1,\dots,x_k)$ and $y$. As in Theorem~\ref{TH elimination quantificateur}, two cases need to be distinguished. \begin{enumerate} \item~\label{cas 1} $\Psi$ contains at least one literal of the form $\t_1(y) = \t_2(x_{i_0})$, $1\leq i_0 \leq k$, that can also be rephrased as $y = \tau(x_{i_0})$, \item~\label{cas 2} $\Psi$ does not contain such a literal. \end{enumerate} In the first case, $\Psi$ can be rewritten as: \[ \Psi(\tu x, y) = \Psi_0(\tu x, y) \wedge y = \tau(x_{i_0}).\] Query $Q$ is then equal to: \[ Q = \{(\tu a,\tau(a_{i_0})) \in D^{k+1} : ({\cal S} , \tu a ) \models \Psi_0(\tu x,\tau(x_{i_0}))\}, \] which is essentially the following $k$ variable query $Q'$: \[ Q' = \{\tu a \in D^{k} : ({\cal S} , \tu a ) \models \Psi_0(\tu x,\tau(x_{i_0}))\}. \] To be precise, $Q=\{(\tu a,\tau(a_{i_0})) : \tu a \in Q'\}$. By the induction hypothesis, query $Q'$ can be computed by some algorithm ${\cal A}'$ with constant delay.
This provides the following constant delay procedure for query $Q$. \begin{algorithm} \caption{Evaluating query $Q$} \begin{algorithmic}[1] \State \textbf{Input:} ${\cal S}, \Psi$ \State $\precompAlgo{{\cal A}'}$ \State Apply $\enumAlgo{{\cal A}'}$ and for each enumerated tuple $\tu a$, output $(\tu a,\tau(a_{i_0}))$ instead \end{algorithmic} \end{algorithm} \medskip Case~\ref{cas 2} is a little more complicated. Formula $\Psi$ can be put in the following form: \[ \Psi \equiv \Psi_1 (\tu x) \wedge \Psi_2 (y) \wedge \bigwedge_{1\leq i \leq r} y \neq \tau_i(x_{j_i}) \] \noindent with $1\leq j_i\leq k $ for $1\leq i \leq r$. By the induction hypothesis, the $k$ variable query: \[ Q_1 = \{\tu a \in D^k : ({\cal S} , \tu a ) \models \Psi_1(\tu x)\} \] \noindent can be computed by an algorithm ${\cal A}_1$ on input ${\cal S}$ with constant delay. For similar reasons, the $k$ variable query $Q_b$ over structure $({\cal S},b)$ defined by: \[ Q_b = \{\tu a \in D^k : ({\cal S} , \tu a, b ) \models \Psi (\tu x, y)\} \] \noindent can be enumerated by an algorithm with constant delay. Let now $Q_2$ be: \[ Q_2 = \{ b \in D : ({\cal S} , b) \models \Psi_2(y)\}. \] If $|Q_2|\leq r$ then, by Lemma~\ref{LEM disjoint union}, there exists an algorithm ${\cal A}_0$ which enumerates the disjoint union $\cup_{b\in Q_2} Q_b\times \{b\}$ with constant delay. Note that $\cup_{b\in Q_2} Q_b\times \{b\} = Q$. From what has been said, Algorithm~\ref{ALGO enum constant delay 1} below correctly computes query $Q$.
\begin{algorithm}[h] \caption{Evaluating query $Q$}~\label{ALGO enum constant delay 1} \begin{algorithmic}[1] \State \textbf{Input:} ${\cal S}, \Psi$ \State Compute $Q_2$ and $|Q_2|$ \If{$|Q_2|\leq r$} run ${\cal A}_0$ \Else \State $\precompAlgo{{\cal A}_1}$~\label{ALGO precomp} \For{$\tu a \in \enumAlgo{{\cal A}_1}$} \For{$b\in Q_2$} \If{$({\cal S}, \tu a, b)\not\models \bigvee_{1\leq i\leq r} y = \tau_i(x_{j_i})$} Output $(\tu a, b)$ \EndIf \EndFor \EndFor \EndIf \end{algorithmic} \end{algorithm} \medskip Up to step~\ref{ALGO precomp} of the algorithm, everything can be done in linear time. It remains to show that, in the case where $|Q_2|\geq r+1$, the delay between two successive solutions is bounded by some constant. Since $|Q_2|\geq r+1$ and the number of $b\in Q_2$ such that $({\cal S}, \tu a, b)\models \bigvee_{1\leq i\leq r} y = \tau_i(x_{j_i})$ is bounded by $r$, the algorithm outputs at least one $(\tu a,b)$ for each $\tu a\in Q_1$. More precisely, it outputs at least $|Q_2|-r$ such tuples. For the same reasons, the maximal delay between two successive outputs is then bounded by $2r$. The same arguments apply to the delay between the last solution and the end of the algorithm. Hence, computing $Q$ can be done with constant delay. \end{proof} \begin{proofOf}{Theorem~\ref{TH bijective query}} Let ${\cal S}$ and $\varphi (\tu x)$ be instances of the $\query{\mathbf{FO_{Bij}}}$ problem. From Theorem~\ref{TH elimination quantificateur}, one can transform $\varphi (\tu x)$ into the following equivalent formula in disjunctive normal form: \[ \varphi(\tu x)\equiv \Psi_1(\tu x) \vee \dots \vee \Psi_q(\tu x) \] \noindent where each $\Psi_i$ is a conjunction of bijective literals and, for all $i,j$ with $1\leq i < j \leq q$ and all bijective structures ${\cal S}$, $\Psi_i({\cal S}) \cap \Psi_j({\cal S}) = \emptyset$.
The Theorem immediately follows from Lemma~\ref{LEM disjoint union} since the enumeration problem of each query ${\cal S} \mapsto \Psi_i({\cal S})$, $1\leq i \leq q$, belongs to $\textsc{Constant-Delay$_{lin}$}$ by Lemma~\ref{LEM query output}. \end{proofOf} \section{Relational structures of bounded degree}~\label{SEC degre borne} \subsection{Two equivalent definitions} Let $\rho = \{R_1,\ldots, R_q\}$ be a relational signature, i.e., a signature made of relational symbols $R_i$ each of arity $a_i$. Recall that $arity(\rho)=max_{1\leq i \leq q} (a_i)=m$. Let ${\cal S}=\st{D}{\su{R}{q}}$ be a $\rho$-structure. For each $i\leq q$, $R_i\subseteq D^{a_i}$. The \textit{degree} of an element $x$ in ${\cal S}$ is defined as follows: \[ degree_{{\cal S}}(x) = \sum_{1\leq i \leq q} \sum_{1\leq j \leq a_i} \sharp \{(y_1,\dots,y_{a_i})\in D^{a_i}: y_j=x \mbox{ and } {\cal S} \models R_i(y_1,\dots,y_{a_i})\}. \] Intuitively, $degree_{{\cal S}}(x)$ is the total number of occurrences of $x$ in tuples of the relations $R_i$. One defines the degree of a structure as $degree({\cal S}) = max_{x\in D} (degree_{{\cal S}}(x))$. \begin{remark} In~\cite{Seese-96}, a different definition of the degree of a structure is given. It counts, for each $x$, the number of distinct elements $y\neq x$ adjacent to $x$, i.e., that appear in some tuple with $x$. More precisely, \[ degree^1_{{\cal S}}(x) = \sharp \{y : y\neq x \mbox{ and } \exists i \leq q , \tu t \in D^{a_i}, \mbox{ s.t. } {\cal S} \models R_i (\tu t) \mbox{ and } x,y\in \tu t\}, \] \noindent and $degree^1({\cal S}) = max_{x\in D} (degree^1_{{\cal S}}(x))$. Since each tuple containing $x$ contains at most $m-1$ elements different from $x$, it is easily seen that: \[ degree^1({\cal S}) \leq (m-1). degree({\cal S}) \mbox{ where } m=arity(\rho). \]
Conversely, for each $x$, if there are at most $d$ elements $y\in D$ adjacent to $x$, then the number of distinct tuples involving $x$ is bounded by $q.m.d^{m-1}$. Hence, \[ degree({\cal S}) \leq q.m.(degree^1({\cal S}))^{m-1}. \] So, the two measures yield the same notion of bounded degree structure. \end{remark} We are interested in the complexity of the following query problem for bounded degree structures (which is clearly independent of which measure of degree we choose). \bigskip \noindent $\query{\mathbf{FO_{Deg}}}$\\ \noindent \textbf{Input:} an integer $d$, a relational signature $\rho$, a $\rho$-structure ${\cal S}$ with $degree({\cal S})\leq d$ and a first-order $\rho$-formula $\varphi(\tu x)$ with $k$ free variables $\tu x = (x_1,\dots,x_k)$\\ \noindent \textit{Parameter:} $d,\varphi$ \\ \noindent \textbf{Output:} $\varphi({\cal S}) = \{\tu a \in D^k : ({\cal S}, \tu a) \models \varphi(\tu x)\}$. \subsection{Interpreting a structure of bounded degree into a bijective structure} In this section, we present a natural reduction from $\query{\mathbf{FO_{Deg}}}$ to $\query{\mathbf{FO_{Bij}}}$ which is obtained by interpreting any structure of bounded degree into a bijective one. Let ${\cal S}=\st{D}{\su{R}{q}}$ be a $\rho$-structure of domain $D$, of arity $m=max_{1\leq i \leq q} arity(R_i)$ and of degree bounded by some constant $d$. One associates to ${\cal S}$ a bijective $\sg$-structure ${\cal S}'=\st{D'}{D,\su{T}{q},g,\su{f}{m}}$ of domain $D'$ where $D,\su{T}{q}$ are pairwise disjoint unary relations (i.e., subsets of $D'$) and $g,\su{f}{m}$ are permutations of $D'$. Structure ${\cal S}'$ is precisely defined as follows: \begin{itemize} \item $D$ corresponds to the domain of ${\cal S}$. \item $T_i$ ($1\leq i \leq q$) is a set of elements each representing a tuple of $R_i$ (hence, $card(T_i)=card(R_i)$). The new domain $D'$ is the disjoint union: $D\cup (D\times\{1,\dots,d\})\cup T_1\cup \dots \cup T_q$.
Let us use the following convenient abbreviations: $U = D\cup (D\times\{1,\dots,d\})$ and $T= \bigcup_{1\leq i \leq q} T_i$. \item $g$ creates a cycle that relates the $d$ copies of each element $x$ of the domain. More precisely, for each $x\in D$, it holds $g(x)=(x,1)$, $g((x,i))=(x,i+1)$ for $1 \leq i < d$, and $g((x,d))=x$. We also set $g(x)=x$ for all other $x$ ($x\in T$). \item Each $f_j$ is an involutive permutation and essentially represents a projection of $T$ onto $D$ as follows. Let $R_i(x_1,\dots,x_k)$ be true in ${\cal S}$ for some relation $R_i$ of arity $k\leq m$ and some $k$-tuple $(x_1,\dots,x_k)\in D^k$. Suppose $R_i(x_1,\dots,x_k)$ is represented by element $t\in T_i$; then, for each $j\leq k$, set $f_j(t)=(x_j,h)$ and, reciprocally, $f_j((x_j,h))=t$ if $R_i(x_1,\dots,x_k)$ is the $h^{th}$ tuple in which $x_j$ appears (with $h\leq d$). The construction is completed by loops $f_j(x)=x$ for all other $x\in D'$. \end{itemize} Figure~\ref{ex} details the reduction on an example. \begin{figure}[tc] \begin{center} \input{fig1.pstex_t} \end{center} \caption{Our reduction on an example: the original structure (digraph) of degree $3$ is on the right side of the picture}~\label{ex} \end{figure} It is clear that, by construction, ${\cal S}'$ is a bijective structure and that we have the following interpretation lemma. \begin{lemma}~\label{LEM interpretation} Let $\theta_i$ be the $\sg$-formula below associated to any symbol $R_i\in\rho$ of arity $k$: \[ \theta_i(x_1,\dots,x_k)\equiv \exists t (T_i(t) \wedge \bigwedge_{1\leq j\leq k} \bigvee_{1\leq h\leq d} f_j(t)=g^h(x_j)). \] Then, for all $(a_1,\dots,a_k)\in D^k$: \[ ({\cal S}, a_1,\dots, a_k) \models R_i(x_1,\dots,x_k) \iff ({\cal S}', a_1,\dots, a_k) \models \theta_i(x_1,\dots,x_k). \] \end{lemma} To each first-order $\rho$-formula $\varphi(x_1,\dots,x_p)$, one associates the $\sg$-formula $\varphi''(x_1,\dots,x_p)$ obtained by replacing each quantification $\exists v$ (resp.
$\forall v$) by the relativized quantification $(\exists v D(v))$ (resp. $(\forall v D(v))$) (which can be written respectively as $\exists v (D(v)\wedge ...)$ and $\forall v (D(v)\rightarrow ...)$) and by replacing each subformula $R_i(x_1,\dots,x_k)$ by $\theta_i(x_1,\dots,x_k)$. The following proposition and lemma express that our reduction is correct and linear in $|{\cal S}|$. Because of Lemma~\ref{LEM interpretation}, Proposition~\ref{PROP interpretation} can be easily proved by induction on the formula $\varphi$. \begin{proposition}[interpretation of ${\cal S}$ into ${\cal S}'$]~\label{PROP interpretation} For all $(a_1,\dots,a_p)\in D^p$: \[ ({\cal S}, a_1,\dots, a_p) \models \varphi(x_1,\dots,x_p) \iff ({\cal S}', a_1,\dots, a_p) \models \varphi''(x_1,\dots,x_p). \] In other words: $\varphi({\cal S}) = \varphi''({\cal S}')\cap D^p$. Then, setting $\varphi'(x_1,\dots,x_p) \equiv \varphi''(x_1,\dots,x_p)\wedge \bigwedge_{i\leq p}D(x_i)$, it holds: $\varphi({\cal S}) = \varphi'({\cal S}')$. \end{proposition} \begin{lemma}~\label{LEM complexite reduction} Computing ${\cal S}'$ from ${\cal S}$ can be done in linear time $O_{\rho, d}(|{\cal S}|)$. \end{lemma} \begin{proof} As computing ${\cal S}'$ from ${\cal S}$ is easy, one only has to compare the sizes of the two structures. The size of ${\cal S}$ is: \[ |{\cal S}| = \Theta(|D|+\sum_{i=1}^q card(R_i).arity(R_i))= \Theta_{\rho}(|D|+\sum_{i=1}^q card(R_i)). \] For ${\cal S}'$, by construction, it holds that: \[ |D'|=(d+1).|D| + \sum_{i=1}^q card(R_i)= \Theta_{d,\rho}(|{\cal S}|). \] Hence, $|{\cal S}'|=\Theta(m|D'|)=\Theta_{d,\rho}(|{\cal S}|)$. \end{proof} \medskip We are now ready to state and prove the main result of this section. \begin{theorem}~\label{TH degre borne} $\query{\mathbf{FO_{Deg}}}$ belongs to $\textsc{Constant-Delay$_{lin}$}$. \end{theorem} \begin{proof} Let ${\cal A}$ be a constant delay algorithm that computes queries of $\query{\mathbf{FO_{Bij}}}$.
By using Proposition~\ref{PROP interpretation}, the algorithm below correctly evaluates queries in $\query{\mathbf{FO_{Deg}}}$. \begin{algorithm*}~\label{ALGO Bounded Degree} \caption{Evaluating $\query{\mathbf{FO_{Deg}}}$} \begin{algorithmic}[1] \State \textbf{Input:} ${\cal S}, d, \varphi$ \State Compute the $\sg$-formula $\varphi'(\tu x)$ associated to $\varphi$ (and $d$)~\label{algo2cpt1} \State Compute the bijective $\sg$-structure ${\cal S}'$ associated to ${\cal S}$ (and $d$)~\label{algo2cpt2} \State Run ${\cal A}$ on input ${\cal S}'$, $\varphi'$~\label{algo2cpt3} \end{algorithmic} \end{algorithm*} The cost of instruction~\ref{algo2cpt1} is $O_{\varphi,d}(1)$, that of instruction~\ref{algo2cpt2} is $O_{\varphi,d}(|{\cal S}|)$ (by Lemma~\ref{LEM complexite reduction}) and the precomputation part of algorithm~${\cal A}$ (included in instruction~\ref{algo2cpt3}) is $O_{\varphi'}(|{\cal S}'|)$ (hence $O_{\varphi,d}(|{\cal S}|)$) by Theorem~\ref{TH bijective query}. These steps form a precomputation phase of time complexity $O_{\varphi,d}(|{\cal S}|)$. Finally, the effective enumeration of $\varphi({\cal S})=\varphi'({\cal S}')$ is handled on ${\cal S}', \varphi'$ by ${\cal A}$ and is performed with constant delay. \end{proof} \subsection{Complexity of subgraph problems}~\label{Complexity of subgraphs problems} In this part, we present a simple application of our result to a well-known graph problem. Given two graphs $G=\st{V}{E}$ and $H=\st{V_H}{E_H}$, $H$ is said to be a \textit{subgraph} (resp. \textit{induced subgraph}) of $G$ if there is a one-to-one function $g$ from $V_H$ to $V$ such that, for all $u,v\in V_H$, $E(g(u),g(v))$ holds if (resp. if and only if) $E_H(u,v)$ holds. \bigskip \noindent \textsc{generate subgraph} (resp.
\textsc{generate induced subgraph})\\ \noindent \begin{tabular}{rl} \textbf{Input:} & \parbox[t]{300 pt}{any graph $H$ and a graph $G$ of degree bounded by $d$}\\ \textbf{Parameter:} & $|H|, d$.\\ \textbf{Output:} & All the subgraphs (resp. induced subgraphs) of $G$ isomorphic to $H$.\\ \end{tabular} \bigskip The treewidth of a graph $G$ is one less than the minimum, over all tree decompositions of $G$, of the maximal size of a bag (see, for example,~\cite{DowneyF-99}). In~\cite{PlehnV-90} it is proved that for graphs $H$ of treewidth at most $w$, testing if a given graph $H$ is an induced subgraph of a graph $G$ of degree at most $d$ can be done in time $f(|H|,d).|G|^{w+1}$. In what follows, we show that there is no reason to focus on graphs of bounded treewidth and that a better bound can be obtained for \textit{any} graph $H$ (provided $G$ is of bounded degree). In the result below, we prove not only that the complexity of this decision problem is $f(|H|,d).|G|$ but that generating all the (induced) subgraphs isomorphic to $H$ can be done with constant delay. \begin{corollary} The problem \textsc{generate subgraph} (resp. \textsc{generate induced subgraph}) belongs to $\textsc{Constant-Delay$_{lin}$}$. \end{corollary} \begin{proof} The proof is given for the \textsc{generate induced subgraph} problem. Let $G=\st{V}{E}$ and $H=\st{V_H=\{h_1,\dots,h_k\}}{E_H}$ ($|V_H|=k$) be the two inputs of the problem. Since $G$ is of maximum degree $d$, we can partition its vertex set $V$ into $d+1$ sets $V_G^0,\dots,V_G^d$ where each $V_G^{\alpha}$ is the set of vertices of degree $\alpha$. This can be done in linear time $O(|G|)$. We proceed in the same way for graph $H$ and obtain the sets $V_H^0, \dots, V_H^d$. In case there exists a vertex in $H$ of degree greater than $d$, it can be concluded immediately that the problem has no solution.
Now, let $Q$ be the following formula: \[ Q (x_1,\dots,x_k) \equiv \bigwedge_{i<j\leq k} x_i \neq x_j \wedge \bigwedge_{V_H^{\alpha}(h_i)} V_G^{\alpha}(x_i) \wedge \bigwedge_{E_H(h_i,h_j)} E(x_i,x_j). \] Formula $Q$ simply checks that $H$ is a subgraph of $G$ and that each distinguished vertex $x_i$ of $G$ has the same degree as its associated vertex $h_i$ in $H$. Note that formula $Q$ only depends on $H$ and $d$. The result now follows from Theorem~\ref{TH degre borne}. \end{proof} \section{Conclusion} In this paper, we study the complexity of evaluating first-order queries on bounded degree structures and consider this evaluation as a dynamic process, i.e., as an enumeration problem. Our main contributions are two-fold. First, we define a simple quantifier elimination method suitable for first-order formulas which have to be evaluated against a bijective structure. Second, we define a new complexity class, called $\textsc{Constant-Delay$_{lin}$}$, for enumeration problems, which can be seen as the minimal robust complexity class for this kind of problem, and we prove that our query problem on bounded degree structures belongs to this class. There are several interesting directions for further research. Among them, the two following series of questions seem worth studying: \begin{itemize} \item Which ``natural'' query problems belong to $\textsc{Constant-Delay$_{lin}$}$? More generally, which kinds of combinatorial or algorithmic enumeration problems admit constant delay procedures? The same questions can be asked for the larger class $\textsc{Constant-Delay$_{poly}$}$ of constant delay enumeration problems for which polynomial time (instead of linear time) precomputations are allowed. \item What are the structural properties of the class $\textsc{Constant-Delay$_{lin}$}$ or of the larger class $\textsc{Constant-Delay$_{poly}$}$? Do they have complete problems? Under which kinds of reductions?
Could they be proved to be different from the classes of enumeration problems solvable with linear or polynomial delay? \end{itemize} \medskip \textbf{Acknowledgment.} \ We thank Ron Fagin for a very fruitful email exchange that led us to define complexity notions about constant delay computation. \bibliographystyle{alpha}
\section{Introduction} \setcounter{equation}{0}One of the possible approaches to $M$-theory is to consider compactifications of the $11$-dimensional spacetime of the form $M_{4}\times X$ where $M_{4}$ is the $4$-dimensional Minkowski space and $X$ is a $7$-dimensional manifold. If $X$ is a compact manifold with $G_{2}$ holonomy, then this gives a vacuum solution of the low-energy effective theory, and moreover, since $X$ has one covariantly constant spinor, the resulting theory in $4$ dimensions has $N=1$ supersymmetry. The physical content of the $4$-dimensional theory is given by the moduli of $G_{2}$ holonomy manifolds. Such a compactification of $M$-theory is in many ways analogous to Calabi-Yau compactifications in String Theory, where much progress has been made through the study of the Calabi-Yau moduli spaces. In particular, as was shown in \cite{Candelas:1990pi} and \cite{Strominger:1990pd}, the moduli space of complex structures and the complexified moduli space of K\"{a}hler structures are both, in fact, K\"{a}hler manifolds. Moreover, both have a \emph{special geometry} - that is, both have a line bundle whose first Chern class coincides with the K\"{a}hler class. However, until recently the structure of the moduli space of $G_{2}$ holonomy manifolds had not been studied in much detail. In general, the study of $G_{2}$ manifolds has turned out to be quite difficult. Firstly, unlike in the Calabi-Yau case \cite{CalabiYau}, there is no general theorem for the existence of $G_{2}$ manifolds. Although there are constructions of compact $G_{2}$ manifolds such as those that can be found in \cite{Joycebook} and \cite{Kovalev:2001zr}, they are not explicit (a non-compact construction was also given in \cite{Gibbons:1989er}). Another difficulty is that the $G_{2}$-invariant $3$-form which defines the $G_{2}$-structure and the metric corresponding to it are related in a non-linear fashion.
This makes the study of $G_{2}$ manifolds more difficult from a computational point of view. We first start with an overview of $G_{2}$ structures in section 2, where we state the basic facts about $G_{2}$ manifolds and set up the notation. A $G_{2}$-structure is defined by a $G_{2}$-invariant $3$-form $\varphi $, and in section 3 we review some of the computational properties of $\varphi $ and its Hodge dual $\ast \varphi $, which we will need later on. Since one of our main motivations to study $G_{2}$ manifolds comes from physics, in section 4 we review the role of $G_{2}$ manifolds in $M$-theory, and in particular we consider the Kaluza-Klein compactification of the effective $M$-theory low-energy action on a $G_{2}$ manifold. It turns out that in the reduced action, the moduli of the $M$-theory $3$-form $C_{mnp}$ and the $G_{2}$ moduli naturally combine, to effectively give a complexification of the $G_{2}$ moduli space. Moreover, the metric on this complexified space turns out to be K\"{a}hler, and the K\"{a}hler potential is essentially the logarithm of the volume of the $G_{2}$ manifold. The aim of this paper is to gain more information about the geometry of the moduli space, and so we compute the curvature of this K\"{a}hler metric. This involves calculation of the fourth derivative of the K\"{a}hler potential. The method which we use for this requires us to know the expansion of $\ast \varphi $ to third order in the deformations of $\varphi $. So in section 5, we in fact explicitly give the expansion of $\ast \varphi $ to fourth order in the deformations of $\varphi $. Previously, only the full expansion to first order was known \cite{Joycebook}, and only partially to second order \cite{bryant-2003}.
However, there are approaches to calculating higher derivatives of the K\"{a}hler potential without explicitly computing an expansion of $\ast \varphi $ - for example, the third derivative has been computed by de Boer et al in \cite{deBoer:2005pt} and by Karigiannis and Leung in \cite{karigiannis-2007a}. Finally, in section 6, we use our expansion of $\ast \varphi $ from section 5 to calculate the full curvature of the $G_{2}$ moduli space, and then the Ricci curvature as well. As has already been noted in \cite{deBoer:2005pt} and \cite{karigiannis-2007a}, the third derivative of the K\"{a}hler potential can be interpreted as a Yukawa coupling, and it bears a great resemblance to the Yukawa coupling encountered in the study of Calabi-Yau moduli spaces. At the end of section 6 we look at some properties of covariant derivatives on the moduli space. \textbf{Acknowledgements.} The first author would like to thank Spiro Karigiannis and Alexei Kovalev for useful discussions, and would also like to thank UC\ Irvine and Harvard University, where much of this work has been completed, for hospitality. The research of the first author is funded by EPSRC. \section{Overview of $G_{2}$ structures} We will first review the basics of $G_{2}$ structures on smooth manifolds. The main references for this section are \cite{Joycebook}, \cite{bryant-2003} and \cite{karigiannis-2005-57}. The $14$-dimensional Lie group $G_{2}$ can be defined as a subgroup of $GL\left( 7,\mathbb{R}\right) $ in the following way. Suppose $x^{1},...,x^{7}$ are coordinates on $\mathbb{R}^{7}$ and let $e^{ijk}=dx^{i}\wedge dx^{j}\wedge dx^{k}$. Then define $\varphi _{0}$ to be the $3$-form on $\mathbb{R}^{7}$ given by \begin{equation} \varphi _{0}=e^{123}+e^{145}+e^{167}+e^{246}-e^{257}-e^{347}-e^{356}. \label{phi0def} \end{equation} Then $G_{2}$ is defined as the subgroup of $GL\left( 7,\mathbb{R}\right) $ which preserves $\varphi _{0}$.
Moreover, it also fixes the standard Euclidean metric \begin{equation} g_{0}=\left( dx^{1}\right) ^{2}+...+\left( dx^{7}\right) ^{2} \label{g0def} \end{equation} on $\mathbb{R}^{7}$ and the $4$-form $\ast \varphi _{0}$ which is the corresponding Hodge dual of $\varphi _{0}$: \begin{equation} \ast \varphi _{0}=e^{4567}+e^{2367}+e^{2345}+e^{1357}-e^{1346}-e^{1256}-e^{1247}. \label{sphi0def} \end{equation} Now suppose $X$ is a smooth, oriented $7$-dimensional manifold. A $G_{2}$ structure $Q$ on $X$ is a principal subbundle of the frame bundle $F$, with fibre $G_{2}$. However, we can also uniquely define $Q$ via $3$-forms on $X$. Define a $3$-form $\varphi $ to be \emph{positive} if we can locally choose coordinates such that $\varphi $ is written in the form (\ref{phi0def}) - that is, for every $p\in X$ there is an isomorphism between $T_{p}X$ and $\mathbb{R}^{7}$ such that $\left. \varphi \right\vert _{p}=\varphi _{0}$. Using this isomorphism, to each positive $\varphi $ we can associate a metric $g$ and a Hodge dual $\ast \varphi $, which are identified with $g_{0}$ and $\ast \varphi _{0}$. It is shown in \cite{Joycebook} that there is a $1-1$ correspondence between positive $3$-forms $\varphi $ and $G_{2}$ structures $Q$ on $X$. So given a positive $3$-form $\varphi $ on $X$, it is possible to define a metric $g$ associated to $\varphi $, and this metric then defines the Hodge star, which in turn gives the $4$-form $\ast \varphi $. Thus although $\ast \varphi $ looks linear in $\varphi $, it actually is not, so sometimes we will write $\psi =\ast \varphi $ to emphasize that the relation between $\varphi $ and $\ast \varphi $ is very non-trivial. In general, any $G$-structure on a manifold $X$ induces a splitting of bundles of $p$-forms into subbundles corresponding to irreducible representations of $G$. The same is of course true for $G_{2}$-structures.
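As a quick numerical sanity check (not part of the original argument), one can verify that the $4$-form of (\ref{sphi0def}) really is the Hodge dual of $\varphi_0$ with respect to $g_0$, using $(\ast\varphi)_{defg}=\frac{1}{3!}\varphi_{abc}\hat{\varepsilon}_{abcdefg}$. The NumPy sketch below (0-based indices; the helper names are ours) reads the tensor components off the seven terms of (\ref{phi0def}) and (\ref{sphi0def}):

```python
import itertools
import numpy as np

def perm_sign(p):
    """Sign of a permutation of distinct integers."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def build_form(rank, terms):
    """Totally antisymmetric tensor on R^7 from (sorted 1-based index tuple, coefficient) pairs."""
    T = np.zeros((7,) * rank)
    for idx, c in terms:
        base = tuple(i - 1 for i in idx)
        for p in itertools.permutations(range(rank)):
            T[tuple(base[i] for i in p)] = perm_sign(p) * c
    return T

# components of phi_0 and psi_0 = *phi_0, read off term by term
phi = build_form(3, [((1,2,3), 1), ((1,4,5), 1), ((1,6,7), 1), ((2,4,6), 1),
                     ((2,5,7), -1), ((3,4,7), -1), ((3,5,6), -1)])
psi = build_form(4, [((4,5,6,7), 1), ((2,3,6,7), 1), ((2,3,4,5), 1), ((1,3,5,7), 1),
                     ((1,3,4,6), -1), ((1,2,5,6), -1), ((1,2,4,7), -1)])

# (*phi)_{defg} = (1/3!) phi_{abc} eps_{abcdefg}, with the flat metric g_0
star_phi = np.zeros((7,) * 4)
for p in itertools.permutations(range(7)):
    star_phi[p[3:]] += perm_sign(p) * phi[p[:3]] / 6.0

assert np.allclose(star_phi, psi)   # psi_0 is indeed the Hodge dual of phi_0
```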
From \cite{Joycebook} we have the following decomposition of the spaces of $p$-forms $\Lambda ^{p}$: \begin{subequations} \label{formdecompose} \begin{eqnarray} \Lambda ^{1} &=&\Lambda _{7}^{1} \label{l1decom} \\ \Lambda ^{2} &=&\Lambda _{7}^{2}\oplus \Lambda _{14}^{2} \label{l2decom} \\ \Lambda ^{3} &=&\Lambda _{1}^{3}\oplus \Lambda _{7}^{3}\oplus \Lambda _{27}^{3} \label{l3decom} \\ \Lambda ^{4} &=&\Lambda _{1}^{4}\oplus \Lambda _{7}^{4}\oplus \Lambda _{27}^{4} \label{l4decom} \\ \Lambda ^{5} &=&\Lambda _{7}^{5}\oplus \Lambda _{14}^{5} \label{l5decom} \\ \Lambda ^{6} &=&\Lambda _{7}^{6} \label{l6decom} \end{eqnarray} Here each $\Lambda _{k}^{p}$ corresponds to the $k$-dimensional irreducible representation of $G_{2}$. Moreover, for each $k$ and $p$, $\Lambda _{k}^{p}$ and $\Lambda _{k}^{7-p}$ are isomorphic to each other via Hodge duality, and also the $\Lambda _{7}^{p}$ are isomorphic to each other for $p=1,2,...,6$. Note that $\varphi $ and $\ast \varphi $ are $G_{2}$-invariant, so they generate the $1$-dimensional sectors $\Lambda _{1}^{3}$ and $\Lambda _{1}^{4}$, respectively. Define the standard inner product on $\Lambda ^{p}$, so that for $p$-forms $\alpha $ and $\beta $, \end{subequations} \begin{equation} \left\langle \alpha ,\beta \right\rangle =\frac{1}{p!}\alpha _{a_{1}...a_{p}}\beta ^{a_{1}...a_{p}}. \label{forminp} \end{equation} This is related to the Hodge star, since \begin{equation} \alpha \wedge \ast \beta =\left\langle \alpha ,\beta \right\rangle \mathrm{vol} \label{hodgedef} \end{equation} where $\mathrm{vol}$ is the invariant volume form given locally by \begin{equation} \mathrm{vol}=\sqrt{\det g}\,dx^{1}\wedge ...\wedge dx^{7}. \label{voldef} \end{equation} Then it turns out that the decompositions (\ref{formdecompose}) are orthogonal with respect to (\ref{forminp}). This will be seen easily when we consider these decompositions in more detail in the next section.
As we already know, the metric $g$ on a manifold with $G_{2}$ structure is determined by the invariant $3$-form $\varphi $. It is in fact possible to write down an explicit relationship between $\varphi $ and $g$. Let $u$ and $v$ be vector fields on $X$. Then \begin{equation} \left\langle u,v\right\rangle \mathrm{vol}=\frac{1}{6}\left( u\lrcorner \varphi \right) \wedge \left( v\lrcorner \varphi \right) \wedge \varphi . \label{metricdef} \end{equation} Here $\lrcorner $ denotes interior multiplication, so that \begin{equation} \left( u\lrcorner \varphi \right) _{bc}=u^{a}\varphi _{abc}. \label{intmultdef} \end{equation} The definition (\ref{metricdef}) is rather indirect because $\mathrm{vol}$ depends on $g$ via (\ref{voldef}). To make more sense of it, rewrite it in components \begin{equation} g_{ab}\sqrt{\det g}=\frac{1}{144}\varphi _{amn}\varphi _{bpq}\varphi _{rst}\hat{\varepsilon}^{mnpqrst} \label{metriccomp1} \end{equation} where $\hat{\varepsilon}^{mnpqrst}$ is the alternating symbol with $\hat{\varepsilon}^{12...7}=+1$. Define \begin{equation} B_{ab}=\frac{1}{144}\varphi _{amn}\varphi _{bpq}\varphi _{rst}\hat{\varepsilon}^{mnpqrst} \label{sabdef} \end{equation} so that, after taking the determinant of (\ref{metriccomp1}), we get \begin{equation} g_{ab}=\left( \det B\right) ^{-\frac{1}{9}}B_{ab}. \label{metricdefdirect} \end{equation} This gives a direct definition, but because $\det B$ may be awkward to compute, (\ref{metricdefdirect}) is not always the most practical definition. For us, it will be more useful to take the trace of (\ref{metriccomp1}) with respect to $g$, which gives \begin{equation} \sqrt{\det g}=\frac{1}{7}\func{Tr}B \label{detgtrs} \end{equation} and hence \begin{equation} g_{ab}=\frac{7B_{ab}}{\func{Tr}B}. \label{gabtrs1} \end{equation} Although this is also an indirect definition, it is sometimes easier to handle this expression.
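As an illustration (ours, not in the original text), the relations (\ref{sabdef}) and (\ref{metricdefdirect}) can be checked numerically for the flat $G_2$ structure: with $\varphi=\varphi_0$ one should recover $B_{ab}=\delta_{ab}$ and hence $g_{ab}=\delta_{ab}$. A minimal NumPy sketch (0-based indices; helper names are ours):

```python
import itertools
import numpy as np

def perm_sign(p):
    """Sign of a permutation of distinct integers."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def build_form(rank, terms):
    """Totally antisymmetric tensor on R^7 from (sorted 1-based index tuple, coefficient) pairs."""
    T = np.zeros((7,) * rank)
    for idx, c in terms:
        base = tuple(i - 1 for i in idx)
        for p in itertools.permutations(range(rank)):
            T[tuple(base[i] for i in p)] = perm_sign(p) * c
    return T

phi = build_form(3, [((1,2,3), 1), ((1,4,5), 1), ((1,6,7), 1), ((2,4,6), 1),
                     ((2,5,7), -1), ((3,4,7), -1), ((3,5,6), -1)])

# B_ab = (1/144) phi_{amn} phi_{bpq} phi_{rst} eps^{mnpqrst}
B = np.zeros((7, 7))
for p in itertools.permutations(range(7)):
    m, n, q1, q2, r, s, t = p
    B += perm_sign(p) * np.outer(phi[:, m, n], phi[:, q1, q2]) * phi[r, s, t]
B /= 144.0

g = np.linalg.det(B) ** (-1.0 / 9.0) * B   # g_ab = (det B)^{-1/9} B_ab

assert np.allclose(B, np.eye(7))   # flat case: B_ab = delta_ab, sqrt(det g) = 1
assert np.allclose(g, np.eye(7))
```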
There are in fact a total of 16 torsion classes of $G_{2}$ structures, each of which places certain restrictions on $d\varphi $ or $d\ast \varphi $ \cite{FernandezGray}. One of the most important classes of manifolds with $G_{2}$ structure are manifolds with $G_{2}$ holonomy. The group $G_{2}$ appears as one of two exceptional holonomy groups - the other one is $Spin\left( 7\right) $ for $8$-dimensional manifolds. The list of possible holonomy groups is limited and they were fully classified by Berger \cite{Berger1955}. Specifically, if $\left( X,g\right) $ is a simply-connected Riemannian manifold which is neither locally a product nor symmetric, the only possibilities are shown in the table below. \begin{equation*} \begin{tabular}{lll} \textbf{Dimension} & \textbf{Holonomy} & \textbf{Type of Manifold} \\ $2k$ & \thinspace $U\left( k\right) $ & K\"{a}hler \\ $2k$ & $SU\left( k\right) $ & Calabi-Yau \\ $4k$ & $Sp\left( k\right) $ & HyperK\"{a}hler \\ $4k$ & $Sp\left( k\right) Sp\left( 1\right) $ & Quaternionic \\ $7$ & $G_{2}$ & Exceptional \\ $8$ & $Spin\left( 7\right) $ & Exceptional \end{tabular} \end{equation*} It turns out that the holonomy group $Hol\left( X,g\right) \subseteq G_{2}$ if and only if $X$ has a torsion-free $G_{2}$ structure \cite{Joycebook}. In this case, the invariant $3$-form $\varphi $ satisfies \begin{equation} d\varphi =d\ast \varphi =0 \label{torsionfreedef} \end{equation} and, equivalently, $\nabla \varphi =0$ where $\nabla $ is the Levi-Civita connection of $g$. So in fact, in this case $\varphi $ is harmonic. Moreover, if $Hol\left( X,g\right) \subseteq G_{2}$, then $X$ is Ricci-flat.
For a torsion-free $G_{2}$ structure, the decompositions (\ref{formdecompose}) carry over to de Rham cohomology \cite{Joycebook}, so that we have \begin{subequations} \label{cohodecom} \begin{eqnarray} H^{2}\left( X,\mathbb{R}\right) &=&H_{7}^{2}\oplus H_{14}^{2} \\ H^{3}\left( X,\mathbb{R}\right) &=&H_{1}^{3}\oplus H_{7}^{3}\oplus H_{27}^{3} \\ H^{4}\left( X,\mathbb{R}\right) &=&H_{1}^{4}\oplus H_{7}^{4}\oplus H_{27}^{4} \\ H^{5}\left( X,\mathbb{R}\right) &=&H_{7}^{5}\oplus H_{14}^{5} \end{eqnarray} Define the refined Betti numbers $b_{k}^{p}=\dim \left( H_{k}^{p}\right) $. Clearly, $b_{1}^{3}=b_{1}^{4}=1$ and we also have $b_{1}=b_{7}^{k}$ for $k=1,...,6$. Moreover, it turns out that $b_{1}=0$ if and only if $Hol\left( X,g\right) =G_{2}$. Therefore, in this case the $H_{7}^{k}$ component vanishes in (\ref{cohodecom}). An example of a construction of a manifold with a torsion-free $G_{2}$ structure is to consider $X=Y\times S^{1}$ where $Y$ is a Calabi-Yau $3$-fold. Define the metric and a $3$-form on $X$ as \end{subequations} \begin{eqnarray} g_{X} &=&d\theta ^{2}+g_{Y} \label{metCY} \\ \varphi &=&d\theta \wedge \omega +\func{Re}\Omega \label{phiCY} \end{eqnarray} where $\theta $ is the coordinate on $S^{1}$. This then defines a torsion-free $G_{2}$ structure, with \begin{equation} \ast \varphi =\frac{1}{2}\omega \wedge \omega -d\theta \wedge \func{Im}\Omega . \label{psiCY} \end{equation} However, the holonomy of $X$ in this case is $SU\left( 3\right) \subset G_{2}$. From the K\"{u}nneth formula we get the following relations between the refined Betti numbers of $X$ and the Hodge numbers of $Y$: \begin{eqnarray*} b_{7}^{k} &=&1\ \ \ \text{for }k=1,...,6 \\ b_{14}^{k} &=&h^{1,1}-1\ \ \text{for }k=2,5 \\ b_{27}^{k} &=&h^{1,1}+2h^{2,1}\ \text{\ for }k=3,4.
\end{eqnarray*} \section{Properties of $\protect\varphi $} The invariant $3$-form $\varphi $ which defines a $G_{2}$ structure on the manifold $X$ has a number of useful and interesting properties. In particular, contractions of $\varphi $ and $\psi =\ast \varphi $ are very useful in computations. From \cite{bryant-2003}, \cite{karigiannis-2007} and \cite{House:2004pm}, we have \begin{eqnarray} \varphi _{abc}\varphi _{mn}^{\ \ \ c} &=&g_{am}g_{bn}-g_{an}g_{bm}+\psi _{abmn} \label{phiphi1} \\ \varphi _{abc}\psi _{mnp}^{\ \ \ \ \ \ c} &=&3\left( g_{a[m}\varphi _{np]b}-g_{b[m}\varphi _{np]a}\right) \label{phipsi} \end{eqnarray} Essentially, these identities can be derived straight from the definitions of $\varphi $ and $\psi =\ast \varphi $ in flat space, (\ref{phi0def}) and (\ref{sphi0def}) respectively. For more details, please refer to \cite{bryant-2003} and \cite{karigiannis-2007}. Note that we are using a different convention to \cite{karigiannis-2007}, and hence some of the signs are different. Consider the product $\psi _{abcd}\psi ^{mnpq}$. Expanding $\psi $ as the Hodge star of $\varphi $, then using the usual identity for a product of Levi-Civita tensors and applying (\ref{phiphi1}) gives \begin{equation} \psi _{abcd}\psi ^{mnpq}=24\delta _{a}^{[m}\delta _{b}^{n}\delta _{c}^{p}\delta _{d}^{q]}+72\psi _{\lbrack ab}^{\ \ \ [mn}\delta _{c}^{p}\delta _{d]}^{q]}-16\varphi _{\lbrack abc}\varphi ^{\lbrack mnp}\delta _{d]}^{q]} \label{psipsi0} \end{equation} Contracting over $d$ and $q$ gives \begin{equation} \psi _{abcd}\psi ^{mnpd}=6\delta _{a}^{[m}\delta _{b}^{n}\delta _{c}^{p]}+9\psi _{\lbrack ab}^{\ \ \ \ [mn}\delta _{c]}^{p]}-\varphi _{abc}\varphi ^{mnp} \label{psipsi1} \end{equation} which agrees with the expression given in \cite{House:2004pm}.
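As a quick numerical sanity check (ours, not part of the original derivation), the identity (\ref{phiphi1}) can be verified directly in flat space from the components (\ref{phi0def}) and (\ref{sphi0def}). The NumPy sketch below (0-based indices; helper names are ours) also checks the contracted relation $\varphi_{abc}\varphi_m^{\ \ bc}=6g_{am}$:

```python
import itertools
import numpy as np

def perm_sign(p):
    """Sign of a permutation of distinct integers."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def build_form(rank, terms):
    """Totally antisymmetric tensor on R^7 from (sorted 1-based index tuple, coefficient) pairs."""
    T = np.zeros((7,) * rank)
    for idx, c in terms:
        base = tuple(i - 1 for i in idx)
        for p in itertools.permutations(range(rank)):
            T[tuple(base[i] for i in p)] = perm_sign(p) * c
    return T

phi = build_form(3, [((1,2,3), 1), ((1,4,5), 1), ((1,6,7), 1), ((2,4,6), 1),
                     ((2,5,7), -1), ((3,4,7), -1), ((3,5,6), -1)])
psi = build_form(4, [((4,5,6,7), 1), ((2,3,6,7), 1), ((2,3,4,5), 1), ((1,3,5,7), 1),
                     ((1,3,4,6), -1), ((1,2,5,6), -1), ((1,2,4,7), -1)])

delta = np.eye(7)  # flat metric g_0
lhs = np.einsum('abc,mnc->abmn', phi, phi)                     # phi_abc phi_mn^c
rhs = (np.einsum('am,bn->abmn', delta, delta)
       - np.einsum('an,bm->abmn', delta, delta) + psi)
assert np.allclose(lhs, rhs)                                   # identity (phiphi1)

assert np.allclose(np.einsum('abc,mbc->am', phi, phi), 6 * delta)  # phi_abc phi_m^bc = 6 g_am
assert np.isclose(np.einsum('abc,abc', phi, phi), 42.0)            # so |phi|^2 = 42/3! = 7
```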
Of course the above relations can be further contracted to obtain \begin{eqnarray} \varphi _{abc}\varphi _{m}^{\ \ \ bc} &=&6g_{am} \label{phiphi2} \\ \varphi _{abc}\psi _{mn}^{\ \ \ \ \ \ bc} &=&4\varphi _{amn} \label{phipsi2} \\ \psi _{abcd}\psi _{mn}^{\ \ \ \ \ cd} &=&4g_{am}g_{bn}-4g_{an}g_{bm}+2\psi _{abmn}. \label{psipsi2} \end{eqnarray}% Contracting even further, we are left with \begin{eqnarray} \varphi _{abc}\varphi ^{abc} &=&42 \label{phiphi3} \\ \varphi _{abc}\psi _{m}^{\ \ abc} &=&0 \label{phipsi3} \\ \psi _{abcd}\psi _{m}^{\ \ \ bcd} &=&24g_{am} \label{psipsi3} \\ \psi _{abcd}\psi ^{abcd} &=&168. \label{psipsi4} \end{eqnarray}% The relations (\ref{phiphi3}) and (\ref{psipsi4}) both yield $\left\vert \varphi \right\vert ^{2}=7$ in the inner product (\ref{forminp}). So in fact we have \begin{equation} V=\frac{1}{7}\int \varphi \wedge \ast \varphi \label{phiwpsi} \end{equation}% where $V$ is the volume of the manifold $X$. Now look in more detail at the decompositions (\ref{formdecompose}). We are in particular interested in decompositions of $2$-forms and $3$-forms since the decompositions for $4$-forms and $5$-forms are derived from these via Hodge duality. 
From \cite{bryant-2003} and \cite{karigiannis-2005-57}, we have \begin{eqnarray} \Lambda _{7}^{2} &=&\left\{ \omega \lrcorner \varphi :\omega \ \text{a vector field}\right\} \label{om27} \\ \Lambda _{14}^{2} &=&\left\{ \alpha =\frac{1}{2}\alpha _{ab}dx^{a}\wedge dx^{b}:\left( \alpha _{ab}\right) \in \mathfrak{g}_{2}\right\} \label{om214} \\ \Lambda _{1}^{3} &=&\left\{ f\varphi :f\ \text{a smooth function}\right\} \label{om31} \\ \Lambda _{7}^{3} &=&\left\{ \omega \lrcorner \ast \varphi :\omega \ \text{a vector field}\right\} \label{om37} \\ \Lambda _{27}^{3} &=&\left\{ \chi \in \Omega ^{3}:\chi \wedge \varphi =0\ \text{and }\chi \wedge \ast \varphi =0\right\} \label{om327} \end{eqnarray} Following \cite{bryant-2003}, it is enough to consider what happens in $\mathbb{R}^{7}$ in order to understand these decompositions. Consider first the Lie algebra $\mathfrak{so}\left( 7\right) $, which is the space of antisymmetric $7\times 7$ matrices. For a vector $\omega \in \mathbb{R}^{7}$, define the map $\rho _{\varphi }:\mathbb{R}^{7}\longrightarrow \mathfrak{so}\left( 7\right) $ by $\rho _{\varphi }\left( \omega \right) =\omega \lrcorner \varphi $; this map is clearly injective. Conversely, define the map $\tau _{\varphi }:\mathfrak{so}\left( 7\right) \longrightarrow \mathbb{R}^{7}$ such that $\tau _{\varphi }\left( \alpha _{ab}\right) ^{c}=\frac{1}{6}\varphi _{\ \ ab}^{c}\alpha ^{ab}$. From (\ref{phiphi2}), we get that \begin{equation*} \tau _{\varphi }\left( \rho _{\varphi }\left( \omega \right) \right) =\omega , \end{equation*} so that $\tau _{\varphi }$ is a partial inverse of $\rho _{\varphi }$.
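The relation $\tau_{\varphi}\circ\rho_{\varphi}=\mathrm{id}$ can also be checked numerically in flat space (our illustration, not part of the original text; 0-based indices, helper names are ours):

```python
import itertools
import numpy as np

def perm_sign(p):
    """Sign of a permutation of distinct integers."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def build_form(rank, terms):
    """Totally antisymmetric tensor on R^7 from (sorted 1-based index tuple, coefficient) pairs."""
    T = np.zeros((7,) * rank)
    for idx, c in terms:
        base = tuple(i - 1 for i in idx)
        for p in itertools.permutations(range(rank)):
            T[tuple(base[i] for i in p)] = perm_sign(p) * c
    return T

phi = build_form(3, [((1,2,3), 1), ((1,4,5), 1), ((1,6,7), 1), ((2,4,6), 1),
                     ((2,5,7), -1), ((3,4,7), -1), ((3,5,6), -1)])

rng = np.random.default_rng(0)
omega = rng.normal(size=7)                       # an arbitrary vector

alpha = np.einsum('c,cab->ab', omega, phi)       # rho_phi(omega) = omega ⌟ phi
tau = np.einsum('cab,ab->c', phi, alpha) / 6.0   # tau_phi(alpha)^c = (1/6) phi^c_ab alpha^ab

assert np.allclose(alpha, -alpha.T)              # rho_phi(omega) lands in so(7)
assert np.allclose(tau, omega)                   # tau_phi o rho_phi = id
```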
Now the Lie algebra $\mathfrak{g}_{2}$ can be defined as the kernel of $\tau _{\varphi }$ \cite{karigiannis-2007}, that is \begin{equation} \mathfrak{g}_{2}=\ker \tau _{\varphi }=\left\{ \alpha \in \mathfrak{so}\left( 7\right) :\varphi _{abc}\alpha ^{bc}=0\right\} . \label{liealgg2} \end{equation} This further implies that we get the following decomposition of $\mathfrak{so}\left( 7\right) $: \begin{equation} \mathfrak{so}\left( 7\right) =\mathfrak{g}_{2}\oplus \rho _{\varphi }\left( \mathbb{R}^{7}\right) . \label{so7decom} \end{equation} The group $G_{2}$ acts via the adjoint representation on the $14$-dimensional vector space $\mathfrak{g}_{2}$ and via the natural, vector representation on the $7$-dimensional space $\rho _{\varphi }\left( \mathbb{R}^{7}\right) $. This is a $G_{2}$-invariant irreducible decomposition of $\mathfrak{so}\left( 7\right) $ into the representations $\mathbf{7}$ and $\mathbf{14}$. Hence follows the decomposition (\ref{l2decom}) of $\Lambda ^{2}$, and also the characterizations (\ref{om27}) and (\ref{om214}). Following \cite{bryant-2003} again, let us look at $\Lambda _{27}^{3}$ in more detail. Consider $Sym^{2}\left( \left( \mathbb{R}^{7}\right) ^{\ast }\right) $ - the space of symmetric $2$-tensors - and define a map $\mathrm{i}_{\varphi }:Sym^{2}\left( \left( \mathbb{R}^{7}\right) ^{\ast }\right) \longrightarrow \Lambda ^{3}\left( \left( \mathbb{R}^{7}\right) ^{\ast }\right) $ by \begin{equation} \mathrm{i}_{\varphi }\left( h\right) _{abc}=h_{[a}^{d}\varphi _{bc]d} \label{iphidef} \end{equation} Clearly, \begin{equation*} \mathrm{i}_{\varphi }\left( g\right) _{abc}=\varphi _{abc}.
\end{equation*} Now, we can decompose $Sym^{2}\left( \left( \mathbb{R}^{7}\right) ^{\ast }\right) =\mathbb{R}g\oplus Sym_{0}^{2}\left( \left( \mathbb{R}^{7}\right) ^{\ast }\right) $ where $\mathbb{R}g$ is the set of symmetric tensors proportional to the metric $g$ and $Sym_{0}^{2}\left( \left( \mathbb{R}^{7}\right) ^{\ast }\right) $ is the set of traceless symmetric tensors. This is a $G_{2}$-invariant irreducible decomposition of $Sym^{2}\left( \left( \mathbb{R}^{7}\right) ^{\ast }\right) $ into $1$-dimensional and $27$-dimensional components. The map $\mathrm{i}_{\varphi }$ is also $G_{2}$-invariant and is injective on each summand of this decomposition. Looking at the first summand, we get that $\mathrm{i}_{\varphi }\left( \mathbb{R}g\right) =\Lambda _{1}^{3}\left( \left( \mathbb{R}^{7}\right) ^{\ast }\right) $. Now look at the second summand and consider $\mathrm{i}_{\varphi }\left( Sym_{0}^{2}\left( \left( \mathbb{R}^{7}\right) ^{\ast }\right) \right) $. This is $27$-dimensional and irreducible, so by dimension count it follows easily that $\mathrm{i}_{\varphi }\left( Sym_{0}^{2}\left( \left( \mathbb{R}^{7}\right) ^{\ast }\right) \right) =\Lambda _{27}^{3}\left( \left( \mathbb{R}^{7}\right) ^{\ast }\right) $. All of this carries over to $3$-forms on our $G_{2}$ manifold $X$, and so we get \begin{equation} \Lambda _{27}^{3}=\left\{ \chi \in \Lambda ^{3}:\chi _{abc}=h_{[a}^{d}\varphi _{bc]d}\ \text{for }h_{ab}~\text{traceless and symmetric}\right\} . \label{lamb327} \end{equation} From the identities for contraction of $\varphi $ and $\ast \varphi $, it is possible to see that this is equivalent to the description (\ref{om327}) of $\Lambda _{27}^{3}$. Thus we see that $1$-dimensional components correspond to scalars, $7$-dimensional components correspond to vectors and $27$-dimensional components correspond to traceless symmetric matrices.
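The equivalence of (\ref{lamb327}) with (\ref{om327}) can be illustrated numerically in flat space (our sketch, not part of the original argument): for a random traceless symmetric $h$, the form $\chi=\mathrm{i}_{\varphi}(h)$ should have vanishing contractions $\chi_{abc}\varphi^{abc}$ and $\chi_{mnp}\psi^{mnpa}$, i.e.\ no $\Lambda_1^3$ or $\Lambda_7^3$ component (0-based indices, helper names are ours):

```python
import itertools
import numpy as np

def perm_sign(p):
    """Sign of a permutation of distinct integers."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def build_form(rank, terms):
    """Totally antisymmetric tensor on R^7 from (sorted 1-based index tuple, coefficient) pairs."""
    T = np.zeros((7,) * rank)
    for idx, c in terms:
        base = tuple(i - 1 for i in idx)
        for p in itertools.permutations(range(rank)):
            T[tuple(base[i] for i in p)] = perm_sign(p) * c
    return T

phi = build_form(3, [((1,2,3), 1), ((1,4,5), 1), ((1,6,7), 1), ((2,4,6), 1),
                     ((2,5,7), -1), ((3,4,7), -1), ((3,5,6), -1)])
psi = build_form(4, [((4,5,6,7), 1), ((2,3,6,7), 1), ((2,3,4,5), 1), ((1,3,5,7), 1),
                     ((1,3,4,6), -1), ((1,2,5,6), -1), ((1,2,4,7), -1)])

rng = np.random.default_rng(1)
A = rng.normal(size=(7, 7))
h = (A + A.T) / 2.0
h -= (np.trace(h) / 7.0) * np.eye(7)             # random traceless symmetric h

t = np.einsum('ad,bcd->abc', h, phi)             # h_a^d phi_{bcd}
chi = np.zeros((7, 7, 7))
for p in itertools.permutations(range(3)):
    chi += perm_sign(p) * np.transpose(t, p)
chi /= 6.0                                       # chi_{abc} = h_[a^d phi_bc]d

assert abs(np.einsum('abc,abc', chi, phi)) < 1e-10           # no Lambda^3_1 part
assert np.allclose(np.einsum('mnp,mnpa->a', chi, psi), 0.0)  # no Lambda^3_7 part
```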
Now suppose we have $\chi \in \Lambda ^{3}$; then it is often useful to be able to compute the different projections of $\chi $ into $\Lambda _{1}^{3}$%, $\Lambda _{7}^{3}$ and $\Lambda _{27}^{3}$. Denote these projections by $% \pi _{1}$, $\pi _{7}$ and $\pi _{27}$, respectively. As shown in Appendix 1, we have the following relations% \begin{eqnarray} \pi _{1}\left( \chi \right) &=&a\varphi \ \text{where }a=\frac{1}{42}\left( \chi _{abc}\varphi ^{abc}\right) =\frac{1}{7}\,\left\langle \chi ,\varphi \right\rangle \ \text{and }\left\vert \pi _{1}\left( \chi \right) \right\vert ^{2}=7a^{2} \label{p1chi} \\ \pi _{7}\left( \chi \right) &=&\omega \lrcorner \ast \varphi \ \text{where }% \omega ^{a}=-\frac{1}{24}\chi _{mnp}\psi ^{mnpa}\ \ \text{and }\left\vert \pi _{7}\left( \chi \right) \right\vert ^{2}=4\left\vert \omega \right\vert ^{2} \label{p7chi} \\ \pi _{27}\left( \chi \right) &=&\mathrm{i}_{\varphi }\left( h\right) \ \text{where }h_{ab}=\frac{3}{4}\chi _{mn\{a}\varphi _{b\}}^{\ \ mn}\ \text{and }% \left\vert \pi _{27}\left( \chi \right) \right\vert ^{2}=\frac{2}{9}% \left\vert h\right\vert ^{2}. \label{p27chi} \end{eqnarray}% Here $\{a\ b\}$ denotes the traceless symmetric part. \section{$G_{2}$ manifolds in $M$-theory \label{mtheorysec}} Special holonomy manifolds play a very important role in string and $M$% -theory because of their relation to supersymmetry. In general, if we compactify string or $M$-theory on a manifold of special holonomy $X$, the preservation of supersymmetry is related to the existence of covariantly constant spinors (also known as parallel spinors).
In fact, if all bosonic fields except the metric are set to zero, and a supersymmetric vacuum solution is sought, then in both string and $M$-theory, this gives precisely the equation \begin{equation} \nabla \xi =0 \label{covconstspinor} \end{equation}% for a spinor $\xi $. As lucidly explained in \cite{AcharyaGukov}, condition (% \ref{covconstspinor}) on a spinor immediately implies special holonomy. Such a spinor $\xi $ is invariant under parallel transport, and is hence invariant under the action of the holonomy group $Hol\left( X,g\right) $. This shows that the spinor representation of $Hol\left( X,g\right) $ must contain the trivial representation. For $Hol\left( X,g\right) =SO\left( n\right) $ this is not possible, since the spinor representation of $SO\left( n\right) $ does not contain the trivial representation, so $Hol\left( X,g\right) $ must be a proper subgroup of $SO\left( n\right) $. In particular, Calabi-Yau 3-folds with $SU\left( 3\right) $ holonomy admit two covariantly constant spinors and $G_{2}$ holonomy manifolds admit only one covariantly constant spinor. Consider the bosonic action of eleven-dimensional supergravity \cite% {Cremmer:1978km}, which is supposed to describe low-energy $M$-theory: \begin{equation} S=\frac{1}{2}\int d^{11}x\left( -\hat{g}\right) ^{\frac{1}{2}}R^{\left( 11\right) }-\frac{1}{4}\int G\wedge \ast G-\frac{1}{12}\int C\wedge G\wedge G \label{sugraction} \end{equation}% where $\hat{g}$ is the metric on the $11$-dimensional space $M$ and $C$ is a $3$-form potential with field strength $G=dC$. From (\ref{sugraction}), the equation of motion for $C$ is found to be \begin{equation} d\ast G=\frac{1}{2}G\wedge G. \label{dgeom} \end{equation}% Suppose we fix $M=M_{4}\times X$ where $M_{4}$ is the $4$-dimensional Minkowski space and $X$ is a space with holonomy equal to $G_{2}$. Then $M$ is Ricci-flat, so Einstein's equation forces $G$ to vanish. However, it turns out that the assumption that $G_{X}=\left. G\right\vert _{X}=0$ is not an obvious one to make.
In fact, as explained in \cite{Witten:1996md}, Dirac quantization on $X$ gives a shifted quantization condition: \begin{equation} \left[ \frac{G_{X}}{2\pi }\right] -\frac{\lambda }{2}\in H^{4}\left( X,% \mathbb{Z}\right) \label{quantcond} \end{equation}% where $\left[ \frac{G_{X}}{2\pi }\right] $ is the cohomology class of $\frac{% G_{X}}{2\pi }$ and $\lambda =\frac{1}{2}p_{1}\left( X\right) $ where $% p_{1}\left( X\right) $ is the first Pontryagin class of $X$. So if $\lambda $ were not even in $H^{4}\left( X,\mathbb{Z}\right) $, then the ansatz $% G_{X}=0 $ would not be consistent. Nonetheless, it was shown in \cite% {Harvey:1999as} that if $X$ is a seven-dimensional spin manifold (in particular, a $G_{2}$ holonomy manifold), then in fact $\lambda $ is even, and setting $G_{X}=0$ is consistent. So overall the simplest, Ricci-flat vacuum solutions are given by% \begin{eqnarray} \left\langle \hat{g}\right\rangle &=&\eta \times g_{7} \label{vacghat} \\ \left\langle C\right\rangle &=&0 \label{vacc} \\ \left\langle G\right\rangle &=&0 \label{vacg} \end{eqnarray}% where $\left\langle \cdot \right\rangle $ denotes the vacuum expectation value and $g_{7}$ is some metric with $G_{2}$ holonomy while $\eta $ is the standard metric on the four-dimensional Minkowski space. However, we know that a $G_{2}$ structure and hence the metric $g_{7}$ is determined by a $G_{2}$%-invariant $3$-form $\varphi _{0}$, so we have \begin{equation} \left\langle \varphi \right\rangle =\varphi _{0}.
\label{vacphi} \end{equation}% Now consider small fluctuations about the vacuum, \begin{eqnarray} \hat{g} &=&\left\langle \hat{g}\right\rangle +\delta \hat{g} \label{vacgpert} \\ C &=&\left\langle C\right\rangle +\delta C=\delta C \label{vaccpert} \\ \varphi &=&\left\langle \varphi \right\rangle +\delta \varphi =\varphi _{0}+\delta \varphi \label{vacphipert} \end{eqnarray} So a Kaluza-Klein ansatz for $C$ can be written as \begin{equation} C=\sum_{N=1}^{b_{3}}c^{N}\left( x\right) \phi _{N}+\sum_{I=1}^{b_{2}}A^{I}\left( x\right) \wedge \alpha _{I} \label{Cansatz} \end{equation}% where $\left\{ \phi _{N}\right\} $ are a basis for harmonic $3$-forms on $X$% , $\left\{ \alpha _{I}\right\} $ are a basis for harmonic $2$-forms on $X$, $% c^{N}\left( x\right) $ are scalars on $M_{4}$ and $A^{I}\left( x\right) $ are $1$-forms on $M_{4}$ which describe the fluctuations of $C$. Also $b_{2}$ and $b_{3}$ are the Betti numbers of $X$. Since we assume that $X$ has holonomy equal to $G_{2}$, $b_{1}=0$, so in (\ref{Cansatz}) we do not have a contribution from harmonic $1$-forms on $X$. Now, deformations of the metric on $X$ are encoded in the deformations of $\varphi $ and since $\varphi $ is harmonic on $X$, we parametrize $\varphi $ as \begin{equation} \varphi =\sum_{N=1}^{b_{3}}s^{N}\left( x\right) \phi _{N}. \label{phiansatz} \end{equation}% Overall, in $4$ dimensions we get $b_{3}$ real scalars $c^{N}$ and $b_{3}$ real scalars $s^{N}$. Together these combine into $b_{3}$ massless complex scalars $z^{N}$:% \begin{equation} z^{N}=\frac{1}{2}\left( s^{N}+ic^{N}\right) . \label{complexz} \end{equation}% In the $4$-dimensional supergravity theory this gives $b_{3}$ massless chiral superfields. The $1$-forms $A^{I}$ in (\ref{Cansatz}) give rise to $% b_{2}$ massless Abelian gauge fields, and together with superpartners arising from the gravitino fields, these form $b_{2}$ massless vector superfields \cite{AcharyaGukov}. 
Thus overall, in four dimensions the effective low-energy theory is $\mathcal{N}=1$ supergravity coupled to $% b_{2} $ abelian vector supermultiplets and $b_{3}$ massless chiral supermultiplets. The physical theory is not very interesting from a phenomenological point of view, since the gauge group is abelian and there are no charged particles. However, the combination (\ref{complexz}) proves to be very useful for studying the moduli space of $G_{2}$ manifolds, since it provides a natural, physically motivated complexification of the pure $G_{2}$ moduli space - something very similar to the complexified K\"{a}hler cone used in the study of Calabi-Yau moduli spaces. Let us now use our Kaluza-Klein ansatz to reduce the $11$-dimensional action (\ref{sugraction}) to $4$ dimensions. Here we follow \cite{WittenBeasley},% \cite{Gutowski:2001fm} and \cite{House:2004pm}. The term which interests us is the kinetic term for the $z^{N}$. The kinetic term for the $c^{N}$, $% L_{kin}\left( c\right) $, comes from the reduction of the $G\wedge \ast G$ term in (\ref{sugraction}). After switching to the Einstein frame by $g_{\mu \nu }\longrightarrow V^{-1}g_{\mu \nu }$ we immediately see that this gives \begin{equation} L_{kin}\left( c\right) =-\frac{1}{4V}\partial _{\mu }c^{M}\partial ^{\mu }c^{N}\int_{X}\phi _{M}\wedge \ast \phi _{N} \label{lkinc} \end{equation}% The kinetic term for the $s^{M}$ appears from the reduction of the $% R^{\left( 11\right) }$ term in (\ref{sugraction}). This is less straightforward than the derivation of $L_{kin}\left( c\right) $, but the calculation was shown explicitly in \cite{House:2004pm}.
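To see where (\ref{lkinc}) comes from, note that with the ansatz (\ref{Cansatz}) and harmonic forms $\phi _{N}$, $\alpha _{I}$ on $X$ (a sketch; signs and overall normalizations suppressed):

```latex
\begin{equation*}
G=dC=\sum_{N=1}^{b_{3}}dc^{N}\wedge \phi _{N}+\sum_{I=1}^{b_{2}}F^{I}\wedge
\alpha _{I},\qquad F^{I}=dA^{I},
\end{equation*}
```

so the $G\wedge \ast G$ term splits into a piece proportional to $\partial _{\mu }c^{M}\partial ^{\mu }c^{N}\int_{X}\phi _{M}\wedge \ast \phi _{N}$, which gives (\ref{lkinc}), and a piece proportional to $F_{mn}^{I}F^{J\,mn}\int_{X}\alpha _{I}\wedge \ast \alpha _{J}$, which produces the gauge kinetic terms appearing in (\ref{fulllag}).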
From the general properties of the Ricci scalar we can decompose the eleven-dimensional Einstein-Hilbert action as \begin{equation} \int d^{11}x\left( -\hat{g}\right) ^{\frac{1}{2}}R^{\left( 11\right) }=\int d^{11}x\left( -\hat{g}\right) ^{\frac{1}{2}}V\left( R^{\left( 4\right) }+R^{\left( 7\right) }+\frac{1}{4V}\left( \partial _{\mu }g_{mn}\partial ^{\mu }g^{mn}-\func{Tr}\left( \partial _{\mu }g\right) \func{Tr}\left( \partial ^{\mu }g\right) \right) \right) . \label{riccidecom} \end{equation}% Then, using deformation properties of the $G_{2}$ metric $g_{mn}$ from section \ref{deformsec}, and switching to the Einstein frame $g_{\mu \nu }\longrightarrow V^{-1}g_{\mu \nu }$, we eventually get \begin{equation} L_{kin}\left( s\right) =-\frac{1}{4V}\partial _{\mu }s^{M}\partial ^{\mu }s^{N}\int_{X}\phi _{M}\wedge \ast \phi _{N}. \label{lkins} \end{equation}% The kinetic term of the dimensionally reduced action is in general given in the Einstein frame by \begin{equation} L_{kin}=-G_{M\bar{N}}\partial _{\mu }z^{M}\partial ^{\mu }\bar{z}^{N}. \label{lkingen} \end{equation}% Comparing (\ref{lkingen}) with (\ref{lkinc}) and (\ref{lkins}), we can read off the moduli space metric $G_{M\bar{N}}$ as \begin{equation} G_{M\bar{N}}=\frac{1}{V}\int_{X}\phi _{M}\wedge \ast \phi _{\bar{N}}. \label{modulimetric} \end{equation}% Note that the Hodge star implicitly depends on the coordinates $z^{M}$, so this metric is quite non-trivial. The bosonic part of the fully reduced $4$-dimensional Lagrangian is given in this case by \cite{Papadopoulos:1995da},\cite{Gutowski:2001fm} \begin{equation} L=-G_{M\bar{N}}\partial _{\mu }z^{M}\partial ^{\mu }\bar{z}^{N}-\frac{1}{4}% \func{Re}h_{IJ}F_{mn}^{I}F^{Jmn}+\frac{1}{4}\func{Im}h_{IJ}F_{mn}^{I}\ast F^{Jmn} \label{fulllag} \end{equation}% where $G_{M\bar{N}}$ is as in (\ref{modulimetric}), and \begin{equation*} F_{mn}^{I}=\partial _{m}A_{n}^{I}-\partial _{n}A_{m}^{I}.
\end{equation*}% The couplings $\func{Re}h_{IJ}$ and $\func{Im}h_{IJ}$ are given by \begin{eqnarray} \func{Re}h_{IJ}\left( s\right) &=&\frac{1}{2}\int \alpha _{I}\wedge \ast \alpha _{J}=-\frac{1}{2}s^{M}\int \alpha _{I}\wedge \alpha _{J}\wedge \phi _{M} \label{rehij} \\ \func{Im}h_{IJ}\left( c\right) &=&-\frac{1}{2}c^{M}\int \alpha _{I}\wedge \alpha _{J}\wedge \phi _{M} \label{imhij} \end{eqnarray}% To get the second equality in (\ref{rehij}) we have used that $% H^{2}=H_{14}^{2}$ for manifolds with $G_{2}$ holonomy and that for a $2$% -form $\alpha $, $2\ast \pi _{7}\left( \alpha \right) -\ast \pi _{14}\left( \alpha \right) =\alpha \wedge \varphi $. A proof of this fact can be found in \cite{karigiannis-2005-57}. \section{Deformations of $G_{2}$ structures \label{deformsec}} \setcounter{equation}{0} As we already know, the $G_{2}$ structure on $X$ and the corresponding metric $g$ are both determined by the invariant $3$% -form $\varphi $. Hence, deformations of $\varphi $ will induce deformations of the metric. These deformations of the metric will then also affect $\ast \varphi $. Since the relationship (\ref{metricdef}) between $g$ and $\varphi $ is non-linear, the resulting deformations of the metric are highly non-trivial, and in general it is not possible to write them down in closed form. However, as shown by Karigiannis in \cite% {karigiannis-2005-57}, metric deformations can be made explicit when the $3$% -form deformations are either in $\Lambda _{1}^{3}$ or $\Lambda _{7}^{3}$. We now briefly review some of these results.
First suppose \begin{equation} \tilde{\varphi}=f\varphi . \label{phitildeconf} \end{equation}% Then from (\ref{metriccomp1}) we get \begin{eqnarray} \tilde{g}_{ab}\sqrt{\det \tilde{g}} &=&\frac{1}{144}\tilde{\varphi}_{amn}% \tilde{\varphi}_{bpq}\tilde{\varphi}_{rst}\hat{\varepsilon}^{mnpqrst} \notag \\ &=&f^{3}g_{ab}\sqrt{\det g} \label{p1g1} \end{eqnarray}% After taking the determinant on both sides, we obtain \begin{equation} \det \tilde{g}=f^{\frac{14}{3}}\det g. \label{p1detg1} \end{equation}% Substituting (\ref{p1detg1}) into (\ref{p1g1}), we finally get \begin{equation} \tilde{g}_{ab}=f^{\frac{2}{3}}g_{ab}, \label{p1g2} \end{equation}% and hence \begin{equation} \tilde{\ast}\tilde{\varphi}=f^{\frac{4}{3}}\ast \varphi . \label{p1sphi1} \end{equation}% Therefore, a scaling of $\varphi $ gives a conformal transformation of the metric; hence deformations of $\varphi $ in the direction $\Lambda _{1}^{3}$ give infinitesimal conformal transformations. Suppose $f=1+\varepsilon a$; then to fourth order in $\varepsilon $, we can write \begin{equation} \tilde{\ast}\tilde{\varphi}=\left( \allowbreak 1+\frac{4}{3}a\varepsilon +% \frac{2}{9}a^{2}\varepsilon ^{2}-\frac{4}{81}a^{3}\varepsilon ^{3}+\frac{5}{% 243}a^{4}\varepsilon ^{4}+O\left( \varepsilon ^{5}\right) \right) \ast \varphi \allowbreak . \label{p1sphi2} \end{equation} Now, suppose in general that $\tilde{\varphi}=\varphi +\varepsilon \chi $ for some $\chi \in \Lambda ^{3}$.
Then using (\ref{metricdef}) for the definition of the metric associated with $\tilde{\varphi}$, \begin{eqnarray} \widetilde{\left\langle u,v\right\rangle }\widetilde{\mathrm{vol}} &=&\frac{1% }{6}\left( u\lrcorner \tilde{\varphi}\right) \wedge \left( v\lrcorner \tilde{% \varphi}\right) \wedge \tilde{\varphi} \notag \\ &=&\frac{1}{6}\left( u\lrcorner \varphi \right) \wedge \left( v\lrcorner \varphi \right) \wedge \varphi \label{rhsdeform} \\ &&+\frac{1}{6}\varepsilon \left[ \left( u\lrcorner \chi \right) \wedge \left( v\lrcorner \varphi \right) \wedge \varphi +\left( u\lrcorner \varphi \right) \wedge \left( v\lrcorner \chi \right) \wedge \varphi +\left( u\lrcorner \varphi \right) \wedge \left( v\lrcorner \varphi \right) \wedge \chi \right] \notag \\ &&+\frac{1}{6}\varepsilon ^{2}\left[ \left( u\lrcorner \chi \right) \wedge \left( v\lrcorner \chi \right) \wedge \varphi +\left( u\lrcorner \varphi \right) \wedge \left( v\lrcorner \chi \right) \wedge \chi +\left( u\lrcorner \chi \right) \wedge \left( v\lrcorner \varphi \right) \wedge \chi \right] \notag \\ &&+\frac{1}{6}\varepsilon ^{3}\left( u\lrcorner \chi \right) \wedge \left( v\lrcorner \chi \right) \wedge \chi \notag \end{eqnarray}% After some manipulations, we can rewrite this as: \begin{eqnarray} \widetilde{\left\langle u,v\right\rangle }\widetilde{\mathrm{vol}} &=&\frac{1% }{6}\left( u\lrcorner \varphi \right) \wedge \left( v\lrcorner \varphi \right) \wedge \varphi \label{rhsdeform2} \\ &&+\frac{1}{2}\varepsilon \left[ \left( u\lrcorner \chi \right) \wedge \ast \left( v\lrcorner \varphi \right) +\left( v\lrcorner \chi \right) \wedge \ast \left( u\lrcorner \varphi \right) \right] \notag \\ &&+\frac{1}{2}\varepsilon ^{2}\left( u\lrcorner \chi \right) \wedge \left( v\lrcorner \chi \right) \wedge \varphi \notag \\ &&+\frac{1}{6}\varepsilon ^{3}\left( u\lrcorner \chi \right) \wedge \left( v\lrcorner \chi \right) \wedge \chi . 
\notag \end{eqnarray}% Rewriting (\ref{rhsdeform2}) in local coordinates, we get% \begin{equation} \tilde{g}_{ab}\frac{\sqrt{\det \tilde{g}}}{\sqrt{\det g}}=g_{ab}+\frac{1}{2}% \varepsilon \chi _{mn(a}\varphi _{b)}^{\ \ mn}+\frac{1}{8}\varepsilon ^{2}\chi _{amn}\chi _{bpq}\psi ^{mnpq}+\frac{1}{24}\varepsilon ^{3}\chi _{amn}\chi _{bpq}\left( \ast \chi \right) ^{mnpq} \label{rhsdeform3} \end{equation}% Now suppose the deformation is in the $\Lambda _{7}^{3}$ direction. This implies that \begin{equation} \chi =\omega \lrcorner \ast \varphi \label{p7chi1} \end{equation}% for some vector field $\omega $. Look at the first order term. From (\ref% {p1chi}) and (\ref{p27chi}) we see that this is essentially a projection onto $\Lambda _{1}^{3}\oplus \Lambda _{27}^{3}$ - the traceless part gives the $\Lambda _{27}^{3}$ component and the trace gives the $\Lambda _{1}^{3}$ component. Hence this term vanishes for $\chi \in \Lambda _{7}^{3}$. For the third order term, it is more convenient to study it in (\ref{rhsdeform2}% ). By looking at \begin{equation*} \omega \lrcorner \left( \left( u\lrcorner \omega \lrcorner \ast \varphi \right) \wedge \left( v\lrcorner \omega \lrcorner \ast \varphi \right) \wedge \ast \varphi \right) =0 \end{equation*}% we immediately see that the third order term vanishes. So now we are left with% \begin{eqnarray} \tilde{g}_{ab}\sqrt{\det \tilde{g}} &=&\left( g_{ab}+\frac{1}{8}\varepsilon ^{2}\omega ^{c}\omega ^{d}\psi _{camn}\psi _{dbpq}\psi ^{mnpq}\right) \sqrt{% \det g} \notag \\ &=&\left( g_{ab}\left( 1+\varepsilon ^{2}\left\vert \omega \right\vert ^{2}\right) -\varepsilon ^{2}\omega _{a}\omega _{b}\right) \sqrt{\det g} \label{p7gabtil} \end{eqnarray}% where we have used the contraction identity for $\psi $ (\ref{psipsi2}) twice.
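Formula (\ref{p7gabtil}) is polynomial in $\varepsilon $ (the first and third order terms vanish identically), so it can be tested at finite $\varepsilon $ by evaluating $\tilde{g}_{ab}\sqrt{\det \tilde{g}}$ directly from (\ref{metriccomp1}); the same computation also checks the conformal statements (\ref{p1g1})--(\ref{p1g2}) and the series coefficients in (\ref{p1sphi2}). A sketch in Python with exact arithmetic, again assuming the standard coordinate expression for $\varphi $ and taking $\omega =e_{1}$:

```python
from fractions import Fraction
from itertools import permutations

N = 7

def build_phi():
    """Full antisymmetric tensor phi_{abc} of the flat G2 form (0-indexed)."""
    phi = [[[0] * N for _ in range(N)] for _ in range(N)]
    for a, b, c, s in [(0, 1, 2, 1), (0, 3, 4, 1), (0, 5, 6, 1), (1, 3, 5, 1),
                       (1, 4, 6, -1), (2, 3, 6, -1), (2, 4, 5, -1)]:
        for (i, j, k), sgn in [((a, b, c), 1), ((b, c, a), 1), ((c, a, b), 1),
                               ((a, c, b), -1), ((c, b, a), -1), ((b, a, c), -1)]:
            phi[i][j][k] = s * sgn
    return phi

def perm_sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

phi = build_phi()

def metric_density(ph):
    """s_{ab} = (1/144) ph_{amn} ph_{bpq} ph_{rst} eps^{mnpqrst}."""
    s = [[0] * N for _ in range(N)]
    for p in permutations(range(N)):
        c = ph[p[4]][p[5]][p[6]]
        if not c:
            continue
        sg = perm_sign(p) * c
        for a in range(N):
            x = ph[a][p[0]][p[1]]
            if x:
                for b in range(N):
                    s[a][b] += sg * x * ph[b][p[2]][p[3]]
    return [[Fraction(s[a][b], 144) for b in range(N)] for a in range(N)]

delta = [[1 if a == b else 0 for b in range(N)] for a in range(N)]

# flat case: s_{ab} = g_{ab} sqrt(det g) = delta_{ab}
ok_flat = metric_density(phi) == delta

# conformal case phi~ = f*phi with f = 8: s~ = f^3 delta = 512 delta, so
# g~ = c*delta with c^(9/2) = 512, i.e. c = 4 = f^(2/3) and det g~ = 4^7 = f^(14/3)
phif = [[[8 * phi[a][b][c] for c in range(N)] for b in range(N)] for a in range(N)]
ok_conf = (metric_density(phif) == [[512 * delta[a][b] for b in range(N)]
                                    for a in range(N)]
           and 4 ** 9 == 512 ** 2 and 4 ** 7 == 2 ** 14)

# binomial coefficients of (1 + eps*a)^(4/3) up to fourth order
coef = [Fraction(1)]
for k in range(1, 5):
    coef.append(coef[-1] * (Fraction(4, 3) - (k - 1)) / k)

# Lambda_7^3 deformation: psi = *phi, chi = e_1 -| psi, phi~ = phi + eps*chi;
# expect s~ = (1 + eps^2) delta - eps^2 (e_1 o e_1)
psi_raw = [[[[0] * N for _ in range(N)] for _ in range(N)] for _ in range(N)]
for p in permutations(range(N)):
    c = phi[p[4]][p[5]][p[6]]
    if c:
        psi_raw[p[0]][p[1]][p[2]][p[3]] += perm_sign(p) * c
psi = [[[[psi_raw[a][b][c][d] // 6 for d in range(N)] for c in range(N)]
        for b in range(N)] for a in range(N)]

ok_7 = True
for eps in (1, 2):
    pht = [[[phi[a][b][c] + eps * psi[0][a][b][c] for c in range(N)]
            for b in range(N)] for a in range(N)]
    S = metric_density(pht)
    for a in range(N):
        for b in range(N):
            want = ((1 + eps * eps if a == b else 0)
                    - (eps * eps if a == b == 0 else 0))
            ok_7 = ok_7 and S[a][b] == want
print(ok_flat, ok_conf, coef, ok_7)
```

Note that the $\Lambda _{7}^{3}$ check is insensitive to the orientation convention used for $\psi $, since only even powers of $\chi $ contribute to $\tilde{s}_{ab}$.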
Taking the determinant of (\ref{p7gabtil}) gives \begin{eqnarray} \sqrt{\det \tilde{g}} &=&\left( 1+\varepsilon ^{2}\left\vert \omega \right\vert ^{2}\right) ^{\frac{2}{3}}\sqrt{\det g} \label{p7detg} \\ \tilde{g}_{ab} &=&\left( 1+\varepsilon ^{2}\left\vert \omega \right\vert ^{2}\right) ^{-\frac{2}{3}}\left( \left( g_{ab}\left( 1+\varepsilon ^{2}\left\vert \omega \right\vert ^{2}\right) -\varepsilon ^{2}\omega _{a}\omega _{b}\right) \right) \label{p7gab2} \end{eqnarray}% and eventually, \begin{equation} \tilde{\ast}\tilde{\varphi}=\left( 1+\varepsilon ^{2}\left\vert \omega \right\vert ^{2}\right) ^{-\frac{1}{3}}\left( \ast \varphi +\ast \varepsilon \left( \omega \lrcorner \ast \varphi \right) +\varepsilon ^{2}\omega \lrcorner \ast \left( \omega \lrcorner \varphi \right) \right) . \label{p7starphi1} \end{equation}% The details of these last steps can be found in \cite{karigiannis-2005-57}. Notice that to first order in $\varepsilon $, both $\sqrt{\det g}$ and $% g_{ab}$ remain unchanged under this deformation. Now let us examine the last term in (\ref{p7starphi1}) in more detail. Firstly, we have \begin{equation*} \omega \lrcorner \ast \left( \omega \lrcorner \varphi \right) =\ast \left( \omega ^{\flat }\wedge \left( \omega \lrcorner \varphi \right) \right) \end{equation*}% and \begin{eqnarray} \left( \omega ^{\flat }\wedge \left( \omega \lrcorner \varphi \right) \right) _{mnp} &=&3\omega _{\lbrack m}\omega ^{a}\varphi _{\left\vert a\right\vert np]} \notag \\ &=&3\mathrm{i}_{\varphi }\left( \omega \circ \omega \right) \label{omomphi1} \end{eqnarray}% where $\left( \omega \circ \omega \right) _{ab}=\omega _{a}\omega _{b}$. Therefore, in (\ref{p7starphi1}), this term gives $\Lambda _{1}^{4}$ and $% \Lambda _{27}^{4}$ components. 
So, we can write (\ref{p7starphi1}) as \begin{equation} \tilde{\ast}\tilde{\varphi}=\left( 1+\varepsilon ^{2}\left\vert \omega \right\vert ^{2}\right) ^{-\frac{1}{3}}\left( \left( 1+\frac{3}{7}% \varepsilon ^{2}\left\vert \omega \right\vert ^{2}\right) \ast \varphi +\ast \varepsilon \left( \omega \lrcorner \ast \varphi \right) +3\varepsilon ^{2}\ast \mathrm{i}_{\varphi }\left( \left( \omega \circ \omega \right) _{0}\right) \right) . \label{p7starphi2} \end{equation}% Here $\left( \omega \circ \omega \right) _{0}$ denotes the traceless part of $\omega \circ \omega ,$ so that $\mathrm{i}_{\varphi }\left( \left( \omega \circ \omega \right) _{0}\right) \in \Lambda _{27}^{3}$ and thus, in (\ref% {p7starphi2}), the components in different representations are now explicitly shown. As we have seen above, in the cases when the deformations were in the $\Lambda _{1}^{3}$ or $\Lambda _{7}^{3}$ directions, there were some simplifications, which made it possible to write down all results in closed form. Now, however, we will look at deformations in the $\Lambda _{27}^{3}$ direction, working to fourth order in $\varepsilon $. So suppose we have a deformation \begin{equation*} \tilde{\varphi}=\varphi +\varepsilon \chi \end{equation*}% where $\chi \in \Lambda _{27}^{3}$. Now let us set up some notation. Define \begin{eqnarray} \tilde{s}_{ab} &=&\frac{1}{144}\frac{1}{\sqrt{\det g}}\tilde{\varphi}_{amn}% \tilde{\varphi}_{bpq}\tilde{\varphi}_{rst}\hat{\varepsilon}^{mnpqrst} \label{sabtilde} \\ &=&\tilde{g}_{ab}\frac{\sqrt{\det \tilde{g}}}{\sqrt{\det g}} \label{sabtilde2} \end{eqnarray}% From (\ref{metriccomp1}), the untilded $s_{ab}$ is then just equal to $% g_{ab} $.
We can rewrite (\ref{sabtilde2}) as \begin{equation} \left( g_{ab}+\delta g_{ab}\right) \frac{\sqrt{\det \tilde{g}}}{\sqrt{\det g}% }=g_{ab}+\delta s_{ab} \label{sabtilde3} \end{equation}% \qquad where $\delta g_{ab}$ is the deformation of the metric and $\delta s_{ab}$ is the deformation of $s_{ab}$, which from (\ref{rhsdeform3}) is given by% \begin{equation} \delta s_{ab}=\frac{1}{2}\varepsilon \chi _{mn(a}\varphi _{b)}^{\ \ mn}+% \frac{1}{8}\varepsilon ^{2}\chi _{amn}\chi _{bpq}\psi ^{mnpq}+\frac{1}{24}% \varepsilon ^{3}\chi _{amn}\chi _{bpq}\left( \ast \chi \right) ^{mnpq}. \label{deltasab1} \end{equation}% Also introduce the following short-hand notation% \begin{eqnarray} s_{k} &=&\func{Tr}\left( \left( \delta s\right) ^{k}\right) \label{skdef} \\ t_{k} &=&\func{Tr}\left( \left( \delta g\right) ^{k}\right) \label{tkdef} \end{eqnarray}% where the trace is taken using the original metric $g$. From (\ref{deltasab1}% ), note that since $\chi \in \Lambda _{27}^{3}$, when taking the trace the first order term vanishes, and hence $s_{1}$ is second-order in $\varepsilon $. Further, after taking the trace of (\ref{sabtilde3}) using $g^{ab}$ and rearranging, we have% \begin{equation} \sqrt{\frac{\det \tilde{g}}{\det g}}=\left( 1+\frac{1}{7}s_{1}\right) \left( 1+\frac{1}{7}t_{1}\right) ^{-1} \label{detgdeform} \end{equation}% and hence \begin{equation} \tilde{g}_{ab}=\tilde{s}_{ab}\left( 1+\frac{1}{7}t_{1}\right) \left( 1+\frac{% 1}{7}s_{1}\right) ^{-1}. 
\label{metricdeform2} \end{equation}% As shown in Appendix B, we can also expand $\det \tilde{g}$ as \begin{eqnarray} \frac{\det \tilde{g}}{\det g} &=&1+t_{1}+\frac{1}{2}\left( t_{1}^{2}-t_{2}\right) +\frac{1}{6}\left( t_{1}^{3}-3t_{1}t_{2}+2t_{3}\right) \label{detgtilde1} \\ &&+\frac{1}{24}\left( t_{1}^{4}-6t_{1}^{2}t_{2}+3t_{2}^{2}+8t_{1}t_{3}-6t_{4}\right) +O\left( \left\vert \delta g\right\vert ^{5}\right) \notag \end{eqnarray}% and hence \begin{eqnarray} \sqrt{\frac{\det \tilde{g}}{\det g}} &=&1+\frac{1}{2}t_{1}+\left( \frac{1}{8}% t_{1}^{2}-\frac{1}{4}t_{2}\right) +\left( \frac{1}{48}t_{1}^{3}-\frac{1}{8}% t_{1}t_{2}+\frac{1}{6}t_{3}\right) \label{rdgtild1} \\ &&+\left( \frac{1}{384}t_{1}^{4}-\frac{1}{32}t_{1}^{2}t_{2}+\frac{1}{32}% t_{2}^{2}\allowbreak +\frac{1}{12}t_{1}t_{3}-\frac{1}{8}t_{4}\right) +O\left( \left\vert \delta g\right\vert ^{5}\right) . \notag \end{eqnarray}% Thus we can equate (\ref{detgdeform}) and (\ref{rdgtild1}). Suppose $t_{1}$ is first order in $\varepsilon $. Then the only first order term in (\ref% {rdgtild1}) is $\frac{1}{2}t_{1}$, but since $s_{1}$ is second-order, the only first order term in (\ref{detgdeform}) is $-\frac{1}{7}t_{1}$. It therefore follows that first order terms vanish, and so in fact $t_{1}$ is also second-order in $\varepsilon $. This means that we can ignore some of the terms in (\ref{rdgtild1}), as they only give terms of higher than fourth order: \begin{equation} \sqrt{\frac{\det \tilde{g}}{\det g}}=1+\left( \frac{1}{2}t_{1}-\frac{1}{4}% t_{2}\right) +\frac{1}{6}t_{3}+\left( \frac{1}{8}t_{1}^{2}-\frac{1}{8}% t_{1}t_{2}+\frac{1}{32}t_{2}^{2}\allowbreak -\frac{1}{8}t_{4}\right) +O\left( \varepsilon ^{5}\right) . \label{rdgtild3} \end{equation} From (\ref{metricdeform2}) we can write down $\delta g_{ab}$ to fourth order in $\varepsilon $ in terms of $t_{1}$ and quantities related to $\delta s_{ab}$, and from this get $t_{2}$, $t_{3}$ and $t_{4}$ in terms of $t_{1}$ and $\delta s_{ab}$.
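The expansion (\ref{detgtilde1}) is just the expression of $\det \left( 1+\delta g\right) $ in terms of the power sums $t_{k}$ via Newton's identities, truncated after the fourth elementary symmetric function; for a $4\times 4$ matrix the truncation is exact, which gives a convenient test of the coefficients. A small sketch in Python with exact arithmetic (the test matrix is arbitrary):

```python
from fractions import Fraction
from itertools import permutations

A = [[1, 2, 0, 1],
     [0, 3, 1, 0],
     [2, 0, 1, 1],
     [1, 1, 0, 2]]          # arbitrary integer test matrix playing delta g
n = len(A)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(X):
    return sum(X[i][i] for i in range(n))

def det(X):
    """Exact determinant by permutation expansion."""
    total = 0
    for p in permutations(range(n)):
        s = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    s = -s
        term = s
        for i in range(n):
            term *= X[i][p[i]]
        total += term
    return total

# power sums t_k = Tr((delta g)^k), k = 1..4
P = A
t = [None]
for _ in range(4):
    t.append(trace(P))
    P = matmul(P, A)

lhs = det([[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)])
rhs = (1 + Fraction(t[1])
       + Fraction(t[1] ** 2 - t[2], 2)
       + Fraction(t[1] ** 3 - 3 * t[1] * t[2] + 2 * t[3], 6)
       + Fraction(t[1] ** 4 - 6 * t[1] ** 2 * t[2] + 3 * t[2] ** 2
                  + 8 * t[1] * t[3] - 6 * t[4], 24))
print(lhs == rhs)
```

For a $7\times 7$ matrix $\delta g$ the same expression is correct through fourth order, which is exactly the statement (\ref{detgtilde1}).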
So we have \begin{equation} \delta g_{ab}=g_{ab}\left( \left( \frac{1}{7}t_{1}-\frac{1}{7}s_{1}\right) +\left( \frac{1}{49}s_{1}^{2}-\frac{1}{49}s_{1}t_{1}\right) \right) +\delta s_{ab}\left( \allowbreak 1+\left( \frac{1}{7}t_{1}-\frac{1}{7}s_{1}\right) \right) +O\left( \varepsilon ^{5}\right) \label{deltagab1} \end{equation}% and then from this, \begin{eqnarray} t_{2} &=&s_{2}+\frac{1}{7}\left( -s_{1}^{2}+t_{1}^{2}+2t_{1}s_{2}-2s_{1}s_{2}\right) +O\left( \varepsilon ^{5}\right) \label{t2} \\ t_{3} &=&s_{3}+\frac{3}{7}\left( t_{1}s_{2}-s_{1}s_{2}\right) +O\left( \varepsilon ^{5}\right) \label{t3} \\ t_{4} &=&s_{4}+O\left( \varepsilon ^{5}\right) . \label{t4} \end{eqnarray}% Substituting (\ref{t2})-(\ref{t4}) into (\ref{rdgtild3}), we obtain% \begin{equation} \sqrt{\frac{\det \tilde{g}}{\det g}}=1+\left( -\frac{1}{4}s_{2}+\frac{1}{2}% t_{1}\right) +\frac{1}{6}s_{3}+\left( -\frac{1}{8}s_{4}-\frac{1}{8}% s_{2}t_{1}+\frac{1}{28}s_{1}^{2}+\frac{1}{32}s_{2}^{2}+\frac{5}{56}% t_{1}^{2}\right) +O\left( \varepsilon ^{5}\right) \label{rdgtild4} \end{equation}% After expanding (\ref{detgdeform}) to fourth order in $\varepsilon $ and equating with (\ref{rdgtild4}), we are left with a quadratic equation for $% t_{1}$:% \begin{equation} \frac{27}{392}t_{1}^{2}+t_{1}\left( \frac{9}{14}+\frac{1}{49}s_{1}-\frac{1}{8% }s_{2}\right) +\left( -\frac{1}{7}s_{1}-\frac{1}{4}s_{2}+\frac{1}{6}s_{3}-% \frac{1}{8}s_{4}+\frac{1}{28}s_{1}^{2}+\frac{1}{32}s_{2}^{2}\right) +O\left( \varepsilon ^{5}\right) =0.
\label{t1quadeq} \end{equation}% There are two solutions, but it turns out that one of them has a term of zeroth order in $\varepsilon $, which does not fit our assumptions; hence we are left with a single solution, which to fourth order in $% \varepsilon $ is given by% \begin{equation} t_{1}=\frac{2}{9}s_{1}+\frac{7}{18}s_{2}-\frac{7}{27}s_{3}+\left( \frac{7}{36% }s_{4}+\frac{1}{81}s_{1}s_{2}-\frac{11}{162}s_{1}^{2}+\frac{7}{648}% \allowbreak s_{2}^{2}\right) +O\left( \varepsilon ^{5}\right) . \label{t1a} \end{equation}% Now that we have $t_{1}=\func{Tr}\left( \delta g\right) $, from (\ref% {detgdeform}) we have \begin{equation} \sqrt{\frac{\det \tilde{g}}{\det g}}=1+\left( \frac{1}{9}s_{1}-\frac{1}{18}% s_{2}\right) +\frac{1}{27}s_{3}+\left( \frac{1}{162}s_{1}^{2}-\frac{1}{162}% s_{1}s_{2}-\frac{1}{36}s_{4}+\frac{1}{648}s_{2}^{2}\right) +O\left( \varepsilon ^{5}\right) . \label{rdgtild5} \end{equation}% Using this and (\ref{sabtilde3}) we can immediately get the deformed metric. The precise expression, however, is not very useful for us at this stage. What we want is to be able to calculate the Hodge star with respect to the deformed metric.
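The solution (\ref{t1a}) can be double-checked by substituting it back into (\ref{t1quadeq}) and verifying that everything up to and including order $\varepsilon ^{4}$ cancels, remembering the grading $s_{1},s_{2}=O\left( \varepsilon ^{2}\right) $, $s_{3}=O\left( \varepsilon ^{3}\right) $, $s_{4}=O\left( \varepsilon ^{4}\right) $. A small sketch in Python, treating the $s_{k}$ as graded variables with exact rational coefficients:

```python
from fractions import Fraction

# Polynomials in (s1, s2, s3, s4) as dicts: exponent tuple -> coefficient.
GRADE = (2, 2, 3, 4)          # epsilon-orders of s1, s2, s3, s4

def grade(m):
    return sum(g * e for g, e in zip(GRADE, m))

def pmul(p, q):
    r = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            m = tuple(x + y for x, y in zip(m1, m2))
            r[m] = r.get(m, Fraction(0)) + c1 * c2
    return r

def padd(*ps):
    r = {}
    for p in ps:
        for m, c in p.items():
            r[m] = r.get(m, Fraction(0)) + c
    return r

def scal(c, p):
    return {m: c * v for m, v in p.items()}

ONE = {(0, 0, 0, 0): Fraction(1)}
S1 = {(1, 0, 0, 0): Fraction(1)}
S2 = {(0, 1, 0, 0): Fraction(1)}
S3 = {(0, 0, 1, 0): Fraction(1)}
S4 = {(0, 0, 0, 1): Fraction(1)}

# candidate solution t1
t1 = padd(scal(Fraction(2, 9), S1), scal(Fraction(7, 18), S2),
          scal(Fraction(-7, 27), S3), scal(Fraction(7, 36), S4),
          scal(Fraction(1, 81), pmul(S1, S2)),
          scal(Fraction(-11, 162), pmul(S1, S1)),
          scal(Fraction(7, 648), pmul(S2, S2)))

# quadratic: (27/392) t1^2 + t1 (9/14 + s1/49 - s2/8) + (constant part)
quad = padd(scal(Fraction(27, 392), pmul(t1, t1)),
            pmul(t1, padd(scal(Fraction(9, 14), ONE),
                          scal(Fraction(1, 49), S1),
                          scal(Fraction(-1, 8), S2))),
            scal(Fraction(-1, 7), S1), scal(Fraction(-1, 4), S2),
            scal(Fraction(1, 6), S3), scal(Fraction(-1, 8), S4),
            scal(Fraction(1, 28), pmul(S1, S1)),
            scal(Fraction(1, 32), pmul(S2, S2)))

# everything of epsilon-order <= 4 must cancel
low = {m: c for m, c in quad.items() if c != 0 and grade(m) <= 4}
print(low)
```

An empty result confirms that the stated $t_{1}$ solves the quadratic to the order claimed.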
So let $\alpha $ be a $3$-form, and consider the Hodge dual of $\alpha $ with respect to the deformed metric:% \begin{eqnarray*} \left( \tilde{\ast}\alpha \right) _{mnpq} &=&\frac{1}{3!}\frac{1}{\sqrt{\det \tilde{g}}}\hat{\varepsilon}^{abcdrst}\tilde{g}_{ma}\tilde{g}_{nb}\tilde{g}% _{pc}\tilde{g}_{qd}\alpha _{rst} \\ &=&\frac{\sqrt{\det g}}{\sqrt{\det \tilde{g}}}(\ast \alpha )^{abcd}\tilde{g}% _{ma}\tilde{g}_{nb}\tilde{g}_{pc}\tilde{g}_{qd} \\ &=&\left( \frac{\det g}{\det \tilde{g}}\right) ^{\frac{5}{2}}(\ast \alpha )^{abcd}\tilde{s}_{ma}\tilde{s}_{nb}\tilde{s}_{pc}\tilde{s}_{qd} \\ &=&\left( \frac{\det g}{\det \tilde{g}}\right) ^{\frac{5}{2}}\left( \left( \ast \alpha \right) _{mnpq}+4\left( \ast \alpha \right) _{[mnp}^{\ \ \ \ \ \ d}\delta s_{q]d}+6\left( \ast \alpha \right) _{[mn}^{\ \ \ \ \ cd}\delta s_{p\left\vert c\right\vert }\delta s_{q]d}\right. \\ &&\left. +4\left( \ast \alpha \right) _{[m}^{\ \ \ \ bcd}\delta s_{n\left\vert b\right\vert }\delta s_{p\left\vert c\right\vert }\delta s_{q]d}+\left( \ast \alpha \right) ^{abcd}\delta s_{am}\delta s_{bn}\delta s_{cp}\delta s_{dq}\right) \end{eqnarray*}% From (\ref{rdgtild5}), the prefactor $\left( \frac{\det g}{\det \tilde{g}}% \right) ^{\frac{5}{2}}$ is given to fourth order by \begin{equation} \left( \frac{\det g}{\det \tilde{g}}\right) ^{\frac{5}{2}}=1+\left( -\frac{5% }{9}s_{1}+\frac{5}{18}s_{2}\right) -\frac{5}{27}s_{3}+\left( \frac{5}{36}% s_{4}-\frac{25}{162}s_{1}s_{2}+\frac{25}{162}s_{1}^{2}+\frac{25}{648}% s_{2}^{2}\right) +O\left( \varepsilon ^{5}\right) . 
\label{rdgtild5m5} \end{equation}% Finally, consider how $\ast \varphi $ deforms:% \begin{eqnarray} \left( \tilde{\ast}\tilde{\varphi}\right) _{mnpq} &=&\tilde{\ast}\varphi _{mnpq}+\varepsilon \tilde{\ast}\chi _{mnpq} \notag \\ &=&\left( \frac{\det g}{\det \tilde{g}}\right) ^{\frac{5}{2}}\left( \left( \ast \varphi \right) _{mnpq}+4\left( \ast \varphi \right) _{[mnp}^{\ \ \ \ \ \ d}\delta s_{q]d}+6\left( \ast \varphi \right) _{[mn}^{\ \ \ \ \ cd}\delta s_{p\left\vert c\right\vert }\delta s_{q]d}\right. \label{starphi1} \\ &&+4\left( \ast \varphi \right) _{[m}^{\ \ \ \ bcd}\delta s_{n\left\vert b\right\vert }\delta s_{p\left\vert c\right\vert }\delta s_{q]d}+\left( \ast \varphi \right) ^{abcd}\delta s_{am}\delta s_{bn}\delta s_{cp}\delta s_{dq} \notag \\ &&+\varepsilon \left( \ast \chi \right) _{mnpq}+4\varepsilon \left( \ast \chi \right) _{[mnp}^{\ \ \ \ \ \ d}\delta s_{q]d}+6\varepsilon \left( \ast \chi \right) _{[mn}^{\ \ \ \ \ cd}\delta s_{p\left\vert c\right\vert }\delta s_{q]d} \notag \\ &&\left. +4\varepsilon \left( \ast \chi \right) _{[m}^{\ \ \ \ bcd}\delta s_{n\left\vert b\right\vert }\delta s_{p\left\vert c\right\vert }\delta s_{q]d}+O\left( \varepsilon ^{5}\right) \right) \notag \end{eqnarray}% We ignored the last term, because overall it is at least fifth order. So far, the only property of $\Lambda _{27}^{3}$ that we have used is that it is orthogonal to $\varphi $, thus in fact, up to this point everything applies to $\Lambda _{7}^{3}$ as well. Now however, let $\chi $ be of the form \begin{equation} \chi _{abc}=h_{[a}^{d}\varphi _{bc]d} \label{chi27def} \end{equation}% where $h_{ab}$ is traceless and symmetric, so that $\chi \in \Lambda _{27}^{3}$. Let us first introduce some further notation. 
Let $% h_{1},h_{2},h_{3},h_{4}$ be traceless, symmetric matrices, and introduce the following shorthand notation% \begin{eqnarray} \left( \varphi h_{1}h_{2}\varphi \right) _{mn} &=&\varphi _{abm}h_{1}^{ad}h_{2}^{be}\varphi _{den} \label{bphiab} \\ \varphi h_{1}h_{2}h_{3}\varphi &=&\varphi _{abc}h_{1}^{ad}h_{2}^{be}h_{3}^{cf}\varphi _{def} \label{bphi} \\ \left( \psi h_{1}h_{2}h_{3}\psi \right) _{mn} &=&\psi _{abcm}\psi _{defn}h_{1}^{ad}h_{2}^{be}h_{3}^{cf} \label{bpsiab} \\ \psi h_{1}h_{2}h_{3}h_{4}\psi &=&\psi _{abcm}\psi _{defn}h_{1}^{ad}h_{2}^{be}h_{3}^{cf}h_{4}^{mn} \label{bpsi} \end{eqnarray}% It is clear that all of these quantities are symmetric in the $h_{i}$ and moreover $\left( \varphi h_{1}h_{2}\varphi \right) _{mn}$ and $\left( \psi h_{1}h_{2}h_{3}\psi \right) _{mn}$ are both symmetric in indices $m$ and $n$% . Then, it can be shown that \begin{eqnarray*} \chi _{(a\left\vert mn\right\vert }\varphi _{b)}^{\ \ mn} &=&\frac{4}{3}% h_{ab} \\ \chi _{amn}\chi _{bpq}\ast \varphi ^{mnpq} &=&-\frac{4}{7}\left\vert \chi \right\vert ^{2}g_{ab}+\frac{16}{9}\left( h^{2}\right) _{\{ab\}}-\frac{4}{9}% \left( \varphi hh\varphi \right) _{\{ab\}} \\ \chi _{amn}\chi _{bpq}\ast \chi ^{mnpq} &=&\frac{32}{189}\func{Tr}\left( h^{3}\right) g_{ab}-\frac{8}{9}\left( \varphi hh^{2}\varphi \right) _{\{ab\}} \end{eqnarray*}% where as before $\left\{ a\ b\right\} $ denotes the traceless symmetric part. 
Using this and (\ref{deltasab1}), we can now express $\delta s_{ab}$ in terms of $h$:% \begin{eqnarray} \delta s_{ab} &=&\frac{2}{3}\varepsilon h_{ab}+g_{ab}\left( -\frac{1}{14}% \varepsilon ^{2}\left\vert \chi \right\vert ^{2}+\frac{4}{567}\varepsilon ^{3}\func{Tr}\left( h^{3}\right) \right) \label{deltasabfull} \\ &&+\varepsilon ^{2}\left( \frac{2}{9}\left( h^{2}\right) _{\{ab\}}-\frac{1}{% 18}\left( \varphi hh\varphi \right) _{\{ab\}}\right) -\frac{\varepsilon ^{3}% }{27}\left( \varphi hh^{2}\varphi \right) _{\{ab\}} \notag \end{eqnarray}% and hence \begin{eqnarray} s_{1} &=&\func{Tr}\left( \delta s\right) =-\frac{1}{2}\varepsilon ^{2}\left\vert \chi \right\vert ^{2}+\frac{4}{81}\varepsilon ^{3}\func{Tr}% \left( h^{3}\right) \label{s1full} \\ s_{2} &=&\func{Tr}\left( \delta s^{2}\right) =2\varepsilon ^{2}\left\vert \chi \right\vert ^{2}+\varepsilon ^{3}\left( \frac{8}{27}\func{Tr}\left( h^{3}\right) -\frac{2}{27}\left( \varphi hhh\varphi \right) \right) \label{s2full} \\ &&+\varepsilon ^{4}\left( -\frac{1}{16}\left\vert \chi \right\vert ^{4}+% \frac{7}{162}\func{Tr}\left( h^{4}\right) -\frac{2}{81}\left( \varphi hhh^{2}\varphi \right) +\frac{1}{324}\left( \psi hhhh\psi \right) \right) \notag \\ s_{3} &=&\func{Tr}\left( \delta s^{3}\right) =\frac{8}{27}\varepsilon ^{3}% \func{Tr}\left( h^{3}\right) +\varepsilon ^{4}\left( -\frac{3}{2}\left\vert \chi \right\vert ^{4}+\frac{8}{27}\func{Tr}\left( h^{4}\right) -\frac{2}{27}% \left( \varphi hhh^{2}\varphi \right) \right) . 
\label{s3full} \\ s_{4} &=&\func{Tr}\left( \delta s^{4}\right) =\frac{16}{81}\varepsilon ^{4}% \func{Tr}\left( h^{4}\right) \label{s4full} \end{eqnarray}% To get the full expression for $\tilde{\ast}\tilde{\varphi},$ (\ref{s1full}% )-(\ref{s4full}) have to be substituted into the expression for the prefactor $\left( \frac{\det g}{\det \tilde{g}}\right) ^{\frac{5}{2}}$ (\ref% {rdgtild5m5}), and then both (\ref{rdgtild5m5}) and (\ref{deltasabfull}) have to be substituted into the expression for $\tilde{\ast}\tilde{\varphi}$ (\ref{starphi1}). The intermediate expressions quickly become very large, so we used Maple and the freely available package \emph{Riegeom} \cite{Riegeom} to carry out these calculations. Even after all the substitutions, the resulting expression still contains dozens of terms and is of little use as it stands: for the expression for $\tilde{\ast}\tilde{\varphi}$ to be useful, its terms have to be separated according to the representation of $G_{2}$ to which they belong. Thus the final step is to apply the projections onto $\Lambda _{1}^{4}$, $\Lambda _{7}^{4}$ and $\Lambda _{27}^{4}$ (\ref{p1chi})-(\ref{p27chi}). When applying these projections, many of the terms have $\varphi $ and $\psi $ contracted in some way, so the contraction identities (\ref{phiphi1})-(\ref% {psipsi1}) have to be used to simplify the expressions. The package \emph{Riegeom} lacks the ability to make such substitutions, so a few simple custom Maple programs based on \emph{Riegeom} had to be written in order to facilitate these calculations.
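As an independent illustration of the kind of contraction identities involved, the most basic ones can also be verified numerically. The following Python sketch (ours; the original computations used Maple and \emph{Riegeom}) builds the coordinate components of $\varphi $ for one standard sign convention on $\mathbb{R}^{7}$, computes $\psi =\ast \varphi $ with the flat metric, and checks the standard contractions $\varphi _{abc}\varphi ^{abc}=42$, $\varphi _{amn}\varphi _{b}^{\ mn}=6\delta _{ab}$ and $\psi _{amnp}\psi _{b}^{\ mnp}=24\delta _{ab}$:

```python
import itertools
import numpy as np

def parity(perm):
    """Sign of a permutation given as a tuple of 0-based positions."""
    p = list(perm)
    sign = 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

# One standard convention for the G2 3-form on R^7 (0-indexed):
# phi = e^123 + e^145 + e^167 + e^246 - e^257 - e^347 - e^356
TRIPLES = [((0, 1, 2), 1), ((0, 3, 4), 1), ((0, 5, 6), 1),
           ((1, 3, 5), 1), ((1, 4, 6), -1), ((2, 3, 6), -1), ((2, 4, 5), -1)]

phi = np.zeros((7, 7, 7))
for idx, s in TRIPLES:
    for p in itertools.permutations(range(3)):
        phi[idx[p[0]], idx[p[1]], idx[p[2]]] = s * parity(p)

# psi = *phi via the 7-index Levi-Civita symbol (flat metric g = delta)
eps = np.zeros((7,) * 7)
for p in itertools.permutations(range(7)):
    eps[p] = parity(p)
psi = np.einsum('abcdmnp,mnp->abcd', eps, phi) / 6.0

assert np.isclose(np.einsum('abc,abc->', phi, phi), 42.0)
assert np.allclose(np.einsum('amn,bmn->ab', phi, phi), 6 * np.eye(7))
assert np.allclose(np.einsum('amnp,bmnp->ab', psi, psi), 24 * np.eye(7))
```

The same setup can be used to check the full identities (\ref{phiphi1})-(\ref{psipsi1}), including their $\delta $- and $\varphi $-dependent terms.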
Overall, the expansion of $\tilde{\ast}% \tilde{\varphi}$ to third order is% \begin{eqnarray} \tilde{\ast}\tilde{\varphi} &=&\ast \varphi -\varepsilon \ast \chi +\varepsilon ^{2}\left( \frac{1}{6}\ast \mathrm{i}_{\varphi }\left( \left( \varphi hh\varphi \right) _{0}\right) -\frac{1}{42}\left\vert \chi \right\vert ^{2}\ast \varphi \right) \label{starphi27} \\ &&-\varepsilon ^{3}\left( \frac{2}{1701}\left( \varphi hhh\varphi \right) \ast \varphi +\frac{5}{24}\left\vert \chi \right\vert ^{2}\ast \chi -\frac{1% }{18}\ast \mathrm{i}_{\varphi }\left( h_{0}^{3}\right) +\frac{1}{36}\ast \mathrm{i}_{\varphi }\left( \left( \psi hhh\psi \right) _{0}\right) +\frac{1% }{324}u\lrcorner \ast \varphi \right) \notag \\ &&+O\left( \varepsilon ^{4}\right) \notag \end{eqnarray}% where $\left( \varphi hh\varphi \right) _{0}$, $h_{0}^{3}$ and $\left( \psi hhh\psi \right) _{0}$ denote the traceless parts of $\left( \varphi hh\varphi \right) _{ab}$, $\left( h^{3}\right) _{ab}$ and $\left( \psi hhh\psi \right) _{ab},$ respectively, and \begin{equation} u^{a}=\psi _{\ mnp}^{a}\varphi _{rst}h^{mr}h^{ns}h^{pt} \label{p27ua} \end{equation} Although above we did all the calculations to fourth order, we will only need the expansion of $\tilde{\ast}\tilde{\varphi}$ to third order. However, for possible future reference, here is the $G_{2}$ singlet piece of the fourth order: \begin{equation} \left. \pi _{1}\left( \tilde{\ast}\tilde{\varphi}\right) \right\vert _{\varepsilon ^{4}}=\frac{5}{13608}\left( \psi hhhh\psi \right) +\frac{25}{% 2016}\left\vert \chi \right\vert ^{4}-\frac{5}{6804}\func{Tr}\left( h^{4}\right) . \label{star4phi27p1} \end{equation}% In fact, using the homogeneity property of $\varphi \wedge \ast \varphi $ it is possible to relate $\Lambda _{27}^{4}$ terms to a higher-order $\Lambda _{1}^{4}$ term, so calculating higher-order terms is also a way to check that all the coefficients are consistent.
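The simplest such coefficient check is the pure conformal direction: for $\chi =0$ and $f=1+\varepsilon a$ the Hodge dual is exactly $\tilde{\ast}\tilde{\varphi}=\left( 1+\varepsilon a\right) ^{\frac{4}{3}}\ast \varphi $, so the $\ast \varphi $ coefficients $1$, $\frac{4}{3}a$, $\frac{2}{9}a^{2}$, $-\frac{4}{81}a^{3}$ in the combined expansion must reproduce the binomial series. A short \emph{sympy} sketch of this check (ours, not part of the original computation):

```python
import sympy as sp

eps, a = sp.symbols('varepsilon a', positive=True)

# Exact conformal factor f^(4/3) for f = 1 + eps*a, expanded to third order
series = sp.series((1 + eps * a) ** sp.Rational(4, 3), eps, 0, 4).removeO()

# Coefficients of *phi that must appear in the combined expansion
expected = [sp.Integer(1),
            sp.Rational(4, 3) * a,
            sp.Rational(2, 9) * a ** 2,
            -sp.Rational(4, 81) * a ** 3]

for k, coeff in enumerate(expected):
    assert sp.simplify(series.coeff(eps, k) - coeff) == 0
```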
Now that we have expansions of $\tilde{\ast}\tilde{\varphi}$ for $1$- and $% 27 $-dimensional deformations, it is not difficult to combine them. Suppose we want to combine a conformal transformation and $27$-dimensional deformations. As in the case of $7$-dimensional deformations, consider% \begin{equation*} \tilde{\varphi}=\hat{\varphi}+\varepsilon \chi \end{equation*}% where $\hat{\varphi}=f\varphi $ and $\chi \in \Lambda _{27}^{3}$. Keeping terms only up to second order in (\ref{starphi27}), \begin{equation*} \tilde{\ast}\tilde{\varphi}=\hat{\ast}\hat{\varphi}-\varepsilon \hat{\ast}% \chi +\varepsilon ^{2}\left( -\frac{1}{42}\widehat{\left\vert \chi \right\vert }^{2}\hat{\ast}\hat{\varphi}+\frac{1}{6}\hat{\ast}\mathrm{i}_{% \hat{\varphi}}\left( \left( \hat{\varphi}\hat{h}\hat{h}\hat{\varphi}\right) _{0}\right) \right) +O\left( \varepsilon ^{3}\right) . \end{equation*}% Note that since $h_{ab}=\frac{3}{4}\chi _{mn\{a}\varphi _{b\}}^{\ \ mn}$, \begin{eqnarray*} \hat{h}_{ab} &=&\frac{3}{4}\chi _{mn\{a}\hat{\varphi}_{b\}}^{\ \ mn}=\frac{3% }{4}\hat{g}^{mr}\hat{g}^{ns}\chi _{mn\{a}\hat{\varphi}_{b\}rs}^{\ \ } \\ &=&f^{-\frac{1}{3}}h_{ab} \end{eqnarray*}% and hence \begin{eqnarray*} \left( \hat{\varphi}\hat{h}\hat{h}\hat{\varphi}\right) _{mn} &=&\hat{\varphi}% _{abm}\hat{\varphi}_{den}\hat{h}^{ad}\hat{h}^{be} \\ &=&f^{-\frac{4}{3}}\left( \varphi hh\varphi \right) _{mn}. \end{eqnarray*}% Moreover, \begin{equation*} \mathrm{i}_{\hat{\varphi}}\left( \left( \hat{\varphi}\hat{h}\hat{h}\hat{% \varphi}\right) _{0}\right) =f^{-1}\mathrm{i}_{\varphi }\left( \left( \varphi hh\varphi \right) _{0}\right) .
\end{equation*}% Therefore, overall, \begin{equation} \tilde{\ast}\tilde{\varphi}=f^{\frac{4}{3}}\ast \varphi -\varepsilon f^{% \frac{1}{3}}\ast \chi +\varepsilon ^{2}\left( -\frac{1}{42}f^{-\frac{2}{3}% }\left\vert \chi \right\vert ^{2}\ast \varphi +\frac{1}{6}f^{-\frac{2}{3}% }\ast \mathrm{i}_{\varphi }\left( \left( \varphi hh\varphi \right) _{0}\right) \right) +O\left( \varepsilon ^{3}\right) . \label{sphiconfp27} \end{equation}% Let $f=1+\varepsilon a$, and expand in powers of $\varepsilon $ to third order to get \begin{eqnarray} \tilde{\ast}\tilde{\varphi} &=&\ast \varphi +\varepsilon \left( \frac{4}{3}% a\ast \varphi -\ast \chi \right) +\varepsilon ^{2}\left( \left( \frac{2}{9}% a^{2}-\frac{1}{42}\left\vert \chi \right\vert ^{2}\right) \ast \varphi -% \frac{1}{3}a\ast \chi +\frac{1}{6}\ast \mathrm{i}_{\varphi }\left( \left( \varphi hh\varphi \right) _{0}\right) \right) \label{sphip1p27} \\ &&+\varepsilon ^{3}\left( \left( \frac{1}{63}a\left\vert \chi \right\vert ^{2}-\frac{4}{81}a^{3}\right) \ast \varphi -\frac{1}{9}a\ast \mathrm{i}% _{\varphi }\left( \left( \varphi hh\varphi \right) _{0}\right) +\left( \frac{% 1}{9}a^{2}-\frac{5}{24}\left\vert \chi \right\vert ^{2}\right) \ast \chi \right) \notag \\ &&+\varepsilon ^{3}\left( \frac{1}{18}\ast \mathrm{i}_{\varphi }\left( h_{0}^{3}\right) -\frac{1}{36}\ast \mathrm{i}_{\varphi }\left( \left( \psi hhh\psi \right) _{0}\right) -\frac{2}{1701}\left( \varphi hhh\varphi \right) \ast \varphi -\frac{1}{324}u\lrcorner \ast \varphi \right) +O\left( \varepsilon ^{4}\right) \notag \end{eqnarray} \section{Moduli space} \setcounter{equation}{0}In section \ref{mtheorysec} we described how $M$% -theory can be used to give a natural complexification of the $G_{2}$ moduli space - denote this space by $\mathcal{M}_{\mathbb{C}}$. The metric (\ref% {modulimetric}) on $\mathcal{M}_{\mathbb{C}}$ arises naturally from the Kaluza-Klein reduction of the $M$-theory action. 
As shown in \cite% {WittenBeasley}, it turns out that this metric is in fact K\"{a}hler, with the K\"{a}hler potential $K$ given by \begin{equation} K=-3\log V, \label{kahlerpot} \end{equation}% where, as before, $V$ is the volume of $X$: \begin{equation*} V=\frac{1}{7}\int \varphi \wedge \ast \varphi . \end{equation*}% Note that sometimes $K$ is given with a different normalization factor. Here we follow \cite{WittenBeasley}; in \cite{Gutowski:2001fm} and \cite% {karigiannis-2007a}, in particular, a different convention is used. Let us show that $K$ is indeed the K\"{a}hler potential for $G_{M\bar{N}}$. Clearly, $V,$ $K$ and $G_{M\bar{N}}$ only depend on the parameters $s^{M}$ for the $G_{2}$ $3$-form - that is, only on the real part $s^{M}$ of the complex coordinates $z^{M}$ on $\mathcal{M}_{\mathbb{C}}$. So let us for now just look at the $s^{M}$ derivatives. Note that under a scaling $% s^{M}\longrightarrow \lambda s^{M}$, $\varphi $ scales as $\varphi \longrightarrow \lambda \varphi $ and from (\ref{p1sphi1}), $\ast \varphi $ scales as $\ast \varphi \longrightarrow \lambda ^{\frac{4}{3}}\ast \varphi $% , and so $V$ scales as \begin{equation*} V\longrightarrow \lambda ^{\frac{7}{3}}V. \end{equation*}% So $V$ is homogeneous of order $\frac{7}{3}$ in the $s^{M}$, and hence% \begin{eqnarray*} s^{M}\frac{\partial V}{\partial s^{M}} &=&\frac{7}{3}V \\ &=&\frac{1}{3}\int s^{M}\phi _{M}\wedge \ast \varphi \end{eqnarray*}% and thus, \begin{equation} \frac{\partial V}{\partial s^{M}}=\frac{1}{3}\int \phi _{M}\wedge \ast \varphi . \label{dvdsm} \end{equation}% Hence, \begin{equation} \frac{\partial K}{\partial s^{M}}=-\frac{1}{V}\int \phi _{M}\wedge \ast \varphi . \label{dkdsm} \end{equation}% Here the dependence on the $s^{M}$ is encoded in $V$ and in $\ast \varphi $, which depends non-linearly on the $s^{M}$.
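The scaling argument above is just Euler's relation for a homogeneous function. The following \emph{sympy} sketch illustrates it on a toy two-parameter stand-in for $V$ (the specific function is hypothetical; it only shares the homogeneity degree $\frac{7}{3}$ with the true volume functional):

```python
import sympy as sp

s1, s2 = sp.symbols('s1 s2', positive=True)

# Toy stand-in for V: homogeneous of degree 7/3 in (s1, s2), like the true volume
V = (s1**3 + 5 * s1 * s2**2 + 2 * s2**3) ** sp.Rational(7, 9)

# Euler's relation for degree 7/3: s^M dV/ds^M = (7/3) V
euler = s1 * sp.diff(V, s1) + s2 * sp.diff(V, s2)
assert sp.simplify(euler / V - sp.Rational(7, 3)) == 0

# Homogeneity V(lambda s) = lambda^(7/3) V(s), checked exactly at lambda = 8, (s1, s2) = (3, 5)
assert sp.simplify(V.subs({s1: 24, s2: 40})
                   - 8 ** sp.Rational(7, 3) * V.subs({s1: 3, s2: 5})) == 0
```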
Thus we have \begin{eqnarray*} \frac{\partial ^{2}K}{\partial z^{M}\partial \bar{z}^{N}} &=&\frac{3}{V^{2}}% \frac{\partial V}{\partial s^{M}}\frac{\partial V}{\partial s^{N}}-\frac{3}{V% }\frac{\partial ^{2}V}{\partial s^{M}\partial s^{N}} \\ &=&\frac{1}{3}\frac{1}{V^{2}}\left( \int \phi _{M}\wedge \ast \varphi \right) \left( \int \phi _{N}\wedge \ast \varphi \right) -\frac{1}{V}\int \phi _{(M}\wedge \partial _{N)}\left( \ast \varphi \right) . \end{eqnarray*}% As we know from section \ref{deformsec}, the first derivative of $\ast \varphi $ is given by \begin{equation} \partial _{N}\left( \ast \varphi \right) =\frac{4}{3}\ast \pi _{1}\left( \phi _{N}\right) +\ast \pi _{7}\left( \phi _{N}\right) -\ast \pi _{27}\left( \phi _{N}\right) , \label{sphi1der} \end{equation}% and therefore \begin{eqnarray*} \int \phi _{(M}\wedge \partial _{N)}\left( \ast \varphi \right) &=&\frac{4}{% 3}\int \left( \pi _{1}\left( \phi _{M}\right) \wedge \ast \pi _{1}\left( \phi _{N}\right) \right) +\int \left( \pi _{7}\left( \phi _{M}\right) \wedge \ast \pi _{7}\left( \phi _{N}\right) \right) \\ &&-\int \left( \pi _{27}\left( \phi _{M}\right) \wedge \ast \pi _{27}\left( \phi _{N}\right) \right) \end{eqnarray*}% Also using (\ref{p1chi}), we get \begin{equation} \frac{1}{3}\frac{1}{V^{2}}\left( \int \phi _{M}\wedge \ast \varphi \right) \left( \int \phi _{N}\wedge \ast \varphi \right) =\frac{7}{3}\frac{1}{V}\int \pi _{1}\left( \phi _{M}\right) \wedge \ast \pi _{1}\left( \phi _{N}\right) \label{p1p1a} \end{equation}% Thus overall, \begin{equation} \frac{\partial ^{2}K}{\partial z^{M}\partial \bar{z}^{N}}=\frac{1}{V}\left( \int \left( \pi _{1}\left( \phi _{M}\right) \wedge \ast \pi _{1}\left( \phi _{N}\right) \right) -\int \left( \pi _{7}\left( \phi _{M}\right) \wedge \ast \pi _{7}\left( \phi _{N}\right) \right) +\int \left( \pi _{27}\left( \phi _{M}\right) \wedge \ast \pi _{27}\left( \phi _{N}\right) \right) \right) .
\label{d2kss} \end{equation}% Note that if $\func{Hol}\left( X\right) =G_{2}$ then all the seven-dimensional components vanish, and hence we get \begin{equation} \frac{\partial ^{2}K}{\partial z^{M}\partial \bar{z}^{N}}=\frac{1}{V}% \int_{X}\phi _{M}\wedge \ast \phi _{\bar{N}}=G_{M\bar{N}}\text{,} \label{d2kzz} \end{equation}% as claimed. Since the negative definite part of (\ref{d2kss}) vanishes, the resulting metric is positive definite. In general, there is at least one other good candidate for the metric on the $G_{2}$ moduli space. The Hessian of $V$, rather than of $\log V$, can be used as a K\"{a}hler potential and gives a metric with signature $\left( 1,b_{27}^{3}\right) $. This metric is in particular used in \cite% {Hitchin:2000jd} and \cite{karigiannis-2007a}. There are some advantages to using $V$ as the K\"{a}hler potential, because some computations give more elegant results. However, if we use the supergravity action as a starting point for the study of the moduli space, our choice of the K\"{a}hler potential is very natural. Now we have a complex manifold $\mathcal{M}_{\mathbb{C}}$ equipped with the K\"{a}hler metric $G_{M\bar{N}}$, so it is interesting to study the properties of this metric and the geometry it defines. We will use the metric $G_{M\bar{N}}$ to calculate the associated curvature tensor $\mathcal{% R}_{M\bar{N}P\bar{Q}}$ of the manifold $\mathcal{M}_{\mathbb{C}}$. Note that a calculation of the curvature of the moduli space, for a different choice of metric, has been done in \cite{KarigiannisLin}. Let us introduce local special coordinates on $\mathcal{M}_{\mathbb{C}}$. Let $\phi _{0}=a\varphi $ and $\phi _{\mu }\in \Lambda _{27}^{3}$ for $\mu =1,...,b_{27}^{3}$, so that $s^{0}$ defines directions parallel to $\varphi $ and the $s^{\mu }$ define directions in $\Lambda _{27}^{3}$.
Since our metric is K% \"{a}hler, the curvature tensor is given by \begin{equation} \mathcal{R}_{\bar{K}L\bar{M}N}=\partial _{\bar{M}}\partial _{N}\partial _{L}\partial _{\bar{K}}K-G^{R\bar{S}}\left( \partial _{\bar{M}}\partial _{R}\partial _{\bar{K}}K\right) \left( \partial _{N}\partial _{L}\partial _{% \bar{S}}K\right) . \label{modcurv1} \end{equation}% Also define \begin{equation} A_{MNR}=\frac{\partial ^{3}K}{\partial z^{M}\partial z^{N}\partial z^{R}} \label{yuk1} \end{equation} so that we can rewrite (\ref{modcurv1}) as \begin{equation} \mathcal{R}_{\bar{K}L\bar{M}N}=\partial _{\bar{M}}\partial _{N}\partial _{L}\partial _{\bar{K}}K-G^{R\bar{S}}A_{\bar{M}R\bar{K}}A_{NL\bar{S}}. \label{modcurv2} \end{equation}% Now it only remains to work out the third and fourth derivatives of $K$. Starting from (\ref{dkdsm}), we find that \begin{eqnarray} A_{MNR} &=&-\frac{1}{V}\int \phi _{M}\wedge \frac{\partial ^{2}}{\partial s^{N}\partial s^{R}}\left( \ast \varphi \right) +\frac{1}{V^{2}}\left( \int \phi _{(M}\wedge \ast \varphi \right) \left( \int \phi _{N}\wedge \frac{% \partial }{\partial s^{R)}}\left( \ast \varphi \right) \right) \label{amnk1} \\ &&-\frac{2}{9V^{3}}\left( \int \phi _{M}\wedge \ast \varphi \right) \left( \int \phi _{N}\wedge \ast \varphi \right) \left( \int \phi _{R}\wedge \ast \varphi \right) \notag \end{eqnarray}% and from the power series expansion of $\ast \varphi $ (\ref{sphip1p27}), we can extract the higher derivatives of $\ast \varphi $: \begin{subequations} \label{sphiderivs} \begin{eqnarray} \partial _{0}\partial _{0}\left( \ast \varphi \right) &=&\frac{4}{9}% a^{2}\ast \varphi \ \ \ \ \ \ \ \partial _{0}\partial _{0}\partial _{0}\left( \ast \varphi \right) =-\frac{8}{27}a^{3}\ast \varphi \label{sphider1} \\ \partial _{0}\partial _{\mu }\left( \ast \varphi \right) &=&-\frac{1}{3}% a\ast \phi _{\mu }\ \ \ \ \ \ \partial _{0}\partial _{0}\partial _{\mu }\left( \ast \varphi \right) =\frac{2}{9}a^{2}\ast \phi _{\mu
}\ \ \ \label{sphider2} \\ \partial _{\mu }\partial _{\nu }\left( \ast \varphi \right) &=&-\frac{1}{21}% \left\langle \phi _{\mu },\phi _{\nu }\right\rangle \ast \varphi +\frac{1}{3}% \ast \mathrm{i}_{\varphi }\left( \left( \varphi h_{\mu }h_{\nu }\varphi \right) _{0}\right) \label{sphider3} \\ \partial _{0}\partial _{\mu }\partial _{\nu }\left( \ast \varphi \right) &=&% \frac{2}{63}a\left\langle \phi _{\mu },\phi _{\nu }\right\rangle \ast \varphi -\frac{2}{9}a\ast \mathrm{i}_{\varphi }\left( \left( \varphi h_{\mu }h_{\nu }\varphi \right) _{0}\right) \label{sphider4} \\ \partial _{\mu }\partial _{\nu }\partial _{\kappa }\left( \ast \varphi \right) &=&-\frac{5}{4}\left\langle \phi _{\mu },\phi _{\nu }\right\rangle \ast \phi _{\kappa }+\frac{1}{3}\ast \mathrm{i}_{\varphi }\left( (h_{\mu }h_{\nu }h_{\kappa })_{0}\right) -\frac{1}{6}\ast \mathrm{i}_{\varphi }\left( \left( \psi h_{\mu }h_{\nu }h_{\kappa }\psi \right) _{0}\right) \label{sphider5} \\ &&-\frac{4}{567}\left( \varphi h_{\mu }h_{\nu }h_{\kappa }\varphi \right) \ast \varphi \notag \end{eqnarray}% where $h_{\mu }$, $h_{\nu }$ and $h_{\kappa }$ are the traceless symmetric matrices corresponding to the $3$-forms $\phi _{\mu }$, $\phi _{\nu }$ and $\phi _{\kappa }$, respectively. Using these expressions, we can now write down all the components of $A_{MNR}$: \end{subequations} \begin{subequations} \label{Amnkp1p27} \begin{eqnarray} A_{\bar{0}0\bar{0}} &=&-14a^{3} \label{a000} \\ A_{\bar{0}0\bar{\mu}} &=&0 \label{a00m} \\ A_{\bar{0}\mu \bar{\nu}} &=&-\frac{2a}{V}\int \phi _{\mu }\wedge \ast \phi _{% \bar{\nu}}=-2aG_{\mu \bar{\nu}} \label{a0mn} \\ A_{\bar{\mu}\nu \bar{\rho}} &=&-\frac{2}{27V}\int \left( \varphi h_{\bar{\mu}% }h_{\nu }h_{\bar{\rho}}\varphi \right) \mathrm{vol} \label{amnr} \end{eqnarray}% Let us now also look at the fourth derivative of $K$.
From (\ref{sphiderivs}), we get \end{subequations} \begin{subequations} \label{k4thder} \begin{eqnarray} \frac{\partial ^{4}K}{\partial z^{0}\partial \bar{z}^{0}\partial z^{0}\partial \bar{z}^{0}} &=&42a^{4} \label{k0000} \\ \frac{\partial ^{4}K}{\partial z^{0}\partial \bar{z}^{0}\partial z^{0}\partial \bar{z}^{\mu }} &=&0 \label{k000m} \\ \frac{\partial ^{4}K}{\partial z^{0}\partial \bar{z}^{0}\partial z^{\mu }\partial \bar{z}^{\nu }} &=&\frac{4}{3}\frac{a^{2}}{V}\int \phi _{\mu }\wedge \ast \phi _{\bar{\nu}}=\frac{4}{3}a^{2}G_{\mu \bar{\nu}} \label{k00mn} \\ \frac{\partial ^{4}K}{\partial z^{0}\partial \bar{z}^{\mu }\partial z^{\nu }\partial \bar{z}^{\rho }} &=&\frac{2}{9}\frac{a}{V}\int \left( \varphi h_{\mu }h_{\nu }h_{\rho }\varphi \right) \mathrm{vol}=-3aA_{\bar{\mu}\nu \bar{\rho}} \label{k0mnr} \\ \frac{\partial ^{4}K}{\partial z^{\kappa }\partial \bar{z}^{\mu }\partial z^{\nu }\partial \bar{z}^{\rho }} &=&\frac{1}{3}\left( G_{\bar{\mu}\nu }G_{\kappa \bar{\rho}}+G_{\bar{\mu}\kappa }G_{\nu \bar{\rho}}\right) +\frac{1% }{3}\frac{1}{V^{2}}\int \phi _{\kappa }\wedge \ast \phi _{\nu }\int \phi _{% \bar{\mu}}\wedge \ast \phi _{\bar{\rho}} \label{kkmnr} \\ &&+\frac{1}{27V}\int \left( \left( \psi h_{\kappa }h_{\bar{\mu}}h_{\nu }h_{% \bar{\rho}}\psi \right) -2\func{Tr}\left( h_{\kappa }h_{\bar{\mu}}h_{\nu }h_{% \bar{\rho}}\right) +\frac{5}{3}\func{Tr}\left( h_{(\kappa }h_{\bar{\mu}% }\right) \func{Tr}\left( h_{\nu }h_{\bar{\rho})}\right) \right) \mathrm{vol} \notag \end{eqnarray}% Note that it can be shown using the identity (\ref{psipsi0}) that \end{subequations} \begin{equation*} \psi hhhh\psi =12\left( \varphi h^{2}hh\varphi \right) +3\func{Tr}\left( h^{2}\right) ^{2}-6\func{Tr}\left( h^{4}\right) \end{equation*}% \qquad Now define \begin{equation} C_{MN}=\frac{\partial ^{2}K}{\partial z^{M}\partial z^{N}} \label{cmn0} \end{equation}% This is the second derivative of $K$ but with pure indices, rather than the derivative with mixed indices which gives the metric 
$G_{M\bar{N}}$. Note that since $K=K\left( \func{Re}z\right) $, we have \begin{equation} \frac{\partial ^{2}K}{\partial z^{M}\partial z^{N}}=\frac{\partial ^{2}K}{% \partial z^{M}\partial \bar{z}^{N}} \label{gmncmn} \end{equation}% so numerically, $C_{MN}$ and $G_{M\bar{N}}$ are in fact equal, and in particular, \begin{equation} C_{\mu \nu }=\frac{1}{V}\int \phi _{\mu }\wedge \ast \phi _{\nu } \label{cmn} \end{equation}% So while $C_{MN}$ is not technically part of the metric, it inherits some similar properties. This happens due to the fact that while the complexification of the moduli space comes naturally, the holomorphic structure is artificial to some extent, because the $G_{2}$ and $C$-field moduli do not really mix. Using this definition, we can rewrite (\ref{kkmnr}% ) as \begin{eqnarray*} \frac{\partial ^{4}K}{\partial z^{\kappa }\partial \bar{z}^{\mu }\partial z^{\nu }\partial \bar{z}^{\rho }} &=&\frac{1}{3}\left( G_{\bar{\mu}\nu }G_{\kappa \bar{\rho}}+G_{\bar{\mu}\kappa }G_{\nu \bar{\rho}}\right) +\frac{1% }{3}C_{\bar{\mu}\bar{\rho}}C_{\kappa \nu } \\ &&-\frac{1}{27V}\int \left( 2\func{Tr}\left( h_{\kappa }h_{\bar{\mu}}h_{\nu }h_{\bar{\rho}}\right) -\left( \psi h_{\kappa }h_{\bar{\mu}}h_{\nu }h_{\bar{% \rho}}\psi \right) -\frac{5}{3}\func{Tr}\left( h_{(\kappa }h_{\bar{\mu}% }\right) \func{Tr}\left( h_{\nu }h_{\bar{\rho})}\right) \right) \mathrm{vol} \end{eqnarray*}% Taking into account that $G^{0\bar{0}}=\frac{1}{7a^{2}}$ and $G^{0\bar{\mu}% }=0$, we have enough information to be able to write down the full expressions for the components of the curvature tensor: \begin{eqnarray} \mathcal{R}_{0\bar{0}0\bar{0}} &=&14a^{4} \label{R0000} \\ \mathcal{R}_{0\bar{0}0\bar{\mu}} &=&0 \label{R000m} \\ \mathcal{R}_{0\bar{0}\mu \bar{\nu}} &=&2a^{2}G_{\mu \bar{\nu}} \label{R00mn} \\ \mathcal{R}_{0\bar{\mu}\nu \bar{\rho}} &=&-A_{\bar{\mu}\nu \bar{\rho}}a \label{R0mnr} \\ \mathcal{R}_{\kappa \bar{\mu}\nu \bar{\rho}} &=&\frac{1}{3}\left( G_{\bar{\mu% }\nu }G_{\kappa
\bar{\rho}}+G_{\bar{\mu}\kappa }G_{\nu \bar{\rho}}\right) -G^{\tau \bar{\sigma}}A_{\bar{\mu}\tau \bar{\rho}}A_{\kappa \nu \bar{\sigma}% }-\frac{5}{21}C_{\bar{\mu}\bar{\rho}}C_{\kappa \nu } \label{Rkmnr} \\ &&+\frac{1}{27V}\int \left( \left( \psi h_{\kappa }h_{\bar{\mu}}h_{\nu }h_{% \bar{\rho}}\psi \right) -2\func{Tr}\left( h_{\kappa }h_{\bar{\mu}}h_{\nu }h_{% \bar{\rho}}\right) +\frac{5}{3}\func{Tr}\left( h_{(\kappa }h_{\bar{\mu}% }\right) \func{Tr}\left( h_{\nu }h_{\bar{\rho})}\right) \right) \mathrm{vol} \notag \end{eqnarray}% Let us look in more detail at the expression for $A_{\bar{\mu}\nu \bar{\rho}}$: \begin{eqnarray*} A_{\bar{\mu}\nu \bar{\rho}} &=&-\frac{2}{27V}\int \varphi h_{\bar{\mu}% }h_{\nu }h_{\bar{\rho}}\varphi \mathrm{vol} \\ &=&-\frac{2}{27V}\int \varphi _{abc}\varphi _{mnp}h_{\bar{\mu}}^{am}h_{\nu }^{bn}h_{\bar{\rho}}^{cp}\mathrm{vol} \end{eqnarray*}% Define $h_{\mu }^{a}=h_{\mu \ m}^{\ a}dx^{m}.$ Then \begin{equation*} \varphi _{abc}\varphi _{mnp}h_{\bar{\mu}}^{am}h_{\nu }^{bn}h_{\bar{\rho}% }^{cp}\mathrm{vol}=6\varphi _{abc}h_{\bar{\mu}}^{a}\wedge h_{\nu }^{b}\wedge h_{\bar{\rho}}^{c}\wedge \ast \varphi \end{equation*}% and so, \begin{equation} A_{\bar{\mu}\nu \bar{\rho}}=-\frac{4}{9V}\int \varphi _{abc}h_{\bar{\mu}% }^{a}\wedge h_{\nu }^{b}\wedge h_{\bar{\rho}}^{c}\wedge \ast \varphi . \label{amnryuk} \end{equation}% This is the precise analogue of the Yukawa coupling which is defined on the Calabi-Yau moduli space. Similar expressions have appeared previously in \cite{Lee:2002fa}, \cite{deBoer:2005pt} and \cite{karigiannis-2007a}.
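As an aside, the curvature formula (\ref{modcurv1}) and the component $\mathcal{R}_{0\bar{0}0\bar{0}}=14a^{4}$ can be sanity-checked on the one-modulus truncation: keeping only $s^{0}$ we have $V\propto \left( s^{0}\right) ^{\frac{7}{3}}$, so $K=-7\log \left( 2x\right) $ up to a constant, with $x=\func{Re}z^{0}$. The following \emph{sympy} sketch (ours; the reduction $\partial _{z}=\frac{1}{2}\partial _{x}$ for functions of $\func{Re}z$ is our bookkeeping convention) confirms $\mathcal{R}=\frac{2}{7}G^{2}$, which equals $14a^{4}$ for $G_{0\bar{0}}=7a^{2}$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)   # x = Re z^0

# One-modulus model: V ~ (s^0)^(7/3) with s^0 = Re z^0, so K = -3 log V = -7 log(2x) + const
K = -7 * sp.log(2 * x)

def dz(f):
    # For functions of Re z only, d/dz and d/dzbar both act as (1/2) d/dx
    return sp.diff(f, x) / 2

G = dz(dz(K))    # metric G_{0 0bar}
A = dz(G)        # third derivative of K
K4 = dz(A)       # fourth derivative of K

# Kaehler curvature formula (modcurv1): R = d^4 K - G^{-1} (d^3 K)^2
R = K4 - A**2 / G

a = 1 / (2 * x)  # identification giving G_{0 0bar} = 7 a^2
assert sp.simplify(G - 7 * a**2) == 0
assert sp.simplify(A + 14 * a**3) == 0       # third derivative = -14 a^3, cf. (a000)
assert sp.simplify(K4 - 42 * a**4) == 0      # fourth derivative = 42 a^4, cf. (k0000)
assert sp.simplify(R - sp.Rational(2, 7) * G**2) == 0
assert sp.simplify(R - 14 * a**4) == 0       # cf. (R0000)
```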
Similarly, we can write \begin{eqnarray} \left( \psi h_{\kappa }h_{\bar{\mu}}h_{\nu }h_{\bar{\rho}}\psi \right) \mathrm{vol} &=&\psi _{abcd}\psi _{mnpq}h_{\kappa }^{am}h_{\bar{\mu}% }^{bn}h_{\nu }^{cp}h_{\bar{\rho}}^{dq}\mathrm{vol} \notag \\ &=&24\left\langle \psi _{abcd}h_{\kappa }^{a}\wedge h_{\bar{\mu}}^{b}\wedge h_{\nu }^{c}\wedge h_{\bar{\rho}}^{d},\ast \psi \right\rangle \mathrm{vol} \notag \\ &=&24\psi _{abcd}h_{\kappa }^{a}\wedge h_{\bar{\mu}}^{b}\wedge h_{\nu }^{c}\wedge h_{\bar{\rho}}^{d}\wedge \varphi \label{psi4hyuk} \end{eqnarray}% Hence, we can rewrite (\ref{Rkmnr}) as \begin{eqnarray} \mathcal{R}_{\kappa \bar{\mu}\nu \bar{\rho}} &=&\frac{1}{3}\left( G_{\bar{\mu% }\nu }G_{\kappa \bar{\rho}}+G_{\bar{\mu}\kappa }G_{\nu \bar{\rho}}\right) -G^{\tau \bar{\sigma}}A_{\bar{\mu}\tau \bar{\rho}}A_{\kappa \nu \bar{\sigma}% }-\frac{5}{21}C_{\bar{\mu}\bar{\rho}}C_{\kappa \nu } \\ &&+\frac{8}{9}\frac{1}{V}\int \psi _{abcd}h_{\kappa }^{a}\wedge h_{\bar{\mu}% }^{b}\wedge h_{\nu }^{c}\wedge h_{\bar{\rho}}^{d}\wedge \varphi \notag \\ &&+\frac{1}{81}\frac{1}{V}\int \left( 5\func{Tr}\left( h_{(\kappa }h_{\bar{% \mu}}\right) \func{Tr}\left( h_{\nu }h_{\bar{\rho})}\right) -6\func{Tr}% \left( h_{\kappa }h_{\bar{\mu}}h_{\nu }h_{\bar{\rho}}\right) \right) \mathrm{% vol} \notag \end{eqnarray}% Note that, because the first derivative of $V$ vanishes in the $\Lambda _{27}^{3}$ directions, some of the terms appearing in the curvature expression can also be expressed as derivatives of $V$: \begin{eqnarray*} \frac{\partial ^{3}V}{\partial \bar{z}^{\mu }\partial z^{\nu }\partial \bar{z% }^{\rho }} &=&-\frac{V}{3}A_{\bar{\mu}\nu \bar{\rho}} \\ \frac{\partial ^{4}V}{\partial z^{\kappa }\partial \bar{z}^{\mu }\partial z^{\nu }\partial \bar{z}^{\rho }} &=&-\frac{8}{27}\int \psi _{abcd}h_{\kappa }^{a}\wedge h_{\bar{\mu}}^{b}\wedge h_{\nu }^{c}\wedge h_{\bar{\rho}% }^{d}\wedge \varphi \\ &&+\frac{1}{243}\int \left( 6\func{Tr}\left( h_{\kappa }h_{\bar{\mu}}h_{\nu
}h_{\bar{\rho}}\right) -5\func{Tr}\left( h_{(\kappa }h_{\bar{\mu}}\right) \func{Tr}\left( h_{\nu }h_{\bar{\rho})}\right) \right) \mathrm{vol} \end{eqnarray*}% So alternatively, we can write \begin{eqnarray*} \mathcal{R}_{\kappa \bar{\mu}\nu \bar{\rho}} &=&\frac{1}{3}\left( G_{\bar{\mu% }\nu }G_{\kappa \bar{\rho}}+G_{\bar{\mu}\kappa }G_{\nu \bar{\rho}}\right) -G^{\tau \bar{\sigma}}A_{\bar{\mu}\tau \bar{\rho}}A_{\kappa \nu \bar{\sigma}% }-\frac{5}{21}C_{\bar{\mu}\bar{\rho}}C_{\kappa \nu } \\ &&-\frac{3}{V}\frac{\partial ^{4}V}{\partial z^{\kappa }\partial \bar{z}% ^{\mu }\partial z^{\nu }\partial \bar{z}^{\rho }} \end{eqnarray*}% Define \begin{equation} U_{\bar{M}}=\frac{3}{V}\frac{\partial ^{3}V}{\partial \bar{z}^{M}\partial z^{N}\partial \bar{z}^{R}}G^{N\bar{R}} \label{UM} \end{equation}% Then, \begin{equation} \partial _{K}U_{\bar{M}}=\frac{3}{V}\left( \frac{\partial ^{4}V}{\partial z^{K}\partial \bar{z}^{M}\partial z^{N}\partial \bar{z}^{R}}G^{N\bar{R}}-% \frac{\partial ^{3}V}{\partial \bar{z}^{M}\partial z^{N}\partial \bar{z}^{R}}% A_{K}^{\ \ N\bar{R}}\right) \label{dkum} \end{equation}% We can use this to express the Ricci curvature% \begin{equation} \mathcal{R}_{\kappa \bar{\mu}}=\left( \frac{1}{3}b^{3}\left( X\right) -\frac{% 1}{63}\right) G_{\kappa \bar{\mu}}-\partial _{\kappa }U_{\bar{\mu}} \label{riccimn} \end{equation}% where $b^{3}\left( X\right) =b_{27}^{3}+1$ is the third Betti number of $X$. Also, \begin{eqnarray} \mathcal{R}_{0\bar{\mu}} &=&-aA_{\bar{\mu}\nu \bar{\rho}}G^{\nu \bar{\rho}% }=-\partial _{0}U_{\bar{\mu}} \label{ricci0m} \\ \mathcal{R}_{0\bar{0}} &=&2a^{2}b^{3}\left( X\right) \label{ricci00} \end{eqnarray} Although here we have certain similarities with the structure of the Calabi-Yau moduli space, we are lacking a key feature of the latter: a distinguished line bundle over the moduli space. For example, the holomorphic $3$-form on a Calabi-Yau $3$-fold defines a complex line bundle over the complex structure moduli space.
In the $G_{2}$ case, we could try and see what happens if we look at the real line bundle $L$ defined by $\varphi $ over the complexified $G_{2}$ moduli space $\mathcal{M}% _{\mathbb{C}}$. So consider the gauge transformations \begin{equation} \varphi \longrightarrow f\left( \func{Re}z\right) \varphi \label{gaugetransform} \end{equation}% where $f$ is a real-valued function of $\func{Re}z$. Then, as in \cite% {deBoer:2005pt}, define a covariant derivative $\mathcal{D}$ on $L$ by \begin{equation} \mathcal{D}_{M}\varphi =\partial _{M}\varphi +\frac{1}{7}\left( \partial _{M}K\right) \varphi . \label{covderiv} \end{equation}% Under the transformation (\ref{gaugetransform}) \begin{eqnarray*} V &\longrightarrow &f^{\frac{7}{3}}V \\ K &\longrightarrow &K-7\log f \end{eqnarray*}% and so \begin{equation*} \partial _{M}K\longrightarrow \partial _{M}K-\frac{7}{f}\partial _{M}f. \end{equation*}% Hence, \begin{equation} \mathcal{D}_{M}\varphi \longrightarrow f\mathcal{D}_{M}\varphi . \label{dmphitrans} \end{equation}% Moreover, from the expression for $\partial _{M}K$ (\ref{dkdsm}), we find that \begin{equation*} \mathcal{D}_{0}\varphi =0\ \ \ \ \ \mathcal{D}_{\mu }\varphi =\partial _{\mu }\varphi \end{equation*}% So, as noted in \cite{deBoer:2005pt}, this covariant derivative projects out the $G_{2}$ singlet contribution. It also gives a covariant way to extract the $\mathbf{27}$ contributions, so we can use $\mathcal{D}_{M}\varphi $ whenever we just need to extract $\partial _{\mu }\varphi $. Also consider% \begin{eqnarray} \frac{1}{V}\left\langle \left\langle \mathcal{D}_{M}\varphi ,\ast \mathcal{D}% _{\bar{N}}\varphi \right\rangle \right\rangle &=&\frac{1}{V}\int \mathcal{D}% _{M}\varphi \wedge \ast \mathcal{D}_{\bar{N}}\varphi \label{27metric} \\ &=&G_{M\bar{N}}-\frac{1}{7}\partial _{M}K\partial _{\bar{N}}K. \notag \end{eqnarray}% When one of the indices is equal to zero, the whole expression vanishes.
However, if both refer to the $27$-dimensional components, then we just get $% G_{\mu \bar{\nu}}$. A similar expression holds for $C_{MN}$. More generally, we can extend the covariant derivative to any quantity which transforms under (\ref{gaugetransform}). Suppose $Q\left( z\right) $ is a function on $% \mathcal{M}_{\mathbb{C}}$, which under (\ref{gaugetransform}) transforms as \begin{equation*} Q\left( z\right) \longrightarrow f\left( z\right) ^{a}Q\left( z\right) . \end{equation*}% Then define the covariant derivative on it by \begin{equation} \mathcal{D}_{M}Q=\partial _{M}Q+\frac{a}{7}\left( \partial _{M}K\right) Q. \label{dmq} \end{equation}% From this we get \begin{eqnarray*} \mathcal{D}_{M}V &=&0 \\ \mathcal{D}_{M}\left( \ast \varphi \right) &=&\partial _{M}\left( \ast \varphi \right) +\frac{1}{7}\frac{4}{3}\left( \partial _{M}K\right) \left( \ast \varphi \right) \end{eqnarray*}% and in particular, \begin{equation*} \mathcal{D}_{0}\left( \ast \varphi \right) =0\ \ \ \ \ \mathcal{D}_{\mu }\left( \ast \varphi \right) =-\ast \left( \partial _{\mu }\varphi \right) \end{equation*}% so, in fact, \begin{equation*} \mathcal{D}_{M}\left( \ast \varphi \right) =-\ast \mathcal{D}_{M}\varphi . \end{equation*}% Further, we can extend $\mathcal{D}_{M}$ to objects with moduli space indices by replacing $\partial $ by $\nabla $, the metric-compatible covariant derivative with respect to the moduli space metric $G_{M\bar{N}}$, for which the Christoffel symbols are given by \begin{equation} \Gamma _{M\ Q}^{\ \ N}=G^{N\bar{P}}\partial _{M}G_{\bar{P}Q}=A_{\ MQ}^{N} \label{christoffel} \end{equation}% With these Christoffel symbols, the covariant derivative of $C_{MN}$ is \begin{equation} \nabla _{Q}C_{MN}=-A_{QMN}.
\label{dqcmn} \end{equation}% Then we also find that \begin{eqnarray} \mathcal{D}_{M}\mathcal{D}_{N}\varphi &=&\partial _{M}\left( \partial _{N}\varphi +\frac{1}{7}\left( \partial _{N}K\right) \varphi \right) -A_{\ NM}^{P}\mathcal{D}_{P}\varphi +\frac{1}{7}\partial _{M}K\mathcal{D}% _{N}\varphi \notag \\ &=&\frac{1}{7}\left( C_{MN}-\frac{1}{7}\partial _{M}K\partial _{N}K\right) \varphi -A_{\ NM}^{P}\mathcal{D}_{P}\varphi +\frac{2}{7}\partial _{(M}K% \mathcal{D}_{N)}\varphi \label{dmdnphi} \\ &=&\frac{1}{7}\frac{1}{V}\left\langle \left\langle \mathcal{D}_{M}\varphi ,\ast \mathcal{D}_{N}\varphi \right\rangle \right\rangle \varphi -A_{\ NM}^{P}\mathcal{D}_{P}\varphi +\frac{2}{7}\partial _{(M}K\mathcal{D}% _{N)}\varphi \end{eqnarray}% and for mixed type derivatives, we have \begin{eqnarray*} \mathcal{D}_{\bar{M}}\mathcal{D}_{N}\varphi &=&\partial _{\bar{M}}\left( \partial _{N}\varphi +\frac{1}{7}\left( \partial _{N}K\right) \varphi \right) +\frac{2}{7}\partial _{(\bar{M}}K\mathcal{D}_{N)}\varphi \\ &=&\frac{1}{7}\frac{1}{V}\left\langle \left\langle \mathcal{D}_{\bar{M}% }\varphi ,\ast \mathcal{D}_{N}\varphi \right\rangle \right\rangle \varphi +% \frac{2}{7}\partial _{(\bar{M}}K\mathcal{D}_{N)}\varphi \\ &=&\frac{1}{7}\left( G_{\bar{M}N}\varphi +\frac{2}{7}\left( \partial _{\bar{M% }}K\partial _{N}K\right) \varphi +\partial _{(\bar{M}}K\partial _{N)}\varphi \right) \end{eqnarray*}% Note that here the covariant derivatives commute, so this connection is in fact flat.
Now look at the third covariant derivative of $\varphi $ \begin{eqnarray} \left\langle \left\langle \mathcal{D}_{R}\mathcal{D}_{M}\mathcal{D}% _{N}\varphi ,\ast \varphi \right\rangle \right\rangle &=&\mathcal{D}% _{R}\left\langle \left\langle \mathcal{D}_{M}\mathcal{D}_{N}\varphi ,\ast \varphi \right\rangle \right\rangle -\left\langle \left\langle \mathcal{D}% _{M}\mathcal{D}_{N}\varphi ,\mathcal{D}_{R}\ast \varphi \right\rangle \right\rangle \notag \\ &=&\mathcal{D}_{R}\left\langle \left\langle \mathcal{D}_{M}\varphi ,\ast \mathcal{D}_{N}\varphi \right\rangle \right\rangle +\left\langle \left\langle \mathcal{D}_{M}\mathcal{D}_{N}\varphi ,\ast \mathcal{D}% _{R}\varphi \right\rangle \right\rangle \label{3covphi1} \end{eqnarray}% First look at the second term in (\ref{3covphi1}). Since $\mathcal{D}% _{R}\varphi \in \Lambda _{27}^{3}$, we basically get the projection $\pi _{27}\left( \mathcal{D}_{M}\mathcal{D}_{N}\varphi \right) $:% \begin{eqnarray*} \left\langle \left\langle \mathcal{D}_{M}\mathcal{D}_{N}\varphi ,\ast \mathcal{D}_{R}\varphi \right\rangle \right\rangle &=&-A_{\ NM}^{P}\left\langle \left\langle \mathcal{D}_{P}\varphi ,\ast \mathcal{D}% _{R}\varphi \right\rangle \right\rangle +\frac{2}{7}\partial _{(M}K\left\langle \left\langle \mathcal{D}_{N)}\varphi ,\ast \mathcal{D}% _{R}\varphi \right\rangle \right\rangle \\ &=&-A_{MNR}+\frac{1}{7}A_{\ MN}^{P\ \ \ \ \ }\partial _{R}K\partial _{P}K+% \frac{2}{7}C_{R(N}\partial _{M)}K-\frac{2}{49}\partial _{R}K\partial _{M}K\partial _{N}K \end{eqnarray*}% In the first term of (\ref{3covphi1}), we have \begin{eqnarray*} \mathcal{D}_{R}\left\langle \left\langle \mathcal{D}_{M}\varphi ,\ast \mathcal{D}_{N}\varphi \right\rangle \right\rangle &=&V\mathcal{D}_{R}\left( \frac{1}{V}\left\langle \left\langle \mathcal{D}_{M}\varphi ,\ast \mathcal{D}% _{N}\varphi \right\rangle \right\rangle \right) \\ &=&V\nabla _{R}\left\langle \left\langle \mathcal{D}_{M}\varphi ,\ast \mathcal{D}_{N}\varphi \right\rangle
\right\rangle \\ &=&V\left( \nabla _{R}C_{MN}-\frac{1}{7}\nabla _{R}\left( \partial _{M}K\partial _{N}K\right) \right) \\ &=&V\left( -A_{RMN}-\frac{2}{7}C_{R(M}\partial _{N)}K+\frac{2}{7}A_{\ R(M}^{P}\partial _{N)}K\partial _{P}K\right) \end{eqnarray*} Combining these, we overall obtain \begin{equation} \frac{1}{V}\left\langle \left\langle \mathcal{D}_{R}\mathcal{D}_{M}\mathcal{D}_{N}\varphi ,\ast \varphi \right\rangle \right\rangle =-2A_{RMN}-\frac{2}{49}\partial _{R}K\partial _{M}K\partial _{N}K+\frac{3}{7}A_{(MN}^{\ \ \ \ \ \ P}\partial _{R)}K\partial _{P}K \label{3covphi2} \end{equation} Decomposing this into components, we have \begin{eqnarray*} \frac{1}{V}\left\langle \left\langle \mathcal{D}_{\rho }\mathcal{D}_{\mu }\mathcal{D}_{\nu }\varphi ,\ast \varphi \right\rangle \right\rangle &=&-2A_{\rho \mu \nu } \\ \frac{1}{V}\left\langle \left\langle \mathcal{D}_{0}\mathcal{D}_{\mu }\mathcal{D}_{\nu }\varphi ,\ast \varphi \right\rangle \right\rangle &=&2C_{\mu \nu } \\ \frac{1}{V}\left\langle \left\langle \mathcal{D}_{0}\mathcal{D}_{0}\mathcal{D}_{\nu }\varphi ,\ast \varphi \right\rangle \right\rangle &=&0 \\ \frac{1}{V}\left\langle \left\langle \mathcal{D}_{0}\mathcal{D}_{0}\mathcal{D}_{0}\varphi ,\ast \varphi \right\rangle \right\rangle &=&0 \end{eqnarray*} Therefore, the quantity $\frac{1}{V}\left\langle \left\langle \mathcal{D}_{\rho }\mathcal{D}_{\mu }\mathcal{D}_{\nu }\varphi ,\ast \varphi \right\rangle \right\rangle $ essentially gives the Yukawa coupling, again analogous to the case of Calabi-Yau moduli spaces. \section{Concluding remarks} In this paper, we have computed the curvature of the complexified $G_{2}$ moduli space and found that while it has terms which are similar to the curvature of Calabi-Yau moduli spaces, there are a number of new terms. In future work it would be interesting to interpret these new terms geometrically. 
If we consider a $7$-manifold of the form $CY_{3}\times S^{1}$, where $CY_{3}$ is a Calabi-Yau $3$-fold, then we can define a torsion-free $G_{2}$ structure on it. The relationship between the Calabi-Yau moduli space and the $G_{2}$ moduli space is, however, highly non-trivial, because the complex structure moduli and the K\"{a}hler structure moduli become intertwined with each other. It could therefore be illuminating to relate the curvature of the $G_{2}$ moduli space to the curvatures of the complex and K\"{a}hler moduli spaces. In that case, however, $b_{7}^{3}=1$, so in fact the second derivative of our K\"{a}hler potential would give a pseudo-K\"{a}hler metric with signature $\left( -+...+\right) $ (\ref{d2kss}). Moreover, the ansatz for the $C$-field (\ref{Cansatz}) would also have to be different. Understanding how the Calabi-Yau moduli space is related to the $G_{2}$ moduli space could also enable us to find a manifestation of mirror symmetry from the $G_{2}$ perspective. Moreover, it would be interesting to see how existing approaches to mirror symmetry on $G_{2}$ manifolds (such as \cite{Gukov:2002jv}) affect the geometric structures on the moduli space. Another possible direction for further research is to look at $G_{2}$ manifolds in a slightly different way. Suppose we have type IIA superstrings on a non-compact Calabi-Yau $3$-fold with a special Lagrangian submanifold wrapped by a $D6$-brane that also fills $M_{4}$. Then, as explained in \cite{VafaKlemm:2001nx}, from the $M$-theory perspective this looks like an $S^{1}$ bundle over the Calabi-Yau which is degenerate over the special Lagrangian submanifold, but this $7$-manifold is still a $G_{2}$ manifold. The moduli space of this manifold will then be determined by the Calabi-Yau moduli and the special Lagrangian moduli. This could possibly provide more information about mirror symmetry on Calabi-Yau manifolds \cite{StromingerYau:1996it}. 
\section{Introduction} A smooth closed surface in affine 3-space will contain pairs of points at which the affine tangent planes are parallel; indeed the tangent plane at a given point may be parallel to that at several other points if the surface is non-convex. Associated with these pairs of points, and the chords joining them, there are a number of affinely invariant constructions. The {\em affine equidistants} are the loci of points at a fixed ratio $\lambda : 1-\lambda$ along the chords, and the {\em centre symmetry set} is the envelope of the chords, which can be locally empty. These constructions have been examined from the point of view of singularity theory in the last few years by several authors; there are many connexions with earlier work such as the `Wigner caustic' of Berry~\cite{berry} which, for a curve in the plane, is the equidistant corresponding to the ratio $\lambda=\frac{1}{2}$, that is the locus of midpoints of the parallel tangent chords, and the bifurcations of central symmetry of Janeczko~\cite{janeczko}. Notable among recent studies is the work of Domitrz and his co-authors, for example~\cite{domitrz1}. A generic surface $M$ in affine 3-space will have pairs of points at which the tangent planes are parallel and for which both points of the pair are parabolic points of $M$: the locus of parabolic points of $M$ is generically a 1-dimensional set, a union of smooth curves, and requiring parallel tangent planes imposes two conditions on a pair of points of this set, so that a finite number of solutions can be expected. In this article we investigate one possible local degeneration of this generic situation by requiring also that the unique asymptotic directions are parallel at such a pair of parabolic points with parallel tangent planes. For this to occur the surface $M$ must be contained in a smoothly varying family $M_\varepsilon$ of surfaces. 
Since our investigation is local we shall in fact consider two surface patches $M_0$ and $N_0$ which vary in a 1-parameter family $M_\varepsilon, N_\varepsilon$. A similar degeneracy was investigated for plane curves in~\cite{GR2017}; we sometimes call it a `supercaustic' situation, a term defined in \S\ref{ss:superc}. We find the values $\lambda \ne 0,1$ for which the ratio $\lambda : 1-\lambda$ determines an equidistant at which the structure undergoes a qualitative change. There are either one or three such values, depending on the relative orientation of $M_0$ and $N_0$. One `degenerate' value always exists and results in a high-codimension singularity; we are able to give a partial analysis of this case. When the other two values exist we call them {\em special values} (Definition~\ref{def:special}), and a complete analysis is given. The article is organized as follows. In \S\ref{s:general} we introduce the family of surfaces we shall work with (\S\ref{ss:surfaces}), and the maps which we shall classify up to $\mathcal A$-equivalence to study the equidistants (\S\S\ref{ss:generating}, \ref{ss:superc}). We also show how some of the conditions that arise later can be interpreted geometrically in terms of a scaled reflexion map (\S\ref{ss:contact}, Definition~\ref{def:scaledcontactmap}). In \S\ref{s:normal} we find normal forms of maps up to $\mathcal A$-equivalence that generate the equidistants: they are the sets of critical values of these maps. We examine in that section general values of the ratio (Generic Case 1.1) and the two `special' values (Special Case 1.2), leaving the `degenerate' value (Degenerate Case 2) to \S\ref{s:degen}. The main results are contained in Proposition~\ref{prop:def-indef} and the accompanying Figure~\ref{fig:def-indef} for Generic Case~1.1; Proposition~\ref{prop:case1.2} and the accompanying Figure~\ref{fig:special-clock} for Special Case~1.2, and Table~\ref{table1} in~\S\ref{ss:examples} for Degenerate Case~2. 
\section{The general setup}\label{s:general} \subsection{A generic family of surfaces}\label{ss:surfaces} Consider the parabolic set $P$ (assumed to be a nonempty smooth curve) of a generic smooth closed surface $M$ in $\mathbb R^3$. We can expect generically to find a finite number of pairs of distinct points on $P$ for which the tangent planes to $M$ are parallel, since the two points give us two degrees of freedom and parallelism of the tangent planes imposes two conditions. However it will not be generically true that the unique asymptotic directions at such a pair of points are parallel. For that we require a 1-parameter family of surfaces, and it is this situation which we study here. Our considerations are local, and also affinely invariant. For this situation we have two surfaces, $M_\varepsilon$ and $N_\varepsilon$, varying in a 1-parameter family; using a family of affine transformations of $\mathbb R^3$ (coordinates $(x,y,z)$) we can assume that the origin lies on $M_\varepsilon$, that the origin is a parabolic point of $M_\varepsilon$, and that the unique asymptotic direction there is always along the $y$-axis, for all $\varepsilon$ close to 0. Further we can assume that the point $(0,0,1)$ lies on $N_\varepsilon$ for all small $\varepsilon$ and that for $\varepsilon=0$ this point is parabolic, has tangent plane parallel to the $(x,y)$-plane, and has unique asymptotic direction parallel to the $y$-axis. 
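These normalisation conditions are easy to verify symbolically. The following SymPy sketch is illustrative only and not part of the paper: it takes a graph $z=f_{20}x^2+f_{300}x^3+f_{210}x^2y+f_{120}xy^2+f_{030}y^3$, the general shape of such a normalised patch to third order, and checks that the origin is a parabolic point with unique asymptotic direction along the $y$-axis, and that the parabolic curve is smooth and transverse to this direction precisely when $f_{20}f_{030}\ne 0$.

```python
import sympy as sp

x, y = sp.symbols('x y')
f20, f300, f210, f120, f030 = sp.symbols('f20 f300 f210 f120 f030')

# normalised patch: no linear terms (tangent plane parallel to the
# (x,y)-plane at the origin) and degenerate quadratic part f20*x**2
f = f20*x**2 + f300*x**3 + f210*x**2*y + f120*x*y**2 + f030*y**3

H = sp.hessian(f, (x, y))   # Hessian of the height function
P = H.det()                 # the parabolic curve is P = 0

# at the origin the Hessian has rank 1 and its kernel is the y-direction,
# so the origin is parabolic with asymptotic direction along the y-axis
H0 = H.subs({x: 0, y: 0})
assert H0.rank() == 1
assert H0 * sp.Matrix([0, 1]) == sp.zeros(2, 1)

# gradient of P at the origin: P = 0 is a smooth curve iff this is
# nonzero, and not tangent to the y-axis iff its y-component is nonzero
gradP = [sp.diff(P, v).subs({x: 0, y: 0}) for v in (x, y)]
assert sp.simplify(gradP[0] - 4*f20*f120) == 0
assert sp.simplify(gradP[1] - 12*f20*f030) == 0
```

The nonvanishing of the $y$-component, $12f_{20}f_{030}\ne 0$, is precisely the `not a cusp of Gauss' requirement imposed in Assumption (ii) below.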
We realise this setup by the surfaces \begin{eqnarray} M_\varepsilon: z = f(x,y,\varepsilon) &=& f_{20}x^2 + f_{300}x^3+f_{210}x^2y + f_{120}xy^2 + f_{030}y^3 + \ldots \nonumber \\ &+& \varepsilon\left( f_{301}x^3+f_{211}x^2y+\ldots\right) + \varepsilon^2\left(f_{302}x^3 + \ldots \right) + \ldots, \label{eq:M}\\ N_\varepsilon: z = 1 + g(x,y,\varepsilon) &=& 1 + g_{20}x^2 + g_{300}x^3+g_{210}x^2y + g_{120}xy^2 + g_{030}y^3 + \ldots \nonumber \\ &+& \varepsilon\left(g_{101}x+g_{011}y + g_{201}x^2 + g_{111}xy+g_{021}y^2 + \ldots\right) \nonumber \\ &+& \varepsilon^2\left( g_{102}x + g_{012}y + \ldots\right) + \ldots. \label{eq:N} \end{eqnarray} For terms other than $f_{20}, g_{20}$, subscripts $ijk$ indicate that the corresponding monomial is $\varepsilon^kx^iy^j$. We make the following assumptions about these expansions. \begin{assump} {\rm (i) $f_{20}\ne 0, g_{20}\ne 0$, that is neither $M_0$ nor $N_0$ is umbilic at its basepoint $(0,0,0)$ or $(0,0,1)$. (ii) $f_{030} \ne 0, g_{030}\ne 0$, that is the parabolic curves of $M_0$ at the origin and $N_0$ at $(0,0,1)$ are smooth and not tangent to the asymptotic directions there (i.e.\ these points are not cusps of Gauss). We shall take $f_{030}>0$ without loss of generality, and we sometimes write $f_{030}=f_3^2, \ g_{030}=\pm g_3^2$ when a definite sign is needed, to avoid square roots appearing in the formulas. } \label{assump} \end{assump} \subsection{Family of maps for the equidistants}\label{ss:generating} The $\lambda$-equidistant for a fixed $\varepsilon$ is the locus of points in $\mathbb R^3$ of the form $(1-\lambda)\mbox{\boldmath $p$} \,+\lambda\mbox{\boldmath $q$}$ where $\mbox{\boldmath $p$}\in M_\varepsilon, \mbox{\boldmath $q$}\in N_\varepsilon$ and the tangent planes to $M_\varepsilon$ at $\mbox{\boldmath $p$}$ and $N_\varepsilon$ at $\mbox{\boldmath $q$}$ are parallel. 
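As a quick symbolic illustration of this definition (an illustrative sketch, not part of the paper; the two patches below are arbitrary polynomial examples of the shape of (\ref{eq:M}), (\ref{eq:N}) at $\varepsilon=0$), one can eliminate $t$ using $u=(1-\lambda)s+\lambda t$ and check that the $s$-derivatives of the height of the chord point vanish exactly at pairs with parallel tangent planes; this computation underlies the family of maps introduced in the next subsection.

```python
import sympy as sp

s1, s2, u1, u2, lam, x, y = sp.symbols('s1 s2 u1 u2 lambda x y')

# illustrative patches M_0: z = f(x, y) and N_0: z = 1 + g(x, y)
# (coefficients chosen arbitrarily for this example)
f = x**2 + x*y**2 + y**3
g = -x**2 + 2*y**3

# eliminate t via u = (1 - lambda)*s + lambda*t
t1 = (u1 - (1 - lam)*s1)/lam
t2 = (u2 - (1 - lam)*s2)/lam

# height of the chord point (1 - lambda)*p + lambda*q
h = (1 - lam)*f.subs({x: s1, y: s2}) + lam*(1 + g.subs({x: t1, y: t2}))

# dh/ds_i = (1 - lambda)*(f_{x_i}(s) - g_{x_i}(t)), so for lambda != 0, 1
# the equations dh/ds_1 = dh/ds_2 = 0 say exactly that the tangent planes
# at p and q are parallel
for si, v in ((s1, x), (s2, y)):
    lhs = sp.diff(h, si)
    rhs = (1 - lam)*(sp.diff(f, v).subs({x: s1, y: s2})
                     - sp.diff(g, v).subs({x: t1, y: t2}))
    assert sp.simplify(lhs - rhs) == 0
```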
\smallskip {\em We always assume $\lambda \ne 0, \lambda \ne 1$ in what follows.} \smallskip We use $s=(s_1, s_2)$ as parameters on $M_\varepsilon$ and similarly $t=(t_1,t_2)$ for $N_\varepsilon$; we have a 2-parameter family of maps $\mathbb R^4\to\mathbb R^3$: \begin{equation} \mathbb R^4 \times \mathbb R^2 \to \mathbb R^3, \ (s,t, \varepsilon,\lambda) \mapsto (1-\lambda)(s_1,s_2,f(s_1, s_2, \varepsilon)) + \lambda(t_1, t_2, 1+g(t_1, t_2, \varepsilon)). \label{eq:H1} \end{equation} Then it is straightforward to check that, for fixed $\varepsilon$ and $\lambda$, the set of critical values of this map is the $\lambda$-equidistant of $M_\varepsilon$ and $N_\varepsilon$. We are therefore interested in this family of maps up to $\mathcal A$-equivalence. We make the change of variables \[ (1-\lambda)s_1+\lambda t_1 = u_1, \ (1-\lambda)s_2 + \lambda t_2 = u_2, \mbox{ and write } \lambda=\lambda_0+\alpha, \] replacing $t_1$ and $t_2$, to rewrite (\ref{eq:H1}) as a map of the form (for any $\lambda_0\ne 0,1$) \begin{equation} H: \mathbb R^4\times\mathbb R^2 \to \mathbb R^3, \ H(s_1,s_2,u_1,u_2,\varepsilon,\alpha)= (u_1, u_2, h(s_1,s_2,u_1,u_2,\varepsilon,\lambda_0+\alpha)). \label{eq:H} \end{equation} We regard $H$ as a 2-parameter unfolding of the map $H_0(s_1,s_2,u_1,u_2)=H(s_1,s_2,u_1,u_2,0,\lambda_0)$. Therefore we have the following. \begin{prop} The $\lambda$-equidistant for fixed $\varepsilon$ is the set of points $(u_1, u_2, h)\in \mathbb R^3$ for which $\partial h/\partial s_1 = \partial h/\partial s_2 = 0.$ For fixed $\lambda$ the union of all the equidistants, spread out in $\mathbb R^4$, whose sections by the hyperplanes $\varepsilon=$ constant are the individual equidistants, is the set of points $ (u_1, u_2, h, \varepsilon) \in\mathbb R^4$ for which the same conditions $\partial h/\partial s_1 = \partial h/\partial s_2 = 0$ hold.
\end{prop} \subsection{Maps and supercaustics}\label{ss:superc} Let $\phi:\mathbb R^4\to \mathbb R^2$ be given, for fixed $\lambda$ and $\varepsilon$, by $\phi(s_1,s_2,u_1,u_2)=(h_{s_1}, h_{s_2})$, subscripts denoting partial derivatives as usual. Then the corresponding equidistant, the image under $H$ of $\phi^{-1}(0,0)$, is singular when there is a kernel vector of $d\phi$ with image under $dH$ equal to {\bf 0}, these being evaluated at a point of $\phi^{-1}(0,0)$. This requires that \[ \mbox{rank } J < 4 \mbox{ where } J = \left(\begin{array}{cccc}h_{s_1s_1}&h_{s_1s_2}&h_{s_1u_1}&h_{s_1u_2}\\ h_{s_2s_1}&h_{s_2s_2}&h_{s_2u_1}&h_{s_2u_2}\\ 0&0&h_{u_1}&h_{u_2}\\ 0&0&1&0\\ 0&0&0&1 \end{array}\right),\] that is $h_{s_1s_1}h_{s_2s_2}=h_{s_1s_2}^2.$ The singular points of the equidistant for fixed $\lambda$ and $\varepsilon$ are therefore \begin{equation} \{ (u_1, u_2, h(s_1,s_2,u_1,u_2)) \ : \ h_{s_1}=h_{s_2}=h_{s_1s_1}h_{s_2s_2}-h_{s_1s_2}^2=0\}. \label{eq:singpts} \end{equation} We note here that, for fixed $\varepsilon$, the `centre symmetry set' of the pair of surfaces $M,N$ \cite{GZ}, which is the locus of singular points of the equidistants for varying $\lambda$, is given by the same formula (\ref{eq:singpts}) where $h$ is now a function of $s_1, s_2, u_1, u_2, \lambda$ but with $\varepsilon$ still fixed. It is possible that some singular points of the equidistant arise from singularities of the critical set itself in $\mathbb R^4$. In our case this requires, for fixed $\lambda$ and $\varepsilon$, that the top two rows of the above matrix $J$ are dependent. Indeed, evaluating these rows at $(s_1,s_2,u_1,u_2,\lambda,\varepsilon)=(0,0,0,0,\lambda,0)$, the second row is entirely zero. This means that, for all $\lambda$ but with $\varepsilon=0$, the critical set itself is singular at the origin of $\mathbb R^4$. \begin{defs} {\rm In the above situation, the $\lambda$-axis is called a {\em supercaustic}; see \cite{GR2017}.
The whole of this axis maps to singular points of the equidistants. } \end{defs} \begin{rem} {\rm This depends crucially on the special nature of our surfaces, with not only parallel tangent planes at parabolic points of $M_0$ and $N_0$ but also the asymptotic directions at those points being parallel. If instead we assume that the asymptotic directions are distinct (without loss of generality we can take them along the $x$ and $y$ axes) then the top two rows of $J$ become independent for $s_1=s_2=u_1=u_2=\varepsilon=0$ and arbitrary $\lambda$. In fact, writing $g_{020}$ for the coefficient of $y^2$ in the parametrization of $N_0$ and putting $g_{20}=0$ these rows become \[ \left(\begin{array}{cccc} 2(1-\lambda)f_{20} & 0 & 0 & 0 \\ 0 & \frac{2g_{020}(1-\lambda)^2}{\lambda} &0 & -\frac{2g_{020}(1-\lambda)}{\lambda} \end{array}\right). \] In this case the `supercaustic' is empty. } \end{rem} \subsection{Scaled reflexion map and contact} \label{ss:contact} Consider the affine map $\mathcal S: \mathbb R^3\to\mathbb R^3$ given by $\mathcal S(x,y,z) = (\mu x, \mu y, \mu(z-1))$ where $\mu=\frac{\lambda}{\lambda-1}\ne 0$. This leaves the point $(0,0,\lambda)$ fixed and maps $(0,0,1)$ to the origin. We can measure the contact between $\mathcal S(N_0)$ and $M_0$ by composing the parametrization of $\mathcal S(N_0)$ given by $\left(\mu x, \mu y, \mu g(x, y, 0)\right) $ with the equation of $M_0$, say $Z-f(X, Y, 0)=0$. \begin{defs}{\rm The {\em scaled contact map} is the contact map germ } \[ K:\mathbb R^2, (0,0) \to \mathbb R, 0, \ K(x,y)=\mu g(x,y,0) - f(\mu x, \mu y, 0), \ \mu=\frac{\lambda}{\lambda-1} \ \ \ {\rm as \ above}. \] \label{def:scaledcontactmap} \end{defs} \vspace*{-0.7cm} We shall find this contact map useful in interpreting the conditions which arise from $\varepsilon$-families of equidistants as $\varepsilon$ passes through 0. 
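Before using $K$ we record a quick symbolic check of its low-order terms (a SymPy sketch with generic symbolic coefficients, for illustration only; truncating $f$ and $g$ to the terms shown is harmless for these particular coefficients).

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
f20, f030, g20, g030 = sp.symbols('f20 f030 g20 g030')
mu = lam/(lam - 1)

f = f20*x**2 + f030*y**3          # relevant terms of f(x,y,0)
g = g20*x**2 + g030*y**3          # relevant terms of g(x,y,0)
K = sp.expand(mu*g - f.subs({x: mu*x, y: mu*y}, simultaneous=True))

c_x2 = K.coeff(x, 2)              # 2-jet: mu*(g20 - mu*f20)*x^2
c_y3 = K.coeff(y, 3)              # cubic term on the kernel line
assert sp.simplify(c_x2 - mu*(g20 - mu*f20)) == 0
assert sp.simplify(c_y3 - mu*(g030 - mu**2*f030)) == 0

# vanishing of these coefficients reproduces the two degeneracy conditions
# f20*lambda + g20*(1-lambda) = 0 and f030*lambda^2 - g030*(1-lambda)^2 = 0
assert sp.simplify(c_x2*(lam - 1)**2 + lam*(f20*lam + g20*(1 - lam))) == 0
assert sp.simplify(c_y3*(lam - 1)**3 + lam*(f030*lam**2 - g030*(1 - lam)**2)) == 0
```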
The 2-jet of $K$ is $K_2(x,y)=\mu(g_{20}-\mu f_{20})x^2$ so that in our situation $K$ is always non-Morse; it has corank 1 and is of type $A_k$ at $(0,0)$ for some $k$, provided $f_{20}\lambda + g_{20}(1-\lambda)\ne 0$ (when this fails we call this the `Degenerate Case~2'; see \S\ref{s:degen}). The coefficient of $y^3$ in $K$ is $\mu(g_{030}-\mu^2f_{030})$ so that $K$ is then of type exactly $A_2$ provided $f_{030}\lambda^2 - g_{030}(1-\lambda)^2 \ne 0$. If $f_{030}, g_{030}$ are nonzero and have opposite signs then of course this coefficient can never be zero. \begin{defs}{\rm Assume as above that $f_{20}\lambda + g_{20}(1-\lambda)\ne 0$. When $f_{030}, g_{030}$ have the same sign (without loss of generality, positive), and the above coefficient $f_{030}\lambda^2 - g_{030}(1-\lambda)^2$ of $y^3$ is zero, then we refer to the two resulting values of $\lambda$ as {\em special values}. Writing $f_{030}=f_3^2, g_{030}=g_3^2$ where we may take $f_3>0, g_3>0$, these special values of $\lambda$ are $\displaystyle{\frac{g_3}{g_3\pm f_3}}$. (We shall usually assume $f_3 \ne g_3$ to avoid one of the special values `going to infinity'.) These special values of $\lambda$ give rise to what we shall call Special Case~1.2. This is examined in detail in \S\ref{ss:special}. \label{def:special} } \end{defs} When $\lambda$ has a special value, say $\displaystyle{\frac{g_3}{g_3 + f_3}}$, the condition for $K$ to have exactly type $A_3$ at $(0,0)$ works out to be \begin{equation} (4g_{040}g_{20}-g_{120}^2)f_3^4 + 4g_{040}f_{20}f_3^3g_3+2f_{120}g_{120}f_3^2g_3^2+4f_{040}g_{20}f_3g_3^3 + (4f_{040}f_{20}-f_{120}^2)g_3^4 \ne 0. \label{eq:A3} \end{equation} This condition will be satisfied by a generic pair of surfaces $M_0, N_0$. With the other special value the signs in front of the coefficients of $f_3^3g_3$ and $f_3g_3^3$ both change to minus.
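The two special values can be confirmed directly (again a SymPy sketch, illustrative only): the coefficient $f_{030}\lambda^2-g_{030}(1-\lambda)^2$ factors into two linear conditions whose roots are $g_3/(g_3\pm f_3)$.

```python
import sympy as sp

lam = sp.Symbol('lambda')
f3, g3 = sp.symbols('f3 g3', positive=True)
# the y^3 coefficient condition with f030 = f3^2, g030 = g3^2
Q = f3**2*lam**2 - g3**2*(1 - lam)**2

# Q factors into two linear conditions...
assert sp.expand(Q - (f3*lam - g3*(1 - lam))*(f3*lam + g3*(1 - lam))) == 0

# ...whose roots are the special values g3/(g3 + f3) and g3/(g3 - f3)
special = [g3/(g3 + f3), g3/(g3 - f3)]
assert all(sp.simplify(Q.subs(lam, v)) == 0 for v in special)
```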
When the quadratic terms of the contact map $K$ vanish identically, that is when $f_{20}\lambda + g_{20}(1-\lambda)= 0$, the cubic terms will in general be nondegenerate and $K$ will generically have type $D_4^\pm$, that is $\mathcal{R}$-equivalent to $x^3\pm xy^2$. The polynomial in the coefficients of $f$ and $g$ which distinguishes the two cases is rather complicated but, remarkably, it has a different interpretation which we give in \S\ref{s:degen} in the context of self-intersections of the equidistant. See Remark~\ref{rem:D4-contact}. \section{The equidistants: normal forms}\label{s:normal} For a general study of the equidistants we need to expand the function $h$ in (\ref{eq:H}) using the parametrizations (\ref{eq:M}) and (\ref{eq:N}). We begin with $\varepsilon=0$ and write, for a fixed $\lambda$, $H_{0\lambda}(s,u)=(u, h_{0\lambda}(s,u))=H(s,u,0,\lambda)$. The coefficient of $s_1^is_2^ju_1^ku_2^\ell$ in $h_{0\lambda}$ will be written $c_{ijk\ell}$. We find: \[ \mbox{The 2-jet of } h_{0\lambda} \mbox{ at } s=u=0 \mbox{ is } (1-\lambda)(\lambda f_{20} + (1-\lambda)g_{20})s_1^2 - 2g_{20}\textstyle{\frac{1-\lambda}{\lambda}}s_1u_1.\] Note that the coefficient of $s_1u_1$ is nonzero. The main subdivision is between those $\lambda$ for which $\lambda f_{20} + (1-\lambda)g_{20}$ is nonzero (Generic Case~1) or zero (Degenerate Case~2). We cover the Generic Case here and the Degenerate Case in \S\ref{s:degen} below. \medskip\noindent {\bf Case 1} \ $\lambda f_{20} + (1-\lambda)g_{20} \ne 0$. From \S\ref{ss:contact} this is also the condition for the contact function $K$ to have type $A_k$ for some $k$. We can now redefine the variable $s_1$ (`completing the square') to eliminate all terms containing $s_1$ besides $s_1^2$ in $h_{0\lambda}$.
The coefficient of $s_2^3$ then becomes \[ c_{0300}=\textstyle{\frac{1-\lambda}{\lambda^2}}\displaystyle (f_{030}\lambda^2-g_{030}(1-\lambda)^2).\] \subsection{The general values of $\lambda$} {\bf Generic Case 1.1} $c_{0300}\ne 0$, that is, $Q \ne 0$ where \begin{equation} Q = f_{030}\lambda^2-g_{030}(1-\lambda)^2. \label{eq:Q} \end{equation} From \S\ref{ss:contact} this is also the condition for the contact function $K$ to have type $A_2$ and for $\lambda$ not to be a {\em special value}. Consider the 3-jet of $H_{0\lambda}$. There are six degree 3 monomials which do not involve $s_1$ and which do involve $s_2$ (any monomial in $u_1, u_2$ alone can be eliminated by a `left-change' of coordinates). We still have the freedom to change coordinates in $s_2$ (involving $s_2, u_1, u_2$) and in $u_1, u_2$ (involving $u_1, u_2$ only). Using only the first of these, the terms in $s_2^2u_1$ and $s_2^2u_2$ can be eliminated, leaving \begin{equation} \left(u_1, u_2, (1-\lambda)(\lambda f_{20} + (1-\lambda)g_{20})s_1^2 +c_{0300}s_2^3 + s_2\left(c_{0120}u_1^2+c_{0111}u_1u_2+c_{0102}u_2^2\right)\right). \label{eq:3jet} \end{equation} (The coefficients $c_{ijk\ell}$ need to be updated to take account of the substitutions.) The quadratic form in $u_1$ and $u_2$ can be diagonalised, eliminating the term in $s_2u_1u_2$ so that, scaling $s_1$, $s_2$ and the last coordinate in $\mathbb R^3$, we obtain the 3-jet, say \[ (u_1, u_2, s_1^2+s_2^3+as_2u_1^2+bs_2u_2^2).\] Suppose that the quadratic form in parentheses in (\ref{eq:3jet}) is not a perfect square, that is $c_{0111}^2-4c_{0120}c_{0102}\ne 0$. Then $a$ and $b$ above are nonzero. The condition for this is $R\ne 0$ where \begin{equation} R = f_{20}^2f_{030}\left(g_{120}^2-3g_{210}g_{030}\right) - g_{20}^2g_{030}\left(f_{120}^2-3f_{210}f_{030}\right). \label{eq:R} \end{equation} Since this condition does not involve $\lambda$ it will be satisfied by a generic pair of surfaces $M_0, N_0$.
Note that the condition separates into a quantity for $M_0$ unequal to the same quantity for $N_0$. \begin{prop} The condition $R\ne 0$ can also be interpreted as saying that the images under the Gauss map of the parabolic curves on $M_0$ and $N_0$ have ordinary tangency (that is, 2-point contact) in the Gauss sphere. These images are smooth by Assumptions~\ref{assump}. \label{rem:contact} \end{prop} {\bf Proof} \ The parabolic curves on the two surfaces are given by $f_{xx}f_{yy}-f_{xy}^2=0$ and $g_{xx}g_{yy}-g_{xy}^2=0$ for $M_0$ and $N_0$ respectively. The surface $M_0$ has a parabolic point at the origin and $N_0$ has a parabolic point at $(0,0,1)$ and since they have parallel asymptotic directions at these points the images of the respective parabolic curves under the Gauss map are tangent. We shall use the modified Gauss maps, that is $(x,y)\mapsto (X,Y)=(f_x, f_y)$ and similarly for $g$. By a direct calculation, for $M_0$ the image of the parabolic curve, parametrized by $x$, under the modified Gauss map has an equation, up to terms of order $X^3$, of the form \[ Y=\frac{3f_{030}f_{210}-f_{120}^2}{12f_{20}^2f_{030}}X^2 \] with a similar result for $N_0$. The coefficients of $X^2$ are unequal, that is the images have ordinary tangency, if and only if the condition $R$ above is nonzero. \hfill$\Box$ \medskip Further scaling allows this case to be reduced to \begin{equation} H_{0\lambda}(s,u)= (u_1, u_2, s_1^2+s_2^3\pm s_2u_1^2 \pm s_2u_2^2), \label{eq:1.1} \end{equation} where the $\pm$ signs are independent, but by interchanging $u_1$ and $u_2$ we reduce to three cases, as follows. \begin{prop} The normal form {\rm (\ref{eq:1.1})} is as follows, using the notation of {\rm (\ref{eq:Q})} and {\rm (\ref{eq:R})}. See Figure~\ref{fig:def-indef}. \noindent {\rm Subcase 1.1.1} (positive definite): $H_{0\lambda}(s,u)= (u_1, u_2, s_1^2+s_2^3 +s_2u_1^2 + s_2u_2^2)$. \\ The condition for this is $f_{030}g_{030} < 0$ and $QR>0$.
Bearing in mind Assumptions~\ref{assump}, the latter condition is equivalent to $R>0$. This subcase will also be referred to as $A_2^{++}$.\\ {\rm Subcase 1.1.2} (negative definite): $H_{0\lambda}(s,u)= (u_1, u_2, s_1^2+s_2^3 -s_2u_1^2 - s_2u_2^2)$. \\ The condition for this is $f_{030}g_{030} > 0$ and $QR>0$. This subcase will also be referred to as $A_2^{--}$.\\ {\rm Subcase 1.1.3} (indefinite): $H_{0\lambda}(s,u)= (u_1, u_2, s_1^2+s_2^3 +s_2u_1^2 - s_2u_2^2)$. \\ The condition for this is $QR< 0$. In the case when $f_{030}g_{030}<0$ the condition becomes $R<0$. This subcase will also be referred to as $A_2^{+-}$. \hfill$\Box$ \label{prop:def-indef} \end{prop} \smallskip\noindent The values of $f_{030}, g_{030}$ and $R$ are fixed by the two surfaces $M_0$ and $N_0$. However, assuming $f_{030}g_{030}>0$, special values of $\lambda$ exist at which $Q$ as in (\ref{eq:Q}) is zero. Then, as $\lambda$ passes through such a special value, the normal form changes between negative definite and indefinite, so that the family of equidistants, for $\varepsilon$ passing through 0, changes accordingly. \smallskip\noindent Using standard techniques it can be checked that (\ref{eq:1.1}) is 3-$\mathcal A$-determined, and that an $\mathcal A_e$-versal unfolding is given by adding a multiple of $(0,0,s_2)$ to the above normal form: \begin{equation} H_{\varepsilon\lambda}(s,u)= (u_1, u_2, s_1^2+s_2^3\pm s_2u_1^2 \pm s_2u_2^2+\varepsilon s_2). \label{eq:normalform-unf} \end{equation} In terms of the original surfaces the coefficient of $\varepsilon s_2$ is $-g_{011}(1-\lambda)$, and therefore we require $g_{011}\ne 0$ for a versal unfolding by the parameter $\varepsilon$.
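The coefficient of $X^2$ appearing in the proof of Proposition~\ref{rem:contact} can be recomputed mechanically; the following SymPy sketch (illustrative only; the cubic truncation of $f$ suffices, since quartic and higher terms of $f$ do not affect this coefficient) recovers it.

```python
import sympy as sp

x, y, c1 = sp.symbols('x y c1')
f20, f300, f210, f120, f030 = sp.symbols('f20 f300 f210 f120 f030', nonzero=True)
f = f20*x**2 + f300*x**3 + f210*x**2*y + f120*x*y**2 + f030*y**3

# parabolic curve y = c1*x + O(x^2): solve its lowest-order condition for c1
parab = sp.expand((f.diff(x, 2)*f.diff(y, 2) - f.diff(x, y)**2).subs(y, c1*x))
c1_val = sp.solve(parab.coeff(x, 1), c1)[0]        # -f120/(3*f030)

# modified Gauss map (X, Y) = (f_x, f_y) restricted to the parabolic curve
X = f.diff(x).subs(y, c1_val*x)
Y = f.diff(y).subs(y, c1_val*x)
coeff = sp.cancel(Y/X**2).subs(x, 0)               # coefficient of X^2

assert sp.simplify(coeff - (3*f030*f210 - f120**2)/(12*f20**2*f030)) == 0
```

Replacing the $f$-coefficients by $g$-coefficients gives the corresponding computation for $N_0$, and equating the two coefficients reproduces the condition $R=0$ of (\ref{eq:R}).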
\begin{figure}[!ht] \begin{center} \hspace*{-1.5cm} \scalebox{1.8}{\includegraphics[width=1.3in]{non-special11.pdf}} \hspace*{-2.5cm} \scalebox{1.8}{\includegraphics[width=1.3in]{non-special-1-1e-.pdf}} \hspace*{-2.5cm} \scalebox{1.8}{\includegraphics[width=1.3in]{non-special-1-1e0.pdf}} \hspace*{-2.7cm} \scalebox{1.8}{\includegraphics[width=1.3in]{non-special-1-1e+.pdf}} \end{center} \vspace*{-3.5cm} {\small \hspace*{1cm} positive def., $\varepsilon<0$ \hspace{0.2cm} negative def., $\varepsilon<0$ \hspace{0.25cm} negative def., $\varepsilon=0$ \hspace{0.25cm} negative def., $\varepsilon>0$ } \vspace*{-0.5cm} \begin{center} \scalebox{1.8}{\includegraphics[width=1.3in]{non-special1-1e-2.pdf}} \hspace*{-3cm} \scalebox{1.8}{\includegraphics[width=1.3in]{non-special1-1e0.pdf}} \hspace*{-3cm} \scalebox{1.8}{\includegraphics[width=1.3in]{non-special1-1e+.pdf}} \end{center} \vspace*{-3.5cm} {\small \hspace{4cm} Indefinite, $\varepsilon<0$ \hspace{0.4cm} Indefinite, $\varepsilon=0$ \hspace{0.4cm} Indefinite, $\varepsilon>0$.} \caption{\small The various subcases of Proposition \ref{prop:def-indef}: 1.1.1 Positive definite (for $\varepsilon>0$ the equidistant is empty and for $\varepsilon<0$ has a compact cuspidal edge); 1.1.2 Negative definite, where for $\varepsilon>0$ there is a compact cuspidal edge; 1.1.3 Indefinite, which has two cuspidal edges for $\varepsilon\ne 0$ that form a crossing when $\varepsilon=0$. } \label{fig:def-indef} \end{figure} \begin{rem} {\rm It is interesting to relate the above classification to that of the regions on $M$ and $N$ which contribute to the pairs of parallel tangent planes (compare Prop.2.4 and Figure~3 of \cite{GR2015}). A schematic diagram of the common regions for $M$ and $N$ on the Gauss sphere is given in Figure~\ref{fig:parab-curves} below. The relationship between these and the classification of Proposition~\ref{prop:def-indef} is as follows.
\noindent Subcase 1.1.1 (positive definite, $f_{030}g_{030} < 0$ and $R>0$): This is (d).\\ Subcase 1.1.2 (negative definite, $f_{030}g_{030} > 0$ and $QR>0$): This is (ac).\\ Subcase 1.1.3 (indefinite): This can arise in two ways, as either (ac) or (b):\\ \hspace*{1cm} (ac) when $f_{030}g_{030}>0$ and $QR<0$,\\ \hspace*{1cm} (b) when $f_{030}g_{030}<0$ and $R < 0$. } \end{rem} \begin{figure}[!ht] \begin{center} \scalebox{1.4}{\includegraphics[width=3in]{parab-curves2.pdf}} \end{center} \vspace*{-9.5cm} \caption{\small Schematic diagrams of the images of the Gauss map for the surfaces $M_\varepsilon$ and $N_\varepsilon$. The curves represent the parabolic curves of these surfaces, along which the Gauss map has a fold, and the hatched regions represent the regions where the images of the Gauss maps of $M_\varepsilon$ and $N_\varepsilon$ intersect, that is the regions of the Gauss sphere representing parallel normals (or parallel tangent planes). Left to right of each row shows varying $\varepsilon$, with the middle diagram $\varepsilon=0$, and the three possible cases are labelled (ac), (b), (d) as described in the text, to accord with Figure~3 in \cite{GR2015}. Note that the two curves for $\varepsilon=0$ have ordinary tangency---see Proposition~\ref{rem:contact}. } \label{fig:parab-curves} \end{figure} Let us call a pair of points, one from $M_\varepsilon$ and the other from $N_\varepsilon$, at which the tangent planes are parallel, `mates'. Consider for example the top left diagram of Figure~\ref{fig:parab-curves} and assume that the upper curve is the image of the parabolic curve of $N_\varepsilon$ in the Gauss sphere. Each point above this curve is the image of two points of $N_\varepsilon$ and two points of $M_\varepsilon$ giving altogether four mates. Each point on the upper curve is the image of two points of $M_\varepsilon$ and a single parabolic point of $N_\varepsilon$ which is a mate for both of them.
On the surface $M_\varepsilon$ itself there will be a region close to the base-point $(0,0,0)$ consisting of those points of $M_\varepsilon$ with at least one mate, and usually two mates, on $N_\varepsilon$---a region `doubly covered by mates on $N_\varepsilon$'. This region will have a local boundary corresponding in the way just described to the parabolic curve on $N_\varepsilon$. Turning to the upper right diagram of Figure~\ref{fig:parab-curves} the hatched region representing mates now contains a segment of the parabolic curve of $M_\varepsilon$. On the surface $N_\varepsilon$ this will result in a closed loop on the boundary of the region of points having mates on $M_\varepsilon$. The situation on the surfaces themselves is illustrated schematically in Figure~\ref{fig:Gauss-Map-Projections-3cases}. \begin{figure}[!ht] \begin{center} \scalebox{1.4}{\includegraphics[width=4in]{Gauss-Map-Projections-3cases.png}} \end{center} \vspace*{-0.5cm} \caption{\small In this diagram, the Gauss map of the surfaces $M_\varepsilon$ and $N_\varepsilon$ is represented by vertical projection and the surfaces in this schematic representation are labelled $\widetilde{M}, \widetilde{N}$. The rows and columns are arranged as in Figure~\ref{fig:parab-curves}. See the above text for further explanation. } \label{fig:Gauss-Map-Projections-3cases} \end{figure} \subsection{The `special values' of $\lambda$}\label{ss:special} {\bf Special Case 1.2} $c_{0300} = 0$, that is $\lambda$ has one of the two special values as in \S\ref{ss:contact}. Note that this requires $f_{030}$ and $g_{030}$ to have the same sign, which we take as positive, and write $f_{030}=f_3^2, g_{030}=g_3^2$ where $f_3>0, g_3>0$. This case will be examined by choosing one of the special values for $\lambda$ given by $c_{0300}=0$, namely $\lambda = \displaystyle{\frac{g_3}{g_3 + f_3}}$. 
We can eliminate the terms in $s_2u_2^2$ and $s_2u_1u_2$ by a substitution of the form $s_2=s'_2 +au_1+bu_2$, assuming only the condition $\lambda f_{20} + (1-\lambda)g_{20} \ne 0$ of Generic Case 1. The coefficient of $s_2^2u_2$ then becomes $3f_3^2\ne 0$ and the remaining degree 3 terms in $h_{0\lambda}$, namely $s_2^2u_1, s_2^2u_2$ and $s_2u_1^2$, can therefore be reduced to the last two by redefining $u_2$, at the same time making the coefficient of $s_2^2u_2$ equal to 1. The 3-jet of $H_{0\lambda}$ is now of the form (scaling $s_1$) \[ (u_1, u_2, \pm s_1^2 +s_2^2u_2+c_{0120}s_2u_1^2),\] where the updated $c_{0120}$ is nonzero if and only if $R\ne 0$ as in (\ref{eq:R}), and for generic $M_0, N_0$ this will be satisfied. Passing to the 4-jet of $H_{0\lambda}$, we can first remove all monomials divisible by $s_1$ besides $\pm s_1^2$ by completing the square, and then eliminate all degree 4 monomials besides $s_2^4$ and $s_2^3u_1$, without adding any new monomials of degree 3. This can be done, for example, by substitutions of the form $s_2 = s'_2 + \mbox{ quadratic terms in } s'_2, u'_1, u'_2$, $u_1 = u'_1 + \mbox{ quadratic terms in } u'_1, u'_2,$ and similarly for $u_2$. A left change of coordinates will then restore the first two components of $H_{0\lambda}$ to $(u_1, u_2)$. The 4-jet is now reduced to \[ \left(u_1, u_2, \pm s_1^2+s_2^2u_2+c_{0120}s_2u_1^2+c_{0400}s_2^4+c_{0310}s_2^3u_1\right).\] This is 4-$\mathcal A$-determined provided all the coefficients are nonzero. The coefficient $c_{0400}$ is nonzero if and only if the `exactly $A_3$ contact condition' (\ref{eq:A3}) holds. Unfortunately we do not know a geometrical criterion for the coefficient $c_{0310}$ of $s_2^3u_1$ to be nonzero; it involves only the coefficients in the functions $f, g$ which define the surfaces $M_0, N_0$. Scaling reduces all but the coefficient of $s_1^2$ to 1 and we summarize this discussion as follows.
\begin{prop} For Special Case 1.2, that is $f_{030}=f_3^2, g_{030}=g_3^2$, a special value of $\lambda=g_3/(g_3\pm f_3)$ (Definition~\ref{def:special} or $Q=0$ as in (\ref{eq:Q})) but $\lambda f_{20} + (1-\lambda)g_{20} \ne 0$, the function $H_{0\lambda}$ reduces under $\mathcal A$-equivalence to the normal form \begin{equation} H_{0\lambda}(s_1,s_2,u_1,u_2)=\left(u_1, u_2, \pm s_1^2+s_2^2u_2+s_2u_1^2+ s_2^4+s_2^3u_1 + (ps_2+qs_2^3) \right), \label{eq:1.2} \end{equation} provided the geometrical conditions $R\ne 0$ (\ref{eq:R}), and `exactly $A_3$-contact' (\ref{eq:A3}) hold, together with a third condition on $M_0, N_0$ which will be generically satisfied. The terms $ps_2+qs_2^3$ in brackets represent an $\mathcal A_e$-versal unfolding provided the geometrical condition $g_{011}\ne 0$ in (\ref{eq:N}) holds. See Figure~\ref{fig:special-clock} for a `clock diagram' of the equidistants in the $(p,q)$-plane. \hfill$\Box$ \label{prop:case1.2} \end{prop} A similar normal form, without the fourth variable $s_1$, but with an additional ambiguity of sign, occurs as $4_2^2$ in \cite{Marar-Tari}; see also \cite{Gory}. The sign in front of $s_1^2$ will not affect our results since the critical set of $H_{0\lambda}$ has $s_1=0$. The versal unfolding condition means that as $\varepsilon$ changes through 0 the normal to $N$ tilts in a direction with a nonzero component along the $y$-axis, which is the asymptotic direction at $\varepsilon=0$. When $\lambda$ moves away from a special value then, in (\ref{eq:1.2}), $p$ remains at 0 while $q$ becomes small and nonzero. We can then reduce (\ref{eq:1.2}) as in Generic Case~1.1, as follows. The 3-jet of (\ref{eq:1.2}) becomes $(u_1, u_2, s_1^2+s_2^2u_2+s_2u_1^2+qs_2^3)$ with $q\ne 0$.
Replacing $s_2$ by $ms_2+nu_2$ where $3qn+1=0$ and $qm^3=1$, and then removing terms in the third component involving only $u_1, u_2$, reduces this to \[ \left(u_1, u_2, s_1^2+\frac{1}{q^{1/3}}s_2u_1^2-\frac{1}{3q^{4/3}}s_2u_2^2+s_2^3\right).\] The product of terms in front of $s_2u_1^2$ and $s_2u_2^2$ therefore has the sign of $-q$ and hence changes as $q$ passes through 0. Furthermore it is not possible for both signs to be positive. We deduce the following. \begin{cor} Moving $\lambda$ through a special value $\lambda=g_3/(g_3\pm f_3)$ but keeping $\varepsilon=0$ the type of equidistant always changes between Subcase~1.1.2 (negative definite) and Subcase~1.1.3 (indefinite) as in Proposition~\ref{prop:def-indef}. It is not possible to realize the positive definite Subcase~1.1.1. \label{cor:special-unf} \end{cor} Figure~\ref{fig:special-clock} shows a typical way in which equidistants near to a special value evolve as $\lambda$ and $\varepsilon$ change. \begin{figure} \begin{center} \scalebox{1.4}{\includegraphics[width=5in]{Gulls3d-clock.png}} \end{center} \caption{\small Special Case 1.2. A typical `clock diagram' of equidistants close to a special value of $\lambda_0=g_3/(g_3\pm f_3)$. The vertical axis represents $\lambda=\lambda_0+\alpha$ and the horizontal axis the parameter $\varepsilon$ in the family of surfaces.} \label{fig:special-clock} \end{figure} \subsection{Some further details of Special Case 1.2} We take $\lambda_0=\frac{g_3}{g_3+f_3}$ as a special value, assuming $f_{20}\ne 0, g_{20}\ne 0, f_3>0, g_3>0, \lambda_0 f_{20} + (1-\lambda_0)g_{20} \ne 0$, i.e.\ $f_{20}g_3+g_{20}f_3 \ne 0$, and also $R\ne 0$ (\ref{eq:R}) hold. We write $\lambda=\lambda_0+\alpha$ for nearby values, and examine the full versal unfolding $\mathcal H$ of $H$, as follows. 
Thus the family of equidistants can be reduced to \begin{equation} \mathcal H(s_1,s_2,u_1,u_2,p,q)= \left(u_1, u_2, \pm s_1^2+s_2^2u_2+s_2u_1^2+ s_2^4+s_2^3u_1 + ps_2 + qs_2^3\right)=(u_1,u_2,\tilde{h}), \label{eq:Htilde} \end{equation} say, where $p,q$ are unfolding parameters that are closely related to $\varepsilon, \alpha$ respectively. As an aid to understanding the equidistants for $(\varepsilon,\alpha)$ close to $(0,0)$ we can calculate the loci in the $(p,q)$-plane at which the structure of the singular set or the self-intersection set on the equidistant changes. \begin{enumerate} \item {\bf Singular set} \ For fixed $p,q$ the singular set is the image under $\mathcal H$ of the set of points (using suffixes for partial derivatives) \[ (0,s_2,u_1,u_2) \mbox{ such that } \tilde{h}_{s_2}=\tilde{h}_{s_2s_2}=0.\] Eliminating $u_2$, the equations reduce to \[ u_1^2 - 3s_2^2u_1 + (p - 3s_2^2q-8s_2^3) = 0,\] and the condition for this to have real solutions for $u_1$ is \[ 9s_2^4 +32s_2^3 + 12qs_2^2 - 4p \ge 0.\] We are therefore interested in finding the pairs $(p,q)$ for which there is a change in the number of real intervals in the set of $s_2$ satisfying this inequality. This will occur when the discriminant with respect to $s_2$ vanishes, and that gives a locus of the form \begin{equation} p=0 \mbox{ or } p = \textstyle{\frac{1}{16}}\displaystyle q^3 + \textstyle{\frac{9}{1024}}\displaystyle q^4 + \ldots. \label{eq:pqcuspedge} \end{equation} See Figure~\ref{fig:pq-plane}. \item {\bf Self-intersection locus} \ Suppose $ (0, s_{21}, u_{1}, u_{2})$ and $(0, s_{22}, u_{1}, u_{2})$ are both in the critical set of $\mathcal H$ ($\tilde{h}_{s_1}=0$ gives $s_1=0$) and have the same image under $\mathcal H$. Then with a little more trouble we can eliminate the $u$ variables and obtain a condition in $s_{21}, s_{22}$ alone. It is slightly more convenient to write $s_{21}=v_1+v_2, \ s_{22}=v_1-v_2$; then in fact we require $ v_1(4v_1^3 +16v_1^2+8qv_1 + p+q^2) \ge 0$.
The number of $v_1$-intervals on which this holds will change when the discriminant with respect to $v_1$ vanishes. One case here gives the same condition as (i) above, but we are concerned with the remaining possibility: taking into account that $v_1, v_2$ must both have real solutions the locus in the $(p,q)$-plane is \begin{equation} p = -q^2, \ q\ge 0, \label{eq:pqselfint} \end{equation} where of course the double root is $v_1=0$, that is $s_{22}=-s_{21}$. (The other potential double root when $p=-q^2$ leads to $q=2$ and is therefore not relevant to a neighbourhood of the origin in the $(p,q)$-plane.) See Figure~\ref{fig:pq-plane}. \end{enumerate} \begin{figure} \begin{center} \scalebox{1.4}{\includegraphics[width=1.4in]{pq-plane.png}} \end{center} \vspace*{-0.2in} \caption{\small Special Case 1.2. A schematic drawing of two curves in the $p,q$-plane at which the structure of the equidistant in the family~(\ref{eq:Htilde}) changes, either because the cuspidal edge set changes (solid curve, together with the $q$-axis) or the self-intersection set changes (dashed curve). } \label{fig:pq-plane} \end{figure} \section{Degenerate Case 2}\label{s:degen} In this section we give some details of Degenerate Case~2, that is $\lambda f_{20} + (1-\lambda)g_{20} = 0$. This gives a unique value of $\lambda$, namely $\displaystyle{\lambda=\frac{g_{20}}{g_{20}-f_{20}}}$. (If $f_{20}=g_{20}$ then, using $\lambda f_{20} + (1-\lambda)g_{20} = 0$, it follows that $f_{20}=g_{20}=0$, contrary to our assumptions.) Thus whatever surfaces $M_0, N_0$ we start with there will be an equidistant which falls into this case. It turns out to be a rich area for investigation; here we shall give some invariants which help to separate out the many subcases. 
One of these invariants classifies the effect of changing $\lambda$ slightly from the degenerate value, while preserving the geometrical situation of two surfaces with parallel tangent planes at parabolic points where the asymptotic directions are parallel, that is $\varepsilon=0$ in (\ref{eq:M}), (\ref{eq:N}). See Proposition~\ref{prop:case2near}. \subsection{A normal form for Degenerate Case 2} The 2-jet of $H_{0\lambda}$ is now $(u_1,u_2, 2f_{20}s_1u_1)$. Writing the third component as $u_1(s_1 + \mbox{h.o.t.}) + $ terms independent of $u_1$ and then using the bracketed expression to redefine $s_1$ we can eliminate $u_1$ from the higher terms. Then replacing $s_2$ by an expression of the form $s_2+au_2$ we can remove the degree 3 terms $s_1u_2^2$ and $s_1s_2u_2$. When this is done, the coefficient of $s_2^2u_2$ becomes $3g_{030}f_{20}^2/g_{20}^2 \ne 0$ and the coefficient of $s_2u_2^2$ becomes $3f_{20}g_{030}(g_{20}-f_{20})/g_{20}^2\ne 0$. We shall also assume that the coefficient of $s_1^3$ is nonzero to avoid further degeneration. We can now use scaling to reduce the 3-jet of $H_{0\lambda}$ to \[ \left(u_1, \ u_2, \ s_1u_1+ s_1^3+s_2^2u_2+ s_2u_2^2+bs_1^2s_2+cs_1^2u_2+ds_1s_2^2+es_2^3\right),\] for coefficients $b, c, d, e$. The 4-jet can then be reduced by similar arguments, including scaling, to \begin{eqnarray} (u_1,u_2,h)&=& \left(u_1, \ u_2, \ s_1u_1+ s_1^3+s_2^2u_2+ s_2u_2^2+bs_1^2s_2+cs_1^2u_2+ds_1s_2^2+es_2^3+ s_1^4 \right. \nonumber \\ && \left. + (ps_2+qs_1^2) \right), \label{eq:s1u1} \end{eqnarray} provided the coefficient of $s_1^4$ is nonzero: this and the 4-$\mathcal A$-determinacy of this 4-jet hold generically, by standard calculations. The terms in brackets, $ps_2+qs_1^2$, represent an $\mathcal A$-versal unfolding of this germ. We have not been able to reduce the number of coefficients $b,c,d,e$. We shall work with (\ref{eq:s1u1}) as a `normal form' and when appropriate interpret the coefficients in terms of the surfaces $M_0, N_0$.
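Equation (\ref{eq:dhds2}) below, and the nondegeneracy condition for $T$, can be read off mechanically from (\ref{eq:s1u1}); the following SymPy sketch (illustrative only, with $p=0$ and the coefficients $b,c,d,e$ kept symbolic) confirms them.

```python
import sympy as sp

s1, s2, u1, u2, b, c, d, e, q = sp.symbols('s1 s2 u1 u2 b c d e q')
# third component of (eq:s1u1), with p = 0
h = (s1*u1 + s1**3 + s2**2*u2 + s2*u2**2 + b*s1**2*s2 + c*s1**2*u2
     + d*s1*s2**2 + e*s2**3 + s1**4 + q*s1**2)

dh = sp.expand(h.diff(s2))
target = (s2 + u2)**2 + b*s1**2 + 2*d*s1*s2 + (3*e - 1)*s2**2
assert sp.expand(dh - target) == 0

# T(s1,s2) = -(b*s1^2 + 2*d*s1*s2 + (3*e-1)*s2^2) is nondegenerate exactly
# when its discriminant d^2 - b*(3e-1) is nonzero; expanded this is
# d^2 + b - 3be, the quantity recurring throughout this section
disc = d**2 - b*(3*e - 1)
print(sp.expand(disc))
```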
The equidistant for $M_0, N_0$ and $\lambda = g_{20}/(g_{20}-f_{20})$ is then locally diffeomorphic to the image under (\ref{eq:s1u1}) of the set $\{(s_1,s_2,u_1,u_2): h_{s_1}=h_{s_2}=0\}$. Here, $h_{s_1}=0$ defines $u_1$ as a smooth function of the other three variables, while, taking the unfolding parameter $p=0$ for the moment, $h_{s_2}=0$ can be written \begin{equation} \frac{\partial h}{\partial s_2}=(s_2+u_2)^2+bs_1^2+2ds_1s_2+(3e-1)s_2^2=(s_2+u_2)^2-T(s_1,s_2)=0, \label{eq:dhds2} \end{equation} say, where $T$ is a quadratic form in $s_1, s_2$ which we shall assume to be nondegenerate, that is $d^2-b(3e-1)\ne 0$. \subsection{Plotting the equidistants}\label{ss:param} It is also useful to rewrite the equation of the quadric cone $C$, given by $h_{s_2}=0$, where $p=q=0$ in (\ref{eq:s1u1}), and provided $b \ne 0$, as \begin{equation} C: \ \ (s_2+u_2)^2 + b\left(s_1+\frac{d}{b}s_2\right)^2 + \left(\frac{3be-b-d^2}{b}\right)s_2^2=0. \label{eq:cone} \end{equation} Note that this is a single point at the origin if and only if all coefficients are $>0$ (since the first one is $>0$), that is \[ b> 0, \ d^2+b-3be<0;\] compare Proposition~\ref{prop:case2near}. The equidistant (for $p=q=0$) is the image of $C$ under the map $\mathbb R^3\to\mathbb R^3$ given by \[ (s_1,s_2,u_2)\mapsto (u_1, u_2, \overline{h}(s_1,s_2,u_2))\] where on the right-hand side $u_1$ is expressed in terms of $s_1,s_2,u_2$ using $h_{s_1}=0$ and this is substituted into $h$, giving the function $\overline{h}$. We can find a `good' parametrization of the equidistant by using coordinates $(x_1,x_2,s_2)$ and writing (\ref{eq:cone}) as \[ x_1^2 + bx_2^2 + ks_2^2=0, \mbox{ where } k = \textstyle{\frac{3be-b-d^2}{b}}, \ x_1=s_2+u_2, \ x_2=s_1+\textstyle{\frac{d}{b}}\displaystyle s_2.\] Thus the substitution to use in $\overline{h}$ is $u_2=x_1-s_2, \ s_1=x_2-\textstyle{\frac{d}{b}}\displaystyle s_2$. The equidistant is then plotted as follows.
\begin{enumerate} \item If $b>0$ and $C$ is not a single point then $k<0$ (i.e.\ $d^2+b-3be>0$) and we write \[ x_1^2 + bx_2^2 = (-k)s_2^2,\] so that for any $(x_1, x_2)\ne(0,0)$ we have two distinct values for $s_2$: there is no restriction on the values of $x_1,x_2$. We use $x_1,x_2$ as parameters and the two `halves' of $C$ are given by the two values of $s_2$. \item If $b<0, k>0$ (i.e.\ $ d^2+b-3be>0$) then we similarly write $x_1^2 +k s_2^2 = (-b)x_2^2,$ so that for any $(x_1, s_2)\ne(0,0)$ we have two distinct values for $x_2$. Here $x_1,s_2$ are used as parameters. \item Finally if $b<0, k<0$ (i.e.\ $d^2+b-3be<0$) then we write $x_1^2 = (-b)x_2^2 + (-k)s_2^2$ and for any $(x_2, s_2)\ne(0,0)$ we have two distinct values for $x_1$. Here $x_2,s_2$ are used as parameters. \end{enumerate} For values of $(p,q)$ other than $(0,0)$ the equation of $C$ acquires an extra term $-p$ on the right-hand side, thus creating a hyperboloid of one or two sheets (or an ellipsoid when $C$ is a single point). In fact the hyperboloid has one sheet when $bkp >0$, that is $(d^2+b-3be)p<0$, and two sheets when $bkp<0$, that is $(d^2+b-3be)p>0$. In the two-sheet situation the same method as above plots the equidistant, without restrictions on the values of the parameters. In the one-sheet situation the points in the parameter plane lie outside an ellipse, the `waist' of the hyperboloid. This ellipse is given in the three situations above by $ x_1^2 + bx_2^2 =-p, \ x_1^2 +k s_2^2=-p$ and $(-b)x_2^2 + (-k)s_2^2=p$ respectively. In the situation where $C$ is a single point, and $p<0$, the points in the parameter plane lie inside an ellipse. In all situations, $q$ does not affect the hyperboloid or ellipsoid, but of course its value affects the function $\overline{h}$. \subsection{Nearby non-special values of $\lambda$}\label{ss:case2nearby} Here, we examine the effect of adding in the term $qs_1^2$ in (\ref{eq:s1u1}).
This represents changing $\lambda$ from the value $g_{20}/(g_{20}-f_{20})$ to a nearby value, which will be of the type considered in Generic Case~1.1, provided the coefficient $e$ of $s_2^3$ in (\ref{eq:s1u1}) is nonzero, and to avoid further degeneracy we shall assume this to be true. We determine here, in terms of $b,c,d,e$, which subcase of Proposition~\ref{prop:def-indef} is obtained, and then refer this back to the surfaces $M_0,N_0$. (The subcase does not depend on the sign of $q$ in the added term $qs_1^2$.) To do this we reduce (\ref{eq:s1u1}), with $p=0$ but with $qs_1^2$ present, to the normal form found above for Generic Case~1.1, by making the `left' and `right' changes of coordinates as sketched above. We can restrict attention for this to the terms of (\ref{eq:s1u1}) of degree $\le 3$ since the Generic Case~1.1 germ is 3-$\mathcal A$-determined. Thus we start by redefining $s_1$ (`completing the square') to change the degree 2 terms to $s_1^2$, remove the terms in $u_1, u_2$ only, remove the remaining terms besides $s_1^2$ that are divisible by $s_1$ and then redefine $s_2$ by adding suitable multiples of $u_1$ and $u_2$. The result of this is to reduce the 3-jet of (\ref{eq:s1u1}) by $\mathcal A$-equivalence to the form \[ \left( u_1, \ u_2, \ qs_1^2 + e s_2^3 + \frac{s_2}{12eq^2}\left( (3be-d^2)u_1^2+4qdu_1u_2+4q^2(3e-1)u_2^2 \right)\right). \] The discriminant of the quadratic form in $u_1, u_2$ is $(d^2+b-3be)/3eq^2$, so this form is definite if and only if $e(b+d^2-3be)<0$. Scaling so that the terms in $s_1^2, s_2^3$ have coefficients equal to 1 multiplies the quadratic form in $u_1, u_2$ by $(q^2e)^{-1/3}$, and from this we deduce the following, where (i) and (ii) are derived by direct calculations from the parametrizations of $M_0$ and $N_0$.
\begin{prop}\label{prop:case2near} The normal form (\ref{eq:s1u1}) for Degenerate Case~2, with $p=0$ but $q$ nonzero and small, corresponding to a small change in $\lambda$, gives the following subcases of Generic Case~1.1 (general $\lambda$): \\ Subcase 1.1.1 (positive definite, $++$): $e>\frac{1}{3}$ and $d^2+b-3be < 0$,\\ Subcase 1.1.2 (negative definite, $- -$): $e<\frac{1}{3}$ and $e(d^2+b-3be) < 0$,\\ Subcase 1.1.3 (indefinite, $+-$): $e(d^2+b-3be) > 0$. \medskip\noindent In terms of the surfaces $M_0, N_0$, \\ {\rm (i)} \ When $f_{030}g_{030}> 0$, so $f_{030}=f_3^2, \ g_{030}=g_3^2$, $e<\frac{1}{3}$ and $e$ has the sign of $f_{20}g_3^2-g_{20}f_3^2$ while $d^2+b-3be$ has the sign of $-R$ as in (\ref{eq:R}).\\ {\rm (ii)} \ When $f_{030}g_{030}< 0$, so $f_{030}=f_3^2, \ g_{030}=-g_3^2$, $e>\frac{1}{3}$ and $d^2+b-3be$ has the sign of $R$. \end{prop} \subsection{Invariants distinguishing subcases of Degenerate Case~2}\label{ss:invariants} We shall use the following:\\ \begin{enumerate} \item The number of cuspidal edges on the equidistant for $p=q=0$, which can be 0, 2 or 4 (see below); \item The number of self-intersection curves on the equidistant for $p=q=0$, which can be 0, 1, 2 or 3 (see \S\ref{ss:SIdegen}); \item The subcase of Generic Case~1.1 given in Proposition~\ref{prop:case2near} which is obtained by changing $\lambda$ slightly. \end{enumerate} This might give $3 \times 4 \times 3 = 36$ subcases but fortunately many of these combinations cannot be realized. We shall give values of $b,c,d,e$ realizing all possible subcases in \S\ref{ss:examples}, Table~\ref{table1} below. For given values of these invariants, the interval in which $e$ lies, either $e<0$ or $0<e<\frac{1}{3}$ or $e>\frac{1}{3}$, could in principle affect the equidistant but so far as we are aware the basic geometrical structure---the qualitative nature of the equidistant---is not affected.
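The case distinction of Proposition~\ref{prop:case2near} is simple to mechanize. The following minimal sketch (Python; the function name and string encoding of the subcases are ours) classifies a triple $(b,d,e)$ and spot-checks several rows of Table~\ref{table1} below (the coefficient $c$ does not enter the test).

```python
def subcase(b, d, e):
    # Subcase of Generic Case 1.1 reached by perturbing lambda:
    # governed by the sign of e*(d^2 + b - 3be) and by e relative to 1/3
    D = d*d + b - 3*b*e
    if e*D > 0:
        return '+-'   # Subcase 1.1.3 (indefinite)
    if e > 1/3 and D < 0:
        return '++'   # Subcase 1.1.1 (positive definite)
    return '--'       # Subcase 1.1.2 (negative definite)

# Spot checks against the (b, d, e) columns of Table 1:
assert subcase(8, -3, 1) == '++'       # Class I
assert subcase(8, -3, 1/6) == '+-'     # Classes II and VI
assert subcase(8, -3, -1) == '--'      # Class III
assert subcase(1, 3, -1) == '--'       # Class V
assert subcase(-8, -3, -1) == '+-'     # Class IX
```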
\medskip The number of cuspidal edges, that is 1-dimensional singular sets, on the equidistant, can be calculated as follows. We can regard $h_{s_2}=0$, as in \S\ref{ss:param} above, as the equation of a quadric cone $C$ in $\mathbb R^3$ with coordinates $(s_1, s_2, u_2)$. The quadric cone $C$ is nondegenerate since $T$ in (\ref{eq:dhds2}) is a nondegenerate quadratic form, and consists of the origin alone if and only if $T$ is negative definite (that is, $d^2<b(3e-1)$ and $b>0$), otherwise it is a real cone, or equivalently a real nonsingular conic in $\mathbb R P^2$. When $T$ is not negative definite, the equidistant therefore has two `branches', which are the images of the two halves of the cone; these branches may intersect (apart from at the origin) and will generally themselves be singular. Writing the equation of $C$ more briefly as $\gamma(s_1,s_2,u_2)=0$, the singular set of the equidistant is the image of certain curves on $C$, given by the additional equation \[ \overline{h}_{s_1}\gamma_{s_2}-\overline{h}_{s_2}\gamma_{s_1}=0.\] (This can be written in terms of $h$ itself as $h_{s_1s_1}h_{s_2s_2}-h_{s_1s_2}^2=0$.) The lowest terms of the left hand side are of degree 2 in $s_1, s_2, u_2$ and therefore give another conic $C_2$ in $\mathbb R P^2$. The equation of $C_2$ is in fact \[ (b^2-3d)s_1^2+(bd-9e)s_1s_2-(cd+3)s_1u_2+(d^2-3be)s_2^2-(3ce+b)s_2u_2-cu_2^2 = 0.\] This meets the nonsingular conic $\gamma = 0$ in 0, 2 or 4 real points. (The conic $C_2$ cannot in fact be a single point: examination of the matrix of the above quadratic form in variables $s_1,s_2,u_2$ defining $C_2$ shows that its determinant is always $\le 0$ so the quadratic form cannot be positive definite, and negative definiteness is also ruled out by examining the signs of the other leading minors. The leading $1\times 1$ minor cannot be $<0$ at the same time as the leading $2\times 2$ minor is $>0$.)
There are therefore 0, 2 or 4 curves through the origin on $C$ whose images are the singular points, the cuspidal edges, of the equidistant. These cuspidal edges pass through the origin, lying on both `sheets' of the equidistant. The number of cuspidal edges can be calculated for example by substituting $(s_1,s_2,u_2)= (mt,nt,t)$ in the equations of $C$ and $C_2$, taking out the factor $t^2$ and finding the common solutions of the two resulting quadratic equations in $m,n$. Eliminating one of $m,n$ gives a degree 4 equation in the other and there are standard algebraic techniques for computing the number of real solutions of a quartic equation---or for given $(b,c,d,e)$ we can solve numerically. The results for the Classes~I-X are given in Table~\ref{table1} below. \subsection{Self-intersections of the equidistant in Degenerate Case 2}\label{ss:SIdegen} We start with the normal form (\ref{eq:s1u1}) in \S\ref{s:degen}, namely \[ (u_1,u_2,h) = \] \[ \left(u_1, \ u_2, \ s_1u_1+ s_1^3+s_2^2u_2+ s_2u_2^2+bs_1^2s_2+cs_1^2u_2+ds_1s_2^2+es_2^3+ s_1^4 +ps_2+qs_1^2 \right), \] subject to the critical set conditions $h_{s_1}=h_{s_2}=0$. We include the unfolding terms $ps_2+qs_1^2$ though we are particularly interested in the self-intersections for $p=q=0$. We can immediately solve $h_{s_1}=0$ for $u_1$: \[ u_1= -2bs_1s_2-2cs_1u_2-ds_2^2-3s_1^2-4s_1^3-2qs_1,\] so that the equations which state that two domain points $(s_1,s_2,u_1,u_2)$ and say $(t_1,t_2,u_1,u_2)$ have the same image take the following form. \noindent (SI1): the above formula for $u_1$ gives the same answer for both domain points;\\ (SI2): the formula for $h$ above gives the same answer for both domain points; \\ (SI3): $h_{s_2}(s_1,s_2,u_1,u_2)=0$; and\\ (SI4): $h_{t_2}(t_1,t_2,u_1,u_2)=0.$ It is convenient to make the substitution $s_1=x_1+y_1, t_1 = x_1-y_1, s_2=x_2+y_2, t_2=x_2-y_2$, so that the `trivial solution' $s_1=t_1, s_2=t_2$ becomes $y_1=y_2=0$. 
Furthermore replacing $y_1$ by $-y_1$ and $y_2$ by $-y_2$ interchanges $(s_1,s_2)$ and $(t_1,t_2)$, that is interchanges the two domain points $(s_1,s_2,u_1,u_2)$ and $(t_1,t_2,u_1,u_2)$ with the same image in $\mathbb R^3$ under the normal form map (\ref{eq:s1u1}). With this substitution the equations become (SI1$'$), etc., and we use (SI3$'$)-(SI4$'$) to solve for $u_2$: \[ u_2=-\frac{bx_1y_1+dx_1y_2+dx_2y_1+3ex_2y_2}{y_2}, \] where the denominator $y_2$ is harmless since it is easy to check that if $y_2=0$ then the other equations imply that $y_1=0$ too. {\em Note that this expression does not involve} $p,q$. We can solve (SI1$'$) for $x_2$: \[ x_2 = \frac{bcx_1y_1^2+cdx_1y_1y_2-bx_1y_2^2-6x_1^2y_1y_2-2y_1^3y_2-3x_1y_1y_2-qy_1y_2} {-cdy_1^2-3cey_1y_2+by_1y_2+dy_2^2}.\] This time we may need to investigate the vanishing of the denominator, but assuming the denominator is nonzero and substituting for $x_2$ we find that the equation (SI2$'$)-$y_2$((SI3$'$)+(SI4$'$)) reduces to \begin{equation} \mbox{SI5}: by_1^2y_2+dy_1y_2^2+ey_2^3+4x_1y_1^3+y_1^3=0. \label{eq:SI5} \end{equation} This is to be treated as the equation of a surface in 3-space $(x_1,y_1,y_2)$ which contains the $x_1$-axis, since $(x_1,0,0)$ is always a solution. The surface will have a certain number of `sheets' passing through the origin, equal to the number of values of $k$ which make the first coordinate zero in the following parametrization of SI5 by $k$ and $y_1$. \begin{equation} \left( -\frac{ek^3+dk^2+bk+1}{4}, \ y_1, \ ky_1\right). \label{eq:SI5param} \end{equation} If $y_1=0$ in (\ref{eq:SI5}), then $y_2=0$ and $x_1$ is arbitrary; and indeed, being cubic in $k$, (\ref{eq:SI5param}) gives all points $(x_1,0,0)$, possibly for more than one (real) $k$. If $y_1\ne 0$ then we solve (\ref{eq:SI5}) for $x_1$ and writing $y_2=ky_1$ produces the given value $-\frac{1}{4}(ek^3+dk^2+bk+1)$ for $x_1$. Conversely, every point (\ref{eq:SI5param}) satisfies (\ref{eq:SI5}) by substitution.
Hence (\ref{eq:SI5param}) parametrizes the complete surface (\ref{eq:SI5}). Two examples are shown in Figure~\ref{fig:SI5ab}. Note that the surface (\ref{eq:SI5}) and the parametrization (\ref{eq:SI5param}) are independent of the unfolding parameters $p,q$. \begin{figure}[!ht] \begin{center} \scalebox{1.4}{\includegraphics[width=1.3in]{s1u1-SI5a.pdf}} \scalebox{1.4}{\includegraphics[width=1.3in]{s1u1-SI5b.pdf}} \end{center} \vspace*{-1cm} \caption{\small The surface given by (\ref{eq:SI5}) or (\ref{eq:SI5param}), for (left) $b=8, c=-4, d=-3, e=-1$, with three smooth sheets through the origin, which is marked by a black dot; (right) $b=-8, c=4, d=-3, e=-1$, with one smooth sheet. (See Proposition~\ref{prop:sheets}.) These are respectively Class~III and Class~IX in Table~\ref{table1} below. Note that in the first of these there are nevertheless only two self-intersection curves of the equidistant for $p=q=0$, using the criterion of Proposition~\ref{prop:self-int-number}. In fact the picture for Class II is very similar to the left-hand figure, but there is only one self-intersection curve of the equidistant for $p=q=0$.} \label{fig:SI5ab} \end{figure} \begin{prop} The number of smooth real sheets of the surface (\ref{eq:SI5}) through the origin in $(x_1,y_1,y_2)$-space is 1 or 3 according as \[ 27e^2 + 2b(2b^2-9d)e +d^2(4d-b^2)> 0 \mbox{ or } <0 \mbox{ respectively}.\] This number is therefore the maximum number of self-intersection branches of the equidistant, for any $p,q$. If $b^2<3d$ then the displayed expression is $>0$ for all values of $e$. \label{prop:sheets} \end{prop} {\em Proof} \ This is a matter of calculating the discriminant of the cubic polynomial $ek^3+dk^2+bk+1$ in $k$, and the discriminant $16(b^2-3d)^3$ of the displayed quadratic polynomial in $e$. The sheets will be smooth provided the cubic in $k$ has no repeated root, that is provided the discriminant is nonzero.
\hfill$\Box$ \begin{rem}\label{rem:D4-contact} {\rm In \S\ref{ss:contact} we noted that, in the current Degenerate Case~2, the sign of a certain polynomial in the coefficients of the two surfaces $M_0, N_0$ determines whether the `scaled contact map' has type $D_4^+$ or $D_4^-$. By reducing to normal form as in \S\ref{s:normal} we can re-express this polynomial in terms of the coefficients $b,c,d,e$ of the normal form. When this is done, we find that the condition for one (resp.\ three) sheets as in the above proposition coincides with the condition for $D_4^+$ (resp.\ $D_4^-$) in the scaled contact map. We do not know the full significance of this fact. } \end{rem} \medskip Substituting $x_1=-\frac{1}{4}(ek^3+dk^2+bk+1)$ and $y_2=ky_1$ in one of the conditions on $x_1, y_1, y_2$ not fully used yet (for example, SI2$'$) we obtain a single equation in $y_1, k$ (involving now $p$ and $q$) which determines the branches of the self-intersection set of the equidistant. We are interested in values of $k$ close to a zero $k_0$ of the polynomial $ek^3+dk^2+bk+1$, so we now write $k=k_0+z$, say, where $z$, as well as $y_1, p, q$, will be small. Since $k_0$ satisfies a cubic equation we can express $k_0^3$ in terms of $k_0$ and $k_0^2$, namely as $k_0^3 = (-dk_0^2 - bk_0 - 1)/e$, and therefore all higher powers of $k_0$ can be expressed in terms of $k_0, k_0^2$ as well. \begin{defs}{\rm For a chosen value of $k_0$, the polynomial in $y_1, z, p, q$ just formed, the zero set of which determines the solutions to (SI1)-(SI4) or their equivalents (SI1$'$)-(SI4$'$), and hence determines the points corresponding to self-intersections of the equidistant, will be called $L(k_0)$. In the special case $p=q=0$, we shall write $L_0(k_0)$ for the polynomial in $y_1$ and $z$. } \end{defs} We deduce the following; the statements 2-5 are easily checked by direct calculation.
\begin{prop} \begin{enumerate} \item For each real root $k_0$ of $ek^3+dk^2+bk+1=0$ one smooth sheet of the surface (\ref{eq:SI5}) is parametrized by $(y_1,z)$ and the points which correspond to self-intersections on the equidistant for any $p,q$ are given by the additional equation $L(k_0)=0$. \item The polynomials $L(k_0)$ and $L_0(k_0)$ contain only the powers $y_1^2$ and $y_1^4$ of $y_1$. For any $p,q$ the zero-set of $L(k_0)$ is symmetric about the $y_1$-axis in the $(y_1,z)$-plane. \item The other variable $z$ occurs to powers $\le 14$ in $L(k_0)$. The coefficient of $z^{14}$ is in fact $27e^5(3e-1)$ which will not be zero since $e=0, \frac{1}{3}$ are excluded values. \item The linear part of $L(k_0)$ has the form constant $\times p$. The nonzero quadratic terms are in $y_1^2, z^2, zp, zq$ and $q^2$. \item The 2-jet of $L_0(k_0)$ has the form $c_0 y_1^2 + c_2 z^2$. \end{enumerate} \label{prop:L} \end{prop} \medskip The last statement above implies that, for $p=q=0$, a given sheet of the surface (\ref{eq:SI5}), that is a given value of $k_0$, will correspond to a branch of the self-intersection set of the equidistant if and only if $c_0, c_2$ have opposite signs. When $c_0c_2>0$ there is only an isolated point at $y_1=z=0$. When $c_0c_2<0$ the two real branches of the set $L_0(k_0)=0$ (forming a crossing at the origin $y_1=z=0$) will give only one branch of the self-intersection set because, as noted above, replacing $y_1$ by $-y_1$, and hence $y_2=ky_1$ by $-y_2=k(-y_1)$, merely interchanges the domain points contributing to the self-intersection. Each of $c_0, c_2$ is quadratic in $k_0$; multiplying them gives an expression of degree 4 which can be reduced to degree 2 again using the equation $ek^3+dk^2+bk+1=0$. Writing the resulting quadratic expression as $N=N_0(b,c,d,e)+N_1(b,c,d,e)k_0+N_2(b,c,d,e)k_0^2$ we have the following, which is used to determine the number of self-intersection branches of the equidistant in the ten classes of Table~\ref{table1}.
\begin{prop} The number of real branches of the self-intersection set of the equidistant for $p=q=0$ is the number of solutions $k=k_0$ of $ek^3+dk^2+bk+1=0$ at which the quadratic $N$ is $<0$. \label{prop:self-int-number} \end{prop} As $(p,q)$ moves away from $(0,0)$ we can still trace the zero set of $L(k_0)$ in the $(y_1,z)$-plane. An isolated point may disappear or open into a symmetric loop, which represents a self-intersection of the equidistant having two endpoints, if the loop crosses the $y_1$-axis, and a closed self-intersection curve if it does not. A crossing will become a `hyperbola'; if it crosses the $y_1$-axis then the corresponding self-intersection curve will have two endpoints and if not then it will be an unbroken arc. This is illustrated in the next section. \subsection{Examples}\label{ss:examples} Considering different realizable values of the three invariants in~\S\ref{ss:invariants}, we have the ten classes of equidistant given in Table~\ref{table1}. It is also possible in some of these classes to allow values of $e$ in different ranges $e<0, \ 0<e<\frac{1}{3}, \ e >\frac{1}{3}$ but this does not appear to affect the equidistant in any qualitative way. We can compute the curves in the $(p,q)$-plane along which the cusp edges or the self-intersection curves on the equidistant undergo a qualitative change. (The ten cases of the table in fact have ten distinct configurations of these curves.)
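Several of the criteria above reduce to counting real roots of the cubic $ek^3+dk^2+bk+1$, and the sign test of Proposition~\ref{prop:sheets} can be cross-checked numerically against such a count. The following rough sketch (Python; the brute-force sign-change scan is ours and is adequate only for well-separated roots inside the scanned interval) verifies the two examples shown in Figure~\ref{fig:SI5ab}.

```python
def predicted_sheets(b, d, e):
    # Sign test of Proposition (sheets): 1 sheet if the displayed
    # quadratic in e is > 0, three sheets if it is < 0
    Q = 27*e**2 + 2*b*(2*b**2 - 9*d)*e + d**2*(4*d - b**2)
    return 1 if Q > 0 else 3

def real_root_count(b, d, e, lo=-50.0, hi=50.0, n=200000):
    # Brute-force count of sign changes of e k^3 + d k^2 + b k + 1
    f = lambda k: ((e*k + d)*k + b)*k + 1
    count, prev = 0, f(lo)
    for i in range(1, n + 1):
        cur = f(lo + (hi - lo)*i/n)
        if prev*cur < 0:
            count += 1
        prev = cur
    return count

# The two examples of Figure (SI5ab); only (b, d, e) enter:
assert predicted_sheets(8, -3, -1) == real_root_count(8, -3, -1) == 3
assert predicted_sheets(-8, -3, -1) == real_root_count(-8, -3, -1) == 1
```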
\begin{table}[!ht] \begin{center} \begin{tabular}{|c|c|c|c|r|r|r|r|}\hline Class & Cusp edges & self-int &Subcase &$b$&$c$&$d$&$e$ \\ &&& (Prop.~\ref{prop:case2near}) &&&& \\ \hline\hline I & 0 & 0 & $++$ & 8 & 4 & $-3$ & 1 \\ \hline II & 0 & 1 & $+-$ & 8 & $-4$ & $-3$ & $\frac{1}{6}$ \\ \hline III & 0 & 2 & $- -$ & 8 & $-4$ & $-3$ & $-1$ \\ \hline IV & 2 & 0 &$+-$ & $-13$ & 6 & $-3$ & $-5$ \\ \hline V & 2 & 1 & $- -$ & 1&2&3& $-1$ \\ \hline VI & 2&2& $+-$ & 8 & 4 & $-3$ & $\frac{1}{6}$ \\ \hline VII &2&3&$- -$ & $-13$ & $-6$ & 1 & $\frac{1}{6}$ \\ \hline VIII&2&3&$+-$&$-8$&4&1&$\frac{1}{6}$ \\ \hline IX & 4 &1&$+-$&$-8$&4&$-3$&$-1$ \\ \hline X& 4&3&$+-$&$-8$&6&$-3$&10 \\ \hline \end{tabular} \end{center} \caption{\small Ten distinct classes of Case 2, giving all possible realizations of the three invariants of \S\ref{ss:invariants}, and examples of values of $b,c,d,e$ which realize these invariants. The fourth column refers to the `non-special' type which results from changing $\lambda$ slightly from the degenerate value.} \label{table1} \end{table} \begin{figure}[!ht] \begin{center} \scalebox{1.4}{\includegraphics[width=3in]{II-III-IV-VI.png}} \end{center} \vspace*{-1cm} \caption{\small Classes II, III, IV and VI from Table~\ref{table1}, for $p=q=0$. The origin is marked for Class VI, where there are two very narrow swallowtails passing through the origin, contributing two cusp edges and one self-intersection, and the other self-intersection is visible where the sheets pass through one another. } \label{fig:II-III-IV-VI} \end{figure} We shall now give more detail on Class~II of the table, showing how the cuspidal edges and self-intersections of the equidistant evolve as $(p,q)$ in (\ref{eq:s1u1}) makes a circuit of the origin.
Figure~\ref{fig:Case9-clock5} shows the transformations in the cuspidal edge as $(p,q)$ moves in such a circuit and Figure~\ref{fig:Case9-WF-clock4.1} gives schematic diagrams of the corresponding equidistants, indicating their self-intersections and cusp edges. We use the following labelling on these figures to indicate transitions (perestroikas) in the structure of the equidistant. \begin{notation}\label{not} {\rm \noindent $A_2^{++}, A_2^{- -}, A_2^{+ -}$ refer to Subcases 1.1.1, 1.1.2 and 1.1.3, as in Proposition~\ref{prop:def-indef}. The corresponding transitions have also been described as `Zeldovich's pancakes' or the `flying saucer', the `hyperbolic transformation of an edge', and `the death of a compact component of an edge', respectively. See also \cite{Gory,Gory-S}. \smallskip\noindent $A_3^+, A_3^-$ refer to the `swallowtail-lips' and `swallowtail-beaks' singularity respectively. \smallskip\noindent $D_4^-$ refers to the `pyramid' singularity (and $D_4^+$ would similarly be the `purse' singularity). \smallskip\noindent $TA_1^{3,1}$, so called in \cite{Gory-S,Sul} (see also \cite{Gory}), refers to the situation where three smooth sheets of the equidistant are pairwise transversal to each other, but the curve of intersection of any two of them is tangent to the third sheet at the moment of bifurcation. } \end{notation} \begin{figure}[!ht] \begin{center} \scalebox{1.4}{\includegraphics[width=3in]{Case9-clock5.png}} \end{center} \vspace*{-1cm} \caption{\small Pre-images of the cuspidal edges on the equidistants in Class II of Table~\ref{table1} for unfolding parameters $(p,q)$ making a circuit of the origin. The colours correspond to either the two parts of a hyperboloid of two sheets as in \S\ref{ss:param} or to the two parts into which a hyperboloid of one sheet is cut by the plane through the `waist'. For the labelling of transitions, see Notation~\ref{not}.
} \label{fig:Case9-clock5} \end{figure} \begin{figure}[!ht] \begin{center} \scalebox{1.4}{\includegraphics[width=4in]{Case9-WF-clock4-1.png}} \end{center} \vspace*{-1cm} \caption{\small Schematic diagram of the equidistants for Class II of Table~\ref{table1}, with the unfolding parameters $(p,q)$ making a circuit of the origin. The figure shows cuspidal edges (thick lines) and self-intersections (thin lines) with solid and dashed curves indicating visibility from one direction. For the labelling, see Notation~\ref{not}. } \label{fig:Case9-WF-clock4.1} \end{figure} \section{Conclusion and further work}\label{s:conc} There have been many recent studies of singularities of (affine) equidistants of surfaces. For a single equidistant of a fixed surface, the generic singularities are $A_1, A_2, A_3$ (see for example \cite{GZ,domitrz2}); for a fixed surface, but allowing the ratio $\lambda$ defining the equidistant to vary, the generic singularities are now $A_1$ (smooth surface), $A_2$ (cusp edge), $A_3$ (swallowtail), $A_3^\pm$ (swallowtail beaks/lips transition), $A_4$ (butterfly) and also $D_4^\pm$ (purse/pyramid) (compare \cite{GZ0}). The context of the present paper is to extend this to 1-parameter families of surfaces, the parameter in the family being $\varepsilon$ in our notation, so that there are now two parameters to consider, $\lambda$ and $\varepsilon$. The particular degeneracy in the $\varepsilon$ family studied here comes from a `supercaustic chord', that is a chord joining two parabolic points with parallel tangent planes and parallel asymptotic directions. This occurs generically only in a 1-parameter family of surfaces. Along such a chord there may be special values of $\lambda$ where singularities become more degenerate, depending on the relative local geometry of the surface patches at the ends of the chord. When two such special values exist (our Case~1.2) this corresponds to the intersection of an $A_3$ stratum with the supercaustic. 
In addition, there always exists a value of $\lambda$, which we call the degenerate Case~2. This corresponds to the intersection of a $D_4$ stratum with the supercaustic, and we elucidate ten geometrically distinct cases. Our paper also gives a natural geometric setting for many singularity types which belong to the list of corank~1 maps from $\mathbb R^3$ to $\mathbb R^3$ (\cite{Marar-Tari,Gory}), with the addition of a quadratic term in the extra variable which does not affect the critical set. The cases where equidistants are defined by $\lambda =0 $ or 1 remain to be studied. A second natural 1-parameter family of surfaces is derived from the `tangential' case in which two surface pieces share a common tangent plane (see for example \cite{GZ}); here boundary singularities occur in the generic case, so that making one contact point parabolic in a 1-parameter family will introduce additional boundary singularities. The full adjacency diagram for singularities of equidistants of 1-parameter families of surfaces, not restricted to the supercaustic case, also remains to be found. \bigskip \noindent {\sc Acknowledgement} \ We are grateful to Aleksandr Pukhlikov for helpful discussions on calculating self-intersections.
\section{Introduction} The two major laws for the earthquake statistics are the Omori-Utsu law \cite{OmoriMain1,Utsu} and the Gutenberg-Richter (GR) law \cite{GR}. The latter describes the frequency of earthquakes with respect to their magnitude and the former describes the rate of aftershocks decreasing in a power-law fashion with the time elapsed from a mainshock: \begin{align}\label{intro1} n(t)=\displaystyle\frac{k}{(t+c)^p}, \end{align} where $n(t)$ is the aftershock rate as a function of the elapsed time $t$ from a mainshock, $p$ and $k$ positive constants, and $c$ the time constant. The exponent $p$ varies from $0.7$ to $1.6$ \cite{Utsu,OmoriReview}, while the $c$-value may depend on the mainshock magnitude and a magnitude cutoff for aftershocks \cite{Omori1}. Generally, the precise estimate of $c$-value is difficult because it is strongly affected by the detection ability of aftershocks, which is degraded by the mainshock coda. Nevertheless, careful analyses have revealed that the $c$-value takes a definite non-zero value \cite{Peng,Enescu} and exhibits a systematic dependence on the faulting geometry \cite{Narteau}. As the faulting geometry correlates with the differential stress on faults, it also implies the stress dependence of the $c$-value: it decreases for larger stress \cite{Narteau}. Interestingly, the statistics for micro-fracture events in the laboratory scale shares many aspects with those for earthquakes: the GR \cite{Scholz} and the Omori-Utsu laws \cite{Hirata,Schubnel}. Moreover, in creep tests, in which a constant stress is applied to a specimen, the strain rate decreases in a power-law fashion resembling the Omori-Utsu law \cite{Andrade}. This stage is referred to as the primary creep and it is followed by the secondary creep with nearly time-independent strain rate. In the subsequent tertiary creep, the strain rate increases in a power-law fashion, which leads to breakdown of a specimen.
This power-law acceleration of the strain rate is described by the inverse Omori law: \begin{equation} \dot{\epsilon}(t) \propto (t_f - t+c')^{-p'}, \end{equation} where $t_f$ is the time of breakdown, $c'$ the time constant, and $p'$ the positive exponent. However, the inverse Omori law is not usually observed for earthquakes \cite{Bouchon}, whereas it is common in material failure. In their original form, the Omori-Utsu and the inverse Omori laws describe the rate of micro-fracture events but they can be reinterpreted in terms of the strain rate. In this study, we refer to these laws in terms of the strain rate. There have been some simple models that can reproduce these power-law behaviors. Most of them are variants of the fiber bundle model \cite{Daniels,RevModPhys82,Book}. This is an assembly of fictitious fibers that support the mechanical load in parallel. Each fiber has its own failure threshold, which is randomly set according to a specific probability distribution function. Aiming at reproducing creep-like behaviors, some studies adopt probabilistic rules for the elementary failure process, which may model thermal activation processes \cite{Ciliberto,Pradhan2003,Shcherbakov,Ben-Zion,Saichev} or introduce additional variables that may correspond to the accumulated damage in the fibers \cite{Danku}. These attempts, which are regarded as the extension of the original model \cite{Daniels}, may be legitimate because creep involves thermal activation processes as its microscopic origin. In contrast, however, Pradhan and Hemmer adopted a simple deterministic model to show that it is sufficient to reproduce creep-like behaviors qualitatively \cite{Pradhan2007}. On the other hand, they did not discuss the power-law behaviors such as the Omori-Utsu and the inverse Omori laws. Additionally, their analysis is limited to the mean-field limit and a particular strength of disorder.
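The rate (\ref{intro1}) only approaches a pure power law for $t \gg c$, which is one reason the $c$-value is hard to estimate from early aftershocks. A minimal numerical sketch (Python; the parameter values are illustrative only, not taken from any catalog):

```python
import math

def omori_rate(t, k=100.0, c=10.0, p=1.1):
    # Omori-Utsu aftershock rate n(t) = k / (t + c)^p; sample parameters
    return k / (t + c)**p

def loglog_slope(t1, t2, **kw):
    # Apparent power-law exponent of n(t) between times t1 and t2
    return ((math.log(omori_rate(t2, **kw)) - math.log(omori_rate(t1, **kw)))
            / (math.log(t2) - math.log(t1)))

# For t >> c the apparent slope approaches -p ...
assert abs(loglog_slope(1e3, 1e4) + 1.1) < 0.01
# ... while for t < c the decay is much flatter, masking the true p
assert loglog_slope(1.0, 10.0) > -0.5
```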
In the present study, we investigate the time evolution toward breakdown in a simple fiber bundle model under a constant load for both the mean field and the local stress concentration cases. Although the model does not include any thermal activation process, it resembles most properties observed in creep tests including the above-mentioned three stages. Particularly, the Omori-Utsu and the inverse Omori laws are reproduced and the exponent $p$ and the $c$-value are obtained. We show the dependence of $c$-value on the external load and disorder in the system. In the next section, we provide a brief discussion of the model followed by the analytical results given in section III. The numerical results for the mean field limit as well as the local load sharing model are discussed in detail in section IV. In the final section we discuss our findings and outline directions for future work. \section{Description of the model} Here we adopt a fiber bundle model as in the previous studies. Initially, the $L$ intact fibers support the load $F$ in parallel, resulting in a stress of $f=F/L$ on each single fiber. Each fiber has its own fracture strength chosen randomly from a certain distribution. The dispersion of the fracture strength characterizes the disorder in the model. If the applied stress exceeds any of the threshold values, the corresponding fibers break irreversibly. After a rupture event, the load that has been supported by the broken fibers is redistributed among the remaining intact fibers. In the literature, two kinds of redistribution models are commonly used: (i) the global load sharing (GLS) model, in which the load is redistributed equally among all the other surviving fibers \cite{Pierce,Daniels} and (ii) the local load sharing (LLS) model, in which the load is redistributed only to the surviving neighbors \cite{Phoenix,Smith,Newman,Harlow2,Harlow3,Smith2}.
The GLS model is regarded as the mean-field model as the range of load redistribution is infinite. In both models, the stress on the unbroken fibers increases upon redistribution of the load and therefore a single rupture event can cause a cascade of further rupture events. At a given load of $F$, the system eventually comes to a stable state after a cascade of ruptures, otherwise all the fibers fail. In the former case, the load is increased slightly for a system to reach another stable state with fewer surviving fibers at larger stress. Such an increment process may be repeated until all the fibers break after a series of cascades. The repetition of the force increments enables one to define the critical load $F_c$, at which all the fibers are broken; namely, there are no stable states above the critical stress $F_c$. The value of $F_c$ depends on the disorder in the system \cite{sroyepl}, described by dispersion in the strength of fibers. Note that it also depends on the system size $L$ in the local load sharing limit \cite{Pradhan2003,llssystemsize1,sroy}. In this study, we investigate the time evolution of the model at a constant load, which is slightly above the critical value. The system is eventually led to breakdown under such a large load but we can still investigate the dynamics towards breakdown by introducing the relaxation time of the load redistribution. Namely, the breaking of fibers and the following load redistribution should take the finite relaxation time $\tau$, which can be regarded as a single time step. Note that this time constant is assumed to be zero in a conventional algorithm \cite{Daniels}. Here we adopt the following algorithm: the total load remains at $F(\approx F_c)$ throughout the time evolution. The initial stress is thus $F/L$ at $t=0$.
Then the fibers with strength lower than $F/L$ break and, as a result, the load is redistributed to all the remaining fibers (the GLS model) or only to the neighbors of the broken fibers (the LLS model). In either case, due to the load redistribution, some fibers are overloaded beyond their strength, resulting in further ruptures at the next time step, $t=\tau$. This defines a single time step in our algorithm and is repeated until all the fibers are broken. The relaxation time $\tau$ may in general depend on physical factors such as stress, time, or temperature. Here we regard $\tau$ as a constant for simplicity. The model is then essentially the same as that investigated by Pradhan and Hemmer \cite{Pradhan2007}. They investigated only the GLS model, but the dynamics of the system depends strongly on the nature of the load redistribution. In this paper we investigate the time evolution in both models: the GLS and the LLS. \section{Analysis on Global load sharing model} In this section the dynamics of the above-mentioned GLS model, which is the mean-field limit, is studied analytically. Note that the stress is identical for all the fibers in the GLS model. This allows one to discuss the system behavior analytically for some simple threshold distributions. Writing the threshold distribution as $p(y)$, a general expression for the number of remaining fibers after the $i^{th}$ $(i=1, 2, \cdots)$ redistribution is \begin{equation} L_{i} = L_{0} - \int_0^{f_{i-1}} L_0 p(y)dy, \end{equation} where $L_{0}$ is the initial number of fibers, $L_{i}$ the number of fibers surviving after the $i^{th}$ redistribution, and $f_{i-1}$ the force per fiber at the previous time step $i-1$. This can be rewritten in terms of the fraction $n_i = L_i/L_0$: \begin{equation} n_{i} = 1 - \int_0^{f_{i-1}} p(y)dy. \end{equation} Using $n_i = f_0/f_i$, which follows from the conservation of the total load, $F = f_i L_i = f_0 L_0$, the above equation is rewritten in terms of $f$.
\begin{equation} \label{recursive} f_{i} = \frac{f_0}{1 - \int_0^{f_{i-1}} p(y)dy} = \frac{f_0}{\int_{f_{i-1}}^{\infty} p(y)dy}. \end{equation} This is the recursive relation for $f$. One can also consider a differential equation by using $(f_{i+1}-f_{i})/\tau \simeq \dot{f}$, where $\tau$ is the duration of one time step, $i$ to $i+1$: \begin{equation} \label{ODE} \tau\dot{f} = \frac{f_0}{1 - \int_0^{f} p(y)dy} - f. \end{equation} Therefore, the time evolution of the present system is solely determined by the threshold distribution $p(f)$ and the initial condition $f_0$. As $f_0=F/L_0$, choosing $f_0$ is equivalent to determining the external load $F$. Note that $f$ should be proportional to the strain of the system, as the elastic modulus is assumed to be identical for all the fibers. Therefore, $\dot{f}$ should be proportional to the strain rate. \subsection{Uniform threshold distribution} For a uniform threshold distribution defined on $[f_{\rm max}-\delta, f_{\rm max}]$, the integral in Eq. (\ref{ODE}) is easily evaluated to give \begin{equation} \tau \dot{f} = \frac{f_0\delta}{f_{\rm max} - f} -f. \end{equation} This is rewritten as \begin{equation} \tau \dot{f} =\frac{\left( f-\frac{f_{\rm max}}{2}\right)^2+f_0\delta-\frac{f_{\rm max}^2}{4}}{f_{\rm max} - f}, \end{equation} and further simplified as \begin{equation} \label{nondim} \tau \dot{x} =\frac{\left( x-\frac{1}{2}\right)^2+\zeta}{1-x}, \end{equation} where \begin{eqnarray} x &:=& \frac{f}{f_{\rm max}},\\ \label{epsilon} \zeta &:=& \frac{f_0\delta}{f_{\rm max}^2}-\frac{1}{4}. \end{eqnarray} Choosing $\tau$ as the time unit, we realize that there is only one non-dimensional parameter, $\zeta$, that controls the time evolution of $x$. \begin{itemize} \item If $\zeta$ is non-positive, Eq. (\ref{nondim}) has steady-state solutions $x=1/2\pm \sqrt{-\zeta}$: $x=1/2 - \sqrt{-\zeta}$ is the stable fixed point and the other is unstable.
Starting from an initial condition $x_0 < 1/2 - \sqrt{-\zeta}$, the system relaxes to the stable fixed point exponentially. Although the time derivative of $x$ is negative for $x$ between these two fixed points, it should be interpreted as $\dot{f}=0$ because the system is essentially irreversible. \item At $\zeta=0$, a saddle-node bifurcation occurs: the two fixed points merge and annihilate. This bifurcation is actually present for more general threshold distributions, which therefore exhibit common behaviors near the bifurcation point. \item For positive $\zeta$, there is no fixed point and the system undergoes breakdown. The exact solution of Eq. (\ref{nondim}) is given as \begin{align} \label{solution_uniform} t_m-t = &\frac{1}{2}\log\left[\left(\frac{1}{2}-x\right)^2 + \zeta\right] \nonumber \\ &+\frac{1}{2\sqrt{\zeta}} \tan^{-1}\left(\frac{1/2-x}{\sqrt{\zeta}}\right), \end{align} where $t_m$ is a constant of integration. Here we consider a system close to the bifurcation point, $\zeta \ll 1$. Then the first term is negligible and one obtains the following expression: \begin{equation} \label{solution_uniform_tan} x \simeq \frac{1}{2} + \sqrt{\zeta}\tan[2\sqrt{\zeta}(t-t_m)]. \end{equation} Because the above equation should give $x=x_0$ at $t=0$, \begin{equation} t_m \simeq \frac{1}{2\sqrt{\zeta}}\tan^{-1} \left(\frac{1/2-x_0}{\sqrt{\zeta}}\right). \end{equation} The time evolution of the system is fully described by Eqs. (\ref{solution_uniform}) or (\ref{solution_uniform_tan}). A practically important quantity is the time of breakdown, $t_f$, at which the number of surviving fibers vanishes; namely, the force per fiber diverges. From Eq. (\ref{solution_uniform}), the time of breakdown is given by \begin{equation} \label{tf} 2\sqrt{\zeta}(t_f-t_m)=\pi/2. \end{equation} \end{itemize} Importantly, Eq.
(\ref{solution_uniform}) implies both the Omori-Utsu and the inverse Omori laws for the primary and the tertiary stages, respectively.\\ \textit{\textbf{The Omori-Utsu law}} : The primary stage is characterized by $x_0 < 1/2$. In this case $(1/2-x_0)/\sqrt{\zeta} \gg 1$ and therefore $t_m\simeq \pi/(4\sqrt{\zeta})$. We can thus write \begin{equation} \label{expand_a} 2\sqrt{\zeta} t_m = \frac{\pi}{2} - g(x_0), \end{equation} where $g(x_0) > 0$. Inserting Eq. (\ref{expand_a}) into Eq. (\ref{solution_uniform_tan}), \begin{equation} \label{solution_uniform2} x \simeq \frac{1}{2} - \frac{\sqrt{\zeta}}{\tan[2\sqrt{\zeta}t+g(x_0)]} \simeq \frac{1}{2} - \frac{1}{2t+g(x_0)/\sqrt{\zeta}} \end{equation} for small $t$. Taking the initial condition into account, this leads to \begin{equation}\label{ref1} x \simeq \frac{1}{2} - \frac{1}{2[t+1/(1-2x_0)]}. \end{equation} Therefore, \begin{equation}\label{MainOmori} \dot{x} \simeq \frac{1}{2[t+1/(1-2x_0)]^2}. \end{equation} This is the Omori-Utsu law with $p=2$ and \begin{equation}\label{c-value} c=\displaystyle\frac{1}{1-2x_0}. \end{equation} We are thus led to a concrete expression for the $c$-value.\\ \textit{\textbf{The inverse Omori law}} : Inserting Eq. (\ref{tf}) into Eq. (\ref{solution_uniform_tan}) and rewriting the time $t$ as $t= t_f - t'$, one obtains \begin{equation} x \simeq \frac{1}{2} + \sqrt{\zeta}\tan\left[\frac{\pi}{2}-2\sqrt{\zeta}t'\right] \simeq \frac{1}{2} + \frac{1}{2(t_f -t)}. \end{equation} This leads to the accelerating creep in the tertiary regime: \begin{equation}\label{MainInvOmori} \dot{x}\simeq \frac{1}{2(t_f -t )^2}. \end{equation} Note that the $c$-value is not visible in this tertiary stage, whereas a non-zero $c$-value is obtained in the primary stage. \subsection{General relation between saddle-node bifurcation and the power-law behaviors} We now discuss more general threshold distributions by taking advantage of the saddle-node bifurcation. First we discuss the nature of the fixed points, which are the solutions of the following equation.
\begin{eqnarray} \label{fixedpoints} f &=& \Phi(f),\\ \label{Phi} \Phi(f) &=& \frac{f_0}{\int_{f}^{\infty} p(y)dy}. \end{eqnarray} Since $p(y)$ is positive, $\Phi(f)$ is a monotonically increasing function of $f$. Because $\Phi(0) = f_0 > 0$, Eq. (\ref{fixedpoints}) may have some solutions. \begin{figure}[ht] \centering \includegraphics[width=7cm, keepaspectratio]{schematic.eps} \caption{Variation of the function $\Phi(f)$ with increasing force per fiber, $f$. The time evolution is given by $f_{i+1} = \Phi(f_i)$ and $f_0$ is the initial condition for $f$. Note that $\Phi(0)=f_0$. The plots are shown for $f_0<f_0^{\ast}$, $f_0=f_0^{\ast}$ and $f_0>f_0^{\ast}$ (from the bottom to the top), respectively.} \label{schematic} \end{figure} Let us suppose that a saddle-node bifurcation occurs at a critical initial stress, $f_0=f_0^*$. Then, because $\Phi(f)$ is tangent to the line $\Phi=f$ at the bifurcation point, one can consider an expansion of $\Phi(f)$ around $f=f_c$: \begin{equation} \label{expansion} \Phi(f) \simeq f_c + (f-f_c) + a (f-f_c)^2 +\cdots , \end{equation} where $a := \frac{1}{2}\partial^2\Phi/\partial f^2|_{f_c}$ is assumed to be positive. If $f_0$ is only slightly larger than $f_0^*$, one may write \begin{equation} \label{expansion2} \Phi(f) \simeq \zeta + f_c + (f-f_c) + a (f-f_c)^2 +\cdots , \end{equation} where $\zeta > 0$. Truncating the above expansion at the second order, one can write an approximate time evolution equation: \begin{equation} \label{truncation} \tau\dot{f} = \Phi(f) - f \simeq \zeta + a (f-f_c)^2. \end{equation} This equation is integrated by separation of variables and gives \begin{equation} \frac{\sqrt{a\zeta}}{\tau}(t-t_1) = \arctan\left[\sqrt{\frac{a}{\zeta}}(f -f_c)\right] - \arctan\left[\sqrt{\frac{a}{\zeta}}(f_1-f_c)\right], \end{equation} where $f$ and $f_1$ denote $f(t)$ and $f(t_1)$, respectively.
Choosing $t_1=t_m$ with $f(t_m)=f_c$, this equation reduces to \begin{equation} \label{solution_general} f(t) = f_c + \sqrt{\frac{\zeta}{a}}\tan\left[\frac{\sqrt{a\zeta}}{\tau}(t-t_m) \right], \end{equation} which is identical in form to Eq. (\ref{solution_uniform_tan}). Therefore, one can obtain the Omori-Utsu and the inverse Omori laws in the same manner as shown in the previous subsection. In particular, \begin{equation} \dot{f} = \frac{\tau/a}{\left[ t + \frac{\tau}{a (f_c-f_0)} \right]^2}. \end{equation} Therefore, the exponent $2$ should be robust for a wide class of systems that undergo saddle-node bifurcation. The time constant $c$ for the Omori-Utsu law is given by \begin{equation} \label{c-value_general} c = \frac{\tau}{a (f_c-f_0)}. \end{equation} In the above discussion, the positivity of $a$ is crucial. Note also that these power-law behaviors are realized only in a finite range of $f$ where the expansion of Eq. (\ref{expansion}) can be truncated at the second order. Namely, Eq. (\ref{truncation}) must hold in a sufficiently wide range of $f$ for the realization of power-law behaviors. This implies that the saddle-node bifurcation itself is not a sufficient condition for the power-law behaviors. The valid range of the quadratic approximation and the sign of $a$ depend on the details of $p(f)$, and therefore we discuss some examples in the next subsection. Importantly, in some special cases the inverse Omori law can be observed without saddle-node bifurcation. Therefore, the bifurcation is not a necessary condition for power-law behavior either. \subsection{Power-law distribution}\label{Powerlaw_Analytical} As the uniform threshold distribution may be somewhat artificial, one should consider other threshold distributions. Among them, the power-law distribution is particularly instructive, as the system exhibits more complex behaviors than in the uniform distribution case.
It is also important in view of geophysical systems, as the heterogeneities in solid-earth systems are often fractal. The threshold distribution $p(f)$ is proportional to $f^{-\alpha}$ ($\alpha>0$) within the range $[f_{\min}, f_{\max}]$ and vanishes otherwise. For $\alpha\neq 1$, the distribution reads \begin{equation} p(f) = \frac{1-\alpha}{f_{\max}^{1-\alpha}- f_{\min}^{1-\alpha}}f^{-\alpha}. \end{equation} This leads to \begin{equation} \Phi (f) = \left\{ \begin{array}{l} \frac{f_{\min}^{1-\alpha}- f_{\max}^{1-\alpha}}{f^{1-\alpha}- f_{\max}^{1-\alpha}} f_0, \ (f \ge f_{\min})\\ \\ f_0. \hspace{2cm} (f < f_{\min}) \end{array} \right. \end{equation} Then the time evolution equation is given by \begin{equation} \label{TE_powerlaw} \tau\dot{f} = \frac{f_{\min}^{1-\alpha}- f_{\max}^{1-\alpha}}{f^{1-\alpha}- f_{\max}^{1-\alpha}} f_0 - f, \end{equation} and therefore the fixed points are the solutions of \begin{equation} \label{FP_powerlaw} f (f^{1-\alpha}- f_{\max}^{1-\alpha}) = f_0 (f_{\min}^{1-\alpha}- f_{\max}^{1-\alpha}), \end{equation} where $ f_{\min}\le f < f_{\max}$. Since the right-hand side of the above equation is a constant, the nature of the fixed points depends on the behavior of the left-hand side, which in turn depends largely on the exponent $\alpha$. As explained below, the saddle-node bifurcation occurs only for $\alpha <2$, and therefore the power-law behavior is not expected for $\alpha \ge 2$. Nevertheless, the inverse Omori law is observed at $\alpha=2$. This illustrates that the bifurcation is not a necessary condition for the power-law behaviors. \begin{itemize} \item For $\alpha>2$, the left-hand side of Eq. (\ref{FP_powerlaw}) decreases monotonically from infinity to negative infinity as $f$ varies from zero to infinity. Therefore, in view of Eq. (\ref{TE_powerlaw}), there must be one unstable fixed point. In this case the system fails quickly starting from any $f_0$ larger than a critical value, $f_{0}^*$.
The critical value is obtained by inserting $f=f_0=f_{0}^*$ in Eq. (\ref{FP_powerlaw}). Therefore, the power-law behavior is not observed for $\alpha>2$. \item For $\alpha<2$, the left-hand side of Eq. (\ref{FP_powerlaw}) is a convex function of $f$ for $0<\alpha<1$ and concave for $1<\alpha<2$. Therefore, there must be two fixed points at sufficiently small $f_0$, and they merge at a critical value of $f_0$; namely, a saddle-node bifurcation occurs. This bifurcation point is determined by combining Eq. (\ref{FP_powerlaw}) and \begin{equation} \label{CP_powerlaw} \Phi'(f) = (\alpha-1)\frac{f_{\min}^{1-\alpha}- f_{\max}^{1-\alpha}}{(f^{1-\alpha}- f_{\max}^{1-\alpha})^2} f_0 f^{-\alpha}=1. \end{equation} Equations (\ref{FP_powerlaw}) and (\ref{CP_powerlaw}) lead to \begin{equation} f (f^{1-\alpha}- f_{\max}^{1-\alpha})\left[2-\alpha-\left(\frac{f}{f_{\max}}\right)^{\alpha-1}\right]=0, \end{equation} which gives \begin{equation} f = (2-\alpha)^{1/(\alpha-1)}f_{\max}. \end{equation} The critical initial condition is given by inserting the above equation into Eq. (\ref{FP_powerlaw}): \begin{equation} f_0 = \frac{(\alpha-1) (2-\alpha)^{(2-\alpha)/(\alpha-1)}}{f_{\min}^{1-\alpha}- f_{\max}^{1-\alpha}}f_{\max}^{2-\alpha}. \end{equation} We can thus expect the power-law behaviors of $f(t)$ for $0<\alpha<1$ and $1<\alpha<2$. This is confirmed by numerical simulation as shown in the next section. \item For $\alpha=2$, the left-hand side of Eq. (\ref{FP_powerlaw}) is a monotonically decreasing function, and therefore there exists one unstable fixed point as in the case of $\alpha>2$. Nevertheless one may observe power-law behavior. The time evolution equation reads \begin{equation} \label{FP_powerlaw_alpha2} \tau\dot{f} = \frac{f_{\min}^{-1}- f_{\max}^{-1}}{f^{-1}- f_{\max}^{-1}} f_0 - f.
\end{equation} If we choose $f_0 = (f_{\min}^{-1}- f_{\max}^{-1})^{-1}$, the above equation reduces to \begin{equation} \label{FP_powerlaw_alpha2_critical} \tau\dot{f} = \frac{f^2/ f_{\max}}{1- f/f_{\max}}, \end{equation} which has a solution of the form $f\propto (t_f-t)^{-1}$ unless $f/f_{\max}\simeq 1$. In this case, therefore, the power-law behavior is observed even though the bifurcation does not occur. \item For $\alpha=1$, the distribution reads \begin{equation} p(f) = \frac{1}{f \log\left(\frac{f_{\max}}{f_{\min}}\right)}. \end{equation} By computing $\Phi(f)$, the fixed points are given by the following equation: \begin{equation} f \log\left(\frac{f}{f_{\max}}\right) = f_0 \log\left(\frac{f_{\min}}{f_{\max}}\right). \end{equation} We can also show that the saddle-node bifurcation occurs at $f_0=f_{\max}/[e\log(f_{\max}/f_{\min})]$ and $f=e^{-1}f_{\max}$, and therefore we can expect power-law behaviors. \end{itemize} \section{Numerical results: Global load sharing model} In the following, numerical results are shown for both the GLS and the LLS models. In each case, the system comprises $10^5$ fibers and the results are averaged over $10^4$ configurations. The main aim of the numerical study is to confirm the analytical results given in the previous section, as well as to understand the time evolution of the system for more general threshold distributions. In the case of the GLS model, we have used three different threshold distributions: uniform, Weibull and power law. The numerical results are compared with the analytical ones. In the case of the LLS model, the numerical results shown here are restricted to the uniform threshold distribution. For both models (GLS and LLS), we show the strain rate as a function of time under a constant load slightly above the critical value.
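As an illustration of the algorithm of Sec. II, the constant-load GLS dynamics can be sketched in a few lines of Python. This is a minimal sketch only; the system size, the uniform-disorder strength, and the choice of a $0.1\%$ overload are illustrative and do not correspond to the parameters of our production runs:

```python
import numpy as np

def gls_time_evolution(L0=10_000, delta=0.3, overload=1.001, seed=0):
    """Discrete-time GLS dynamics: at each step (one relaxation time tau),
    all fibers whose strength is below the current stress F/L break, and
    the fixed total load F is then shared equally among the survivors."""
    rng = np.random.default_rng(seed)
    # uniform strength distribution on [0.5 - delta, 0.5 + delta]
    thresholds = np.sort(rng.uniform(0.5 - delta, 0.5 + delta, L0))

    # quasi-static critical load: F_c = max_i (L0 - i) * threshold_(i)
    Fc = np.max((L0 - np.arange(L0)) * thresholds)
    F = overload * Fc            # constant load slightly above critical

    alive = np.ones(L0, dtype=bool)
    history = []                 # number of intact fibers after each step
    while alive.any():
        stress = F / alive.sum()
        breaking = alive & (thresholds < stress)
        if not breaking.any():   # stable state (only possible for F <= F_c)
            break
        alive &= ~breaking
        history.append(alive.sum())
    return np.array(history)

L_t = gls_time_evolution()       # decreases monotonically to zero
```

The strain and strain rate analyzed below then follow as $F/L_t$ and its discrete time derivative.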
Using the number of unbroken fibers after the $t^{th}$ redistribution (at time $t$), denoted by $L_t$, the strain $\epsilon(t)$ is represented as $F/L_t$, with $F$ the force applied to the system. This is indeed identical to the force per fiber $f$ in the GLS model. In the LLS model, however, this strain should be interpreted as an average value (averaged over all surviving fibers), as the force per fiber is inhomogeneous. In the same manner, the strain rate is given as the time derivative of the average strain: $\dot{\epsilon}_t=F/L_{t+1}-F/L_t$. Again, this is identical to $\dot{f}$ in the GLS model, whereas it is the averaged value in the LLS model. \subsection{Uniform threshold distribution: Comparison of analytical and numerical results} Here the threshold of each fiber is chosen from a uniform distribution defined on the interval $[0.5-\delta, 0.5+\delta]$. Fig.~\ref{Omori_Analytic}(a) shows the creep-like behavior observed under such a critically stressed condition. The behavior exhibits all three stages: primary (red), secondary (green) and tertiary (blue). The strain rate in the primary and tertiary stages is examined more closely in Fig.~\ref{Omori_Analytic}(b) and Fig.~\ref{Omori_Analytic}(c), respectively. \begin{figure}[ht] \centering \includegraphics[width=6cm, keepaspectratio]{Omori_Analytic.eps} \caption{(Color online) (a) Behavior of the strain, showing the primary, secondary and tertiary stages in the time evolution normalized by the failure time $t_f$. (b) \& (c) Variation of the strain rate with time in the primary and tertiary stages, respectively, along with the comparison with the analytical findings (lines with no points).
(d) Variation of the $c$-value with $\delta$, the strength of disorder, in the primary stage.} \label{Omori_Analytic} \end{figure} The time evolution of the strain rate in the primary stage (Fig.~\ref{Omori_Analytic}b) follows the Omori-Utsu law and agrees well with the analytical expression (solid lines drawn according to Eq.~\ref{MainOmori}). Also in the tertiary stage, the numerical results match the inverse Omori law given by Eq.~(\ref{MainInvOmori}). We will discuss this in detail later in this paper. Fig.~\ref{Omori_Analytic}d compares the analytical (see Eq.~\ref{c-value}) and numerical $c$-values at different degrees of disorder, under the condition $f=f_c$. A probable reason for the discrepancies between the analytical and numerical results is the approximation made in Eq.~(\ref{ref1}). \subsection{Weibull distribution} If the constituent fibers themselves are sufficiently macroscopic objects, the strength of a single fiber may obey extreme-value statistics. In this case one can consider the Weibull distribution for the fiber strength: \begin{equation} \int_f ^{\infty} p(y)dy = \exp\left[ -\left(\frac{f}{\widetilde{f}}\right)^{\beta}\right], \end{equation} where $\beta > 0$ and $\widetilde{f}$ is a constant. This leads to the time evolution equation \begin{equation} \label{ODE_Weibull} \tau\dot{x}=x_0 \exp\left( x^{\beta}\right) -x, \end{equation} where $x=f/\widetilde{f}$. Whereas the uniform distribution case is controlled by a single dimensionless parameter, the Weibull distribution case involves two dimensionless parameters, $\beta$ and $x_0$. Note that very large or very small $\beta$ values correspond to less heterogeneity, whereas intermediate values of $\beta$ may represent disordered systems. However, the parameter $\beta$ does not affect the qualitative behavior of the system, as shown below.
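Before analyzing the fixed points, it is instructive to integrate Eq. (\ref{ODE_Weibull}) numerically. The following sketch (forward Euler with $\tau=1$; $\beta=2$ and the two initial conditions are illustrative, chosen on either side of the critical value $(e\beta)^{-1/\beta}\approx0.43$ derived in the next subsection) exhibits the two regimes, relaxation to the stable fixed point versus runaway toward breakdown:

```python
import numpy as np

def integrate_weibull_ode(x0, beta=2.0, dt=1e-3, t_max=200.0, x_cap=5.0):
    """Forward-Euler integration of tau*dx/dt = x0*exp(x**beta) - x,
    with time measured in units of tau.  Stops early once x exceeds
    x_cap, signalling the runaway toward breakdown."""
    x, t = x0, 0.0
    while t < t_max and x < x_cap:
        x += dt * (x0 * np.exp(x ** beta) - x)
        t += dt
    return t, x

t_lo, x_lo = integrate_weibull_ode(x0=0.35)  # subcritical: stays bounded
t_hi, x_hi = integrate_weibull_ode(x0=0.50)  # supercritical: runs away
```

In the subcritical case the trajectory saturates at the smaller root of $x_0\exp(x^\beta)=x$, while in the supercritical case $x$ passes the bottleneck and diverges in finite time.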
\subsubsection{Saddle-node bifurcation} First, we show that a saddle-node bifurcation also occurs for the Weibull distribution, irrespective of the value of $\beta$. The fixed points of Eq. (\ref{ODE_Weibull}) must satisfy \begin{equation} x_0 \exp\left( x^{\beta}\right) = x. \end{equation} Since $x>0$, it is more convenient to take the logarithm: \begin{equation} x^{\beta}-\log x = -\log x_0. \end{equation} As the left-hand side diverges in both limits $x\to0$ and $x\to\infty$ and has a single minimum for positive $\beta$, the above equation must have two solutions at sufficiently small $x_0$. One can show that the smaller solution is the stable fixed point, whereas the larger one is unstable. At a critical value $x_0 = (e\beta)^{-1/\beta}$, these two fixed points merge and disappear. This is the saddle-node bifurcation, as in the case of the uniform threshold distribution. For $x_0 > (e\beta)^{-1/\beta}$, there is no fixed point and the force per fiber increases rapidly with time. Therefore, we can expect the power-law behaviors with the exponent $2$ discussed in the previous section. \subsubsection{Dependence on disorder $\beta$} To check the response of the model to the disorder, we have studied the variation of the strain rate ($\dot{\epsilon}$) with time at different disorder values, while the applied stress is kept constant at the critical value. \begin{figure}[ht] \centering \includegraphics[width=6cm, keepaspectratio]{Omori_Numerical_Disorder.eps} \caption{(Color online) Time evolution of the model with the Weibull threshold distribution. (a) Omori-like behavior in the primary stage with a continuous variation of disorder. (b) Variation of the $c$-value with disorder at different loading conditions. (c) Inverse Omori-like behavior close to the failure point.} \label{Omori_Numerical_Disorder} \end{figure} Figure \ref{Omori_Numerical_Disorder} shows this $\dot{\epsilon}$ versus $t$ behavior with a continuous change in $\beta$ (and hence in disorder).
Here also we observe the Omori-Utsu and inverse Omori laws in the primary and tertiary stages, respectively: \begin{itemize} \item Primary stage : $\dot{\epsilon}=\displaystyle\frac{k}{(t+c)^p}$, $p\approx1.8$. \vspace{-0.07cm} \item Tertiary stage : $\dot{\epsilon}=\displaystyle\frac{k^{\prime}}{(t_f-t)^{p^{\prime}}}$, $p^{\prime}\approx2.0$, \end{itemize} where $t_f$ is the time of breakdown of the system. Both exponents $p$ and $p^{\prime}$ agree satisfactorily with the analytical results. In the primary stage, the value of $c$ changes with a continuous variation of $\beta$; the applied stress is kept constant at the critical value. The strain rates produced at the critical stress for different $\beta$ values (see Fig.~\ref{Omori_Numerical_Disorder}a) are fitted in the primary stage with the Omori-Utsu law to extract the $c$-values. The variation of the $c$-value with $\beta$ is shown in Fig.~\ref{Omori_Numerical_Disorder}(b) at different loading conditions. $\Delta f$ denotes the deviation of the applied stress from the critical value $f_c$; $\Delta f=0$ corresponds to the critical loading condition. A positive $\Delta f$ means that the model is overloaded, while a negative $\Delta f$ corresponds to a situation where the applied load is less than the critical one. $c$ attains a higher value at both low and high $\beta$ values, and hence in the low-disorder limit. This non-monotonic behavior is most prominent when the system is increasingly overloaded ($\Delta f>0$). For $\Delta f \approx0$ or $\Delta f<0$, we have to go to relatively higher values of $\beta$ to observe this increase in the $c$-value. Finally, Fig.~\ref{Omori_Numerical_Disorder}(c) shows the strain rate close to the failure point. When $t$ approaches $t_f$, $\dot{\epsilon}$ increases in a scale-free manner with an exponent $-2$, independent of the disorder introduced in the model. Also, as discussed before, the $c$-value vanishes here.
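The fitting procedure for $p$ and $c$ can be illustrated with the uniform-distribution recursion, Eq. (\ref{recursive}). In the sketch below (Python; the choices $f_{\max}=\delta=1$, the tiny overload, and the fitting window are made by hand for illustration), the primary-stage rate fitted on log-log axes yields an exponent close to the analytic value $p=2$:

```python
import numpy as np

# GLS recursion f_{i+1} = f0/(1 - f_i) for uniform thresholds on [0, 1]
# (f_max = delta = 1); the critical initial stress is f0 = 1/4.
f0 = 0.25002                    # zeta = f0 - 1/4 = 2e-5: slightly overloaded
f = [f0]
while f[-1] < 0.99:
    f.append(f0 / (1.0 - f[-1]))
f = np.array(f)
rate = np.diff(f)               # strain rate per time step

c_analytic = 1.0 / (1.0 - 2.0 * f0)     # Eq. (c-value) with x0 = f0
t = np.arange(len(rate))
window = (t >= 5) & (t <= 40)           # primary-stage window (by eye)
slope, _ = np.polyfit(np.log(t[window] + c_analytic),
                      np.log(rate[window]), 1)
p_fit = -slope                          # close to the analytic p = 2
```

Deviations of the fitted exponent from 2 arise because the quadratic approximation around the bottleneck is not exact over the whole fitting window, which mirrors the discrepancy between the analytic $p=2$ and the numerically observed $p\approx1.8$.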
\subsubsection{Dependence on the applied stress} Next, we investigate the effect of the applied stress more closely, focusing only on the primary stage. \begin{figure}[ht] \centering \includegraphics[width=6cm, keepaspectratio]{Omori_Numerical_Sigma.eps} \caption{(Color online) Omori-like behavior at different loading conditions. (a) \& (b) $\dot{\epsilon}$ versus $t$ for two different disorder values, $\beta=2.0$ and $\beta=1.0$. (c) Variation of the $c$-value with $\Delta f$ for $0.5<\beta<2.0$.} \label{Omori_Numerical_Sigma} \end{figure} Fig.~\ref{Omori_Numerical_Sigma}a and Fig.~\ref{Omori_Numerical_Sigma}b show the Omori-like behavior under different loading conditions. At high $\beta$ values the system responds strongly to a varying applied stress, whereas at low $\beta$ values $c$ changes very slowly with $\Delta f$. These different responses can be expressed through a continuous variation of the $c$-value with $\Delta f$. Fig.~\ref{Omori_Numerical_Sigma}c shows the $c$-value versus $\Delta f$ at different $\beta$ (hence at different disorder values). As previously discussed, at low $\beta$, $c$ starts with a relatively higher value and increases gradually with $\Delta f$. For higher $\beta$ values, $c$ attains a lower value initially but increases rather faster with $\Delta f$. Hence, the rate of change of the $c$-value with $\Delta f$ is relatively higher at higher $\beta$. \subsubsection{Variation of c-value on the $\Delta f$--disorder plane} Finally, we can now summarize the behavior of the $c$-value with respect to the disorder parameters and $\Delta f$. In Fig.~\ref{3D_Diagram}a and \ref{3D_Diagram}b we show the $c$-value in the primary stage as a function of both disorder and applied stress for the uniform and the Weibull distributions.
\begin{figure}[ht] \centering \includegraphics[width=6cm, keepaspectratio]{3D_Diagram.eps} \caption{(Color online) Variation of the $c$-value when both $\Delta f$ and the disorder are continuously varied. Results are shown for the (a) uniform and (b) Weibull distributions.} \label{3D_Diagram} \end{figure} The value of $c$ is higher at low disorder (low $\delta$ for the uniform distribution; very low or very high $\beta$ for the Weibull distribution) and gradually decreases as we go to higher disorder. At any particular $\delta$ or $\beta$, the $c$-value increases with increasing $\Delta f$. \\ \subsection{Power-law threshold distribution} Next, we carry out numerical simulations where the thresholds are chosen randomly from a power-law distribution with a variable exponent. Following the analytical results, we introduce cut-off values for the distribution. Irrespective of the value of $\alpha$, we choose a sufficiently large width that remains constant throughout the numerical simulation; here it is chosen to be $[1,10^3]$, while the exponent $\alpha$ varies between $0.5$ and $2.5$. \begin{figure}[ht] \centering \includegraphics[width=6cm, keepaspectratio]{Omori_Powerlaw.eps} \caption{(Color online) Strain rate versus time in the (a) primary and (b) tertiary stages of the creep process. The threshold distribution considered here is $p(f)\propto f^{-\alpha}$ within the window $[1,10^3]$.} \label{Omori_Powerlaw_Initial} \end{figure} This variation in the exponent essentially covers all three situations discussed in the analytical study (see Sec.~\ref{Powerlaw_Analytical}). Fig.~\ref{Omori_Powerlaw_Initial} illustrates the behavior of the strain rate in the primary and the tertiary stages, where the qualitative difference between different values of the exponent $\alpha$ is apparent: \begin{enumerate}[label=(\roman*)] \item For $\alpha<2$, we obtain a power-law decrease of the strain rate $\dot{\epsilon}$ in the primary stage.
At the same time, in the tertiary stage $\dot{\epsilon}$ increases, obeying the inverse Omori law until global failure is reached. The $p$ values in the primary and tertiary stages are $1.8$ and $2.0$, respectively. \item For $\alpha>2$, the model shows a brittle response. In this limit $\dot{\epsilon}$ increases exponentially in the primary stage, reaches global failure much more rapidly, and does not exhibit the inverse Omori law in the tertiary stage. \item At $\alpha = 2$, the power-law behavior is observed only in the form of the inverse Omori law, while the strain rate is almost constant in the primary regime. \end{enumerate} All these behaviors are consistent with the analytical results. \section{Numerical results: Local load sharing model} To include the effect of local stress concentration, we assume that the close neighborhood of a broken fiber is affected much more than the other parts of the model. For this purpose, we redistribute the stress of a broken fiber over a finite distance, known as the stress release range $R$. A recent study has shown that there exists a critical range $R_c$, above which the model shifts to the mean-field limit. The critical value depends on the system size $L$ as $R_c \sim L^{2/3}$ \cite{Biswas}. In this paper, instead of $R$, we use $\rho=R/R_c$ as the parameter, so that $\rho\ge1$ corresponds to the mean-field limit of the model. Here the uniform threshold distribution is adopted, and therefore we investigate the effects of disorder by changing $\delta$, as well as the effects of the interaction range by changing $\rho$. Again, the external load remains slightly above the critical value. \subsubsection{Role of disorder} Figure \ref{Omori_Disorder1} shows the behavior of the strain rate in the primary stage for different values of the local stress concentration parameter $\rho$: $0.93$, $0.46$ and $0.05$.
\begin{figure}[ht] \centering \includegraphics[width=6cm, keepaspectratio]{Omori_Disorder1.eps} \caption{(Color online) Variation of the strain rate with time (in the primary stage) for disorder $\delta$ ranging between $0.2$ and $0.5$, while the critical stress is applied. The study is repeated for $\rho=0.93$, $0.46$ and $0.05$.} \label{Omori_Disorder1} \end{figure} In the case of $\rho=0.05$, the stress is redistributed over a small range, whereas $\rho=0.93$ is close to the mean-field limit. As a result, the Omori-Utsu law is expected in the primary stage when $\rho$ is close to 1 (see Fig.~\ref{Omori_Disorder1}c). Interestingly, we observe that this behavior persists even for lower $\rho$, namely stronger stress localization. The $c$-value also changes with disorder in this limit. The extra feature obtained with stress localization is a varying exponent $p$ in the Omori-Utsu law. \subsubsection{Role of stress release range} Here we study the model at a constant disorder but with a varying stress release range. By varying $\rho$, the model shifts from the mean-field limit to the limit where the stress redistribution is extremely localized. \begin{figure}[ht] \centering \includegraphics[width=6cm, keepaspectratio]{Omori_Range.eps} \caption{(Color online) Plot of $\dot{\epsilon}$ versus $t$ in the primary stage at three different disorder values ($\delta=0.3$, 0.4 and 0.5). The stress is kept constant at the critical value while $\rho$ is continuously varied.} \label{Omori_Range} \end{figure} Figure \ref{Omori_Range} shows the time evolution of the strain rate for several values of $\rho$. The study is repeated for three different disorder values. The slope of the Omori-like decay clearly increases as $\rho$ is decreased. Moreover, at very low $\rho$, the exponent $p$ changes with disorder; this variation of $p$ with disorder was absent in the mean-field limit.
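A minimal sketch of one LLS relaxation step may clarify the redistribution rule. The ring geometry, the simultaneous-breaking convention, and the equal split among surviving neighbors within distance $R$ are illustrative choices (one of several possible conventions); a breaking fiber with no surviving neighbor in range simply loses its load in this sketch:

```python
import numpy as np

def lls_step(load, thresholds, alive, R):
    """One relaxation step of a 1D LLS model on a ring: every
    over-stressed fiber breaks, and its load is shared equally among
    the surviving fibers within distance R.  Returns True if any
    fiber broke during this step."""
    L = len(load)
    breaking = np.flatnonzero(alive & (load > thresholds))
    alive[breaking] = False          # all over-stressed fibers break at once
    for i in breaking:
        # surviving neighbors within the stress-release range R
        nbrs = [(i + d) % L for d in range(-R, R + 1)
                if d != 0 and alive[(i + d) % L]]
        if nbrs:                     # isolated breaks lose their load here
            load[nbrs] += load[i] / len(nbrs)
        load[i] = 0.0
    return len(breaking) > 0
```

Iterating `lls_step` until it returns False (a stable state) or until no fiber survives reproduces the cascade dynamics; one call corresponds to one relaxation time $\tau$.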
\subsubsection{Variation of c-value and exponent p} To understand the effect of such stress localization, we have studied the $c$-value and the exponent $p$ under a continuous variation of the stress release range $\rho$ between 0 and 1.5. As already mentioned, for $\rho\ge1$ the model enters its mean-field limit. \begin{figure}[ht] \centering \includegraphics[width=6cm, keepaspectratio]{Exponents_Range1.eps} \caption{(Color online) Variation of the $c$-value and the exponent $p$ with increasing stress localization. The model approaches the mean-field limit toward $\rho=1$, and $p$ approaches its mean-field value $1.8$ gradually with increasing $\rho$.} \label{Exponents_Range1} \end{figure} For a smaller interaction range ($\rho <1$), both $p$ and $c$ take large values. As $\rho$ is increased, these two quantities decrease gradually. Throughout the range $0<\rho\le1.5$, the $c$-value remains a function of the disorder $\delta$ and increases as we go to lower $\delta$ values. So, $c$ can be expressed as follows: $c=\Phi(\rho,\delta)$ for $0<\rho\le1.5$, where $\Phi$ is a decreasing function of both the stress release range $\rho$ and the disorder $\delta$. On the other hand, the exponent $p$ is a function of both the disorder $\delta$ and the stress localization $\rho$ for $\rho<1.0$. For $\rho>1.0$, $p$ takes a value of 1.8 independent of $\delta$ and $\rho$, which is the mean-field exponent for the Omori law obtained previously. The results in the mean-field limit have already been shown for three different threshold distributions: uniform, Weibull and power law. A universal behavior is observed in the time evolution of the strain rate (or force per fiber) for all three distributions. With local stress concentration, we have shown the results for the uniform distribution. The universality of these results is also checked with a Weibull distribution with shape parameter $\beta$ and a power-law distribution with exponent $-1$ ranging from $10^{-\eta}$ to $10^{\eta}$.
The parameters $\eta$ and $\beta$ control the disorder here. \section{Discussion} Here we discuss the relevance of the Omori-Utsu and the inverse Omori laws in a more general context. Although our results are presented in terms of the strain rate, they should apply to more general cases if there is a relation between the rate of micro-fracture events and the strain rate, such as $n(t) \propto \dot{\epsilon}(t)^q$, where $n(t)$ is the rate of micro-fracture events and $q$ a positive exponent. As the rupture of a single fiber may correspond to a single micro-fracture event in the present system, $n(t) \propto \dot{\epsilon}(t) L^2(t)$, where $L^2(t)$ is the number of remaining fibers at time $t$. Noting that aftershocks are caused by the abrupt stress change caused by a mainshock, the algorithm adopted here, in which a finite stress is applied to the system at $t=0$, may model such a stress change caused by a mainshock. In this sense, the Omori-Utsu law obtained in the present model mimics the dynamics after a mainshock for earthquakes to some extent. We obtain the exponent $p\simeq 2$ in the mean-field model irrespective of the other details such as the threshold distribution, whereas $p$ ranges from $0.6$ to $0.8$ in creep tests and from $0.7$ to $1.6$ for earthquakes. The difference is significant, but quantitative agreement is not necessarily expected here because of the simplicity of the mean-field model. In contrast, the difference is even larger for the LLS model. Noting that the LLS model is generally more unstable than the GLS model, we may obtain a smaller exponent for more stable systems. For instance, introducing a probabilistic rule for the elementary fracture process might lead to a smaller exponent, because it can inhibit the cascade-like instability of fracture caused by the load redistribution and thereby slow down the relaxation.
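The exponents $p$ and the $c$-values discussed above are obtained by fitting the Omori-Utsu form $\dot{\epsilon}(t) \propto (t+c)^{-p}$ to strain-rate time series. One dependency-free way to perform such a fit is a grid search over $c$ combined with a linear least-squares fit of $\log\dot{\epsilon}$ against $\log(t+c)$; the sketch below is our illustration (not the fitting procedure actually used in the paper) and recovers known parameters from synthetic noiseless data.

```python
import math

def fit_omori(times, rates, c_grid):
    """Fit rate(t) = K (t + c)^(-p) by grid search over c; for each c,
    log(rate) vs log(t + c) is a straight line with slope -p."""
    best = None
    for c in c_grid:
        x = [math.log(t + c) for t in times]
        y = [math.log(r) for r in rates]
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        slope = sxy / sxx
        intercept = my - slope * mx
        # sum of squared residuals of the linear fit in log-log space
        sse = sum((yi - (slope * xi + intercept)) ** 2
                  for xi, yi in zip(x, y))
        if best is None or sse < best[0]:
            best = (sse, c, -slope)
    _, c_fit, p_fit = best
    return c_fit, p_fit

# synthetic primary-stage data with known c = 5, p = 1.8
ts = [0.5 * k for k in range(1, 200)]
rs = [3.0 * (t + 5.0) ** -1.8 for t in ts]
c_fit, p_fit = fit_omori(ts, rs, [0.5 * k for k in range(1, 41)])
```

For noisy simulation data the grid for $c$ would simply be refined around the best value instead of hit exactly.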
The $c$-value is a characteristic time for the power-law relaxation that results from the abrupt stress loading, and therefore it is regarded as a relaxation time for stress. The elementary stress relaxation time in our model is $\tau$, which sets a single time step. It is indeed the only intrinsic time constant in our model, and therefore the $c$-value should be scaled with $\tau$ by dimensional analysis. The $c$-value is thus mostly dominated by the nature of $\tau$. For instance, if the stress relaxation time depends on the total load $F$, the analysis given in this study still applies and yields a load-dependent $c$-value. In the GLS model, the analytical expression for the $c$-value is obtained for a class of threshold distributions. Apart from the trivial dependence on $\tau$, the $c$-value depends on three parameters: $a$, $f_c$, and $f_0$. Among them, $a$ and $f_c$ are determined mostly by the threshold distribution via Eqs. (\ref{Phi}) and (\ref{expansion2}), but they also depend on $f_0$ because $\Phi(f)$ is proportional to $f_0$. Therefore, $a$ should be proportional to $f_0$. Noting that $f_0$ is proportional to the total load $F$ as $f_0 = F/L_0$, Eq. (\ref{c-value_general}) implies the load dependence of the $c$-value. Although the $c$-value is found to increase with the load in this study, in view of Eq. (\ref{c-value_general}), it can be a decreasing function of the external load if $f_c > 2 f_0$. This actually means $f_c > 2f_0^*$, and therefore it depends on the threshold distribution. This condition is not satisfied for the distribution functions investigated here, and hence the $c$-value exhibits only a positive dependence on the external load. \section{Conclusions} We have studied the time evolution of the fiber bundle model under a constant external load slightly above the critical value, with two variants of the load redistribution process: the global-load-sharing and the local-load-sharing models.
The strain rate in the primary and the tertiary stages follows the Omori-Utsu and the inverse Omori laws, respectively. In the local-load-sharing model, both the exponent $p$ and the $c$-value are decreasing functions of the disorder and the interaction range. Above a certain stress release range ($\rho>1$), the local-load-sharing model exhibits essentially the same behavior as the mean-field limit; namely, the exponent for the Omori-Utsu law attains a constant value ($\approx 1.8$) and $c$ is still a decreasing function of disorder. Despite the simplicity of the model and the absence of any thermal activation process, the system exhibits creep-like behaviors with all three stages: primary, secondary and tertiary. This in turn implies that the probabilistic rule is not essential for a power-law behavior in creep deformation.
\section{Introduction} The discoveries of huge amounts of dust grains in high-redshift quasars (Bertoldi et al.\ 2003; Priddey et al.\ 2003) have posed fundamental questions about the origin of dust in the early universe. At such an early epoch, core-collapse supernovae (CCSNe) arising from massive stars are considered to be the most promising sources of dust (e.g., Dwek et al.\ 2007). On the other hand, the contribution from asymptotic giant branch stars evolving from intermediate-mass ($M_{\rm ZAMS} \simeq$ 3--8 $M_\odot$) stars has also been invoked to explain the large content of dust in high-redshift objects (Valiante et al.\ 2009; Dwek \& Cherchneff 2011). Which stellar mass range contributes most to the dust budget in the early universe depends strongly on the initial mass function (IMF) of the stars (Valiante et al.\ 2011; Gall et al.\ 2011a, 2011b). Numerical simulations of the formation of metal-free stars have shown that the IMF of the first generation of stars, so-called Population III (Pop III) stars, would be weighted towards much higher masses than that in the present universe (Bromm \& Larson 2004; Hirano et al.\ 2014). However, the characteristic mass of Pop III stars remains to be clarified, with estimates spanning from $\sim$40 $M_\odot$ (Hosokawa et al.\ 2011; Susa 2013) up to more than 300 $M_\odot$ (Omukai \& Palla 2003; Ohkubo et al.\ 2009). In particular, Pop III stars with masses exceeding $\sim$250 $M_\odot$ emit numerous ionizing photons and finally collapse into black holes (BHs), serving as seeds of supermassive BHs. Thus, such very massive Pop III stars would have crucial impacts on the reionization of the universe and the dynamical evolution of galaxies. Even though most very massive Pop III stars are not expected to explode as supernovae (SNe), they are likely to play an important role in the chemical enrichment of the early universe.
Yoon et al.\ (2012) found that non-rotating models with $M_\mathrm{ZAMS} > 250~M_\odot$ undergo convective dredge-up of large amounts of carbon and oxygen from the helium-burning core to the hydrogen-rich envelope during the red-supergiant (RSG) phase. This may lead to enrichment of the surrounding medium with CNO elements via RSG winds. More importantly, such CNO-enriched RSG winds can serve as formation sites of dust in the early universe. In this Letter, we elaborate this new scenario of dust formation by Pop III stars, using an exemplary model with $M_{\rm ZAMS} = 500$ $M_\odot$. We show that C grains can form efficiently in the stellar wind with a constant velocity for a reasonable range of mass-loss rates and wind velocities. We also discuss the effect of the wind acceleration on dust formation. \begin{deluxetable*}{cclcccc} \tabletypesize{\scriptsize} \tablewidth{0pt} \tablecaption{Chemical Reactions for Formation of C Clusters Considered in This Paper} \tablehead{ & \colhead{key molecule} & \colhead{chemical reaction} & \colhead{$A/10^4$K} & \colhead{$B$} & \colhead{$a_0$ (\AA)} & \colhead{$\sigma$ (erg cm$^{-2}$)}} \startdata \\ (1) Model A & C & C$_{n-1}$ + C $\rightleftharpoons$ C$_{n}$ ~~~ ($n \ge 2$) & 8.3715 & 22.1509 & 1.281 & 1400 \\ \\ (2) Model B & C$_2$H & 2(C$_2$H + H) $\rightleftharpoons$ C$_{2n}$ + 2H$_2$ ~~~~~~~~~~~~~ ($n = 2$) & 8.6425 & 18.9884 & 1.614 & 1400 \\ & & C$_{2(n-1)}$ + C$_2$H + H $\rightleftharpoons$ C$_{2n}$ + H$_2$ ~~~ ($n \ge 3$) & & & \\ \enddata \tablecomments{ The key molecule is defined as the gas species whose collisional frequency is the least among the reactants. The Gibbs free energy ${\it \Delta} \mathring{g}$ for the formation of the condensate from reactants per key molecule is approximated by ${\it \Delta} \mathring{g} / k T = - A/T + B$ with the numerical values $A$ and $B$ derived by least-squares fittings of the thermodynamics data (Chase et al.\ 1985). 
The radius of the condensate per key molecule and the surface tension of bulk grains are $a_0$ and $\sigma$, respectively.} \end{deluxetable*} \section{THE MODEL} For the properties of a 500 $M_\odot$ RSG, we refer to the model sequence m500vk00 without rotation in Yoon et al.\ (2012); the average luminosity and effective temperature of this RSG are $L_* = 10^{7.2}$ $L_\odot$ and $T_* = 4,440$ K, respectively, with a stellar radius of $R_* = 6,750$ $R_\odot$. This very massive RSG undergoes convective dredge-up during helium-core burning, enriching the hydrogen envelope with a large amount of carbon and oxygen; the average number fractions of the major elements in the envelope are $A_{\rm H} = 0.701$, $A_{\rm He} = 0.294$, $A_{\rm C} = 3.11 \times 10^{-3}$, and $A_{\rm O} = 1.75 \times 10^{-3}$, leading to a high C/O ratio (C/O = 1.78). \subsection{Hydrodynamic Model of the Outflowing Gas} As the first step to assess the possibility of dust formation in a Pop III RSG wind, we consider a spherically symmetric gas flow with a constant wind velocity. In this case, the density profile of the gas flow is given by \begin{eqnarray} \rho(r) = \frac{\dot{M}}{4 \pi r^2 v_{\rm w}} = \rho_* \left( \frac{r}{R_*} \right)^{-2}, \end{eqnarray} where $\dot{M}$ is the mass-loss rate, $v_{\rm w}$ is the wind velocity, and $r$ is the distance from the center of the star. The radial profile of the gas temperature is assumed to be \begin{eqnarray} T(r) = T_* \left( \frac{r}{R_*} \right)^{-\frac{1}{2}}, \end{eqnarray} following the previous studies on dust formation in stellar winds (e.g., Gail et al.\ 1984). Mass loss from an RSG was not considered in Yoon et al.\ (2012), and both the mass-loss rate and the wind velocity are hardly known for Pop III RSGs. Given that the underlying physics of mass-loss mechanisms is not well understood, elaborate modeling of the mass-loss history is beyond the scope of this Letter.
Instead, to cover various physical conditions of the mass-loss winds, we treat $\dot{M}$ and $v_{\rm w}$ as free parameters and examine how these quantities affect the formation process of dust. In what follows, we take as fiducial values $v_{\rm w} =$ 20 km s$^{-1}$ and $\dot{M} = 3 \times 10^{-3}$ $M_\odot$ yr$^{-1}$; the latter corresponds to the constant mass-loss rate with which the 500 $M_\odot$ star loses 90\% (208 $M_\odot$) of the envelope during the last $7 \times 10^4$ yr of the RSG phase.\footnote{ We note that, even if there were no wind, floating-off of the loosely bound RSG envelope as a result of the collapse of the core into a BH at the end of its life may also lead to mass ejection of CNO elements into the interstellar medium and production of dust grains (Zhang et al.\ 2008; Kochanek 2014).} \subsection{Model of Dust Formation} The calculations of dust formation are performed by applying the formulation of non-steady-state dust formation in Nozawa \& Kozasa (2013). The formulae self-consistently follow the formation of small clusters and the growth of grains under the assumption that the collisions of key molecules, defined as the gas species with the least collisional frequency among the reactants, control the kinetics of the dust formation process. The formulae enable us to evaluate the size distribution and condensation efficiency of newly formed grains for given temporal evolutions of gas temperature and density, chemical composition of the gas, and chemical reactions for the formation of clusters. In a carbon-rich cool gas, all oxygen atoms are bound to carbon atoms to form CO molecules, and carbon atoms and/or carbon-bearing molecules left after the CO formation can participate in the formation of C clusters and grains.
The chemical equilibrium calculations along the gas flow (e.g., Kozasa et al.\ 1996) show that, for the physical and chemical conditions given above, the major carbon-bearing gas species, other than CO, is atomic carbon at $T \ga 1,750$ K and C$_2$H at $T \simeq$ 1,400--1,700 K. Thus, the formation of C clusters is expected to proceed at high temperatures through successive attachment of carbon atoms as given in the reaction (1) of Table 1, which we call Model A. Independently of this, we consider another dust formation path involving C$_2$H, for which the possible chemical reactions are given under (2) of Table 1, hereafter referred to as Model B. In the calculations, we assume that a fraction $f_{\rm C}$ of the carbon that is not locked up in CO molecules exists as carbon atoms in Model A and as C$_2$H molecules in Model B. Since $f_{\rm C}$ linearly changes the number density of carbon available for dust formation, decreasing $f_{\rm C}$ is equivalent to reducing the mass-loss rate or the C/O ratio in the envelope by the same factor. The time evolutions of the gas density and temperature are calculated by substituting $r = R_* + v_{\rm w} t$ into Equations (1) and (2). The sticking probability of gas species is assumed to be unity, and C clusters that contain more carbon atoms than $n_* = 100$ are treated as bulk grains. We refer the reader to Nozawa \& Kozasa (2013) for the formulation of the dust formation process and the detailed prescription of the calculations.
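Since the input profiles of Equations (1) and (2) are pure power laws in $r$, the thermodynamic state of an outflowing fluid element follows directly from the fiducial parameters. A minimal numerical sketch in cgs units (the fiducial values are taken from the text; the function name and structure are ours):

```python
import math

MSUN_G = 1.989e33   # solar mass [g]
RSUN_CM = 6.957e10  # solar radius [cm]
YR_S = 3.156e7      # year [s]

def wind_profile(r_over_rstar, mdot_msun_yr=3e-3, v_w_kms=20.0,
                 t_star=4440.0, r_star_rsun=6750.0):
    """Gas density [g cm^-3] and temperature [K] of the constant-velocity
    wind, Eqs. (1) and (2), at radius r = r_over_rstar * R_*."""
    r = r_over_rstar * r_star_rsun * RSUN_CM
    mdot = mdot_msun_yr * MSUN_G / YR_S
    # Eq. (1): rho = Mdot / (4 pi r^2 v_w)
    rho = mdot / (4.0 * math.pi * r * r * (v_w_kms * 1e5))
    # Eq. (2): T = T_* (r/R_*)^(-1/2)
    temp = t_star * r_over_rstar ** -0.5
    return rho, temp
```

The power-law scalings are immediate to verify: doubling the radius divides the density by four, and the temperature falls by $\sqrt{2}$.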
\begin{figure} \epsscale{1.1} \plotone{f1a.eps} \vspace{0.2 cm} \plotone{f1b.eps} \caption{ (a) Formation rate of seed clusters with $n_* = 100$, divided by the nominal concentration of the key molecules ($I_*$, solid), condensation efficiency ($f_{\rm con}$, dotted), and average grain radius ($a_{\rm ave}$, dashed) as a function of distance from the center of the star ($r/R_*$), and (b) final size distribution spectrum by mass $a^4 f(a)$ of newly formed C grains for a mass-loss rate $\dot{M} = 3 \times 10^{-3}$ $M_\odot$ yr$^{-1}$, a wind velocity $v_{\rm w} = 20$ km s$^{-1}$, and $f_{\rm C} = 1$. The thick lines are for the Model A where the chemical reaction (1) in Table 1 is considered for the formation of clusters, while the thin lines are for the Model B with the chemical reactions (2). \label{fig1}} \end{figure} \section{RESULTS OF DUST FORMATION CALCULATIONS} Figure 1 shows the results of the calculations for the fiducial case with $f_{\rm C} \dot{M} = 3 \times 10^{-3}$ $M_\odot$ yr$^{-1}$ and $v_{\rm w} = 20$ km s$^{-1}$; Figure 1(a) plots the formation rate of seed clusters with $n_* = 100$ divided by the concentration of key species without depletion due to cluster/grain formation ($I_*$), condensation efficiency ($f_{\rm con}$), and average grain radius ($a_{\rm ave}$) as a function of distance from the center of the star ($r/R_*$). Here, the condensation efficiency $f_{\rm con}(t)$ is defined as the fraction of free carbon atoms that are locked up in grains. In Model A (thick lines) and Model B (thin lines), dust grains start to form at 7.2 $R_*$ and 10.8 $R_*$, respectively, with $I_*$ being peaked around 7.5 $R_*$ and 12.3 $R_*$. In both of the models, the final condensation efficiency $f_{{\rm con}, \infty} = f_{\rm con} (t \rightarrow \infty)$ is unity. 
The final average grain radius is only a little higher for Model A ($a_{{\rm ave},\infty} = 0.025$ $\mu$m) than for Model B ($a_{{\rm ave},\infty} = 0.021$ $\mu$m); for Model B, the concentration of key molecules at the time of dust formation is lower than for Model A by a factor of $\simeq$5, but the resulting decrease in the formation rate of seed clusters is compensated by the decrease in the growth rate. As a result, the final average radius is similar in both Model A and Model B, although the lower rates (longer timescales) of both processes for Model B lead to a broader lognormal-like size distribution of grains, as seen from Figure 1(b). These results demonstrate that the final condensation efficiency and average grain radius are almost independent of the chemical reactions for the formation of C clusters in the context of this study. \begin{figure} \epsscale{1.1} \plotone{f2a.eps} \vspace{0.2 cm} \plotone{f2b.eps} \caption{ Final average radius $a_{{\rm ave},\infty}$ and final condensation efficiency $f_{{\rm con}, \infty}$ of C grains formed in the outflowing gas; (a) as a function of the product $f_{\rm C} \dot{M}$ for $v_{\rm w} = 20$ km s$^{-1}$, and (b) as a function of $v_{\rm w}$ for $f_{\rm C} \dot{M} = 3 \times 10^{-3}$ $M_\odot$ yr$^{-1}$. The thick solid lines are for Model A, while the thin dashed lines are for Model B. \label{fig2}} \end{figure} Figure 2(a) indicates the final condensation efficiency and final average radius of newly formed C grains as a function of the product $f_{\rm C} \dot{M}$ for $v_{\rm w} =$ 20 km s$^{-1}$. For both Model A and Model B, $f_{{\rm con}, \infty} = 1$ at $f_{\rm C} \dot{M} \ga 10^{-4}$ $M_\odot$ yr$^{-1}$, where the average grain radius scales as $a_{{\rm ave}, \infty} \propto (f_{\rm C} \dot{M})^{0.88}$.
On the other hand, $a_{{\rm ave}, \infty}$ is more sensitive to the wind velocity, as seen from Figure 2(b); $a_{{\rm ave}, \infty}$ is smaller for a higher $v_{\rm w}$ and scales as $a_{{\rm ave}, \infty} \propto v_{\rm w}^{-1.75}$ for $v_{\rm w} =$ 1--100 km s$^{-1}$. The increase in $v_{\rm w}$ leads to a lower gas density for a fixed $\dot{M}$ and causes more rapid cooling of the gas, both of which favor producing a number of smaller grains. The results of the calculations show that the final condensation efficiency of C grains is unity if the following condition is met: \begin{eqnarray} \left( \frac{f_{\rm C} \dot{M}}{3 \times 10^{-3} ~ M_\odot ~ {\rm yr}^{-1}} \right) \left( \frac{v_{\rm w}}{20 ~ {\rm km} ~ {\rm s}^{-1}} \right)^{-2} \ga 0.04. \end{eqnarray} Thus, as an example, for $v_{\rm w} =$ 20 km s$^{-1}$ and $f_{\rm C} = 1$, the total mass of C grains produced over the lifetime of the RSG with $M_{\rm ZAMS} = 500$ $M_\odot$ is estimated as $M_{\rm dust}/M_\odot = 1.7$ ($\dot{M} / 3 \times 10^{-3}$ $M_\odot$ yr$^{-1}$) for $1 \times 10^{-4}$ $M_\odot$ yr$^{-1}$ $\le \dot{M} \le$ $3 \times 10^{-3}$ $M_\odot$ yr$^{-1}$. It should be emphasized here that these newly formed grains would not be destroyed by the blast wave resulting from a SN explosion, because such a very massive Pop III star would finally collapse into a BH (Heger \& Woosley 2002; Yoon et al.\ 2012, but see also Ohkubo et al.\ 2006). The ratio of dust mass to the initial stellar mass ($X_{\rm VMS} = M_{\rm dust} / M_{\rm ZAMS} \le 3.4 \times 10^{-3}$) and the dust-to-metal ratio ($M_{\rm dust} / M_{\rm metal} \le 0.24$) are in the ranges of those supplied by Pop III CCSNe, for which $X_{\rm CCSN} =$ (0.1--30)$\times 10^{-3}$ and $M_{\rm dust} / M_{\rm metal} =$ 0.01--0.25, depending on the destruction efficiency of newly formed dust by the reverse shocks (Bianchi \& Schneider 2007; Nozawa et al.\ 2007).
This implies that, if very massive Pop III stars had really formed, they could be among the rapid and efficient sources of C grains in the early universe. \begin{figure} \epsscale{1.086} \plotone{f3a.eps} \epsscale{1.1} \plotone{f3b.eps} \caption{ Wind velocity $v_{\rm w}$ (upper panel), and formation rate of seed clusters $I_*$, condensation efficiency $f_{\rm con}$, and average grain radius $a_{\rm ave}$ (lower panel) as a function of $r/R_*$ for Model A with the wind acceleration. The initial wind velocity and mass-loss rate are set to be $v_{{\rm w}, 0} = 30$ km s$^{-1}$ and $\dot{M} = 3 \times 10^{-3}$ $M_\odot$ yr$^{-1}$, respectively. The dashed lines in the lower panel are the results without the wind acceleration.\\ \label{fig3}} \end{figure} \section{Effects of Wind Acceleration on Dust Formation} In the previous section, we have considered the formation of dust in stellar winds with constant velocities. However, the radiation pressure acting on newly formed grains will drive the wind to higher outflow velocities, which may suppress the growth of the dust grains. Here, following Ferrarotti \& Gail (2006), we examine the effect of the wind acceleration on dust formation by solving the following simplified momentum equation: \begin{eqnarray} v_{\rm w} \frac{d v_{\rm w}}{dr} = - \frac{G M_*}{r^2} \left[ 1 - \frac{L_* \langle \kappa_{\rm ext}(T)\rangle} {4 \pi c G M_*} D \right], \end{eqnarray} where $G$ is the gravitational constant, $c$ is the speed of light, $D$ is the dust-to-gas mass ratio, and $\langle \kappa_{\rm ext}(T) \rangle$ is the Planck-averaged mass extinction coefficient of C grains. We take $\langle \kappa_{\rm ext} \rangle = 2.1 \times 10^4$ cm$^2$ g$^{-1}$ (for $T=4,440$ K, Zubko et al.\ 1996) and $M_* = 400$ $M_\odot$ as a representative value.
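Equation (4) can be integrated with a simple explicit scheme once $D(r)$ is prescribed. The toy integration below decouples the problem by assuming, purely for illustration, that the wind coasts at $v_{{\rm w},0}$ until dust appears at a fixed radius and that $D$ then jumps to a constant value of order $10^{-3}$ (the dust-formation radius and $D$ are our assumed inputs here; the full calculation instead couples Equation (4) to the nucleation equations). It reproduces the qualitative result that the wind is accelerated well above 100 km s$^{-1}$ once the effective Eddington factor exceeds unity.

```python
import math

G = 6.674e-8        # gravitational constant [cgs]
C_LIGHT = 2.998e10  # speed of light [cm s^-1]
LSUN = 3.828e33     # solar luminosity [erg s^-1]
MSUN = 1.989e33     # solar mass [g]
RSUN = 6.957e10     # solar radius [cm]

def accelerate_wind(v0_kms, r_dust_rstar=7.0, d_dust=2e-3,
                    m_star=400.0 * MSUN, l_star=10 ** 7.2 * LSUN,
                    kappa=2.1e4, r_star=6750.0 * RSUN,
                    r_max_rstar=50.0, n_steps=20000):
    """Explicit Euler integration of Eq. (4), v dv/dr = -(G M_*/r^2)
    [1 - Gamma D] with Gamma = L_* kappa / (4 pi c G M_*), from the
    (assumed) dust-formation radius outward with a fixed dust-to-gas
    ratio d_dust.  Returns the velocity at r_max in km/s."""
    gamma = l_star * kappa / (4.0 * math.pi * C_LIGHT * G * m_star)
    v2 = (v0_kms * 1e5) ** 2  # v^2 in (cm/s)^2
    r = r_dust_rstar * r_star
    dr = (r_max_rstar - r_dust_rstar) * r_star / n_steps
    for _ in range(n_steps):
        # v dv/dr = a  is equivalent to  d(v^2)/dr = 2a
        a = -(G * m_star / r ** 2) * (1.0 - gamma * d_dust)
        v2 += 2.0 * a * dr
        r += dr
    return math.sqrt(v2) / 1e5  # km/s

v_end = accelerate_wind(30.0)
```

With these numbers $\Gamma D \sim 10^2$, so the gravitational term is negligible and the 30 km s$^{-1}$ inflow leaves the integration at several hundred km s$^{-1}$, consistent with the rapid acceleration described in the text.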
Figure 3 shows the acceleration of the wind and the formation process of dust for an initial outflow velocity of $v_{{\rm w},0} =$ 30 km s$^{-1}$ and a mass-loss rate of $\dot{M} = 3 \times 10^{-3}$ $M_\odot$ yr$^{-1}$ in Model A. Because of the high stellar luminosity, the wind is rapidly accelerated to $\ge$100 km s$^{-1}$ once $f_{\rm con}$ exceeds $\simeq 2 \times 10^{-3}$. The resulting rapid dilution of the gas largely decreases both the growth rate of grains and the formation rate of seed clusters, but still allows the dust grains to grow slowly. Furthermore, the expansion of the gas reduces the gas temperature, so very small grains continue condensing from carbon atoms that were not locked up in dust grains, as seen from the later increase in $I_*$. As a consequence, $a_{{\rm ave}, \infty}$ becomes very small as a whole, and $f_{{\rm con}, \infty}$ increases to 0.45. \begin{figure} \epsscale{1.1} \plotone{f4.eps} \caption{ The solid lines show the dependence of $a_{{\rm ave},\infty}$ and $f_{{\rm con}, \infty}$ on $v_{{\rm w} ,0}$ in the case with the wind acceleration for Model A with $f_{\rm C} \dot{M} = 3 \times 10^{-3}$ $M_\odot$ yr$^{-1}$. The dashed lines plot the average radius $a_{{\rm ave},1}$ and condensation efficiency $f_{{\rm con},1}$ just before the grain growth is suppressed by the wind acceleration. \label{fig4}} \end{figure} The dashed lines in Figure 4 plot the average radius ($a_{{\rm ave},1}$) and condensation efficiency ($f_{{\rm con},1}$) at the time ($t=t_1$) just before the grain growth is suppressed by the wind acceleration, as a function of $v_{{\rm w},0}$ for Model A with $f_{\rm C} \dot{M} = 3 \times 10^{-3}$ $M_\odot$ yr$^{-1}$. For a lower $v_{{\rm w},0}$, with which dust grains form in a region closer to the star, the gas outflow is more efficiently accelerated, and the condensation efficiency of (large) grains formed before the wind acceleration is smaller.
Nevertheless, the formation of small grains at later phases, as well as the gradual growth of large grains, enhances $f_{{\rm con}, \infty}$ up to the range of 0.45--0.95 with a very small $a_{{\rm ave}, \infty}$ (see the solid lines in Figure 4). Thus, the wind acceleration influences the size distribution of dust but is not likely to affect the final condensation efficiency significantly. Note that, in these calculations, we consider the acceleration of the winds by assuming positional coupling between the dust and the gas. In reality, dust grains pushed by the radiation pressure move outwards relative to the gas, and the drag force between them then drives the acceleration of the outflowing gas. Thus, the wind acceleration must be less efficient than that in this study. On the other hand, the high-velocity motion of dust relative to the gas can cause the erosion of dust by sputtering (Tielens et al.\ 1994; Nozawa et al.\ 2006). In particular, dust grains are accelerated above 100 km s$^{-1}$ in the present case, so the processing of dust by sputtering is expected to have a considerable impact on the final condensation efficiency. These processes will be explored in future work. \section{CONCLUSION AND DISCUSSION} We have investigated the formation of C grains in the mass-loss wind of a Pop III RSG with $M_{\rm ZAMS} = 500$ $M_\odot$. We find that, in a stellar wind with a constant velocity, the condensation efficiency of C grains is unity under the condition in Equation (3), and that at most 1.7 $M_\odot$ of C grains can be produced during the lifetimes of Pop III RSGs. We also find that the wind acceleration caused by newly formed dust can change the final size distribution of the dust, but still leads to a high final condensation efficiency ($f_{{\rm con}, \infty} \ga 0.5$). Such dust masses would be high enough to have an impact on the dust enrichment history in the early universe if the IMF of Pop III stars was top-heavy.
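The condensation criterion of Equation (3) and the dust-mass scaling quoted above are simple algebraic relations, so the allowed parameter space can be scanned directly. A short sketch (the 1.7 $M_\odot$ normalization and validity range are the values quoted in the text; the function names are ours):

```python
def full_condensation(fc_mdot, v_w_kms):
    """Eq. (3): condition for f_con,inf = 1 in the constant-velocity
    wind.  fc_mdot is f_C * Mdot in Msun/yr; v_w in km/s."""
    return (fc_mdot / 3e-3) * (v_w_kms / 20.0) ** -2 >= 0.04

def dust_mass(mdot_msun_yr):
    """Total C-grain mass over the RSG lifetime in Msun, as quoted in
    the text for v_w = 20 km/s, f_C = 1 and
    1e-4 <= Mdot <= 3e-3 Msun/yr."""
    return 1.7 * (mdot_msun_yr / 3e-3)
```

At the fiducial point the criterion is satisfied with a wide margin, whereas a tenfold faster wind at the same mass-loss rate fails it, reflecting the $v_{\rm w}^{-2}$ dependence.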
Recent sophisticated simulations of the first star formation (Hirano et al.\ 2014) have suggested that the number of very massive stars (VMSs) with $M_{\rm ZAMS} \ga 250$ $M_\odot$ ($N_{\rm VMS}$) is likely to be as large as that of massive stars exploding as CCSNe ($N_{\rm CCSN}$). If this is true and if all of the VMSs lead to $X_{\rm VMS} = 3.4 \times 10^{-3}$, the contribution of interstellar dust from VMSs is comparable with, or even higher than, that from CCSNe ($N_{\rm VMS} X_{\rm VMS}/N_{\rm CCSN} X_{\rm CCSN} \ga 1$) in the case that the destruction of dust by the reverse shock is efficient ($X_{\rm CCSN} \la 1.0 \times 10^{-3}$).\footnote{ For pair-instability SNe arising from stars with $M_{\rm ZAMS} \simeq$ 130--250 $M_\odot$, $X_{\rm PISN} \la$ 0.05 and $M_{\rm dust} / M_{\rm metal} \la$ 0.15, depending on the destruction efficiency of dust by the reverse shocks (Nozawa et al.\ 2007). We also note that pair-instability SNe might be inefficient sources of C grains (Nozawa et al.\ 2003).} Thus, very massive Pop III stars could be potentially dominant sources of dust grains at very early times of the universe. Our results also have important implications for the formation scenario of carbon-rich ultra-metal-poor (UMP) stars with [Fe/H] $< -4$, which would record the chemical imprints of Pop III stars (Beers \& Christlieb 2005). The formation of such low-mass metal-poor stars is considered to be triggered through the cooling of gas by dust ejected from Pop III SNe (Schneider et al.\ 2012a, 2012b; Chiaki et al.\ 2013). Ji et al.\ (2014) suggested that the formation of carbon-rich UMP stars relies on the cooling by fine structure lines of C and O atoms, assuming that the first SNe produced no C grains. Here we propose another possible channel for the formation of carbon-rich UMP stars. As shown in this study, very massive Pop III RSGs are efficient sources of C grains as well as CNO elements.
Thus, in the gas clouds enriched by these Pop III RSGs, C grains enable the formation of low-mass stars whose chemical compositions are highly enhanced in carbon and oxygen. As the investigated 500 $M_\odot$ model undergoes mild hot-bottom burning, some nitrogen is also produced, giving rise to [N/C] = $-4.2$ to $-1.3$ depending on the assumed mass-loss history, whereas observations of carbon-rich UMP stars indicate [N/C] $\ge -1.7$ (Christlieb et al.\ 2002; Norris et al.\ 2007; Frebel et al.\ 2008). From our zero-metallicity model, we do not predict the presence of any heavier metals. Further observations and more quantitative theoretical studies are needed to show whether any UMP stars have formed through our scenario. \acknowledgments We are grateful to the anonymous referees for critical comments. This research has been supported by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan, and by the Grant-in-Aid for JSPS Scientific Research (22684004, 23224004, 23540262, 26400222). \newpage
\section{Introduction} The main goal of this paper is to prove interior Lipschitz regularity for continuous viscosity solutions of fully nonlinear, conformally invariant, degenerate elliptic equations arising from conformal geometry. Let $\mathbb{R}^{n}$ denote the Euclidean space of dimension $n$, \begin{equation} \Gamma\subset \mathbb{R}^{n}\mbox{ be an open convex symmetric cone with vertex at the origin},\label{cone1} \end{equation} satisfying \begin{equation} \Gamma \supset \{\lambda\in\mathbb{R}^{n}:\lambda_{i}>0,i=1,\cdots,n\}. \label{wb} \end{equation} Let $f\in C^{1}(\Gamma)\cap C^{0}(\overline{\Gamma})$ be a symmetric function satisfying \begin{equation} f = 0 \text{ on } \partial\Gamma \text{ and } f>0,\quad \frac{\partial f}{\partial \lambda_{i}}>0 \mbox{ in }\Gamma \text{ for } i = 1, \ldots, n.\label{fGonzalez06} \end{equation} In the above, the symmetry of $(f,\Gamma)$ is understood in the sense that if $\lambda \in \Gamma$, then $\tilde \lambda \in \Gamma$ and $f(\tilde\lambda) = f(\lambda)$ for any permutation $\tilde\lambda$ of $\lambda$. For a function $u$ defined on a Euclidean domain, let $A^u$ denote its conformal Hessian matrix, i.e. \[ A^u:=-\frac{2}{n-2}u^{-\frac{n+2}{n-2}}\nabla^2u+\frac{2n}{(n-2)^{2}}u^{-\frac{2n}{n-2}} \nabla u \otimes \nabla u -\frac{2}{(n-2)^{2}}u^{-\frac{2n}{n-2}}|\nabla u|^{2}I, \] where here and below $I$ denotes the $n\times n$ identity matrix, and, for $p, q \in {\mathbb R}^n$, $p \otimes q$ denotes the $n\times n$ matrix with entries $(p \otimes q)_{ij} = p_i\,q_j$. Let $\lambda(A^u)$ denote the eigenvalues of $A^u$.
In recent years, there has been a growing literature on the following two equations: \begin{equation} f\left(\lambda(A^{u})\right)=1,\quad u>0 \mbox{ and } \quad \lambda(A^u) \in \Gamma,\label{yamabe'} \end{equation} and \begin{equation} \lambda(A^{u})\in\partial\Gamma,\quad \mbox{ and } \quad u>0.\label{main equation} \end{equation} Note that equation \eqref{main equation} is equivalent to \begin{equation*} f\left(\lambda(A^{u})\right)=0,\quad u>0\quad\mbox{and} \quad\lambda(A^{u})\in\overline{\Gamma}. \end{equation*} Equations \eqref{yamabe'} and \eqref{main equation} are second order fully nonlinear elliptic and degenerate elliptic equations, respectively. Fully nonlinear elliptic equations involving $f(\lambda(\nabla^{2}u))$ were investigated in the classic paper \cite{C-N-S-Acta}. The equations \eqref{yamabe'} and \eqref{main equation} arose from conformal geometry. On a Riemannian manifold $(M,g)$ of dimension $n\geq3$, consider the Schouten tensor \begin{equation*} A_{g}=\frac{1}{n-2}(\mbox{Ric}_{g}-\frac{1}{2(n-1)}R_{g}g), \end{equation*} where $\mbox{Ric}_{g}$ and $R_{g}$ denote, respectively, the Ricci tensor and the scalar curvature. Let $\lambda(A_{g})=(\lambda_{1},\cdots,\lambda_{n})$ denote the eigenvalues of $A_{g}$ with respect to $g$. It is well known that, in a conformal change of the metric, the ``main contribution'' to the curvature tensor is captured in the change of the Schouten tensor. One is thus naturally led to study, in the hope of finding some sort of ``best metric'' in a conformal class of metrics, the problem (see e.g. \cite{CGY02-AnnM,Viac00-Duke}) \begin{equation} f\left(\lambda(A_{u^{\frac{4}{n-2}}g})\right)=1,\quad u>0,\quad\mbox{and}\quad \lambda(A_{u^{\frac{4}{n-2}}g})\in\Gamma \mbox{ on }M.\label{yamabe} \end{equation} This problem is sometimes referred to in the literature as a fully nonlinear version of the Yamabe problem.
When $M$ is a Euclidean domain and $g=g_{\rm flat}$ is the flat metric, equation \eqref{yamabe} is exactly equation \eqref{yamabe'}. Furthermore, both equation \eqref{yamabe'} and equation \eqref{main equation} appear naturally in the study of blow-up sequences of solutions of \eqref{yamabe} on manifolds. Important examples of $(f,\Gamma)$ are $(f,\Gamma)=(\sigma_{k}^{\frac{1}{k}},\Gamma_{k})$, $1 \leq k \leq n$, where $\sigma_{k}(\lambda):=\sum\limits_{1\leq i_{1}<\cdots<i_{k}\leq n}\lambda_{i_{1}}\cdots\lambda_{i_{k}}$ is the $k$-th elementary symmetric function, and $\Gamma_{k}$ is the connected component of $\{\lambda\in\mathbb{R}^{n}:\sigma_{k}(\lambda)>0\}$ containing the positive cone $\{\lambda\in\mathbb{R}^{n}:\lambda_{i}>0,i=1,\cdots,n\}$. When $(f,\Gamma)=(\sigma_{1},\Gamma_{1})$, \eqref{yamabe} is the classical Yamabe problem in the so-called positive case. In this paper, we establish the following regularity result for continuous viscosity solutions of \eqref{main equation}. See \cite[Definition 1.1]{Li09-CPAM} and Definition \ref{Def:ViscositySolution} below for the definition of viscosity solutions. \begin{thm}[Local Lipschitz regularity] For $n\geq 3$, let $\Omega$ be an open subset of $\mathbb{R}^{n}$, and $\Gamma$ satisfy \eqref{cone1} and \eqref{wb}. Assume that $u$ is a continuous viscosity solution of \eqref{main equation} in $\Omega$. Then $u\in C^{0,1}_{loc}(\Omega)$. \label{thm:regularity} \end{thm} \begin{rem} As a consequence of Theorem \ref{thm:regularity}, several previously known results for Lipschitz continuous solutions of \eqref{main equation} hold for continuous solutions. This includes the Liouville-type Theorem 1.4, the symmetry results Theorem 1.18 and Theorem 1.23 in \cite{Li09-CPAM}; the B\^ocher-type Theorems 1.2 and 1.3, the Harnack-type Theorem 1.5, and the asymptotic behavior results Corollary 1.7 and Theorem 1.8 in \cite{LiNgBocher}. 
\end{rem} Although there have been many works on a priori estimates for solutions to \eqref{yamabe'} and \eqref{main equation} and closely related issues (see e.g. \cite{CGY02-AnnM, Chen05,GeWang06, Gonzalez05, GW03-IMRN,GV07,HLT10, J,J-L-L, LiLi03,LiLi05,Li09-CPAM,LiNgBocher,LiNgSymmetry,NadirashviliVladuts, STW07, TW09, Viac02-CAG, Wang06}), our theorem above appears to be the first regularity result for viscosity solutions in this context. The regularity obtained in Theorem \ref{thm:regularity} is in a sense sharp: In \cite{NadirashviliVladuts}, Nadirashvili and Vl\u{a}du\c{t} showed that, for any $\epsilon \in (0,1)$, there exists a solution to a uniformly elliptic and conformally invariant equation in a ball $B \subset {\mathbb R}^5$ which belongs to $C^{1,\epsilon}(B) \setminus C^{1,\epsilon+}(B)$, i.e. which is in $C^{1,\epsilon}(B)$ but in no $C^{1,\epsilon'}(B)$ with $\epsilon' > \epsilon$. It is sometimes more convenient to write $u = e^{-\frac{n-2}{2}\psi}$ or $u = w^{-\frac{n-2}{2}}$, so that $w = e^{\psi}$. An easy computation gives $A^u = A_w = e^{2\psi} A[\psi]$, where \begin{align*} A_{w} &=w\nabla^{2}w-\frac{1}{2}|\nabla w|^{2}I,\\ A[\psi] &= \nabla^2 \psi + \nabla\psi \otimes \nabla\psi - \frac{1}{2} |\nabla \psi|^{2}I. \end{align*} In addition to Theorem \ref{thm:regularity}, we also study the Dirichlet boundary value problem for a class of degenerate elliptic equations which includes the conformal operator $A[\psi]$. Consider operators of the form \begin{equation} F[\psi] = \nabla^2 \psi + \alpha\,\nabla \psi \otimes \nabla \psi - \beta |\nabla \psi|^2\,I \label{Eq:FConstab} \end{equation} where $\alpha$ and $\beta$ are constant, and the equation \[ F[\psi] \in \partial U, \] where $U$ is a non-empty open subset of ${\mathcal{S}^{n \times n}}$ satisfying a degenerate ellipticity condition: \begin{equation} \text{ if }A\in U, B\in{\mathcal{S}^{n \times n}}\mbox{ and }B>0, \text{ then } A+B\in U. \label{Eq:UCondPos} \end{equation} (Note that \eqref{Eq:UCondPos} implies that $\partial U$ is Lipschitz.) 
In the context of Theorem \ref{thm:regularity}, $U$ is the set of symmetric matrices whose eigenvalues belong to $\Gamma$, as equation \eqref{main equation} can be written equivalently as $A^u \in \partial U$. We note for future use that our results below apply also to the setting of the fully nonlinear Yamabe problem of ``negative type'' by considering the set $U$ of symmetric matrices whose eigenvalues belong to ${\mathbb R}^n \setminus (-\bar\Gamma)$, where \[ -\bar\Gamma = \{\lambda \in {\mathbb R}^n: -\lambda \in \bar\Gamma\}. \] In both cases, \eqref{Eq:UCondPos} holds thanks to \eqref{cone1} and \eqref{wb}. For any set $S \subset\mathbb{R}^{n}$, we use $\mbox{USC}(S)$ to denote the set of functions $\psi:S\rightarrow\mathbb{R}\cup\{-\infty\}$, $\psi \not\equiv -\infty$ in $S$, satisfying \begin{equation*} \limsup\limits_{x\rightarrow\bar{x}}\psi(x)\leq \psi(\bar{x}),\quad \forall \bar{x}\in S. \end{equation*} Similarly, we use $\mbox{LSC}(S)$ to denote the set of functions $\psi: S\rightarrow\mathbb{R}\cup\{+\infty\}$, $\psi \not\equiv +\infty$ in $S$, satisfying \begin{equation*} \liminf\limits_{x\rightarrow\bar{x}}\psi(x)\geq \psi(\bar{x}),\quad \forall \bar{x}\in S. \end{equation*} We now give the definition of viscosity subsolutions, supersolutions and solutions to the degenerate elliptic equation $F[\psi] \in \partial U$. \begin{Def}\label{Def:ViscositySolution} Let $\Omega\subset\mathbb{R}^{n}$, $n \geq 2$, be an open set, and $U$ be a non-empty open subset of ${\mathcal{S}^{n \times n}}$ satisfying \eqref{Eq:UCondPos}. 
For a function $\psi$ in $USC(\Omega)$ ($LSC(\Omega)$), we say that \begin{equation*} F[\psi] \in \overline{U}\quad \left(F[\psi]\in{\mathcal{S}^{n \times n}}\setminus U\right) \quad\mbox{in }\Omega \quad\mbox{in the viscosity sense} \end{equation*} if for any $x_{0}\in\Omega$ and any $\varphi\in C^{2}(\Omega)$ satisfying $(\psi-\varphi)(x_{0})=0$ and \begin{equation*} \psi-\varphi\leq0\quad(\psi-\varphi\geq0)\quad\mbox{near }x_{0}, \end{equation*} there holds \begin{equation*} F[\varphi](x_{0})\in \overline{U}\quad \left(F[\varphi](x_{0})\in{\mathcal{S}^{n \times n}}\setminus U\right). \end{equation*} We say that a function $\psi \in C^0(\Omega)$ satisfies \begin{equation} F[\psi]\in \partial U \text{ in the viscosity sense} \label{Eq:FpsiEq} \end{equation} in $\Omega$ if $F[\psi]$ belongs to both $\overline{U}$ and ${\mathcal{S}^{n \times n}}\setminus U$ in $\Omega$ in the viscosity sense. When $F[\psi] \in \overline{U}$ $\left(F[\psi]\in{\mathcal{S}^{n \times n}}\setminus U\right)$ in $\Omega$ in the viscosity sense, we also say interchangeably that $\psi$ is a viscosity subsolution (supersolution) to \eqref{Eq:FpsiEq} in $\Omega$. \end{Def} Our next result is a uniqueness statement for \eqref{Eq:FpsiEq} when $U$ satisfies \begin{equation} A \in U \text{ and } c > 0 \Rightarrow cA \in U. \label{Eq:UCone} \end{equation} \begin{thm}[Uniqueness for the Dirichlet Problem] Let $\Omega\subset\mathbb{R}^{n}$ ($n \geq 2$) be a non-empty bounded open set, and $U$ be a non-empty open subset of ${\mathcal{S}^{n \times n}}$ satisfying \eqref{Eq:UCondPos} and \eqref{Eq:UCone}. Assume that $F$ is of the form \eqref{Eq:FConstab}. Then, for any $\varphi \in C^0(\partial\Omega)$, there exists at most one solution $\psi \in C^{0}(\bar\Omega)$ of \eqref{Eq:FpsiEq} satisfying $\psi = \varphi$ on $\partial\Omega$. \label{thm:Uniq} \end{thm} We also prove the following existence theorem using Perron's method (see \cite{Ishii89-CPAM}). 
\begin{thm}[Existence by sub- and supersolution method]\label{thm:Perron} Let $\Omega$ and $(F,U)$ be as in Theorem \ref{thm:Uniq}. Let $w\in LSC(\overline \Omega)$ and $v\in USC(\overline\Omega )$ be respectively a supersolution and a subsolution of \eqref{Eq:FpsiEq} in $\Omega$ such that $w\geq v$ in $\Omega$ and $w=v$ on $\partial\Omega$. Then there exists a viscosity solution $u\in C^0(\overline \Omega)$ of \eqref{Eq:FpsiEq} in $\Omega$ satisfying \begin{align*} v\le u\le w &\qquad\mbox{in}\ \overline\Omega,\\ u=w=v &\qquad \mbox{on}\ \partial\Omega. \end{align*} \end{thm} One main ingredient of the proofs of Theorems \ref{thm:regularity}, \ref{thm:Uniq} and \ref{thm:Perron} is a comparison principle. In recent years, comparison principles (for viscosity solutions) have been very successfully applied to derive estimates and symmetry properties for solutions to (both degenerate and non-degenerate elliptic) equations in conformal geometry; see \cite{Li09-CPAM} and the references therein. Our paper can be viewed as a continuation in this line of work. In fact, we will establish a variant of the comparison principle for more general operators of the form \begin{equation} F[\psi] = \nabla^2 \psi + \alpha(\cdot, \psi)\,\nabla \psi \otimes \nabla \psi - \beta(\cdot, \psi) |\nabla \psi|^2\,I \label{Eq:FIntroDef} \end{equation} where $\alpha, \beta: \Omega \times {\mathbb R} \rightarrow {\mathbb R}$ and $\Omega$ is an open subset of ${\mathbb R}^n$. Throughout the paper, we will assume that \begin{equation} \text{the function $L(x,s,p) := \alpha(x,s) p \otimes p - \beta(x,s)|p|^2 I$ is non-decreasing in $s$.} \label{Eq:LMonotone} \end{equation} Note that this condition is consistent with both $A[\psi]$ and $\frac{1}{w}A_w$ defined above. 
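To verify the last assertion, note that for $A[\psi]$ the lower order term is $L(x,s,p)=p\otimes p-\frac{1}{2}|p|^{2}I$, which does not depend on $s$, while \[ \frac{1}{w}A_{w}=\nabla^{2}w-\frac{1}{2w}|\nabla w|^{2}I \] corresponds to $L(x,s,p)=-\frac{1}{2s}|p|^{2}I$, for which \[ \partial_{s}L(x,s,p)=\frac{1}{2s^{2}}|p|^{2}\,I\geq0\quad\mbox{for }s>0, \] so that \eqref{Eq:LMonotone} holds on the relevant range $s=w>0$. 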
In the sequel, we say that \emph{the principle of propagation of touching points} holds for $(F,U)$ if for any supersolution $w \in LSC(\bar\Omega)$ and subsolution $v\in USC(\bar\Omega)$ of \eqref{Eq:FpsiEq} in $\Omega$ one has \[ w\geq v \mbox{ in } \Omega \text{ and } w > v \text{ on } \partial\Omega \qquad\Rightarrow \qquad w > v \text{ in } \Omega. \] (In other words, if $w \geq v$ in $\Omega$ then every non-empty connected component of the set $\{x \in \bar\Omega: w(x) = v(x)\}$ contains a point of $\partial\Omega$.) This principle can be viewed as a weak version of the strong comparison principle. We say that \emph{the comparison principle} holds for $(F,U)$ if for any supersolution $w \in LSC(\bar\Omega)$ and subsolution $v\in USC(\bar\Omega)$ of \eqref{Eq:FpsiEq} in $\Omega$ one has \[ w \geq v \text{ on } \partial\Omega \qquad \Rightarrow \qquad w \geq v \text{ in } \Omega. \] It should be noted that, for general degenerate elliptic equations, $w \geq v$ in $\Omega$ does not imply the dichotomy that $w > v$ or $w \equiv v$ in $\Omega$. (This is in contrast with the uniformly elliptic case.) \begin{rem}\label{rem:SCP=>CP} If $L(x,s,p)$ is independent of $s$, then the principle of propagation of touching points is equivalent to the comparison principle. \end{rem} We prove that the principle of propagation of touching points holds when $(F,U)$ satisfies, in addition to \eqref{Eq:UCondPos}, \eqref{Eq:UCone} and \eqref{Eq:LMonotone}, the following structural conditions: \begin{equation} \begin{array}{ll} \text{ either }& \text{$|\beta(x, s)| > \beta_0 > 0$ for some constant $\beta_0$}\\ \text{ or } & \text{ both $\alpha$ and $\beta$ are constant}. \end{array} \label{Eq:betaStruct} \end{equation} \begin{thm}[Principle of propagation of touching points]\label{thm:CPQuad} Let $F$ be of the form \eqref{Eq:FIntroDef} where $\alpha, \beta \in C^{0,1}_{loc}(\bar\Omega \times {\mathbb R})$ satisfy \eqref{Eq:LMonotone} and \eqref{Eq:betaStruct}. 
Let $\Omega\subset\mathbb{R}^{n}$ ($n \geq 2$) be a non-empty bounded open set, and $U$ be a non-empty open subset of ${\mathcal{S}^{n \times n}}$ satisfying \eqref{Eq:UCondPos} and \eqref{Eq:UCone}. Assume that $w \in LSC(\bar\Omega)$ and $v\in USC(\bar\Omega)$ are respectively a supersolution and a subsolution of \eqref{Eq:FpsiEq} in $\Omega$. \begin{enumerate}[(a)] \item If $w \geq v$ in $\Omega$ and $w > v$ on $\partial\Omega$, then $w > v$ in $\Omega$. \item In case $\alpha$ and $\beta$ are constant, if $w \geq v$ on $\partial\Omega$, then $w \geq v$ in $\Omega$. \end{enumerate} \end{thm} When $w$ and $v$ are locally Lipschitz and $F[\psi] = A[\psi]$, Theorem \ref{thm:CPQuad} was established in \cite{Li09-CPAM}. The proof of Theorem \ref{thm:CPQuad} yields the propagation principle for an even larger class of operators; see Theorem \ref{thm:CPNUE} (where the assumption on the quadratic dependence of $F[\psi]$ on $\nabla\psi$ is somewhat relaxed to a super-linear dependence). One ingredient of the proof is a first variation result which, roughly speaking, allows one to perturb a given function $\psi$ to another function $\tilde\psi$ such that $F[\tilde\psi]$ is either ``more inside'' or ``more outside'' the set $U$ than $F[\psi]$, in a carefully controlled fashion. There are two delicate points in this process. On the one hand, one needs to ensure that the gain obtained is strong enough to counterbalance the error accrued either in regularization or in handling the difficulties created by the degenerate ellipticity. On the other hand, the whole process is carried out in such a way that it depends only on an upper bound and a lower bound of $\psi$, and not on $\nabla \psi$ or $\nabla^2 \psi$. It is in this first variation argument that the assumptions that $\beta$ does not change sign and $L$ is non-decreasing are crucially used. See subsection \ref{Sec:CPCounterex} for examples which hint that these assumptions cannot simply be dropped. 
Comparison principles for different classes of (degenerate) elliptic operators are available in the literature. See \cite{AmendolaGaliseVitolo13-DIE, BardiDaLio99-AM, HBDolcettaPorretaRosi15-JMPA, BirindelliDemengel04-AFSTM, BirindelliDemengel07-CPAA, UserGuide, DolcettaVitolo07-MC, DolcettaVitolo-preprint, HartmanNirenberg, HarveyLawsonSurvey2013, Ishii89-CPAM, IshiiLions90-JDE,Jensen88-ARMA, KawohlKutev98-AM, KawohlKutev00-FE, KawohlKutev07-CPDE, KoikeKosugi15-CPAA, KoikeLey11-JMAA, Trudinger88-RMI} and the references therein. Most of these works assumed a kind of ``properness/non-degeneracy'' of the operator with respect to the unknown $\psi$ (e.g. that $L$ is decreasing with respect to $s$), which is not applicable to our setting (see condition \eqref{Eq:LMonotone}). In the present paper, we exploit instead some non-degeneracy with respect to the derivatives $\nabla \psi$ of the unknown. It is natural to ask if the method we follow here can be combined with the more familiar treatment of proper operators to treat a broader class of operators, but this goes beyond the scope of the present paper. We however note that (non-strict) properness of the operator is far from ensuring the validity of a comparison/propagation principle; see Proposition \ref{prop:CtexNonDec}. Using results on removable singularities in \cite{CafLiNir11}, we obtain the following comparison principle on domains with singularities when $U$ satisfies in addition the condition \begin{equation} U\subset\{M\in{\mathcal{S}^{n \times n}}:\mbox{tr}(M)>0\},\label{gooo} \end{equation} where $\mbox{tr}(M)$ denotes the trace of $M$. For the proof, see Section \ref{sec:CP}. \begin{cor} \label{cor:CPSing} Let $\Omega \subset {\mathbb R}^n$ ($n \geq 2$) be a bounded non-empty open set, $E\subset \Omega$ be a closed set with zero Newtonian capacity, and $(F,U)$ be as in Theorem \ref{thm:CPQuad} with constant $\alpha$ and $\beta$. 
If $w\in LSC(\bar\Omega)$ and $v\in USC(\bar\Omega \setminus E)$ satisfy \begin{align*} w &\text{ is a supersolution to \eqref{Eq:FpsiEq} in $\Omega$},\\ v &\text{ is a subsolution to \eqref{Eq:FpsiEq} in $\Omega \setminus E$}, \end{align*} $w \geq v$ in $\Omega \setminus E$ and $w > v$ on $\partial\Omega$, and if \begin{align} \text{either } & \sup_{\Omega \setminus E} v < + \infty,\label{Eq:anb1}\\ \text{or } & \alpha - n \beta < 0,\label{Eq:anb2} \end{align} then $\inf_{\Omega \setminus E} (w - v) > 0$. \end{cor} \begin{rem} It is interesting to identify the set $S_k$ of $(\alpha,\beta)$ for which one cannot drop the assumption that $v$ is bounded from above when $U$ is the set of symmetric matrices whose eigenvalues belong to $\Gamma_k$ with $2 \leq k \leq n$. Note that by the above result, $S_k \subset \{\alpha - n\beta \geq 0\}$. For $k = 1$, equation \eqref{Eq:FpsiEq} becomes $\Delta\psi + (\alpha - n\beta)|\nabla \psi|^2 = 0$, from which one can see that $S_1 \subset \{\alpha - n\beta > 0\}$. In fact, $S_1 = \{\alpha - n\beta > 0\}$. To see this, note that the functions $\psi_\mu(x)=\frac{1}{\alpha - n\beta} \ln(|x|^{2-n} + \mu)$ with $\mu \geq 0$ are solutions of \eqref{Eq:FpsiEq} in $B_1(0) \setminus \{0\}$. In particular, $w = \psi_1$ is a supersolution of \eqref{Eq:FpsiEq} in $B_1(0)$, $v = \psi_0$ is a subsolution of \eqref{Eq:FpsiEq} in $B_1(0) \setminus \{0\}$, $w \geq v$ in $B_1(0) \setminus \{0\}$, and $w > v$ on $\partial B_1(0)$, but $\inf_{B_1(0) \setminus \{0\}} (w - v) = 0$. \end{rem} When $F[\psi]$ is the conformal operator $A[\psi]$, Corollary \ref{cor:CPSing} was proved by the first named author in \cite{Li09-CPAM} under the assumptions that $E\subset \Omega$ contains at most finitely many points, $U=\{M\in{\mathcal{S}^{n \times n}}:\lambda(M)\in\Gamma\}$, $w\in C^{0,1}(\bar\Omega)$ and $v\in C^{0,1}(\bar\Omega\setminus E)$. A related issue of interest is whether the strong maximum principle and the Hopf lemma hold. 
It turns out that, in this degenerate elliptic setting, both fail for a large class of operators. See \cite{CafLiNir11, LiNir-misc} for further discussion. Last but not least, the proof of Theorem \ref{thm:regularity} uses not only the comparison principle (Theorem \ref{thm:CPQuad}) but also conformal invariance properties of $A[\psi]$ (i.e. of $A^u$). We remark that, for general $F$ and $U$, the comparison principle itself is far from ensuring (Lipschitz) regularity of viscosity solutions. See Section \ref{Sec:LipRegVS} for further discussion. The rest of the paper is structured as follows. We start in Section \ref{Sec:Prelim} with some preliminaries about regularizations of semi-continuous functions by lower and upper envelopes. In Section \ref{sec:CP}, we prove a generalization of Theorem \ref{thm:CPQuad} to more general operators and give counterexamples to highlight the importance of the conditions in the theorem. In Section \ref{Sec:Perron}, we prove the uniqueness Theorem \ref{thm:Uniq} and the existence result Theorem \ref{thm:Perron}. Finally, in Section \ref{Sec:LipRegVS}, we prove the regularity result Theorem \ref{thm:regularity} together with some generalizations. \section{Preliminaries}\label{Sec:Prelim} We briefly recall a well-known regularization of semi-continuous functions which will be used later in the paper. Assume $n\geq1$ and let $\Omega$ be an open bounded set in $\mathbb{R}^{n}$. For a function $v \in USC(\bar \Omega)$ and $\epsilon > 0$, we define the $\epsilon$-upper envelope of $v$ by \begin{equation} v^{\epsilon}(x) :=\max\limits_{y\in\bar\Omega}\Big\{v(y) -\frac{1}{\epsilon}|y-x|^{2}\Big\}, \qquad \forall x \in \bar \Omega. \label{Eq:UpEnvDef} \end{equation} Likewise, for a function $w \in LSC(\bar \Omega)$, its $\epsilon$-lower envelope is defined by \begin{equation} w_{\epsilon}(x):=\min\limits_{y\in\bar\Omega}\Big\{w(y) +\frac{1}{\epsilon}|y-x|^{2}\Big\},\qquad\forall x\in\bar\Omega. 
\label{Eq:LowEnvDef} \end{equation} Although our definition of upper and lower envelopes is slightly different from the definition in \cite{CabreCaffBook}, all relevant properties established in \cite[Lemma 5.2]{CabreCaffBook} remain valid with minor modification. We collect below some useful properties. \begin{enumerate}[(i)] \item\label{UpLowPropi} $v^\epsilon, w_\epsilon$ belong to $C(\bar\Omega)$, are monotonic in $\epsilon$ and \begin{equation} \text{$v^\epsilon \rightarrow v$, $w_\epsilon \rightarrow w$ pointwise as $\epsilon \rightarrow 0$.} \label{Eq:UpLowConv} \end{equation} \item\label{UpLowPropii} $v^{\epsilon}$ and $w_{\epsilon}$ are punctually second order differentiable (see e.g. \cite{CabreCaffBook} for a definition) almost everywhere in $\Omega$ and \begin{equation} \nabla^{2}v^{\epsilon}\geq-\frac{2}{\epsilon}I,\quad\nabla^{2}w_{\epsilon}\leq \frac{2}{\epsilon}I,\quad\mbox{ a.e. in }\Omega.\label{utr} \end{equation} \item\label{UpLowPropiii} For any $x \in \Omega$, there exists $x^* = x^*(x) \in \bar \Omega$ such that $$v^\epsilon(x) = v(x^*) - \frac{1}{\epsilon}|x^* - x|^2 \text{ and } |x^* - x|^2 \leq \epsilon(\max_{\bar\Omega} v - v(x)).$$ Likewise, for any $x \in \Omega$, there exists $x_* = x_*(x) \in \bar \Omega$ such that $$w_\epsilon(x) = w(x_*) + \frac{1}{\epsilon}|x_* - x|^2 \text{ and } |x_* - x|^2 \leq \epsilon(w(x) - \min_{\bar\Omega} w).$$ \item \label{UpLowPropiv} If it holds for some non-empty open subset $\omega$ of $\Omega$ that $\inf_{\omega} v > -\infty$ and $\sup_{\omega} w < +\infty$, then \begin{equation} |\nabla v^\epsilon| \leq \frac{2}{\epsilon^{\frac{1}{2}}} \big[\max_{\bar\Omega} v - \inf_{\omega} v\big]^{\frac{1}{2}} \text{ and } |\nabla w_\epsilon| \leq \frac{2}{\epsilon^{\frac{1}{2}}} \big[\sup_{\omega} w - \min_{\bar\Omega} w\big]^{\frac{1}{2}} \label{Eq:UpLowGradEst} \end{equation} almost everywhere in $\omega$. 
\item \label{UpLowPropv} The bounds for $|x^* - x|$ and $|x_* - x|$ in \eqref{UpLowPropiii} can be improved when $v$ and $w$ are more regular. In fact, if $|v(x) - v(y)| \leq m(|x - y|)$ for all $x, y \in \bar\Omega$ and for some non-negative continuous non-decreasing function $m: [0,\infty) \rightarrow [0,\infty)$ satisfying $m(0) = 0$, then \[ |x^* - x| \leq \big[\epsilon\,m((2\epsilon\,\sup_{\bar\Omega} |v|)^{1/2})\big]^{1/2}. \] Analogously, if $|w(x) - w(y)| \leq m(|x - y|)$ for all $x, y \in \bar\Omega$, then \[ |x_* - x| \leq \big[\epsilon\,m((2\epsilon\,\sup_{\bar\Omega} |w|)^{1/2})\big]^{1/2}. \] Nevertheless, the bounds for $|x^* - x|$ and $|x_* - x|$ in \eqref{UpLowPropiii} are generally sharp for semi-continuous functions. \end{enumerate} Properties \eqref{UpLowPropi}-\eqref{UpLowPropiii} can be found in \cite{CabreCaffBook}. To see Property \eqref{UpLowPropiv}, we let $x_0 \in \omega$ be a point of differentiability of $v^\epsilon$, and estimate, for $x_1 \in \Omega$, \begin{align*} v^\epsilon(x_0) &\geq v((x_1)^*) - \frac{1}{\epsilon}|(x_1)^* - x_0|^2\\ &\geq v((x_1)^*) - \frac{1}{\epsilon}|(x_1)^* - x_1|^2 - \frac{2}{\epsilon}|(x_1)^* - x_1||x_1 - x_0| - \frac{1}{\epsilon}|x_1 - x_0|^2\\ &= v^\epsilon(x_1) - \frac{2}{\epsilon}|(x_1)^* - x_1||x_1 - x_0| - \frac{1}{\epsilon}|x_1 - x_0|^2, \text{ for all } x_0, x_1 \in \Omega, \end{align*} which implies, in view of Property \eqref{UpLowPropiii}, that \[ \frac{v^\epsilon(x_1) - v^\epsilon(x_0)}{|x_1 - x_0|} \leq \frac{2}{\epsilon^{\frac{1}{2}}}\big[\max_{\bar\Omega} v - v(x_1)\big]^{\frac{1}{2}} + \frac{1}{\epsilon}|x_1 - x_0|. \] Sending $x_1 \rightarrow x_0$ with $x_1 \in \omega$ (so that $v(x_1) \geq \inf_\omega v$) and recalling Property \eqref{UpLowPropii}, we obtain the assertion. Property \eqref{UpLowPropv} follows from \eqref{UpLowPropiii} and the estimate \begin{align*} \frac{1}{\epsilon}|x^* - x|^2 &= v(x^*) - v^\epsilon(x) \leq v(x^*) - v(x) \leq m(|x^* - x|). 
\end{align*} The sharpness of the estimates for $|x^* - x|$ and $|x_* - x|$ in \eqref{UpLowPropiii} is demonstrated by the following example. Consider $\Omega = (-1,1)$. For $x \in[-1,1]$, define \[ w(x) = \left\{\begin{array}{ll} 1 &\text{ if } 2^{-(2k+2)} < |x| \leq 2^{-(2k+1)} \text{ for some } k \geq 0,\\ 2 - 2^{2k+1}|x| &\text{ if } 2^{-(2k+1)} < |x| \leq 2^{-2k} \text{ for some } k \geq 0,\\ 0 &\text{ if } x = 0. \end{array}\right. \] Then $w \in LSC([-1,1]) \cap L^\infty(-1,1)$. For $k > 1$, let $\epsilon_k = 2^{-2(2k+1)}$ and $x_k = 2^{-(2k+3)}$. We have \[ w_{\epsilon_k}(x_k) \leq w(0) + \frac{1}{\epsilon_k}|x_k|^2 = \frac{1}{16}. \] On the other hand, for $|y - x_k| < 2^{-(2k+4)} = \frac{1}{8} \sqrt{\epsilon_k}$, we have $w(y) > \frac{1}{2}$ and \[ w(y) + \frac{1}{\epsilon_k}|y - x_k|^2 \geq \frac{1}{2} . \] It follows that $|(x_k)_* - x_k| \geq \frac{1}{8} \sqrt{\epsilon_k}$. We conclude the section with a simple lemma about the stability of envelopes with respect to semi-continuity. \begin{lem}\label{lem:EquiSC} Assume that $v \in USC(\bar\Omega)$ and $\inf_{\bar\Omega} v > -\infty$. Then for all sequences $\epsilon_j \rightarrow 0$ and $x_j \rightarrow x \in \Omega$, there holds \[ \limsup_{j \rightarrow \infty} v^{\epsilon_j}(x_j) \leq v(x). \] Likewise, if $w \in LSC(\bar\Omega)$ and $\sup_{\bar\Omega} w < +\infty$, then \[ \liminf_{j \rightarrow \infty} w_{\epsilon_j}(x_j) \geq w(x). \] \end{lem} \begin{proof} We will only show the first assertion. Assume by contradiction that there exist $\delta > 0$ and sequences $\epsilon_j \rightarrow 0$, $x_j \rightarrow x \in \Omega$ such that \[ v^{\epsilon_j}(x_j) \geq v(x) + 2\delta \text{ for all } j. \] By the upper semi-continuity of $v$, there exists $\theta > 0$ such that \[ v(y) \leq v(x) + \delta \text{ for all } |y - x| < \theta. 
\] By property \eqref{UpLowPropiii}, there exists $\hat x_j$ such that \[ v^{\epsilon_j}(x_j) = v(\hat x_j) - \frac{1}{\epsilon_j}|x_j - \hat x_j|^2 \text{ and } |x_j - \hat x_j|^2 \leq \epsilon_j (\sup_{\bar\Omega} v - v(x_j)) \rightarrow 0, \] where we have used $\inf_{\bar\Omega} v > -\infty$. It then follows that $|\hat x_j - x| < \theta$ for all sufficiently large $j$ and so \[ v^{\epsilon_j}(x_j) \leq v(\hat x_j) \leq v(x) + \delta, \] which amounts to a contradiction. \end{proof} \section{The principle of propagation of touching points}\label{sec:CP} In this section, we prove Theorem \ref{thm:CPQuad}. We will establish the propagation principle for more general operators of the form \begin{equation} F[\psi] = \nabla^2 \psi + L(\cdot,\psi,\nabla\psi), \label{Eq:FDef} \end{equation} where $L: \Omega \times {\mathbb R} \times {\mathbb R}^n \rightarrow \mathcal{S}^{n \times n}$, under some structural assumptions on $L$ and $U$ which we will detail below. (Clearly, Definition \ref{Def:ViscositySolution} extends to this general setting.) It is natural to require that $L$ be locally Lipschitz continuous. When $L$ is only H\"older continuous, the propagation principle fails in a manner similar to the non-uniqueness for first order ODEs with non-Lipschitz right-hand side. The following example is well-known: Consider the equation $\Delta \psi = |\nabla \psi|^\gamma$ with $\gamma \in (0,1)$, i.e.\ $F[\psi] = \nabla^2 \psi - \frac{1}{n}|\nabla \psi|^\gamma I$ and $U = \{M \in {\mathcal{S}^{n \times n}}: \mbox{tr}(M) > 0\}$. This equation admits $\psi(x) \equiv 0$ and $\hat\psi(x) = \frac{1}{(\lambda + 1)(\lambda + n - 1)^{\lambda}} |x|^{\lambda+1}$ as classical solutions, where $\lambda = \frac{1}{1 - \gamma}$. As $\hat \psi \geq \psi$ on ${\mathbb R}^n$ and equality holds only at $x = 0$, the propagation principle fails. 
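That $\hat\psi$ is indeed a classical solution can be checked by an elementary radial computation, which we record for the reader's convenience. Writing $\hat\psi(x)=c\,|x|^{\lambda+1}$ with $c=\frac{1}{(\lambda+1)(\lambda+n-1)^{\lambda}}$, we have \begin{align*} \Delta\hat\psi&=c\,(\lambda+1)(\lambda+n-1)\,|x|^{\lambda-1}=(\lambda+n-1)^{1-\lambda}\,|x|^{\lambda-1},\\ |\nabla\hat\psi|^{\gamma}&=\big(c\,(\lambda+1)\big)^{\gamma}\,|x|^{\lambda\gamma}=(\lambda+n-1)^{-\lambda\gamma}\,|x|^{\lambda\gamma}, \end{align*} and the two expressions agree since $\lambda\gamma=\lambda-1$ for $\lambda=\frac{1}{1-\gamma}$. Note also that $\lambda+1=\frac{2-\gamma}{1-\gamma}>2$, so $\hat\psi\in C^{2}({\mathbb R}^{n})$. 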
We note that the degenerate ellipticity condition \eqref{Eq:UCondPos} and local Lipschitz regularity of $L$ are far from enough to ensure the validity of the propagation principle (even for rotationally symmetric and \emph{proper} operators); see subsection \ref{Sec:CPCounterex} for counterexamples. The following structural conditions on $(F,U)$ are directly motivated by the conformal operator $A[\psi]$. First, we assume that $U$ satisfies \begin{equation} A\in U, c\in(0,1)\Rightarrow cA\in U. \label{Eq:UCond*S} \end{equation} Second, we assume that, for every $R > 0$ and $\Lambda > 0$, there exist $m \geq 0$, $\bar \theta > 0$ and $C > 0$ such that, for $x \in \Omega$ and $p \in {\mathbb R}^n$, \begin{align} |\nabla_x L(x,s,p)| &\leq C|p|^m \qquad \forall~ |s| \leq R, \label{Eq:Lcond0}\\ 0 \leq L(x,s', p) - L(x,s,p) &\leq C(s' - s)\,|p|^m\,I \qquad \forall~ -R \leq s \leq s' \leq R, \label{Eq:Lcond1} \end{align} \begin{align} &p \cdot \nabla_p L(x, s,p) - L(x,s,p) \nonumber\\ &\qquad + \theta\Lambda |\nabla_{p} L(x, s,p)|\,I - \theta\,I \leq C p \otimes p - \frac{1}{C}\, |p|^m\,I \qquad\forall~\theta \in [0,\bar\theta], |s| \leq R. \label{Eq:Lcond2} \end{align} Note that \eqref{Eq:Lcond1} and \eqref{Eq:Lcond2} should be understood as inequalities between real symmetric matrices: $M \leq N$ if and only if $N - M$ is non-negative definite. Also, \eqref{Eq:Lcond1} implies that $L$ is non-decreasing in $s$. \begin{example}\label{Ex:QuadF} For all $m \geq 2$ and $\alpha, \beta \in C^{0,1}_{loc}({\mathbb R})$ such that $\beta(s) > \beta_0 > 0$ for some constant $\beta_0$, $\alpha$ is non-decreasing and $\beta$ is non-increasing, the operator \[ F[\psi] = \nabla^2 \psi + \alpha(\psi)\,\nabla\psi \otimes \nabla\psi - \beta(\psi)\,|\nabla \psi|^m\,I \] satisfies conditions \eqref{Eq:Lcond0}-\eqref{Eq:Lcond2}. \end{example} We now state our principle of propagation of touching points for operators of the form \eqref{Eq:FDef}. 
\begin{thm} Let $\Omega\subset\mathbb{R}^{n}$ ($n \geq 2$) be a non-empty bounded open set, $L: \Omega \times {\mathbb R} \times {\mathbb R}^n \rightarrow {\mathcal{S}^{n \times n}}$ be locally Lipschitz continuous and satisfy \eqref{Eq:Lcond0}, \eqref{Eq:Lcond1} and \eqref{Eq:Lcond2} for some $m > 1$, $F$ be given by \eqref{Eq:FDef} and $U$ be a non-empty open subset of ${\mathcal{S}^{n \times n}}$ satisfying \eqref{Eq:UCondPos} and \eqref{Eq:UCond*S}. If $w \in LSC(\bar\Omega)$ and $v\in USC(\bar\Omega)$ are respectively a supersolution and a subsolution of \eqref{Eq:FpsiEq} in $\Omega$, and if $w \geq v$ in $\Omega$ and $w > v$ on $\partial\Omega$, then $w > v$ in $\Omega$. \label{thm:CPNUE} \end{thm} Interchanging the role of $\psi$ and $-\psi$ and of $U$ and $\mathcal{S}^{n \times n} \setminus (- \bar U)$ (where $-\bar U = \{-M: M \in \bar U\}$), we see that an analogous result holds if one replaces \eqref{Eq:UCond*S} by \begin{equation} A\in U, c\in(1,\infty) \Rightarrow cA\in U, \label{Eq:UCond*SCat} \end{equation} and \eqref{Eq:Lcond2} by: for every $R > 0$ and $\Lambda > 0$, there exist positive constants $\bar \theta, C > 0$ such that, for $0 < \theta \leq \bar\theta$, $x \in \Omega$, $|s| \leq R$ and $p \in {\mathbb R}^n$, \begin{align} &p \cdot \nabla_p L(x, s,p) - L(x, s, p ) \nonumber\\ &\qquad\qquad - \theta\Lambda |\nabla_{p} L(x, s,p)|\,I + \theta\,I \geq -C p \otimes p + \frac{1}{C}\, |p|^m\,I . 
\label{Eq:Lcond2Cat} \end{align} We then obtain an equivalent statement of Theorem \ref{thm:CPNUE}: \begin{thm} Let $\Omega\subset\mathbb{R}^{n}$ ($n \geq 2$) be a non-empty bounded open set, $L: \Omega \times {\mathbb R} \times {\mathbb R}^n \rightarrow {\mathcal{S}^{n \times n}}$ be locally Lipschitz continuous and satisfy \eqref{Eq:Lcond0}, \eqref{Eq:Lcond1} and \eqref{Eq:Lcond2Cat} for some $m > 1$, $F$ be given by \eqref{Eq:FDef} and $U$ be a non-empty open subset of ${\mathcal{S}^{n \times n}}$ satisfying \eqref{Eq:UCondPos} and \eqref{Eq:UCond*SCat}. If $w \in LSC(\bar\Omega)$ and $v\in USC(\bar\Omega)$ are respectively a supersolution and a subsolution of \eqref{Eq:FpsiEq} in $\Omega$ and if $w \geq v$ in $\Omega$ and $w > v$ on $\partial\Omega$, then $w > v$ in $\Omega$. \label{thm:CPNUECat} \end{thm} Assuming the validity of the above theorem for the moment, we proceed with the \begin{proof}[Proof of Theorem \ref{thm:CPQuad}] If $\beta > \beta_0 > 0$, the result is covered by Theorem \ref{thm:CPNUE}. If $\beta < -\beta_0 < 0$, the result is covered by Theorem \ref{thm:CPNUECat}. It remains to consider the case where $\beta \equiv 0$ and $\alpha$ is constant. The operator $F$ then takes the form \[ F[\psi] = \nabla^2 \psi + \alpha\,\nabla\psi \otimes \nabla \psi. \] When $\alpha \neq 0$, we note that the functions $\tilde w = \frac{\alpha}{|\alpha|} e^{\alpha w}$ and $\tilde v = \frac{\alpha}{|\alpha|} e^{\alpha v}$ satisfy $\tilde w \in LSC(\bar\Omega)$, $\tilde v\in USC(\bar\Omega)$ and, in view of \eqref{Eq:UCone}, \[ \nabla^2 \tilde w = |\alpha|\,|\tilde w|\,F[w] \in {\mathcal{S}^{n \times n}} \setminus U \text{ and } \nabla^2 \tilde v = |\alpha|\,|\tilde v| F[v] \in \bar U. \] Therefore, we can assume without loss of generality that $\alpha = 0$, i.e. \[ F[\psi] = \nabla^2 \psi. \] In this case, note that \begin{equation} F[\psi + \mu\,|x|^2] = F[\psi] + 2\mu\,I. 
\label{Eq:FVab=0} \end{equation} An easy adaptation of the proof of Theorem \ref{thm:CPNUE} below (but using \eqref{Eq:FVab=0} instead of Lemma \ref{Lem:FVSub}) yields the result. \end{proof} We turn now to the proof of Theorem \ref{thm:CPNUE}. \subsection{Error in regularizations} The following result is a direct adaptation of \cite[Theorem 5.1]{CabreCaffBook} which estimates the error in \eqref{Eq:FpsiEq} when making regularizations by lower and upper envelopes. \begin{prop}\label{key lemma} Assume $n \geq 2$. Let $\Omega\subset\mathbb{R}^{n}$ be a bounded open set, $U$ be an open subset of ${\mathcal{S}^{n \times n}}$ satisfying \eqref{Eq:UCondPos}, $L: \Omega \times {\mathbb R} \times {\mathbb R}^n \rightarrow {\mathcal{S}^{n \times n}}$ be a locally Lipschitz continuous function satisfying \eqref{Eq:Lcond0} and the second inequality in \eqref{Eq:Lcond1} for some $m \geq 0$, and $F$ be given by \eqref{Eq:FDef}. For any $M > 0$, there exists $a > 0$ such that if $w \in LSC(\Omega)$ is a supersolution of \eqref{Eq:FpsiEq} in $\Omega$ and if $w_\epsilon$ is punctually second order differentiable at a point $x \in \Omega$ and $|w_\epsilon(x)| + |w(x_*)| \leq M$, then \begin{align*} &F[w_\epsilon](x) - a|x_* - x|( 1 + \frac{1}{\epsilon}|x_* - x|)\,|\nabla w_\epsilon(x)|^m\,I \in {\mathcal{S}^{n \times n}} \setminus U. \end{align*} Analogously, if $v \in USC(\Omega)$ is a subsolution of \eqref{Eq:FpsiEq} in $\Omega$, and if $v^\epsilon$ is punctually second order differentiable at a point $x \in \Omega$ and $|v^\epsilon(x)| + |v(x^*)| \leq M$, then \begin{align*} &F[v^\epsilon](x) + a|x^* - x|( 1 + \frac{1}{\epsilon}|x^* - x|)\,|\nabla v^\epsilon(x)|^m\,I \in U. \end{align*} \end{prop} \begin{proof} We only give the proof of the first assertion. The second assertion can be proved in a similar way. 
We have \begin{equation} w_{\epsilon}(x+z)\geq w_{\epsilon}(x)+\nabla w_{\epsilon}(x)\cdot z+\frac{1}{2}z^{T}\nabla^{2}w_{\epsilon}(x)z+o(|z|^{2}),\quad\mbox{ as }z\rightarrow 0.\label{yre2} \end{equation} By the definition of $w_{\epsilon}$, we have \begin{equation*} w_{\epsilon}(x+z)\leq w(x_{*}+z) +\frac{1}{\epsilon}|x_{*}-x|^{2},\label{yree} \end{equation*} and therefore, in view of (\ref{yre2}), \begin{align*} w(x_{*}+z)&\geq w_{\epsilon}(x+z) - \frac{1}{\epsilon}|x_{*}-x|^{2}\\ &\geq P_{\epsilon}(x_* + z)+o(|z|^{2}),\quad\mbox{ as }z\rightarrow 0, \end{align*} where $P_{\epsilon}$ is a quadratic polynomial with \begin{align*} P_{\epsilon}(x_*)&=w_{\epsilon}(x) -\frac{1}{\epsilon}|x_{*}-x|^{2} = w(x_*), \nonumber \\ \nabla P_{\epsilon}(x_*)&=\nabla w_{\epsilon}(x),\\ \nabla^{2} P_{\epsilon}(x_*)&=\nabla^{2} w_{\epsilon}(x). \end{align*} Since $w$ is a supersolution of \eqref{Eq:FpsiEq}, we thus have \[ \nabla^2 w_\epsilon(x) + L(x_*, w(x_*),\nabla w_{\epsilon}(x)) = F[P_{\epsilon}](x_*)\in {\mathcal{S}^{n \times n}}\setminus U.\label{fanyang} \] On the other hand, in view of \eqref{Eq:Lcond0}, \eqref{Eq:Lcond1} and $w(x_*) = w_{\epsilon}(x) -\frac{1}{\epsilon}|x_{*}-x|^{2} \leq w_\epsilon(x)$, \[ L(x, w_\epsilon(x),\nabla w_{\epsilon}(x)) - L(x_*, w(x_*),\nabla w_{\epsilon}(x)) \leq C(|x - x_*| + \frac{1}{\epsilon}|x - x_*|^2)|\nabla w_\epsilon(x)|^m\,I. \] The conclusion is readily seen thanks to \eqref{Eq:UCondPos}. \end{proof} \subsection{First variation of $F[\psi]$} As mentioned in the introduction, we would like to perturb a given function $\psi$ to another function $\tilde\psi$ in such a way that $F[\tilde\psi] $ is bounded from above/below by a multiple of $F[\psi]$ and with a favorable excess term. This will be important in controlling error accrued in other parts of the proof of Theorem \ref{thm:CPNUE} (e.g. in regularizations). 
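To illustrate the mechanism in the simplest setting, suppose for the moment that $L \equiv 0$, so that $F[\psi] = \nabla^2\psi$. Perturbing $\psi$ by the convex function $\mu\,e^{\alpha|x|^2}$ with $\alpha, \mu > 0$ gives
\[
F[\psi + \mu\,e^{\alpha|x|^2}] = F[\psi] + \mu\,e^{\alpha|x|^2}\,(2\alpha\,I + 4\alpha^2\,x \otimes x) \geq F[\psi] + 2\mu\,\alpha\,I,
\]
which produces a favorable excess term $2\mu\,\alpha\,I$. The lemmas below achieve a similar effect for general $L$, at the cost of a multiplicative factor in front of $F[\psi]$ and of the more elaborate perturbation $\mu\,(e^{\alpha|x|^2} + e^{-\beta\psi} - \tau)$.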
\begin{lem}\label{Lem:FVSub} Let $\Omega$ be an open bounded subset of ${\mathbb R}^n$, $n \geq 2$, $L: \Omega \times {\mathbb R} \times {\mathbb R}^n \rightarrow {\mathcal{S}^{n \times n}}$ be a locally Lipschitz continuous function satisfying \eqref{Eq:Lcond1} and \eqref{Eq:Lcond2} for some $m > 1$, $F$ be given by \eqref{Eq:FDef}, and $\psi: \Omega \rightarrow {\mathbb R} \cup \{\pm \infty\}$. For any $M > 0$, there exist positive constants $\mu_0, \alpha, \beta, \delta, K_0 > 0$, depending only on an upper bound of $M$, $L$ and $\Omega$, such that $$ \mu_0\,\beta\,\sup_\Omega e^{-\beta\psi} \leq \frac{1}{2}, $$ and, for any $0 < \mu < \mu_0$, $\tau \in {\mathbb R}$, the function $\tilde\psi_{\mu,\tau} = \psi + \mu \,(e^{\alpha|x|^2} + e^{-\beta\psi} - \tau)$ satisfies \[ F[\tilde\psi_\mu] \geq (1 - \mu\,\beta\,e^{-\beta\psi})F[\psi] + \mu\,K_0[(1 + |\nabla\psi|^m)\,I + \nabla \psi \otimes \nabla\psi] \] in the set \begin{align} \Omega^{M,\delta} &:= \Big\{x \in \Omega: \text{$\psi$ is punctually second order differentiable at $x$},\nonumber \\ &\qquad\qquad |\psi(x)| \leq M, \text{ and } e^{\alpha|x|^2} + e^{-\beta\psi(x)} - \tau \geq -\delta\Big\}.\label{Eq:OmegaMDef} \end{align} \end{lem} \begin{proof} In the proof, $C$ will denote some large positive constant which may increase from line to line but depends only on an upper bound for $M$, $L$ and $\Omega$. Eventually, we will choose large $\beta = \beta(C) > 0$, small $\alpha = \alpha(\beta, M, C) > 0$, and finally small $\mu_0 = \mu_0(\alpha, \beta,M, C) > 0$. We set $\varphi(x) = e^{\alpha|x|^2}$, $f(\psi) = - e^{-\beta\psi}$ and abbreviate $\tilde\psi_\mu = \tilde\psi_{\mu,\tau} = \psi + \mu\,(\varphi - f(\psi) - \tau)$. Note that $f'(\psi) > 0$. We assume in the sequel that $\alpha < 1$, $\delta < 1$ and \begin{align} \mu_0 \sup_\Omega [1 + f'(\psi)] &\leq \frac{1}{C} < \frac{1}{2}. \label{Eq:mu0Req1} \end{align} The following computation is done at a point in $\Omega^{M,\delta}$.
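The starting inequality in the computation below rests on the pointwise identity
\[
\nabla^2 \tilde\psi_\mu = (1 - \mu\,f'(\psi))\,\nabla^2\psi - \mu\,f''(\psi)\,\nabla\psi \otimes \nabla\psi + \mu\,\nabla^2\varphi,
\]
combined with the bound $\nabla^2 \varphi = \varphi\,(2\alpha\,I + 4\alpha^2\, x \otimes x) \geq 2\alpha\,\varphi\,I$, after adding and subtracting the lower order terms $L(x,\psi,\nabla\psi)$.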
We have \begin{align*} F[\tilde\psi_\mu] &\geq (1 - \mu\,f'(\psi)) F[\psi] - \mu\,f''(\psi) \nabla\psi \otimes \nabla\psi + 2\mu \alpha\,\varphi\, I \nonumber\\ &\qquad\qquad + L(x,\tilde\psi_\mu,\nabla \tilde\psi_\mu) - (1 - \mu\,f'(\psi)) L(x,\psi,\nabla\psi). \end{align*} Noting that $\varphi - f(\psi) - \tau \geq -\delta$ in $\Omega^{M,\delta}$, we deduce from \eqref{Eq:Lcond1} and \eqref{Eq:mu0Req1} that \[ L(x,\tilde\psi_\mu,\nabla \tilde\psi_\mu) \geq L(x,\psi,\nabla \tilde\psi_\mu) - C\,\mu\,\delta\,(|\nabla\psi|^m + \mu^m\,\alpha^m\,\varphi^m)\,I. \] Therefore, \begin{align} F[\tilde\psi_\mu] &\geq (1 - \mu\,f'(\psi)) F[\psi] - \mu\,f''(\psi) \nabla\psi \otimes \nabla\psi\nonumber\\ &\qquad\qquad + 2\mu \alpha\,(1 - C\delta\mu^m\alpha^{m-1}\varphi^{m-1})\varphi\, I - C\,\mu\,\delta|\nabla\psi|^m\,I \nonumber\\ &\qquad\qquad + L(x,\psi,\nabla \tilde\psi_\mu) - (1 - \mu\,f'(\psi)) L(x,\psi,\nabla\psi). \label{Eq:FtpsiEst1} \end{align} We proceed to estimate $L(x,\psi,\nabla \tilde\psi_\mu) - (1 - \mu\,f'(\psi)) L(x,\psi,\nabla\psi)$. For $0 \leq t \leq \mu$, let \[ g(t) = \frac{L(x,\psi,\nabla \tilde\psi_t)}{1 - t f'(\psi)}. \] We have \begin{align*} \frac{d}{dt} g(t) &\geq \frac{f'(\psi)}{(1 - t f'(\psi))^2} \Big[L(x,\psi,\nabla \tilde\psi_t) - \nabla\tilde\psi_t \cdot \nabla_p L(x,\psi,\nabla\tilde\psi_t)\\ &\qquad\qquad - \frac{C\alpha \varphi}{f'(\psi)} |\nabla_p L(x,\psi,\nabla\tilde\psi_t)|\,I \Big]. 
\end{align*} Thus, in view of \eqref{Eq:Lcond2} and \eqref{Eq:mu0Req1}, if $\alpha, \beta$ and $\delta$ satisfy \begin{align} \alpha \sup_\Omega \varphi[\frac{1}{f'(\psi)} + 1]\,&\leq \frac{1}{C},\label{Eq:alphaReq} \end{align} then, with $\Lambda = 8C$ and $\theta = \frac{\alpha \varphi}{8f'(\psi)}$ in \eqref{Eq:Lcond2}, \begin{align*} \frac{d}{dt} g(t) &\geq f'(\psi) \Big[-C \nabla\tilde\psi_t \otimes \nabla \tilde\psi_t + \frac{1}{C} |\nabla\tilde\psi_t|^m\,I\Big] - \frac{1}{2}\alpha\,\varphi \,I\\ &\geq f'(\psi) \Big[-C \nabla\psi \otimes \nabla \psi + \frac{1}{C} |\nabla\psi|^m\,I\Big] - \alpha\,\varphi \,I. \end{align*} This implies \begin{align} &L(x,\psi,\nabla \tilde\psi_\mu) - (1 - \mu\,f'(\psi)) L(x,\psi,\nabla\psi)\nonumber\\ &\qquad\qquad= (1 - \mu\,f'(\psi))[ g(\mu) - g(0)] \nonumber\\ &\qquad\qquad\geq \mu\,f'(\psi) \Big[-C \nabla\psi \otimes \nabla \psi + \frac{1}{C} |\nabla\psi|^m\,I\Big] - \mu\,\alpha\,\varphi\,I. \label{Eq:FtpsiEst2} \end{align} Combining \eqref{Eq:FtpsiEst1} and \eqref{Eq:FtpsiEst2} and using \eqref{Eq:alphaReq}, we obtain \begin{align} F[\tilde\psi_\mu] &\geq (1 - \mu\,f'(\psi)) F[\psi] + \mu\, \alpha\,\varphi I + \frac{1}{C}\, \mu\,(f'(\psi) - C\delta) |\nabla\psi|^m\,I \nonumber\\ &\qquad\qquad + \mu\,\big[-f''(\psi) - Cf'(\psi)\big]\,\nabla\psi \otimes \nabla\psi . \label{Eq:FtpsiEst2Y} \end{align} We now fix $C$ and proceed with the choice of $\alpha, \beta, \delta$ and $\mu_0$. First, choosing $\beta \geq 2C$ and recalling the definition of $f$, we have \[ -f''(\psi) - Cf'(\psi) = \beta(\beta - C)e^{-\beta \psi} \geq \frac{1}{2}\beta\,f'(\psi). \] Next, choose $\alpha$ such that \eqref{Eq:alphaReq} is satisfied and choose $\delta$ such that $f'(\psi) - C\delta \geq \frac{1}{2}f'(\psi)$. Finally, choose $\mu_0$ such that \eqref{Eq:mu0Req1} holds. 
We hence obtain from \eqref{Eq:FtpsiEst2Y} that \[ F[\tilde\psi_\mu] \geq (1 - \mu\,f'(\psi)) F[\psi] + \mu \,\alpha\,\varphi\, I + \frac{1}{C}\, \mu\,f'(\psi) |\nabla\psi|^m\,I + \frac{1}{2}\beta\,\mu\,f'(\psi) \nabla \psi \otimes \nabla \psi. \] This completes the proof. \end{proof} \begin{lem} Let $\Omega$ be an open bounded subset of ${\mathbb R}^n$, $n \geq 2$, $L: \Omega \times {\mathbb R} \times {\mathbb R}^n \rightarrow {\mathcal{S}^{n \times n}}$ be a locally Lipschitz continuous function satisfying \eqref{Eq:Lcond1} and \eqref{Eq:Lcond2} for some $m > 1$, $F$ be given by \eqref{Eq:FDef}, and $\psi: \Omega \rightarrow {\mathbb R} \cup \{\pm \infty\}$. For any $M > 0$, there exist positive constants $\mu_0, \alpha, \beta, \delta, K_0 > 0$, depending only on an upper bound of $M$, $L$ and $\Omega$, such that, for any $0 < \mu < \mu_0$, $\tau \in {\mathbb R}$, the function $\hat\psi_{\mu,\tau} = \psi - \mu \,(e^{\alpha|x|^2} + e^{-\beta\psi} - \tau)$ satisfies \[ F[\hat\psi_\mu] \leq (1 + \mu\,\beta\,e^{-\beta\psi})F[\psi] - \mu\,K_0[(1 + |\nabla \psi|^m)\,I + \nabla\psi \otimes \nabla \psi] \] in the set $\Omega^{M,\delta}$ defined by \eqref{Eq:OmegaMDef}. \end{lem} \begin{proof} The proof is similar to that of Lemma \ref{Lem:FVSub} and is omitted. \end{proof} \subsection{Proof of Theorem \ref{thm:CPNUE}} Arguing by contradiction, we suppose that there exists $\gamma>0$ such that \begin{equation*} \max\limits_{\bar\Omega}(v-w) = 0\quad \text{ and }\quad (v-w)(x)\leq-\gamma,\quad\forall x\in\overline{\Omega\setminus \Omega_{\gamma}}\label{lpl} \end{equation*} where $\Omega_\gamma = \{x \in \Omega: \textrm{dist}(x, \partial\Omega) > \gamma\}$. For $\epsilon > 0$, let $v^\epsilon$ and $w_\epsilon$ be the $\epsilon$-upper and $\epsilon$-lower envelopes of $v$ and $w$ respectively (see Section \ref{Sec:Prelim}). We note that \[ v \leq v^\epsilon \leq \max_{\bar\Omega} v < +\infty \text{ and } w \geq w_\epsilon \geq \min_{\bar\Omega} w > -\infty.
\] In the sequel, we use $C$ to denote some positive constant which depends on $\max_{\bar\Omega} v$, $\min_{\bar\Omega} w$, $L$ and $\Omega$ but is always independent of $\epsilon$. By Lemma \ref{Lem:FVSub}, we can find $\bar \mu > 0$, $\delta > 0$ and a smooth positive function $f: {\mathbb R}^n \times {\mathbb R} \rightarrow (0,\infty)$, depending only on $\max_{\bar\Omega} v$, $\min_{\bar\Omega} w$, $L$ and $\Omega$, such that $f$ is decreasing with respect to the $s$-variable, $\bar\mu \sup_\Omega |\partial_s f(\cdot, v^\epsilon)| \leq\frac{1}{2}$ and, for $\mu \in (0,\bar\mu)$, $\tau \in {\mathbb R}$ and $\tilde v_{\epsilon,\tau} = v^\epsilon + \mu (f(\cdot,v^\epsilon) - \tau)$, there holds \begin{equation} F[\tilde v_{\epsilon,\tau}] \geq (1 - \mu|\partial_s f(\cdot, v^\epsilon)|)F[v^\epsilon] + \frac{\mu}{C}(1 + |\nabla v^\epsilon|^m)\,I \label{Eq:Ftw} \end{equation} in the set \begin{multline*} \tilde\Omega_{\epsilon} := \Big\{x \in \Omega_{\gamma/2}: \text{$v^\epsilon$ is punctually second order differentiable at $x$},\\ v^\epsilon(x) \geq \min_{\bar\Omega} w- 1 \text{ and } f(x,v^\epsilon(x)) - \tau \geq -\delta \Big\}. \end{multline*} Note that $\bar\mu$ and $\delta$ are independent of $\epsilon$. Furthermore, in view of \eqref{Eq:UpLowConv}, there exists $\bar \eta > 0$ independent of $\epsilon$ such that, for all small $\epsilon$ and $\eta \in (0,\bar\eta)$, one can (uniquely) find $\tau = \tau(\epsilon,\eta)$ such that the function $\xi_{\epsilon,\eta} := \tilde v_{\epsilon,\tau} - w_\epsilon$ satisfies \[ \max_{\bar\Omega} \xi_{\epsilon,\eta} = \eta \text{ and } \xi_{\epsilon,\eta} < -\frac{\gamma}{2} \text{ in }\overline{\Omega\setminus \Omega_{\gamma}}. \] Let $\Gamma_{\xi_{\epsilon,\eta}^{+}}$ denote the concave envelope of $\xi_{\epsilon,\eta}^{+}:=\max\{\xi_{\epsilon,\eta},0\}$ on $\bar\Omega$. Then by \eqref{utr}, we have \begin{equation*} \nabla^{2}\xi_{\epsilon,\eta}\geq-\frac{4}{\epsilon}I\quad\mbox{ a.e. in }\Omega_{\gamma}. 
\end{equation*} By \cite[Lemma 3.5]{CabreCaffBook}, we have \begin{equation*} \int_{\{\xi_{\epsilon,\eta}=\Gamma_{\xi_{\epsilon,\eta}^{+}}\}}\mbox{det}(-\nabla^{2}\Gamma_{\xi_{\epsilon,\eta}^{+}})>0, \end{equation*} which implies that the Lebesgue measure of $\{\xi_{\epsilon,\eta}=\Gamma_{\xi_{\epsilon,\eta}^{+}}\}$ is positive. Then there exists $x_{\epsilon,\eta}\in\{\xi_{\epsilon,\eta}=\Gamma_{\xi_{\epsilon,\eta}^{+}}\}\cap\Omega_{\gamma}$ such that both $v^\epsilon$ and $w_\epsilon$ are punctually second order differentiable at $x_{\epsilon,\eta}$, \begin{equation} 0<\xi_{\epsilon,\eta}(x_{\epsilon,\eta})\leq\eta,\label{Eq:29Dec16b} \end{equation} \begin{equation} |\nabla\xi_{\epsilon,\eta}(x_{\epsilon,\eta})| = |\nabla \tilde v_{\epsilon,\tau}(x_{\epsilon,\eta})- \nabla w_{\epsilon}(x_{\epsilon,\eta})| \leq C\eta,\label{Eq:29Dec16c} \end{equation} and \begin{equation} \nabla^{2}\xi_{\epsilon,\eta}(x_{\epsilon,\eta})=\nabla^2 \tilde v_{\epsilon,\tau}(x_{\epsilon,\eta})- \nabla^2 w_{\epsilon}(x_{\epsilon,\eta})\leq 0.\label{Eq:29Dec162m} \end{equation} From \eqref{Eq:29Dec16b} and the definition of $\tilde v_{\epsilon,\tau}$, we have \begin{equation} f(x_{\epsilon,\eta},v^\epsilon(x_{\epsilon,\eta})) - \tau > \frac{1}{\mu}(w_\epsilon(x_{\epsilon,\eta}) - v^\epsilon(x_{\epsilon,\eta})). \label{Eq:f-tau->0} \end{equation} Note that, as $w \geq v$ in $\Omega$, Lemma \ref{lem:EquiSC} implies that \[ \liminf_{\epsilon \rightarrow 0, \eta \rightarrow 0}[w_\epsilon(x_{\epsilon,\eta}) - v^\epsilon(x_{\epsilon,\eta})] \geq 0. \] Hence, by shrinking $\mu$ and $\bar\eta$ if necessary, we may assume for all small $\epsilon$ that \[ f(x_{\epsilon,\eta},v^\epsilon(x_{\epsilon,\eta})) - \tau \geq -\delta, \qquad v^\epsilon(x_{\epsilon,\eta}) \geq \min_{\bar\Omega} w - 1, \quad \text{ and } w_\epsilon(x_{\epsilon,\eta}) \leq \max_{\bar\Omega} v + 1.
\] We deduce that $x_{\epsilon,\eta} \in \tilde\Omega_{\epsilon}$ and thus obtain from \eqref{Eq:Ftw} that \begin{equation} F[\tilde v_{\epsilon,\tau}](x_{\epsilon,\eta}) \geq (1 - \mu|\partial_s f(x_{\epsilon,\eta}, v^\epsilon(x_{\epsilon,\eta}))|)F[v^\epsilon](x_{\epsilon,\eta}) + \frac{\mu}{C}(1 + |\nabla v^\epsilon(x_{\epsilon,\eta})|^m)\,I. \label{Eq:FtwInAction} \end{equation} Next, the proof of \eqref{Eq:UpLowGradEst} implies that, for any unit vector $e$, \[ \partial_e v^{\epsilon}(x_{\epsilon,\eta}) \geq - \frac{C}{\sqrt{\epsilon}} \text{ and } \partial_e w_{\epsilon}(x_{\epsilon,\eta}) \leq \frac{C}{\sqrt{\epsilon}}. \] This together with \eqref{Eq:29Dec16c} implies that, for all sufficiently small $\eta$, \[ |\nabla \tilde v_{\epsilon,\tau}(x_{\epsilon,\eta})| + |\nabla w_{\epsilon}(x_{\epsilon,\eta})| \leq \frac{C}{\sqrt{\epsilon}}. \] Thus, by the local Lipschitz regularity of $L$, \[ L(x_{\epsilon,\eta}, w_\epsilon(x_{\epsilon,\eta}),\nabla w_{\epsilon}(x_{\epsilon,\eta})) - L(x_{\epsilon,\eta}, \tilde v_{\epsilon,\tau}(x_{\epsilon,\eta}), \nabla \tilde v_{\epsilon,\tau}(x_{\epsilon,\eta})) \geq -C(\epsilon)\eta\,I. \] This together with \eqref{Eq:29Dec162m} implies that \begin{equation} F[w_{\epsilon}](x_{\epsilon,\eta}) \geq F[\tilde v_{\epsilon,\tau}](x_{\epsilon,\eta}) - C(\epsilon)\,\eta\,I. \label{Eq:CPFwtv>} \end{equation} Recalling \eqref{Eq:FtwInAction}, we can find $\hat\eta = \hat\eta(\epsilon)$ such that, for $0 < \eta < \hat \eta(\epsilon)$, there holds \begin{equation} F[w_{\epsilon}](x_{\epsilon,\eta}) \geq (1 - \mu|\partial_s f(x_{\epsilon,\eta}, v^\epsilon(x_{\epsilon,\eta}))|)F[v^\epsilon](x_{\epsilon,\eta}) + \frac{\mu}{C}(1 + |\nabla v^\epsilon(x_{\epsilon,\eta})|^m)\,I. \label{Eq:Fweve>} \end{equation} We next claim that \begin{equation} \liminf_{\epsilon \rightarrow 0, \eta \rightarrow 0} \frac{1}{\epsilon}\Big[|(x_{\epsilon,\eta})_* - x_{\epsilon,\eta}|^2 + |(x_{\epsilon,\eta})^* - x_{\epsilon,\eta}|^2\Big] \leq C\mu^2.
\label{Eq:30Rep} \end{equation} Assuming this claim for now, we use Proposition \ref{key lemma} to find $a > 0$ independent of $\epsilon$ and $\eta$ such that one has, in $\Omega_\gamma$, \begin{align} F[w_{\epsilon}](x_{\epsilon,\eta})- a|(x_{\epsilon,\eta})_* - x_{\epsilon,\eta}|(1+\frac{1}{\epsilon}|(x_{\epsilon,\eta})_* - x_{\epsilon,\eta}|)\, |\nabla w_\epsilon(x_{\epsilon,\eta})|^m\,I &\in {\mathcal{S}^{n \times n}}\setminus U, \label{Eq:Fwe}\\ F[v^{\epsilon}](x_{\epsilon,\eta}) + a|(x_{\epsilon,\eta})^* - x_{\epsilon,\eta}|(1 + \frac{1}{\epsilon}|(x_{\epsilon,\eta})^* - x_{\epsilon,\eta}|)\, |\nabla v^\epsilon(x_{\epsilon,\eta})|^m\,I &\in \overline{U}, \label{Eq:Fve} \end{align} where $x_*$ and $x^*$ are as in Section \ref{Sec:Prelim}. The relations \eqref{Eq:Fweve>}, \eqref{Eq:Fwe} and \eqref{Eq:Fve} amount to a contradiction for sufficiently small $\mu$ thanks to \eqref{Eq:UCondPos} and \eqref{Eq:UCond*S}. Therefore, to conclude the proof it suffices to prove the claim \eqref{Eq:30Rep}. Pick some $\eta(\epsilon) < \hat\eta(\epsilon)$ such that $\eta(\epsilon) \rightarrow 0$ as $\epsilon \rightarrow 0$. Pick a sequence $\epsilon_m \rightarrow 0$ such that, for $x_m := x_{\epsilon_m,\eta(\epsilon_m)}$, the sequence $\frac{1}{\epsilon_m}[|(x_m)^* - x_m|^2 + |(x_m)_* - x_m|^2]$ converges to a limit which we will show to be no larger than $C\mu^2$. We will abbreviate $\tau_m := \tau(\epsilon_m, \eta(\epsilon_m))$, $v^m = v^{\epsilon_m}$, $w_m = w_{\epsilon_m}$. Without loss of generality, we may also assume that $x_m \rightarrow x_0 \in \Omega$, $f(x_m, v^m(x_m)) \rightarrow f_0$ and $\tau_m \rightarrow \tau_0$. As $\max_{\bar\Omega} \xi_{\epsilon,\eta} = \eta$, we have in view of \eqref{Eq:UpLowConv} that \begin{multline} v(x_0) - w(x_0) + \mu(f(x_0, v(x_0)) -\tau_0) \\ = \lim_{m \rightarrow \infty} \big\{v^{m}(x_0) - w_{m}(x_0) + \mu(f(x_0, v^{m}(x_0)) -\tau_m) \big\} \leq 0. 
\label{Eq:vwmftLim} \end{multline} On the other hand, by \eqref{Eq:29Dec16b} and the fact that $f$ is decreasing in $s$, we have \begin{align*} f(x_0, \limsup_{m \rightarrow \infty} v^m(x_m)) &\leq f_0 = \lim_{m \rightarrow \infty} f(x_{m}, v^{m}(x_m))\\ & \leq \limsup_{m \rightarrow \infty} f(x_m, w_m(x_m) - \mu(f(x_m, v^m(x_m)) - \tau_m))\\ & \leq f(x_0, \liminf_{m \rightarrow \infty} w_m(x_m) - \mu(f_0 - \tau_0)), \end{align*} which implies, in view of Lemma \ref{lem:EquiSC} and the fact that $w \geq v$, that \[ f(x_0, w(x_0)) \leq f(x_0, v(x_0)) \leq f_0 \leq f(x_0, w(x_0) - \mu(f_0 - \tau_0)), \] which further implies that \[ 0 \leq f_0 - f(x_0, v(x_0)) \leq C\mu. \] Together with \eqref{Eq:vwmftLim}, this implies that \[ v(x_0) - w(x_0) + \mu(f_0 -\tau_0) \leq C\mu^2. \] We are now ready to wrap up the argument. As $(x_m)^* - x_m \rightarrow 0$ and $(x_m)_* - x_m \rightarrow 0$ as $m \rightarrow \infty$, we have $(x_m)_* \rightarrow x_0$ and $(x_m)^* \rightarrow x_0$. As $v$ is upper semi-continuous and $w$ is lower semi-continuous, we have \[ \limsup_{m \rightarrow \infty} v((x_m)^*) \leq v(x_0) \text{ and } \liminf_{m \rightarrow \infty} w((x_m)_*) \geq w(x_0). \] Thus, by the left half of \eqref{Eq:29Dec16b}, \begin{align*} 0 &\leq \limsup_{m \rightarrow \infty} \frac{1}{\epsilon_m}[|(x_m)^* - x_m|^2 + |(x_m)_* - x_m|^2]\\ &\leq \limsup_{m \rightarrow \infty} \big\{v((x_m)^*) - w((x_m)_*) + \mu(f(x_m,v^{m}(x_m)) - \tau_m)\big\}\\ &\leq v(x_0) - w(x_0) + \mu(f_0 - \tau_0) \leq C\mu^2. \end{align*} We have proved \eqref{Eq:30Rep}, and thus concluded the proof. \hfill$\Box$ \subsection{Proof of Corollary \ref{cor:CPSing}} We will use a result from \cite{CafLiNir11}.
\begin{thm}[\cite{CafLiNir11}]\label{thm:CLNRemSing} Let $n \geq 1$, $\Omega \subset {\mathbb R}^n$ be a non-empty open set, and let $a,b \in C^0(\Omega \times {\mathbb R} \times {\mathbb R}^n)$ satisfy $$a(x,z,p) \geq 0 \text{ for all } x \in \Omega, z \in {\mathbb R}, p \in {\mathbb R}^n,$$ and $U \subset {\mathcal{S}^{n \times n}}$ be a non-empty open set satisfying \eqref{Eq:UCondPos}. If $u \in LSC(\bar \Omega)$ satisfies % \[ \Delta u \leq C \text{ in } \Omega \text{ in the viscosity sense}, \] % and, for some subset $E$ of $\Omega$ of zero Lebesgue measure, \[ a(x,u,\nabla u)\nabla^2 u + b(x,u,\nabla u) \in {\mathcal{S}^{n \times n}} \setminus U \text{ in } \Omega \setminus E \text{ in the viscosity sense}, \] then \[ a(x,u,\nabla u)\nabla^2 u + b(x,u,\nabla u) \in {\mathcal{S}^{n \times n}} \setminus U \text{ in } \Omega \text{ in the viscosity sense}. \] \end{thm} \begin{rem} This result was not stated as above in \cite{CafLiNir11}. However, the proof of \cite[Theorem 1.2]{CafLiNir11} in effect yields the above result. \end{rem} \begin{proof}[Proof of Corollary \ref{cor:CPSing}] Note that constant functions are solutions of \eqref{Eq:FpsiEq} and the maximum of two subsolutions is a subsolution. It thus suffices to consider the case when \[ \inf_{\bar\Omega} v > -\infty. \] By \eqref{gooo}, \[ \Delta v + (\alpha - n\beta)|\nabla v|^2 \geq 0 \text{ in } \Omega \setminus E \text{ in the viscosity sense}. \] Note that, when \eqref{Eq:anb2} holds, the function $\tilde u = e^{-\frac{1}{|\alpha - n\beta|}v}$ satisfies \[ \Delta \tilde u \leq 0 \text{ in } \Omega \setminus E \text{ in the viscosity sense}. \] As $\tilde u > 0$ in $\Omega\setminus E$ and $E$ has zero capacity, the maximum principle then implies that $\tilde u > \frac{1}{c} > 0$ in $\bar \Omega \setminus E$ for some constant $c > 0$, and hence $\sup_{\bar\Omega \setminus E} v < +\infty$. Thus we can assume without loss of generality that \eqref{Eq:anb1} holds.
In view of the comparison principle Theorem \ref{thm:CPQuad}(b), it suffices to show that \begin{equation} F[v] \in \bar U \text{ in } \Omega \text{ in the viscosity sense}, \label{Eq:CPSv+E} \end{equation} where we define, for $x \in E$, \[ v(x) = \limsup_{y \rightarrow x, y \in \Omega \setminus E} v(y) < +\infty. \] Indeed, we note that, for $C > |\alpha - n\beta|$, the function $u = -e^{Cv} \in LSC(\bar\Omega)$ satisfies $\inf_{\bar\Omega} u > -\infty$, $\sup_{\bar\Omega} u < 0$ and \[ \Delta u = Cu(\Delta v + C|\nabla v|^2) \leq 0 \text{ in } \Omega \setminus E \text{ in the viscosity sense}. \] Since $E$ has zero capacity, it follows that \[ \Delta u \leq 0 \text{ in } \Omega \text{ in the viscosity sense}. \] An application of Theorem \ref{thm:CLNRemSing} (to the set $\tilde U = {\mathcal{S}^{n \times n}} \setminus (-\bar U)$) then implies that \[ F[v] = \frac{1}{C}\frac{\nabla^2 u}{u} - \frac{1}{C} \frac{\nabla u \otimes \nabla u}{u^2} + L\Big(x,\frac{1}{C}\ln(-u), \frac{\nabla u}{Cu}\Big) \in \bar U \text{ in } \Omega \text{ in the viscosity sense}, \] which proves \eqref{Eq:CPSv+E}, and hence the assertion. \end{proof} \subsection{Counterexamples to the propagation principle}\label{Sec:CPCounterex} In this section, we give examples illustrating that degenerate ellipticity, i.e. \eqref{Eq:UCondPos}, together with the properness and regularity of $L$, is insufficient to ensure the validity of the propagation principle. These examples will also illustrate the importance of various technical assumptions in Theorems \ref{thm:CPQuad} and \ref{thm:CPNUE}. Let $a, b \in C^1_{loc}([0,\infty))$ and consider for now a rotationally invariant operator $F$ of the form \begin{equation} F[\psi] = \nabla^2 \psi + a(|\nabla \psi|)\nabla\psi \otimes \nabla \psi + b(|\nabla\psi|)I. \label{Eq:FpiRotInv} \end{equation} In other words, we have \[ L(p) = a(|p|) p \otimes p + b(|p|)\,I. \] Note that although $a, b$ are locally differentiable, $L$ is in general only locally Lipschitz.
$L$ is locally differentiable if and only if $b'(0) = 0$. The following example suggests that some care is needed if one allows $m = 1$ in condition \eqref{Eq:Lcond2} (in the context of Theorem \ref{thm:CPNUE}). \begin{prop}\label{prop:Ctex} Let $a, b \in C^1([0,\infty))$ and $F$ be of the form \eqref{Eq:FpiRotInv}. If \[ b(0) = 0 \text{ and } b'(0) \neq 0, \] then the propagation principle does not hold for $F$, namely there exist a bounded domain $\Omega \subset {\mathbb R}^n$, a non-empty open set $U \subset {\mathcal{S}^{n \times n}}$ satisfying \eqref{Eq:UCondPos}, and a supersolution $w \in C^2(\bar\Omega)$ and a subsolution $v \in C^2(\bar\Omega)$ of \eqref{Eq:FpsiEq} in $\Omega$ such that $w > v$ on $\partial\Omega$, but $\min_{\bar\Omega} (w - v) = 0$. \end{prop} \begin{proof} Considering $F[-\psi]$ instead of $F[\psi]$ if necessary, we can assume without loss of generality that \begin{equation} b'(0) < 0. \label{Eq:b'0<0} \end{equation} Let $U$ be the set of positive definite symmetric $n \times n$ matrices. Note that $v \equiv 0$ is a solution of \eqref{Eq:FpsiEq} on ${\mathbb R}^n$. Since $L$ is independent of $\psi$, it suffices to exhibit a bounded domain $\Omega$ and a supersolution $w \in C^2(\bar\Omega)$ of \eqref{Eq:FpsiEq} in $\Omega$ such that $w > 0$ on $\partial\Omega$, but $\min_{\bar\Omega} w = 0$. In view of \eqref{Eq:b'0<0} and the fact that $b(0) = 0$, there exist some $r_0 > 0$ and $\delta > 0$ such that \begin{equation} b(s) < 0 \text{ and } r_0 > \frac{s}{|b(s)|} \text{ for all } s \in (0,\delta). \label{Eq:Ctexr0} \end{equation} Let $\Omega = \{r_0 -1 < |x| < r_0 + 1\}$ and $w(x) = w(|x|)$ for some $w \in C^2([r_0 - 1 ,r_0 + 1])$ satisfying $w(r_0) = w'(r_0) = 0$ and \begin{align} w'(r) &\in (-\delta,0) \text{ for } r \in [r_0-1,r_0),\label{Eq:Ctexw'left}\\ w'(r) &\in (0,\delta) \text{ for } r \in (r_0, r_0+1].\label{Eq:Ctexw'right} \end{align} Then $w > 0$ on $\partial\Omega$ and $\min_{\bar\Omega} w = 0$.
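In the computation below we use the standard fact that a radial function $w(x) = w(r)$, $r = |x|$, has Hessian
\[
\nabla^2 w = w''(r)\,\frac{x}{r} \otimes \frac{x}{r} + \frac{w'(r)}{r}\,\Big(I - \frac{x}{r} \otimes \frac{x}{r}\Big),
\]
with eigenvalues $w''$ (in the radial direction) and $\frac{1}{r}w'$ (with multiplicity $n-1$); since $\nabla w \otimes \nabla w = (w')^2\,\frac{x}{r} \otimes \frac{x}{r}$, the matrix $F[w]$ is diagonal in the same frame.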
To conclude the proof, we check that $F[w]$ is not positive definite. Indeed, the eigenvalues of $F[w]$ are given by $$ \lambda(F[w]) = (\mu, \nu, \ldots, \nu) \text{ where } \mu = w'' + a(|w'|)|w'|^2 + b(|w'|) \text{ and } \nu = \frac{1}{r}w' + b(|w'|). $$ Now, for $r < r_0$, we have $w' < 0$ (thanks to \eqref{Eq:Ctexw'left}) and $b(|w'|) < 0$ and so $\nu < 0$. For $r > r_0$, we have, in view of \eqref{Eq:Ctexr0} and \eqref{Eq:Ctexw'right}, \[ \nu = w'\Big(\frac{1}{r} - \frac{|b(w')|}{w'}\Big) < w'\Big(\frac{1}{r} - \frac{1}{r_0}\Big) < 0. \] Also, $\nu = 0$ when $r = r_0$. It thus follows that $\nu \leq 0$ in $(r_0 - 1, r_0 + 1)$, i.e. $F[w]$ is not positive definite. The proof is complete. \end{proof} The previous result shows that the propagation principle does not hold for general operators of the form \eqref{Eq:FDef}. However, it is easy to see that the function $L$ in Proposition \ref{prop:Ctex} is Lipschitz but not $C^1$. We will next construct some counterexamples with smooth $L$. For $\alpha \in {\mathbb R}$, consider the rotationally invariant operator \begin{equation} F[\psi] = \nabla^2 \psi - (\psi^3\,|\nabla\psi|^{10} + \alpha\,\psi\,|\nabla\psi|^6 + |\nabla \psi|^4)I, \label{Eq:F2/3Form} \end{equation} i.e. \[ L(s,p) = -(s^3|p|^{10} + \alpha\,s |p|^6 + |p|^4)I, \] which is an analytic function of $s$ and $p$. Note that neither condition \eqref{Eq:Lcond2} nor condition \eqref{Eq:Lcond2Cat} is satisfied for this function $L$. Note also that the leading part of $L(s,p)$ changes sign as $s$ varies -- this should be compared with the assumption that $\beta(w)$ is of one sign in Theorem \ref{thm:CPQuad}. \begin{prop}\label{prop:beta<>0} Let $n \geq 2$, $U$ be the set of positive definite symmetric $n \times n$ matrices, and $F$ be of the form \eqref{Eq:F2/3Form} for some $\alpha < -\frac{5}{2}$.
Then the propagation principle does not hold: there exists a bounded domain $\Omega$, a supersolution $w$ and a subsolution $v$ of \eqref{Eq:FpsiEq} in $\Omega$ such that $w > v$ on $\partial\Omega$ but $\min_{\bar\Omega} (w - v) = 0$. \end{prop} \begin{proof} Fix some $r_0 > 0$. For $t \in {\mathbb R}$, let \begin{equation} \psi_t(x) = \psi_t(r) = t^{\frac{1}{3}}\,|r - r_0|^{\frac{2}{3}}, \qquad \text{ where }r = |x|. \label{Eq:psitbeta<>0} \end{equation} The eigenvalues of $F[\psi_t]$ are $(\lambda_{1,t}, \lambda_{2,t}, \ldots, \lambda_{2,t})$ where \begin{align*} \lambda_{1,t} &= \psi_t'' - \psi_t^3|\psi_t'|^{10} - \alpha\,\psi_t|\psi_t'|^6 - |\psi_t'|^4\\ &= - \frac{2}{59049}\frac{t^{\frac{1}{3}}(8P_4(t) + 6561)}{|r-r_0|^{\frac{4}{3}}},\\ \lambda_{2,t} &= \frac{1}{r} \psi_t' - \psi_t^3|\psi_t'|^{10} - \alpha\,\psi_t|\psi_t'|^6 - |\psi_t'|^4\\ &= -\frac{2}{59049}\frac{t^{\frac{1}{3}}(8 P_4(t) -19683\frac{r-r_0}{r})}{|r-r_0|^{\frac{4}{3}}}, \end{align*} and where $P_4(t) = 64\,t^4 + 324\,\alpha\,t^2 + 729\,t$. Note that $P_4(0) = 0$, and, as $\alpha < -\frac{5}{2}$, \begin{align*} P_4(-2) &= -434 + 1296\,\alpha< -3674,\\ P_4\Big(\frac{9}{4}\Big) &= \frac{6561}{4}(2 + \alpha) < - \frac{6561}{8}. \end{align*} It follows that the equation $8P_4(t) + 6561 = 0$ has four roots $t_1, \ldots, t_4$ satisfying $t_1 < -2 < t_2 < 0 < t_3 < \frac{9}{4} < t_4$. In particular, we have $\lambda_{1,t_i}(r) = 0$ for $r \neq r_0$, $i = 1, \ldots, 4$. Also, from the expression of $\lambda_{2,t}$, we can find some small $\delta > 0$ such that \[ t_i\,\lambda_{2,t_i}(r) > 0 \text{ for } r \neq r_0, |r - r_0| \leq \delta. \] In addition, there exists $t_0 < t_1$ such that \[ \lambda_{1,t_0}(r) >0 \text{ and } \lambda_{2,t_0}(r) > 0\text{ for } r \neq r_0, |r - r_0| \leq \delta. 
\] We define, \begin{align*} w(x) &= w(r) = \left\{\begin{array}{ll} \psi_{t_4}(r) & \text{ for } r_0 \leq r \leq r_0 + \delta,\\ \psi_{t_2}(r) & \text{ for } r_0 -\delta \leq r < r_0, \end{array}\right.\\ v(x) &= v(r) = \left\{\begin{array}{ll} \psi_{t_3}(r) & \text{ for } r_0 \leq r \leq r_0 + \delta,\\ \psi_{t_0}(r) & \text{ for } r_0 -\delta \leq r < r_0. \end{array}\right. \end{align*} It is readily seen that $w$ and $v$ are respectively a supersolution and a subsolution of \eqref{Eq:FpsiEq} in $\Omega = \{|r - r_0| < \delta\}$, $w \geq v$ in $\Omega$ and $\{w = v\} = \{r = r_0\}$. This finishes the proof. \end{proof} The previous example can be modified to give a counterexample to the propagation principle with $L$ being non-increasing in $s$. \begin{prop}\label{prop:CtexNonDec} Let $n \geq 2$ and $U$ be the set of positive definite symmetric $n \times n$ matrices. There exists a smooth function $L: {\mathbb R} \times {\mathbb R}^n \rightarrow \mathcal{S}^{n \times n}$ such that $L$ is non-increasing in $s$ but the propagation principle does not hold for $F$ of the form \eqref{Eq:FDef}: there exists a bounded domain $\Omega$, a supersolution $w$ and a subsolution $v$ of \eqref{Eq:FpsiEq} in $\Omega$ such that $w > v$ on $\partial\Omega$ but $\min_{\bar\Omega} (w - v) = 0$. \end{prop} \begin{proof} For $\alpha \in {\mathbb R}$ to be fixed, consider \[ \tilde L(s,p) = -(s^3|p|^{10} + \alpha\,s |p|^6 + \frac{1}{100}|p|^4)I. \] We first show that the propagation principle does not hold for $\tilde F = \nabla^2 + \tilde L$ as in the proof of Proposition \ref{prop:beta<>0}. Fix some $r_0 > 0$. For $t \in {\mathbb R}$, define $\psi_t$ by \eqref{Eq:psitbeta<>0}. 
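For the reader's convenience, we record that, for $r \neq r_0$,
\[
\psi_t'(r) = \frac{2}{3}\,t^{\frac{1}{3}}\,|r - r_0|^{-\frac{1}{3}}\,{\rm sgn}(r - r_0), \qquad \psi_t''(r) = -\frac{2}{9}\,t^{\frac{1}{3}}\,|r - r_0|^{-\frac{4}{3}},
\]
and consequently $\psi_t\,(\psi_t')^2 = \frac{4}{9}\,t$ independently of $r$; this last identity will also be used at the end of the proof.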
The eigenvalues of $\tilde F[\psi_t]$ are $(\lambda_{1,t}, \lambda_{2,t}, \ldots, \lambda_{2,t})$ where \begin{align*} \lambda_{1,t} &= \psi_t'' - \psi_t^3|\psi_t'|^{10} - \alpha\,\psi_t|\psi_t'|^6 - \frac{1}{100}|\psi_t'|^4\\ &= - \frac{2}{1476225}\frac{t^{\frac{1}{3}}(2\tilde P_4(t) + 164025)}{|r-r_0|^{\frac{4}{3}}},\\ \lambda_{2,t} &= \frac{1}{r} \psi_t' - \psi_t^3|\psi_t'|^{10} - \alpha\,\psi_t|\psi_t'|^6 - \frac{1}{100}|\psi_t'|^4\\ &= -\frac{2}{1476225}\frac{t^{\frac{1}{3}}(2 \tilde P_4(t) - 492075\frac{r-r_0}{r})}{|r-r_0|^{\frac{4}{3}}}, \end{align*} and where $\tilde P_4(t) = 6400\,t^4 + 32400\,\alpha\,t^2 + 729\,t$. We next fix $\alpha = -\frac{36}{25}$. Then $\tilde P_4(-2) = -85682$, $\tilde P_4(-\frac{8}{5}) = - \frac{1966568}{25}$, $\tilde P_4(\frac{8}{5}) = -\frac{1908248}{25}$, $\tilde P_4(2) = -82766$ and so the equation $2\tilde P_4(t) + 164025 = 0$ has four roots $\tilde t_1, \ldots, \tilde t_4$ satisfying $\tilde t_1 < -2 < \tilde t_2 < -\frac{8}{5} < \frac{8}{5} < \tilde t_3 < 2 < \tilde t_4$. Note that $\lambda_{1,\tilde t_i}(r) = 0$ for $r \neq r_0$, $i = 1, \ldots, 4$. Also, we can find some small $\delta > 0$ such that \[ \tilde t_i\,\lambda_{2,\tilde t_i}(r) > 0 \text{ for } r \neq r_0, |r - r_0| \leq \delta, i \in \{2, 3, 4\}. \] As $\tilde P_4(-3) = 96309 > 0$, we can also assume for $\tilde t_0 = -3$ that \[ \lambda_{1,\tilde t_0}(r) >0 \text{ and } \lambda_{2,\tilde t_0}(r) > 0\text{ for } r \neq r_0, |r - r_0| \leq \delta. \] We define, \begin{align*} w(x) &= w(r) = \left\{\begin{array}{ll} \psi_{\tilde t_4}(r) & \text{ for } r_0 \leq r \leq r_0 + \delta,\\ \psi_{\tilde t_2}(r) & \text{ for } r_0 -\delta \leq r < r_0, \end{array}\right.\\ v(x) &= v(r) = \left\{\begin{array}{ll} \psi_{\tilde t_3}(r) & \text{ for } r_0 \leq r \leq r_0 + \delta,\\ \psi_{\tilde t_0}(r) & \text{ for } r_0 -\delta \leq r < r_0. \end{array}\right. 
\end{align*} It is readily seen that $w$ and $v$ are respectively a supersolution and a subsolution of \eqref{Eq:FpsiEq} for the operator $\tilde F[\psi] = \nabla^2\psi + \tilde L(\psi, \nabla\psi)$ in $\Omega = \{|r - r_0| < \delta\}$, $w \geq v$ in $\Omega$ and $\{w = v\} = \{r = r_0\}$. We now proceed to modify $\tilde L$ into the desired $L$, as $\tilde L$ itself is not non-increasing in $s$. We note that, since $|\tilde t_i| > \frac{8}{5}$ for $i \in \{0, 2, 3, 4\}$, we have, for all $x \in \Omega \setminus \{r = r_0\}$, that $(w(x), \nabla w(x))$ and $(v(x),\nabla v(x))$ belong to the set \[ N := \{(s,p) \in {\mathbb R} \times {\mathbb R}^n: s\,|p|^2 \in {\mathbb R} \setminus (-\frac{32}{45},\frac{32}{45})\}. \] As $\partial_s \tilde L(s,p) = -|p|^6(3s^2|p|^4 + \alpha)I = -|p|^6(3s^2|p|^4 - \frac{36}{25})I$, we see that $\tilde L$ is non-increasing in $s$ for $(s,p) \in N$. Next, note that, for a fixed $p \neq 0$, \[ \tilde L(-\frac{32}{45}|p|^{-2}, p) = -\frac{245821}{364500}|p|^4I < 0 < \frac{238531}{364500}|p|^4I = \tilde L(\frac{32}{45}|p|^{-2}, p). \] Therefore, there exists a smooth function $L: {\mathbb R} \times {\mathbb R}^n \rightarrow \mathcal{S}^{n \times n}$ which is non-increasing in $s$ such that $L \equiv \tilde L$ in $N$ (e.g. by smoothly interpolating in $s$ the values of $\tilde L$ on the boundary of $N$). Then $w$ and $v$ are also a supersolution and a subsolution of \eqref{Eq:FpsiEq} for the operator $F[\psi] = \nabla^2\psi + L(\psi, \nabla\psi)$ in $\Omega$. This completes the proof. \end{proof} \section{Perron's method}\label{Sec:Perron} We begin with the \begin{proof}[Proof of Theorem \ref{thm:Uniq}] The conclusion is a direct consequence of Theorem \ref{thm:CPQuad}(b). \end{proof} In the rest of this section, we prove Theorem \ref{thm:Perron}. We introduce some notation.
For $O\subset {\mathbb R}^n$, $\xi: O\to [-\infty, +\infty]$, let $$ \xi^*(x):=\lim_{r\to 0^+} \sup \{\xi(y)\ |\ y\in O, |y-x|<r\}, $$ $$ \xi_*(x):=\lim_{r\to 0^+} \inf \{\xi(y)\ |\ y\in O, |y-x|<r\}. $$ It is easy to see that, if $\xi^*(x) < +\infty$ for all $x \in O$, then $\xi^* \in USC(O)$. Likewise, if $\xi_*(x) > -\infty$ for all $x \in O$, then $\xi_* \in LSC(O)$. $\xi^*$ is called the upper semicontinuous envelope of $\xi$; it is the smallest upper semicontinuous function satisfying $\xi\le \xi^*$ in $O$. Similarly, $\xi_*$ is called the lower semicontinuous envelope of $\xi$; it is the largest lower semicontinuous function satisfying $\xi\ge \xi_*$ in $O$. Note that, for any constant $c$, $F[c] = 0 \in \partial U$. Thus, replacing $v$ by $\max(v,c)$ with some $c < \inf_{\partial\Omega} w$ and $w$ by $\min(w,c')$ with some $c' > \sup_{\partial\Omega} v$ if necessary, we can assume that \[ -\infty < \inf_{\bar\Omega} v \leq \sup_{\bar\Omega} w < +\infty. \] Here we have used the fact that the maximum of two subsolutions is a subsolution and the minimum of two supersolutions is a supersolution. Note that by hypotheses, $w\ge v$ in $\Omega$. Define \begin{eqnarray} u(x):= && \inf\{\xi(x)\ |\ v\le \xi\le w\ \mbox{in}\ \overline \Omega, \xi=v=w\ \mbox{on}\ \partial \Omega,\nonumber \\ && \qquad \qquad \xi\in LSC(\overline \Omega),\ \xi\ \mbox{is a supersolution of \eqref{Eq:FpsiEq} in}\ \Omega\}. \label{2new} \end{eqnarray} Clearly $$ \inf_{ \overline \Omega} u \ge \inf_{ \overline \Omega} v>-\infty. $$ We will prove that the above-defined $u$ satisfies the requirement of Theorem \ref{thm:Perron}. \begin{lem}\label{lemC5-1new} Let $O\subset {\mathbb R}^n$ be an open set, $L: O \times {\mathbb R} \times {\mathbb R}^n \rightarrow \mathcal{S}^{n \times n}$ be continuous, $F$ be given by \eqref{Eq:FDef}, and let ${\cal F}$ be a family of supersolutions of \eqref{Eq:FpsiEq} in $O$. Let $$ \eta(x):=\inf\{\xi(x)\ |\ \xi\in {\cal F}\}, \ \ \ x\in O. 
$$ Assume that $\eta_*(x)>-\infty\ \forall\ x\in O$. Then $\eta_*$ is a supersolution of \eqref{Eq:FpsiEq} in $O$. \end{lem} \begin{proof} Suppose for some $x\in O$ that there exists a polynomial $P$ of the form $$ P(y):=a+p\cdot (y-x)+\frac 12 (y-x)^t M(y-x), $$ with $a \in {\mathbb R}$, $p\in {\mathbb R}^n$, $M\in {\cal S}^{n\times n}$, such that, for some $\epsilon>0$, \begin{equation} P(x)=\eta_*(x) \text{ and } P(y)\le \eta_*(y)\ \ \forall\ |y-x|<\epsilon. \label{C6-1new} \end{equation} We will show that \begin{equation} F[P](x)\in {\cal S}^{n\times n} \setminus U. \label{C6-2new} \end{equation} It is standard that this implies that $\eta_*$ is a supersolution of \eqref{Eq:FpsiEq} in the sense of Definition \ref{Def:ViscositySolution}. By the definition of $\eta_*$, there exist $r_i\to 0^+$ and $x_i$ with $|x_i-x|<r_i$ such that $$ \inf_{B_{r_i}(x)} \eta \leq \eta(x_i) \leq \inf_{B_{r_i}(x)} \eta + \frac{1}{i} \le \eta_*(x) + \frac{1}{i} \text{ and } \eta(x_i)\to \eta_*(x). $$ Moreover, there exist $\xi_i\in {\cal F}$ such that $\xi_i\ge \eta \ge \eta_*$ and $$ 0\le \xi_i(x_i)-\eta(x_i)<\frac 1i. $$ We see from the above that $$ \xi_i\ge \eta\ge \eta_*\ge P \quad \mbox{in}\ B_\epsilon(x), $$ and $$ \xi_i(x_i)\to \eta_*(x)=P(x). $$ For every $0<2\delta<\min\{\epsilon, \operatorname{dist}(x, \partial O)\}$, consider $$ P_\delta(y):= P(y)-\delta|y-x|^2. $$ Then $$ \xi_i\ge P_\delta\ \ \mbox{in}\ B_\epsilon(x), \quad \xi_i\ge P_\delta+ \delta^3\ \ \mbox{in}\ B_\epsilon(x)\setminus B_\delta(x), \text{ and } \xi_i(x_i)-P_\delta(x_i) \to 0. $$ It follows that there exist $\beta_i=o(1) \ge 0$ and $x_i^*\in B_\delta(x)$ such that \begin{equation}\label{C7-0new} \xi_i(y)\ge P_\delta(y) + \beta_i,\ \ \mbox{in}\ B_\epsilon(x), \qquad \xi_i(x_i^*)=P_\delta(x_i^*) + \beta_i. \end{equation} Since $\xi_i$ is also a supersolution of \eqref{Eq:FpsiEq} in $O$, we have \begin{equation} F[P_\delta + \beta_i](x_i^*) \in {\cal S}^{n\times n}\setminus U. 
\label{C7-1new} \end{equation} \noindent{\bf Claim.}\ $x_i^*\to x$. \medskip Indeed, after passing to a subsequence, $x_i^*\to \bar x$, for some $\bar x$ satisfying $|\bar x-x|\le \delta.$ By \eqref{C7-0new} and the definition of $\eta$ and $\eta_*$, $$ \eta_*(x_i^*) - \beta_i \le \xi_i(x_i^*) - \beta_i=P_\delta(x_i^*). $$ Sending $i$ to infinity in the above, and using the lower-semicontinuity property of $\eta_*$, we have $ \eta_*(\bar x) \le P_\delta(\bar x)=P(\bar x)-\delta|\bar x-x|^2. $ On the other hand, $P(\bar x)\le \eta_*(\bar x)$ according to \eqref{C6-1new}. Thus $\bar x=x$, and the claim is proved. \medskip With the convergence of $x_i^*$ to $x$ and of $\beta_i$ to $0$, sending $\delta$ to $0$ and $i$ to $\infty$ in \eqref{C7-1new} yields \eqref{C6-2new}. Lemma \ref{lemC5-1new} is established. \end{proof} \bigskip \begin{proof}[Proof of Theorem \ref{thm:Perron}] We know that \begin{equation} \max(v,u_*)\le u\le \min(u^*, w), \qquad\mbox{in}\ \overline \Omega, \label{C9-1new} \end{equation} where $u$ is defined by \eqref{2new}. Clearly, \begin{equation} v= u_*= u= u^*= w, \qquad\mbox{on}\ \partial \Omega. \label{C9-2new} \end{equation} By Lemma \ref{lemC5-1new}, $u_*$ is a supersolution of \eqref{Eq:FpsiEq} in $\Omega$. By the comparison principle Theorem \ref{thm:CPQuad}(ii), $u_* \geq v$. Hence, by the definition of $u$, $u\le u_*$ in $\overline \Omega$. Thus $u=u_*$ in $\overline \Omega$, and $u$ is a supersolution of \eqref{Eq:FpsiEq} in $\Omega$. Note that \[ \sup_{\bar\Omega} u^* \leq \sup_{\bar\Omega} w < +\infty. \] \medskip \noindent{\bf Claim.}\ $u^*$ is a subsolution of \eqref{Eq:FpsiEq} in $\Omega$. \medskip To prove this claim, we follow Ishii's argument \cite{Ishii89-CPAM}. 
Indeed, if the claim does not hold, there exist $x\in \Omega$ and some quadratic polynomial $$ P(y)=a+p\cdot (y-x)+\frac 12 (y-x)^t M (y-x), $$ with $a \in {\mathbb R}$, $p\in {\mathbb R}^n$, $M\in {\cal S}^{n\times n}$, such that for some $\bar \epsilon>0$ \begin{equation} P(y)\ge u^*(y)\ \ \mbox{for}\ y\in B_{\bar\epsilon}(x),\qquad P(x)=u^*(x), \label{C10-1new} \end{equation} but \begin{equation} F[P](x)\in {\cal S}^{ n\times n}\setminus \overline U. \label{C10-2new} \end{equation} Since ${\cal S}^{ n\times n}\setminus \overline U$ is open, there exists $0<2\bar\delta< \min\{\bar \epsilon^2, 1\}$ such that for all $0<\delta<\bar\delta$, the function $$ P_\delta(y):=P(y)+\delta|y-x|^2 -\delta^2 $$ satisfies \begin{equation} P_\delta(x)=P(x)-\delta^2<u^*(x), \label{C11-0new} \end{equation} and \begin{equation} F[P_\delta](y)\in {\cal S}^{ n\times n}\setminus \overline U,\qquad \forall\ |y-x|<\delta^{1/9}. \label{C10-3new} \end{equation} Clearly, \begin{equation} P_\delta(y)>P(y),\qquad \forall \ |y-x|\ge \delta^{1/5}. \label{C11-1new} \end{equation} Define $$ \hat u(y):= \left\{ \begin{array}{lr} \displaystyle{ \min\{u(y), P_\delta(y)\}, }& \mbox{if}\ |y-x|<\delta^{1/5},\\ u(y), & \mbox{if}\ |y-x|\ge \delta^{1/5}. \end{array} \right. $$ By \eqref{C10-3new}, $P_\delta$ is a supersolution of \eqref{Eq:FpsiEq} in $\{y: |y-x|<\delta^{1/9}\}$. By \eqref{C11-1new}, and using $P\ge u^*\ge u$, we have $$ \hat u(y)=u(y)= \min\{u(y), P_\delta(y)\},\qquad \delta^{1/5}\le |y-x|\le \delta^{1/6}. $$ It follows that $\hat u$, being the minimum of two supersolutions, is a supersolution of \eqref{Eq:FpsiEq} in $\Omega$, and, because of the definition of $u$, \begin{equation} u\le \hat u\qquad\mbox{in}\ \Omega. \label{C12-1new} \end{equation} On the other hand we see from \eqref{C11-0new}, the definition of $\hat u$ and \eqref{C12-1new} that there exists $\epsilon\in (0, \delta^{1/5})$ such that $$ u(y)\le \hat u(y)\le P_\delta(y)<u^*(x)-\epsilon,\qquad \forall\ |y-x|<\epsilon. 
$$ Thus $$ u^*(x) =\lim_{r\to 0^+} \sup\{u(y)\ |\ |y-x|<r\} \le u^*(x)-\epsilon, $$ a contradiction. The claim is proved, i.e., $u^*$ is a subsolution of \eqref{Eq:FpsiEq} in $\Omega$. \bigskip Now we have proved that $u_* = u$ and $u^*$ are respectively a supersolution and a subsolution of \eqref{Eq:FpsiEq} in $\Omega$, and $u_*=u^*$ on $\partial \Omega$. By the comparison principle Theorem \ref{thm:CPQuad}(ii), $u^* \leq u$ in $\Omega$ and so $u = u_* = u^*$ is a solution of \eqref{Eq:FpsiEq}. \end{proof} To conclude the section, let us remark that: \begin{rem} The conclusion of Theorem \ref{thm:Perron} is still valid for more general $(F,U)$ as in Theorem \ref{thm:CPQuad}, or Theorem \ref{thm:CPNUE} or Theorem \ref{thm:CPNUECat} provided that the function $L(x,s,p)$ is independent of $s$ and \[ -\infty < \inf_{\bar\Omega} v \leq \sup_{\bar\Omega} w < +\infty. \] \end{rem} \section{Lipschitz regularity of viscosity solutions}\label{Sec:LipRegVS} In this section we prove Theorem \ref{thm:regularity}, as an application of the comparison principle Theorem \ref{thm:CPQuad}. We also consider a mild generalization regarding Lipschitz regularity of viscosity solutions for operators of the form \eqref{Eq:FDef}. \begin{proof}[Proof of Theorem \ref{thm:regularity}] Without loss of generality, we may assume that $\Omega=B(0,1)$ and we only need to prove that $u$ is Lipschitz continuous on $\overline{B(0,\frac{1}{2})}$. 
For any $x\in\overline{B(0,\frac{1}{2})}$, $0<\lambda\leq R:=\frac{1}{4}\left[\frac{\sup\limits_{B(0,\frac{3}{4})}u}{\inf\limits_{B(0,\frac{3}{4})}u}\right]^{-\frac{1}{n-2}}$, we define $u_{x,\lambda}$, the Kelvin transform of $u$, as \begin{equation} u_{x,\lambda}(y):=\frac{\lambda^{n-2}}{|y-x|^{n-2}}u(x+\frac{\lambda^{2}(y-x)}{|y-x|^{2}}),\quad\forall y\in\overline{B(0,\frac{3}{4})\setminus B(x,\lambda)}.\label{jgd} \end{equation} For any $y\in\partial B(0,\frac{3}{4})$, we have \begin{equation*} u_{x,\lambda}(y)\leq(4R)^{n-2}\sup\limits_{B(0,\frac{3}{4})}u=\inf\limits_{B(0,\frac{3}{4})}u\leq u(y). \end{equation*} Also, we know that \begin{equation*} \lambda(A^{u_{x,\lambda}})\in \partial\Gamma,\quad\mbox{ in }B(0,\frac{3}{4})\setminus \overline{B(x,\lambda)},\quad\mbox{ in the viscosity sense}. \end{equation*} Since $u_{x,\lambda}=u$ on $\partial B(x,\lambda)$, by applying the comparison principle Theorem \ref{thm:CPQuad}(b) with $\Omega=B(0,\frac{3}{4})\setminus \overline{B(x,\lambda)}$, $U=\{M\in{\mathcal{S}^{n \times n}}:\lambda(M)\in\Gamma\}$, $F[\psi] = A[\psi]$, $w={-\frac{2}{n-2}}\ln u_{x,\lambda}$ and $v={-\frac{2}{n-2}} \ln u$, we have \begin{equation} u_{x,\lambda}\leq u\mbox{ in }B(0,\frac{3}{4})\setminus \overline{B(x,\lambda)}\mbox{ for any }0<\lambda\leq R,x\in\overline{B(0,\frac{1}{2})}.\label{lashi} \end{equation} By \cite[Lemma 2]{LiNg-arxiv}, \eqref{lashi} implies that $u$ is Lipschitz continuous on $\overline{B(0,\frac{1}{2})}$. This concludes the proof. \end{proof} As pointed out in the introduction, the above proof of Theorem \ref{thm:regularity} uses not only comparison principles but also conformal invariance property of the conformal Hessian. For general operators of the form \eqref{Eq:FDef}, one does not expect a purely local regularity like that in Theorem \ref{thm:regularity} to hold, as illustrated by the following example. \begin{example} Let $U$ be the set of symmetric $n \times n$ matrices $M$ with $M_{11} > 0$, and $L \equiv 0$. 
The equation $F[\psi] \in \partial U$ becomes \[ \partial_{x_1}^2 \psi = 0. \] Then, the comparison principle holds (by considering the restriction of $\psi$ to each line parallel to the $x_1$-axis). Nevertheless, for any continuous function $f: {\mathbb R}^{n-1} \rightarrow {\mathbb R}$, $\psi(x_1,x_2, \ldots, x_n) = f(x_2,\ldots, x_n)$ is a viscosity solution of $F[\psi] \in \partial U$, and clearly, the regularity of $\psi$ (with respect to the $x_2, \ldots, x_n$ variables) is not better than that of $f$. \end{example} Despite the above negative example, by a variant of the proof of Theorem \ref{thm:regularity} using translational invariance rather than conformal invariance, we have the following partial generalization: \begin{cor}\label{prop:ObLipReg} Assume $n \geq 2$. Let $\Omega\subset\mathbb{R}^{n}$ be a bounded open set, $(F,U)$ be as in Theorem \ref{thm:CPQuad} with constants $\alpha$ and $\beta$. Assume that $\psi \in C^0(\Omega)$ is a viscosity solution to \eqref{Eq:FpsiEq} in $\Omega$. If $\psi \in C^{0,1}(\overline{N \cap \Omega})$ for some open neighborhood $N$ of $\partial \Omega$, then $\psi \in C^{0,1}(\bar\Omega)$ and \[ \sup_{\Omega} |\nabla\psi| \leq \sup_{N \cap \Omega} |\nabla\psi|. \] \end{cor} Before giving a proof, we remark that, in general, the Lipschitz regularity of $\psi$ on $\partial\Omega$ does not ensure that the solution $\psi$ is Lipschitz continuous in $\bar\Omega$. \begin{example} Consider the equation \begin{equation} F[\psi] = \nabla^2 \psi - |\nabla \psi|^{m}\,I \in \partial U \label{Eq:BLipCounterEx} \end{equation} where $m > 2$ and $U$ is the set of symmetric $n \times n$ matrices with at least one positive eigenvalue. (This equation can be written equivalently as \[ \det (F[\psi]) = 0 \text{ and } F[\psi] \leq 0. \]) Then $\psi(x) = - (m-1)^{\frac{m-2}{m-1}} (m-2)^{-1}\,(|x|-1)^{\frac{m-2}{m-1}}$ is a solution to \eqref{Eq:BLipCounterEx} on $\Omega_a = \{1 < |x| < a\}$ for any $a > 1$. 
Clearly $\psi$ is constant on each component of the boundary $\partial \Omega_a$, but $\psi \notin C^{0,1}(\overline{\Omega_a})$. \end{example} \begin{proof}[Proof of Corollary \ref{prop:ObLipReg}] Shrinking $\Omega$ and $N$ if necessary, we may assume that $\psi \in C^{0,1}(\bar N)$. We note that for any vector $e \in {\mathbb R}^n$ and any constant $c \in {\mathbb R}$, the function \[ \psi_{e}(x) := \psi(x + e) \] satisfies $F[\psi_{e} + c] \in \partial U$ in $\Omega_e := \{x: x + e \in \Omega\}$ in the viscosity sense. Thus, by the comparison principle Theorem \ref{thm:CPQuad}(b), \[ \psi \leq \psi_e + \max_{\partial (\Omega \cap \Omega_e)} (\psi - \psi_e) \text{ in } \Omega \cap \Omega_e. \] In particular, there is some $\delta > 0$ such that for $|e| < \delta$, we have $\partial (\Omega \cap \Omega_e) \subset \bar N$ and \[ \psi \leq \psi_e + \sup_N |\nabla\psi||e| \text{ in } \Omega \cap \Omega_e. \] This implies the assertion. \end{proof} \noindent{\bf{\large Acknowledgments.}} Li is partially supported by NSF grant DMS-1501004. Wang is supported in part by the scholarship from China Scholarship Council under the Grant CSC No. 201406040131. \newcommand{\noopsort}[1]{}
\section{Introduction} A {\em hypergraph} is a pair $(V,E)$ where $V$ is a set and $E$ is a family of nonempty subsets of $V$. The $x\in V$ are called vertices and the $e\in E$ are called edges. We use the notations $v(G)=|V(G)|$, $e(G)=|E(G)|$. A hypergraph is called {\em $r$-uniform} if all edges have size $r$. A {\em linear hypergraph} is a hypergraph $(V,E)$ such that for any distinct edges $e,f \in E$, $|e \cap f| \leq 1$. The {\em degree} of a vertex $v$, denoted by $d(v)$, is the number of edges that contain it. A hypergraph is {\em $d$-regular} if all vertices have degree $d$. An {\em independent set} of a hypergraph $G$ is a subset of $V(G)$ which does not contain any edge of $G$. The maximum size of an independent set in $G$ is called the {\em independence number} of $G$, denoted $\alpha(G)$. In this paper, we study a natural randomized greedy algorithm for finding independent sets in hypergraphs. The algorithm iteratively selects a vertex uniformly at random from all remaining vertices of the hypergraph and adds it to the independent set so far, then deletes all remaining vertices that form an edge with the set of selected vertices, and repeats until no vertices remain. The independent set generated by this algorithm for a hypergraph $G$ is denoted $\mathcal{I}(G)$. \subsection{Independent sets in graphs} Tur\'an's Theorem~\cite{T} shows that an $n$-vertex graph with average degree $d$ has independence number $\alpha(G)\geq n/(d+1)$, with equality only for a disjoint union of cliques $K_{d + 1}$. For triangle-free graphs $G$, Ajtai, Koml\'os and Szemer\'edi~\cite{AKS} improved this bound by a factor of order $\log d$, and Shearer~\cite{She83} gave a further improvement: \begin{thm} Let $G$ be an $n$-vertex triangle-free graph of average degree $d \geq 2$. Then \begin{equation}\label{She} \alpha(G)\ge\frac{d\log d-d+1}{(d-1)^2}\cdot n. \end{equation} \end{thm} The {\em girth} of a graph containing a cycle is the length of a shortest cycle in the graph. 
For graphs with high girth, Shearer~\cite{She} improved (\ref{She}), and Lauer and Wormald~\cite{LW} showed that there exists a function $\delta = \delta(g)$ such that $\lim_{g \rightarrow \infty} \delta(g) = 0$ and if $G$ is a $d$-regular graph of girth $g$, then \begin{equation} \alpha(G) \geq \frac{1}{2}(1-(d-1)^{-\frac{2}{d-2}})n - \delta n. \end{equation} By analyzing the performance of the greedy algorithm, Gamarnik and Goldberg~\cite{GG} prove the same bound, with an explicit form for $\delta$. It is convenient to let \begin{equation} \epsilon=\epsilon(d,g)= \frac{d(d-1)^{\lfloor\frac{g-3}{2}\rfloor}}{(\lfloor\frac{g-1}{2}\rfloor)!}. \end{equation} Note that for each fixed $d$, $\epsilon(d,g) \rightarrow 0$ as $g \rightarrow \infty$. \begin{thm}\label{GG} Let $d\geq3$ and $g\geq4$ be integers, let $G$ be a $d$-regular graph on $n$ vertices with girth $g$, and let $\mathcal{I}$ be the independent set generated by the greedy algorithm. Then \begin{equation} \Bigl(\frac{1-(d-1)^{-2/(d-2)}}{2}-\epsilon\Bigr)n\le\mathbb{E}[|\mathcal{I}|]\le\Bigl(\frac{1-(d-1)^{-2/(d-2)}}{2}+\epsilon\Bigr)n. \end{equation} \end{thm} The bounds are effective when $d$ is fixed and $g$ is large and, in particular, Theorem \ref{GG} shows \begin{equation} \alpha(G)\geq\Bigl(\frac{1-(d-1)^{-2/(d-2)}}{2}-\epsilon\Bigr)n. \end{equation} We also observe that when $g$ is sufficiently large relative to $d$, this bound agrees with (\ref{She}) asymptotically as $d \rightarrow \infty$, since $$ (d-1)^{-2/(d-2)}=\exp\l(-\frac{2\log (d-1)}{d-2}\r)=1-\frac{2\log(d-1)}{d-2}(1+o_d(1)), $$ where $o_d(1)$ here represents a function of $d$ that converges to zero as $d\rightarrow\infty$. When we say a function $f(x)$ is asymptotic to $g(x)$ as $x\rightarrow\infty$ (which is abbreviated $f\sim g$), it means that $\lim_{x\rightarrow\infty}f(x)/g(x)=1$. 
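As a numerical illustration of this asymptotic agreement (this sketch and its function names are ours and not part of the paper), one can compare the leading constants of the two bounds for growing $d$; the ratio decreases towards $1$ slowly, at rate roughly $1/\log d$:

```python
import math

def gg_constant(d):
    # leading constant in the Lauer-Wormald / Gamarnik-Goldberg bound:
    # (1 - (d-1)^{-2/(d-2)}) / 2
    return (1 - (d - 1) ** (-2 / (d - 2))) / 2

def shearer_constant(d):
    # leading constant in Shearer's bound: (d log d - d + 1) / (d-1)^2
    return (d * math.log(d) - d + 1) / (d - 1) ** 2

# ratio of the two constants for increasing d
ratios = [gg_constant(d) / shearer_constant(d) for d in (10, 10**3, 10**5, 10**7)]
print(ratios)
```

The printed ratios decrease monotonically towards $1$, consistent with the expansion of $(d-1)^{-2/(d-2)}$ displayed above.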
\subsection{Independent sets in hypergraphs} For $(r+1)$-uniform hypergraphs with average degree $d$, Caro and Tuza~\cite{CT} showed that \begin{equation} \alpha(G)\ge \frac{d!}{\prod_{i=1}^d(i+\frac{1}{r})}\cdot n. \end{equation} The same bound can also be obtained by extending the Caro--Wei~\cite{C,W} bound for independent sets in graphs: taking a random ordering of the vertices of the hypergraph, let $I$ be the set of vertices $v$ such that for every edge $e$ containing $v$, $v$ is not the smallest vertex in $e$. Then it can be shown via elementary combinatorial methods that \begin{equation}\label{combinatorial} \mathbb E[|I|] \geq \sum_{v \in V} \frac{d!}{\prod_{i=1}^d(i+\frac{1}{r})}=\frac{d!}{\prod_{i=1}^d(i+\frac{1}{r})}\cdot n. \end{equation} The same algorithm can be implemented via the following random process, which provides a different (and possibly easier) way to analyze the outcome (see, for example, Dutta, Mubayi and Subramanian~\cite{DMS}): \begin{enumerate} \item Equip each vertex with i.i.d.~weight from the uniform distribution on $[0,1]$. Then with probability 1, all vertices will have distinct weights. \item Select all the vertices that are not the smallest-weighted vertex in any edge containing them. These vertices form an independent set. \end{enumerate} If we select vertices in a more careful way -- iteratively selecting the vertex with largest weight among the remaining vertices, deleting the vertices that form an edge with the vertices selected thus far, and repeating -- then this random process will be equivalent to the randomized greedy algorithm. In any case, a computation shows \begin{equation} \mathbb E[|I|] \geq n\int_0^1 (1 - x^{r})^d\,dx, \end{equation} which gives (\ref{combinatorial}). These bounds are asymptotic to $\Gamma(1 + \frac{1}{r})nd^{-\frac{1}{r}}$ as $d \rightarrow \infty$, where $\Gamma$ is the gamma function, which extends the factorial function. 
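The equality between the combinatorial expression in (\ref{combinatorial}) and the integral above can be checked against each other numerically. The following sketch (ours, using a crude midpoint rule, with hypothetical function names) is only an illustration:

```python
import math

def caro_tuza_constant(d, r):
    # d! / prod_{i=1}^d (i + 1/r), computed as a product of the ratios i/(i + 1/r)
    out = 1.0
    for i in range(1, d + 1):
        out *= i / (i + 1 / r)
    return out

def integral_form(d, r, steps=100000):
    # midpoint-rule approximation of int_0^1 (1 - x^r)^d dx
    h = 1.0 / steps
    return h * sum((1 - ((k + 0.5) * h) ** r) ** d for k in range(steps))

for d, r in [(5, 1), (10, 2), (30, 3)]:
    # the last column is the asymptotic value Gamma(1 + 1/r) * d^{-1/r}
    print(d, r, caro_tuza_constant(d, r), integral_form(d, r),
          math.gamma(1 + 1 / r) * d ** (-1 / r))
```

For $r=1$ the product telescopes to $1/(d+1)$, recovering the Tur\'an-type constant.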
In this paper, we consider this algorithm in uniform hypergraphs of large girth. \medskip To define the {\em girth} of a hypergraph, we first need to define what a {\em cycle} in a hypergraph is. There are many different ways to define cycles in hypergraphs -- see, e.g., a talk by S\'ark\"ozy~\cite{Sar}. Here we choose to work with the {\em Berge-cycle}. For $k\geq3$, a {\em Berge $k$-cycle} is an $r$-uniform hypergraph with $k$ edges $e_1,e_2,\dots,e_k$ such that there exist distinct vertices $v_1,v_2,\dots,v_k$ such that $\{v_k,v_1\}\subset e_1,\{v_1,v_2\}\subset e_2,\dots,\{v_{k-1},v_k\}\subset e_k$. When $k = 2$, this corresponds to $v_1,v_2 \in e_1 \cap e_2$. The {\em girth} of a hypergraph containing a Berge cycle is the smallest $g$ such that the hypergraph contains a Berge $g$-cycle. In particular, the girth of a non-linear hypergraph is 2. Ajtai, Koml\'os, Pintz, Spencer and Szemer\'edi~\cite{AKPSS} established the following lower bound for $(r + 1)$-uniform hypergraphs with girth $g\geq5$, which improves (\ref{combinatorial}) by a factor of order $(\log d)^{\frac{1}{r}}$. \begin{thm} For integer $r\geq 1$, real number $d$ sufficiently large and integer $n$ sufficiently large, let $G$ be an $n$-vertex $(r+1)$-uniform hypergraph with average degree $d$ and girth at least $5$. Then \begin{equation}\label{AKPSS} \alpha(G)\geq 0.36\cdot10^{-\frac{5}{r}}\l(\frac{\log{d}}{rd}\r)^{\frac{1}{r}}n. \end{equation} \end{thm} Based on this theorem, Duke, Lefmann and R\"odl~\cite{DLR} showed that the same bound (with a different constant) holds for linear hypergraphs. \subsection{Main Theorem} In this paper, we extend the ideas of Gamarnik and Goldberg~\cite{GG} to hypergraphs. First, it is convenient to define the following: Let $u(d,r)$ be the unique positive real number that satisfies the following equation: \begin{equation}\label{ud} \sum_{n\ge0}\binom{n+d-2}{d-2}\frac{u(d,r)^{rn+1}}{rn+1}=1. 
\end{equation} Define \begin{equation} \epsilon = \epsilon(g,d,r) = \frac{d(d-1)^{\lfloor\frac{g-3}{2}\rfloor}}{r\prod_{k=1}^{\lfloor\frac{g-1}{2}\rfloor}(k+\frac{1}{r})}. \end{equation} Our main theorem is as follows: \begin{thm}\label{Thm1} For any integers $r\geq1$, $d\geq2$ and $g\geq4$, let $G$ be an $(r+1)$-uniform $d$-regular hypergraph with $n$ vertices and girth $g$, and let $\mathcal{I}$ be the independent set of $G$ generated by the greedy algorithm. Let \begin{equation} f(d,r)=u(d,r)-\frac{u(d,r)^{r+1}}{r+1}. \end{equation} Then \begin{equation}\label{main} (f(d,r)-\epsilon)n\le\mathbb{E}[|\mathcal{I}|]\le(f(d,r)+\epsilon)n. \end{equation} \end{thm} In particular, due to the form of the quantity $\epsilon = \epsilon(g,d,r)$, this theorem is effective for fixed $d$ and large $g$, and shows \begin{equation} \alpha(G)\geq(f(d,r)-\epsilon)n. \end{equation} For $r=1$, this coincides with Theorem~\ref{GG}. We prove in Appendix A that as $d \rightarrow \infty$, \begin{equation} f(d,r) \sim \Bigl(\frac{\log d}{rd}\Bigr)^{\frac{1}{r}}, \end{equation} and so if $g$ is large enough relative to $d$, then this slightly improves the constant in (\ref{AKPSS}) asymptotically as $d \rightarrow \infty$. \medskip Our second result shows that the size of the independent set generated by the greedy algorithm concentrates around its mean asymptotically almost surely for linear hypergraphs with bounded degree (i.e., hypergraphs that are not necessarily regular): \begin{thm}\label{Thm2} For any integers $r\geq1$ and $d\geq2$, let $G$ be an $(r+1)$-uniform linear hypergraph with maximum degree $d$ on $n$ vertices, and let $\mathcal{I}(G)$ be the independent set generated by the greedy algorithm. Then for any positive function $b(n)$ with $b(n)\rightarrow\infty$ as $n\rightarrow\infty$, we have $$ \P[||\mathcal{I}(G)|-\mathbb{E}[|\mathcal{I}(G)|]|>\sqrt{n}b(n)]\rightarrow0 \text{ as } n\rightarrow\infty. $$ \end{thm} The rest of this paper is structured as follows. 
In Section 2, we introduce {\em influence-blocking hypergraphs} and the {\em bonus function of hypergraphs}. These notions were originally introduced for graphs by Gamarnik and Goldberg~\cite{GG} and are generalized to hypergraphs here. In Section 3, we prove Theorem~\ref{Thm1} by using the property of influence-blocking hypergraphs to reduce the problem of estimating $\mathbb E[|\mathcal{I}(G)|]$ to a local problem on a rooted hypertree, and then using the bonus function of hypergraphs to establish a differential equation. In Section 4, we use the second moment method to prove Theorem~\ref{Thm2}. In the appendix, we present the asymptotic analysis of the quantity $f(d,r)$ from Theorem \ref{Thm1}. \section{Preliminaries} Gamarnik and Goldberg~\cite{GG} introduce two notions for graphs, the {\em influence-blocking subgraph} and {\em bonus function}. In this section, we generalize these notions to hypergraphs and discuss their properties. A {\em hypertree} is a linear hypergraph with no Berge cycle, and a {\em rooted hypertree} is a hypertree in which a special vertex called the {\em root} is singled out. In summary, we show that the performance of the greedy algorithm on hypergraphs with large girth is locally similar to its performance on a rooted hypertree -- note that if a hypergraph has high girth, then for each vertex, its neighbourhood within finite distance looks like a hypertree. Hence, if we can show that the event of a vertex being selected into the independent set is mostly dependent on its neighbourhood within finite distance, then we can simplify the analysis of each vertex into the analysis of the root of a rooted hypertree. Then we analyze the probability of the root of a rooted hypertree being selected by the randomized greedy algorithm. For ease of analysis, we consider an equivalent way to run the randomized greedy algorithm, as follows: \begin{enumerate} \item Equip each vertex with i.i.d.~weight from the uniform distribution on $[0,1]$. 
Then with probability 1, all vertices will have distinct weights. \item Iteratively select the vertex with largest weight from all remaining vertices of $G$, and add it to the independent set so far, and then delete all remaining vertices that form an edge with the selected vertices, and repeat until no vertices remain. \end{enumerate} The strategy is to analyze the probability of each vertex being selected into the independent set. \subsection{Influence-blocking hypergraphs} Gamarnik and Goldberg~\cite{GG} introduce {\em influence-blocking subgraphs}; here we extend this notion to hypergraphs. Suppose we have already applied the first step of the greedy algorithm on $G$. That is, the vertices of $G$ are now equipped with distinct weights. Let $v$ be a vertex of $G$, and let $e$ be an edge of $G$ that contains $v$. We say $v$ {\em defeats} $e$ if there is another vertex $v'$ in $e$ such that the weight of $v'$ is smaller than the weight of $v$. That is, $v$ is not the smallest-weighted vertex in $e$. Observe that if $v$ defeats all the edges that contain it, then $v$ must be selected into $\mathcal{I}(G)$, since it cannot be deleted according to the rule of the algorithm. In this case, the weight of any other vertex that is not in the neighbourhood of $v$ will not influence the behaviour of $v$. This phenomenon can be generalized to sub-hypergraphs, which gives us the following definition: \begin{definition} Let $G$ be a hypergraph whose vertices are equipped with distinct weights. An induced sub-hypergraph $H$ of $G$ is called an {\em influence-blocking hypergraph} if for every vertex $v\in V(H)$ and every edge $e\in E(G)\backslash E(H)$ with $v\in e$, $v$ is not the vertex in $e$ with smallest weight. \end{definition} If $G$ is a hypergraph whose vertices are already equipped with distinct weights, then we also let $\mathcal{I}(G)$ denote the independent set of $G$ generated by applying the second step of the greedy algorithm to $G$. 
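For concreteness, the weight-based greedy algorithm just described can be sketched as follows (a minimal Python sketch; the representation of a hypergraph as a list of frozensets of vertex indices, and all names, are our own choices, not from the paper):

```python
import random

def greedy_independent_set(n, edges, seed=None):
    """Weight-based randomized greedy: give each vertex an i.i.d. uniform
    weight, repeatedly select the largest-weight surviving vertex, then
    delete every surviving vertex that now forms an edge with the chosen set."""
    rng = random.Random(seed)
    weight = [rng.random() for _ in range(n)]
    alive = set(range(n))
    chosen = set()
    while alive:
        v = max(alive, key=lambda u: weight[u])
        alive.remove(v)
        chosen.add(v)
        # a vertex u is deleted once some edge e has all of e \ {u} chosen
        alive -= {u for u in alive
                  if any(u in e and e - {u} <= chosen for e in edges)}
    return chosen

# a tiny 3-uniform example: two edges sharing the vertex 2
edges = [frozenset({0, 1, 2}), frozenset({2, 3, 4})]
I = greedy_independent_set(5, edges, seed=1)
assert all(not e <= I for e in edges)  # I contains no full edge
```

By construction, every vertex outside the returned set was, at the moment of its deletion, the smallest-weighted vertex of an edge whose other vertices had all been selected; this is exactly the event analyzed in the rest of this section.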
Let $v$ be a vertex of $G$ such that $v\not\in\mathcal{I}(G)$. If $e$ is an edge of $G$ such that $v\in e$ and $e\subset \{v\}\cup\mathcal{I}(G)$, then we say {\em $v$ is deleted by $e$}. The first property of influence-blocking hypergraphs is that the performance of the greedy algorithm inside this sub-hypergraph is not dependent on the performance of the algorithm outside this sub-hypergraph. This phenomenon is described by the following lemma, which is a straightforward modification of Lemma $5$ in~\cite{GG}: \begin{lem}\label{POfIBG} Let $G$ be a hypergraph whose vertices are equipped with distinct weights. Let $H$ be an influence-blocking hypergraph of $G$. Then $\mathcal{I}(H)=\mathcal{I}(G)\cap V(H)$. \end{lem} \begin{proof} Let $V(H)=\{v_1,v_2,\dots,v_m\}$, such that $v_1>v_2>\dots>v_m$ (where $v_i>v_j$ means the weight of $v_i$ is larger than the weight of $v_j$). To prove the lemma, it suffices to show that $v_i\in\mathcal{I}(H)$ if and only if $v_i\in\mathcal{I}(G)$, for all $i$ such that $1\le i\le m$. We do that by induction. First, for $i=1$, we have $v_1\in\mathcal{I}(H)$. By the definition of influence-blocking hypergraph, $v_1$ cannot be deleted by edges not in $H$. Since $v_1$ has the largest weight among all vertices of $H$, it cannot be deleted by edges in $H$ either. Hence, we also have $v_1\in\mathcal{I}(G)$. This completes the base case. Now suppose $1<i\le m$, and the argument holds for all integers less than $i$. If $v_i\not\in\mathcal{I}(H)$, then $v_i$ must be deleted by an edge $e\in E(H)$ such that $e\backslash \{v_i\}$ consists of vertices whose weights are larger than the weight of $v_i$. Then by the inductive assumption, $v_i$ must be deleted by the same edge in the algorithm for $G$. Hence, we have $v_i\not\in\mathcal{I}(G)$. If $v_i\in\mathcal{I}(H)$, then $v_i$ cannot form an edge in $H$ with vertices whose weights are larger than the weight of $v_i$. Hence, by inductive assumption, $v_i$ cannot be deleted by edges in $H$. 
Also, by the definition of influence-blocking hypergraph, $v_i$ cannot be deleted by edges not in $H$ either. Therefore, we have $v_i\in\mathcal{I}(G)$. This completes the inductive step, and hence the proof of the lemma. \end{proof} The second property of the influence-blocking hypergraphs is that any subset of vertices can be extended to a unique minimal influence-blocking hypergraph, as shown by the following lemma, a straightforward modification of Lemma $3$ in~\cite{GG}: \begin{lem} Let $G$ be a hypergraph whose vertices are equipped with distinct weights. Let $A\subset V(G)$. Then there exists a unique minimal influence-blocking hypergraph $\mathcal{B}_G(A)$ of $G$ such that $A\subset V(\mathcal{B}_G(A))$. We abbreviate it as $\mathcal{B}(A)$ if there is no ambiguity. \end{lem} \begin{proof} Pick a set of vertices $V_A$ as follows. First, put all vertices of $A$ into $V_A$. Then, we iteratively take edges whose vertices are not all in $V_A$ but whose smallest-weighted vertex is in $V_A$, and put all the vertices of such edges into $V_A$, and repeat until no such edge remains. Let $\mathcal{B}(A)$ be the sub-hypergraph of $G$ induced by $V_A$. By definition, $\mathcal{B}(A)$ is an influence-blocking hypergraph of $G$, and is contained in any influence-blocking hypergraph of $G$ that contains $A$. Hence, it is minimal. Also, by the process by which it is generated, we can see that it is unique. \end{proof} \begin{definition}\label{path} For any integers $r, l\geq1$, an {\em $(r+1)$-uniform path of length $l$} connecting $v_0$ to $v_{lr}$ is a hypergraph with vertices $\{v_0, v_1,\dots, v_{lr}\}$ and edges $e_k=\{v_{kr}, v_{kr+1},\dots, v_{(k+1)r}\}$ for $0\leq k\leq l-1$. If the vertices of a path are weighted and the smallest-weighted vertex in edge $e_k$ is $v_{kr}$ for all $0\leq k\leq l-1$, then we say the weighted path is {\em increasing} from $v_0$ to $v_{lr}$. 
\end{definition} Note that the definition of path here is different from the definition of a {\em Berge path}, which is defined in a similar way to the Berge cycle. The following lemma evaluates the probability that a path in a hypergraph is increasing when the weights induce a uniformly random total order: \begin{lem}\label{NumOfIP} For any integers $r, l\geq1$, the number of ways to assign $\{0,1,\dots,lr\}$ as distinct weights to the vertices of an $(r+1)$-uniform path of length $l$ from $v_0$ to $v_{lr}$ so that it is increasing is $$\frac{(lr+1)!}{\prod_{k=1}^l(kr+1)}.$$ Hence, for an $(r+1)$-uniform path $P$ of length $l$, if each vertex is equipped with i.i.d.~weight from the uniform distribution on $[0,1]$, then \begin{equation}\label{PofI} \P[\text{$P$ is increasing from $v_0$ to $v_{lr}$} ]=\frac{1}{\prod_{k=1}^l(kr+1)}. \end{equation} \end{lem} \begin{proof} For simplicity, we only prove this for $r=2$. In this case, we want to show that the number of proper weight assignments for paths of length $l$ is $(2l)!!=\prod_{k=1}^l(2k)$. The idea of the proof for the general case is exactly the same. Let $a_l$ be the number of proper weight assignments for $3$-uniform paths of length $l$ with distinct weights from $\{0,1,\dots,2l\}$. Let $W_i$ be the weight of $v_i$. We prove $a_l=(2l)!!$ by induction. First, for $l=1$, $W_0$ has to be $0$ and $W_{2}$ can be either $1$ or $2$. So $a_1=2=2!!$. Now for $l\geq 2$, suppose the lemma is true for $l-1$. Then again, $W_{0}$ has to be $0$. $W_2$ is less than all $W_i$ with $i>2$, so $W_2$ is at most the third smallest weight. As a result, $W_{2}=1$ or $2$. When $W_{2}=1$, $W_1$ can be any number in $\{2,3,\dots,2l\}$, and all the other vertices form a $3$-uniform increasing path of length $l-1$. So the number of proper weight assignments of this kind is $(2l-1)a_{l-1}$. 
When $W_{2}=2$, $W_{1}$ has to be $1$, and all the other vertices form a $3$-uniform increasing path of length $l-1$; the number of proper weight assignments of this kind is $a_{l-1}$. Hence, by the inductive assumption, we have $a_l=2la_{l-1}=2l\cdot(2l-2)!!=(2l)!!$. This completes the proof for $r=2$. \end{proof} For any vertex $v$ and any integer $h\geq 1$, let $N_h(v)$ be the set of vertices $w$ such that there exists a path, as defined in Definition~\ref{path}, of length at most $h$ connecting $v$ to $w$. When $h=0$, let $N_0(v)=\{v\}$. The following lemma, which is a modification of Lemma $6$ in~\cite{GG}, shows that for any vertex $v$, the probability that the minimal influence-blocking hypergraph containing $v$ is not a sub-hypergraph of $N_h(v)$ converges to $0$ as $h\rightarrow\infty$. \begin{lem}\label{ProbOfE} For any integers $r\geq1$ and $d\geq2$, let $G$ be any $(r+1)$-uniform linear hypergraph of maximum degree $d$, and suppose that the vertices are equipped with i.i.d.~weights from the uniform distribution on $[0,1]$. Then for any vertex $v$ and any $h\geq 0$, $$\P[\mathcal{B}(v)\not\subset N_h(v)]\leq \frac{d(d-1)^h}{r\prod_{k=1}^{h+1}(k+\frac{1}{r})}.$$ \end{lem} \begin{proof} For any vertex $v$ there exist at most $d(d-1)^hr^h$ distinct paths of length $h+1$ connecting $v$ to some vertex in $N_{h+1}(v)\backslash N_h(v)$. By definition, $\mathcal{B}(v)\not\subset N_h(v)$ if and only if at least one of these paths is increasing. So by applying a union bound and equation~(\ref{PofI}), we have $$ \P[\mathcal{B}(v)\not\subset N_h(v)] \le\frac{d(d-1)^hr^h}{\prod_{k=1}^{h+1}(kr+1)} = \frac{d(d-1)^h}{r\prod_{k=1}^{h+1}(k+\frac{1}{r})}. $$ \end{proof} \subsection{Bonus function of hypergraphs} To analyze the probability of the root of a rooted hypertree being selected into the independent set, we use the following notion to establish a recursive equation and hence, after some analysis, a differential equation.
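Before moving on, the counting formula of Lemma~\ref{NumOfIP} is easy to sanity-check by exhaustive enumeration for small parameters. The Python sketch below is ours and is for illustration only (it is not part of the paper's argument); it compares the brute-force count of increasing weight assignments with $(lr+1)!/\prod_{k=1}^{l}(kr+1)$.

```python
from itertools import permutations
from math import factorial, prod

def increasing_count(r, l):
    """Brute-force count of ways to assign {0,...,lr} to the vertices
    v_0,...,v_{lr} of an (r+1)-uniform path so that v_{kr} carries the
    smallest weight in edge e_k = {v_{kr},...,v_{(k+1)r}} for every k."""
    n = l * r + 1
    count = 0
    for w in permutations(range(n)):
        if all(w[k * r] == min(w[k * r:(k + 1) * r + 1]) for k in range(l)):
            count += 1
    return count

def formula(r, l):
    # the closed form from the lemma: (lr+1)! / prod_{k=1}^{l} (kr+1)
    return factorial(l * r + 1) // prod(k * r + 1 for k in range(1, l + 1))

for r, l in [(1, 2), (2, 1), (2, 2), (2, 3)]:
    assert increasing_count(r, l) == formula(r, l)
print("a_2 =", formula(2, 2), " a_3 =", formula(2, 3))
```

For $r=2$ this reproduces the double factorials from the proof: the formula gives $8=4!!$ at $l=2$ and $48=6!!$ at $l=3$.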
Consider the following {\em bonus function of hypergraphs}, which extends the bonus function of graphs introduced by Gamarnik and Goldberg~\cite{GG}: \begin{definition} Let $T$ be a rooted hypertree, whose vertices are equipped with distinct positive weights. Let $W_v$ be the weight of a vertex $v$, let $DE(v)$ be the set of descending edges of $v$, and let $I$ be the indicator function. Then the {\em bonus function of hypergraphs} $S_T:V(T)\rightarrow\mathbb{R}$ is defined by $$ S_T(v)= \left\{ \begin{aligned} &W_v, &&\text{$v$ is a leaf,}\\ &W_v\prod_{e\in DE(v)}I(W_v>\min_{u\in e,u\not=v}\{S_T(u)\}),\ &&\text{otherwise.} \end{aligned} \right. $$ \end{definition} Given a weighted rooted hypertree, the bonus function value of the root is exactly the weight of the root if the root is selected by the greedy algorithm, and is $0$ if the root is not selected, as shown by the following lemma: \begin{lem}\label{POfBF} Let $T$ be a rooted hypertree, whose vertices are equipped with distinct positive weights. Let $\gamma$ be the root of $T$ and let $W_\gamma$ be the weight of $\gamma$. Then we have $$S_T(\gamma)=W_{\gamma}I(\gamma\in\mathcal{I}(T)).$$ \end{lem} \begin{proof} We proceed by induction on the height of the hypertree. When the height is $0$, the lemma is clearly true. Now suppose $T$ has height $h>0$, and the lemma holds for all hypertrees of height less than $h$. Let $e_k,\ 1\leq k \leq d$, be all descending edges of the root $\gamma$. Then by the definition of the bonus function, we have $$S_T(\gamma)=W_\gamma\prod_{k=1}^{d}I(W_\gamma>\min_{v\in e_k, v\not=\gamma}\{S_T(v)\}).$$ So it suffices to show that $$\prod_{k=1}^{d}I(W_\gamma>\min_{v\in e_k, v\not=\gamma}\{S_T(v)\})=I(\gamma\in\mathcal{I}(T)).$$ Let $T_{v}$ be the subtree of $T$ with root $v$, such that $T_v$ contains only the edges descending from $v$. Suppose first that $W_\gamma>\min_{v\in e_k, v\not=\gamma}\{S_T(v)\}$ for all $k$ with $1\leq k\leq d$.
For an arbitrary $k$, pick $v\in e_k$, $v\not=\gamma$, such that $W_\gamma>S_T(v)$; then there are two cases. Firstly, if $W_\gamma<W_{v}$, then we have $S_T(v)=0$. By the inductive assumption, this implies $v\not\in\mathcal{I}(T_{v})$. Then by Lemma \ref{POfIBG}, since $T_v$ is an influence-blocking hypergraph of $T$, we have $v\not\in\mathcal{I}(T)$. This means that $\gamma$ will not be deleted by $e_k$. Secondly, if $W_\gamma>W_{v}$, then $\gamma$ is not the smallest-weighted vertex of $e_k$, which also means that $\gamma$ will not be deleted by $e_k$. This argument works for all $1\leq k\leq d$. Therefore, $\gamma\in\mathcal{I}(T)$. On the other hand, if $W_\gamma<\min_{v\in e_k, v\not=\gamma}\{S_T(v)\}$ for some $k$, then $\gamma$ must be the smallest-weighted vertex in $e_k$ and $v\in\mathcal{I}(T_v)$ for all $v\in e_k$, $v\not=\gamma$. Since $T_v$ is an influence-blocking hypergraph of $T$, by Lemma \ref{POfIBG} we have $v\in\mathcal{I}(T)$ for all $v\in e_k$, $v\not=\gamma$. This implies that $\gamma$ will be deleted by $e_k$. Therefore, $\gamma\not\in\mathcal{I}(T)$. \end{proof} Let $T(d,h)$ be the $(r+1)$-uniform rooted hypertree such that all non-leaf vertices have $d$ descending edges, and all leaves have depth $h$. Let $\tilde{T}(d,h)$ be the $(r+1)$-uniform rooted hypertree such that the root has $d$ descending edges while all other non-leaf vertices have $d-1$ descending edges, and all leaves have depth $h$. Let $\gamma$ be the root of $T(d,h)$. Apply the first step of the greedy algorithm to $T(d,h)$; that is, equip the vertices of $T(d,h)$ with i.i.d.~weights from the uniform distribution on $[0,1]$. Let $F_{d,h}$ be the distribution function of $S_{T(d,h)}(\gamma)$. That is, $F_{d,h}(x)=\P[S_{T(d,h)}(\gamma)\leq x]$. Similarly, let $\tilde{F}_{d,h}$ be the distribution function of $S_{\tilde{T}(d,h)}(\gamma)$. That is, $\tilde{F}_{d,h}(x)=\P[S_{\tilde{T}(d,h)}(\gamma)\leq x]$.
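Lemma~\ref{POfBF} can be stress-tested numerically. The sketch below is ours and rests on one concrete reading of the greedy algorithm (whose formal definition appears in Section 2 of the paper): vertices are examined in decreasing weight order, and a vertex is rejected exactly when accepting it would complete an edge all of whose other vertices are already selected. Under that reading, the bonus of the root agrees with the greedy outcome on every random weighting of the $3$-uniform tree $T(2,2)$.

```python
import random

def build_tree(d, h, r=2):
    """Build the (r+1)-uniform rooted hypertree T(d,h): every non-leaf
    vertex has d descending edges, all leaves at depth h.  Returns
    (number of vertices, list of edges, desc), where desc[v] lists the
    descending edges of v as tuples of child ids."""
    edges, desc = [], {}
    nxt = [1]                         # vertex 0 is the root
    def grow(v, depth):
        desc[v] = []
        if depth == h:
            return
        for _ in range(d):
            kids = tuple(range(nxt[0], nxt[0] + r))
            nxt[0] += r
            desc[v].append(kids)
            edges.append((v,) + kids)
            for u in kids:
                grow(u, depth + 1)
    grow(0, 0)
    return nxt[0], edges, desc

def bonus(v, w, desc):
    """The bonus function S_T(v), computed by the recursive definition."""
    if not desc[v]:
        return w[v]                   # leaf
    for kids in desc[v]:
        if not w[v] > min(bonus(u, w, desc) for u in kids):
            return 0.0
    return w[v]

def greedy(n, edges, w):
    """Our reading of the greedy step: scan vertices in decreasing
    weight order; reject v iff accepting it would complete an edge."""
    incident = {}
    for e in edges:
        for v in e:
            incident.setdefault(v, []).append(e)
    selected = set()
    for v in sorted(range(n), key=lambda x: -w[x]):
        if all(any(u not in selected for u in e if u != v)
               for e in incident.get(v, [])):
            selected.add(v)
    return selected

random.seed(0)
n, edges, desc = build_tree(d=2, h=2)   # 3-uniform, 21 vertices
for _ in range(300):
    w = [random.random() for _ in range(n)]
    root_selected = 0 in greedy(n, edges, w)
    assert bonus(0, w, desc) == (w[0] if root_selected else 0.0)
print("Lemma check passed on", n, "vertices")
```

Note that the agreement is checked only at the root, exactly as the lemma states; for a non-root $v$, the bonus relates to $\mathcal{I}(T_v)$ rather than $\mathcal{I}(T)$.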
Note that by Lemma \ref{POfBF}, we have \begin{align} &1-F_{d,h}(0)=\P[\gamma\in\mathcal{I}(T(d,h))],\\ &1-\tilde{F}_{d,h}(0)=\P[\gamma\in\mathcal{I}(\tilde{T}(d,h))]. \end{align} Also, by the definition of the bonus function of hypergraphs, $F_{d,h}$ and $\tilde{F}_{d,h}$ satisfy the following recursive equations for all $x\in[0,1]$: \begin{equation}\label{FdRec} F_{d,h}(x)=1-\int^1_x\P[S_{T(d,h)}(\gamma)=W_\gamma|W_\gamma=t]dt=1-\int^1_x[1-(1-F_{d,h-1}(t))^{r}]^d dt, \end{equation} \begin{equation}\label{tFdRec} \tilde{F}_{d,h}(x)=1-\int^1_x\P[S_{\tilde{T}(d,h)}(\gamma)=W_\gamma|W_\gamma=t]dt=1-\int^1_x[1-(1-F_{d-1,h-1}(t))^{r}]^d dt. \end{equation} In order to get a differential equation, we need to show that $F_{d,h}$ and $\tilde{F}_{d,h}$ converge as $h\rightarrow\infty$. We make use of the following lemma: \begin{lem}\label{IneqOfF} For any $x\in\mathbb{R}$ and integer $h\geq0$, the following inequalities hold: \begin{equation}\label{Os1} (-1)^hF_{d,h}(x)\leq(-1)^hF_{d,h+1}(x), \end{equation} \begin{equation}\label{Os2} (-1)^hF_{d,h}(x)\leq(-1)^hF_{d,h+2}(x). \end{equation} \end{lem} \begin{proof} We prove inequality (\ref{Os1}) by induction. First, when $h=0$, by definition we have $F_{d,0}(x)\leq F_{d,1}(x)$. Now for $h\geq1$, suppose inequality (\ref{Os1}) holds for $h-1$. Replacing $h$ by $h+1$ in equation (\ref{FdRec}) and taking the difference with the original equation, we have $$F_{d,h+1}(x)-F_{d,h}(x)=\int^1_x\l(\l(1-(1-F_{d,h-1}(t))^{r}\r)^d-\l(1-(1-F_{d,h}(t))^{r}\r)^d\r)dt.$$ Using this equation, we can check that when $F_{d,h}(x)\geq F_{d,h-1}(x)$ for all $x$, we have $F_{d,h+1}(x)\leq F_{d,h}(x)$; and when $F_{d,h}(x)\leq F_{d,h-1}(x)$ for all $x$, we have $F_{d,h+1}(x)\geq F_{d,h}(x)$. Hence, by the inductive assumption, we have $(-1)^hF_{d,h}(x)\leq(-1)^hF_{d,h+1}(x)$. This completes the proof of inequality (\ref{Os1}). The same reasoning gives the proof of inequality (\ref{Os2}).
\end{proof} \begin{cor}\label{CofF} There exist functions $F_{d,even}(x):\mathbb{R}\rightarrow[0,1]$ and $F_{d,odd}(x):\mathbb{R}\rightarrow[0,1]$ such that the sequence of functions $\{F_{d,2k}(x)\}_{k\ge0}$ converges pointwise to $F_{d,even}(x)$ and the sequence of functions $\{F_{d,2k+1}(x)\}_{k\ge0}$ converges pointwise to $F_{d,odd}(x)$, and $F_{d,even}(x)\le F_{d,odd}(x)$ for all $x\in\mathbb{R}$. \end{cor} \begin{proof} As a result of inequality~(\ref{Os2}), for any $x\in\mathbb{R}$, the sequence $\{F_{d,2k}(x)\}_{k\geq 0}$ is increasing and the sequence $\{F_{d,2k+1}(x)\}_{k\geq 0}$ is decreasing. Also, by inequality~(\ref{Os1}), both sequences are bounded. Hence, by the Monotone Convergence Theorem~\cite{R}, they must converge, which implies the existence of $F_{d,even}(x)$ and $F_{d,odd}(x)$. The inequality is obtained by letting $k\rightarrow\infty$ in inequality~(\ref{Os1}) with $h=2k$. \end{proof} Analogues of Lemma~\ref{IneqOfF} and Corollary~\ref{CofF} for $\tilde{F}_{d,h}$ can be obtained using the same ideas, and we omit the details. \section{Proof of Theorem~\ref{Thm1}} The following lemma, which is a modification of Theorem $7$ in~\cite{GG}, provides an upper bound for the difference between the probability that a vertex $v$ in a hypergraph $G$ is selected and the probability that the root $\gamma$ of a rooted hypertree is selected by the greedy algorithm, showing that the performance of the greedy algorithm on $G$ is locally similar to that on a hypertree. \begin{lem}\label{LocalProb} For any integers $r\geq1$, $d\geq2$ and $g\geq4$, let $G$ be an $(r+1)$-uniform $d$-regular hypergraph with girth $g$. Let $h_0=\lfloor \frac{g-3}{2}\rfloor$ and $T=\tilde{T}(d,h)$ with $h\geq h_0+1$, and let $\gamma$ be the root of $T$. Then for every vertex $v\in V(G)$, \begin{equation}\label{LP} |\P[v\in \mathcal{I}(G)]-\P[\gamma\in \mathcal{I}(T)]|\leq \frac{d(d-1)^{h_0}}{r\prod_{k=1}^{h_0+1}(k+\frac{1}{r})}.
\end{equation} \end{lem} \begin{proof} We apply the first step of the greedy algorithm to $G$ and $T$ in the following way. We first give the vertices of $G$ i.i.d.~weights from the uniform distribution on $[0,1]$. Since $G$ has girth $g\geq 2h_0+3$, the neighborhood $N_{h_0+1}(v)$ is a $\tilde{T}(d,h_0+1)$ hypertree, so we can find an isomorphism $f$ that maps $N_{h_0+1}(v)$ to $N_{h_0+1}(\gamma)$. Then we give the vertices in $N_{h_0+1}(\gamma)$ the same weights as their preimages under $f$ in $N_{h_0+1}(v)$. Finally, we give all remaining vertices in $T$ i.i.d.~weights from the uniform distribution on $[0,1]$. Then we apply the second step of the greedy algorithm to both $G$ and $T$ to get $\mathcal{I}(G)$ and $\mathcal{I}(T)$. In this setting, we have the following estimate: \begin{align*} \P[v\in\mathcal{I}(G)] =&\P[v\in\mathcal{I}(G),\mathcal{B}_G(v)\subset N_{h_0}(v)]+\P[v\in\mathcal{I}(G),\mathcal{B}_G(v)\not\subset N_{h_0}(v)]\\ =&\P[\gamma\in\mathcal{I}(T),\mathcal{B}_{T}(\gamma)\subset N_{h_0}(\gamma)]+\P[v\in\mathcal{I}(G),\mathcal{B}_G(v)\not\subset N_{h_0}(v)]\tag{\text{Lemma~\ref{POfIBG}}}\\ \leq&\P[\gamma\in\mathcal{I}(T)]+\P[\mathcal{B}_G(v)\not\subset N_{h_0}(v)]. \end{align*} This implies that \begin{align*} \P[v\in\mathcal{I}(G)]-\P[\gamma\in\mathcal{I}(T)]&\leq\P[\mathcal{B}_G(v)\not\subset N_{h_0}(v)]\\ &\leq \frac{d(d-1)^{h_0}}{r\prod_{k=1}^{h_0+1}(k+\frac{1}{r})}\tag{\text{Lemma~\ref{ProbOfE}}}. \end{align*} We complete the proof by repeating the reasoning above with the roles of $\P[v\in\mathcal{I}(G)]$ and $\P[\gamma\in\mathcal{I}(T)]$ reversed. \end{proof} Using a similar idea as in the proof above, we can also show that the following limits exist: \begin{lem}\label{limexist} For any fixed integer $d$, the limits $\lim_{h\rightarrow\infty}{\P[\gamma\in \mathcal{I}({T}(d,h))]}$ and $\lim_{h\rightarrow\infty}{\P[\gamma\in \mathcal{I}(\tilde{T}(d,h))]}$ exist, where $\gamma$ denotes the root of the corresponding rooted hypertree.
\end{lem} \begin{proof} We only present the proof of the existence of $\lim_{h\rightarrow\infty}{\P[\gamma\in \mathcal{I}(\tilde{T}(d,h))]}$. The proof of the existence of $\lim_{h\rightarrow\infty}{\P[\gamma\in \mathcal{I}({T}(d,h))]}$ is similar and we omit the details. Let $h$, $h'$ be positive integers with $h'>h$. Using the same idea as in the proof of Lemma~\ref{LocalProb}, we can show that $$ |\P[\gamma\in \mathcal{I}(\tilde{T}(d,h))]-\P[\gamma\in \mathcal{I}(\tilde{T}(d,h'))]|\leq \frac{d(d-1)^{h-1}}{r\prod_{k=1}^{h}(k+\frac{1}{r})}\rightarrow0\ \text{as $h\rightarrow\infty$}. $$ So we conclude that the sequence $\{\P[\gamma\in\mathcal{I}(\tilde{T}(d,h))]\}_{h\geq 1}$ is a Cauchy sequence and therefore has a limit. \end{proof} Now we are ready to show that $F_{d,h}(x)$ and $\tilde{F}_{d,h}(x)$ converge, and hence obtain the equations we need: \begin{lem}\label{DefEq} There exist functions $F_{d}(x)$ and $\tilde{F}_{d}(x)$ such that $F_{d,h}(x)$ converges pointwise to $F_{d}(x)$ and $\tilde{F}_{d,h}(x)$ converges pointwise to $\tilde{F}_{d}(x)$ as $h\rightarrow\infty$. Moreover, $F_{d}(x)$ and $\tilde{F}_{d}(x)$ satisfy the following equations: \begin{equation}\label{FdE} F_{d}(x)=1-\int^1_x[1-(1-F_{d}(t))^{r}]^d dt, \end{equation} \begin{equation}\label{tFdE} \tilde{F}_{d}(x)=1-\int^1_x[1-(1-F_{d-1}(t))^{r}]^d dt. \end{equation} \end{lem} \begin{proof} We only present the proof of the existence of $F_d$ here. The proof of the existence of $\tilde{F}_{d}$ is similar and we omit the details. By Corollary \ref{CofF}, there exist $F_{d,even}(x)$ and $F_{d,odd}(x)$ such that $F_{d,2k}(x)$ converges pointwise to $F_{d,even}(x)$ and $F_{d,2k+1}(x)$ converges pointwise to $F_{d,odd}(x)$ as $k\rightarrow\infty$. Hence, to prove the existence of $F_d$, it suffices to show that $F_{d,even}(x)=F_{d,odd}(x)$ for all $x\in\mathbb{R}$. By Lemma \ref{limexist}, $\lim_{h\rightarrow\infty}{\P[\gamma\in\mathcal{I}(T(d,h))]}$ exists.
Since $F_{d,h}(0)=1-\P[\gamma\in\mathcal{I}(T(d,h))]$, this implies that $\lim_{h\rightarrow\infty}{F_{d,h}(0)}$ exists. So we have $$F_{d,even}(0)=\lim_{h\rightarrow\infty}{F_{d,h}(0)}=F_{d,odd}(0).$$ Now, considering equation~(\ref{FdRec}) with $h=2k$, letting $k$ go to infinity on both sides, and using the Dominated Convergence Theorem~\cite{R}, we have $$F_{d,even}(x)=1-\int^1_x[1-(1-F_{d,odd}(t))^{r}]^d dt.$$ Similarly, we also have $$F_{d,odd}(x)=1-\int^1_x[1-(1-F_{d,even}(t))^{r}]^d dt.$$ Taking derivatives on both sides and then taking the difference of these two equations, we have $$F'_{d,even}(x)-F'_{d,odd}(x)=[1-(1-F_{d,odd}(x))^{r}]^d-[1-(1-F_{d,even}(x))^{r}]^d\ge 0,$$ where the inequality comes from the fact that $F_{d,even}\leq F_{d,odd}$ by Corollary~\ref{CofF}. So for any fixed $x\in[0,1]$, $$F_{d,even}(x)=F_{d,even}(0)+\int^x_0F'_{d,even}(t)dt\geq F_{d,odd}(0)+\int^x_0F'_{d,odd}(t)dt=F_{d,odd}(x).$$ This, combined with the inequality $F_{d,even}\leq F_{d,odd}$, implies $F_{d,even}=F_{d,odd}$. This completes the proof of the existence of $F_d(x)$. Finally, letting $h\rightarrow\infty$ in equations~(\ref{FdRec}) and~(\ref{tFdRec}) and using the Dominated Convergence Theorem~\cite{R}, we get the desired equations. \end{proof} \begin{lem}\label{Fd} For any integer $d\ge 3$, let $G_d(x)=1-F_{d-1}(x)$. Then $G_d(x)$ satisfies the following equation: \begin{equation} 1-\sum_{n\geq0} \binom{n+d-2}{d-2}\frac{G_d(x)^{rn+1}}{rn+1}=x. \end{equation} \end{lem} \begin{proof} By equation~(\ref{FdE}), we have $$ G_d(x)=\int_{x}^{1}(1-G_d(t)^r)^{d-1}dt. $$ Taking derivatives on both sides, we have $$ G_d'(x)=-(1-G_d(x)^{r})^{d-1}. $$ Let $H_d(x)=\sum_{n\geq0} \binom{n+d-2}{d-2}\frac{x^{rn+1}}{rn+1}$; it is not hard to check that $H_d'(x)=\frac{1}{(1-x^r)^{d-1}}$. So the equation above is equivalent to $$ \l(H_d\l(G_d(x)\r)\r)'=-1.
$$ Solving this equation, we obtain $$\sum_{n\geq0} \binom{n+d-2}{d-2}\frac{G_d(x)^{rn+1}}{rn+1}=-x+C.$$ Setting $x=1$, we have $0=-1+C$, which implies $C=1$. This completes the proof. \end{proof} \begin{lem}\label{tFd} For any integer $d\ge 3$, let $\tilde{G}_d(x)=1-\tilde{F}_d(x)$. Then we have the following equation: \begin{equation} \tilde{G}_d(x)=G_d(x)-\frac{G_d(x)^{r+1}}{r+1}. \end{equation} \end{lem} \begin{proof} By equation~(\ref{tFdE}), \begin{equation*} \tilde{G}_d(x)=\int^1_x(1-G_d(t)^r)^d dt. \end{equation*} We change the variable in the integral by letting $u=G_d(t)$. Since $G_d'(t)=-(1-G_d(t)^r)^{d-1}$ by the proof of Lemma~\ref{Fd}, we have $dt=-\frac{du}{(1-u^r)^{d-1}}$. Hence, recalling that $G_d(1)=0$, \begin{equation*} \tilde{G}_d(x)=-\int^{G_d(1)}_{G_d(x)}(1-u^r)du=G_d(x)-\frac{G_d(x)^{r+1}}{r+1}. \end{equation*} \end{proof} Now we are ready to prove Theorem~\ref{Thm1}. \begin{proof}[Proof of Theorem~\ref{Thm1}] Applying inequality~(\ref{LP}), we have \begin{align*} |\frac{\mathbb{E}[|\mathcal{I}(G)|]}{n}-\P[\gamma\in\mathcal{I}(\tilde{T}(d,h))]| &\leq\frac{1}{n}\sum_{v\in V(G)} |\P[v\in \mathcal{I}(G)]-\P[\gamma\in \mathcal{I}(\tilde{T}(d,h))]|\\ &\leq\frac{d(d-1)^{h_0}}{r\prod_{k=1}^{h_0+1}(k+\frac{1}{r})}. \end{align*} Note that this inequality holds for all $h\geq h_0+1$. Letting $h\rightarrow\infty$, we have $$ |\frac{\mathbb{E}[|\mathcal{I}(G)|]}{n}-\lim_{h\rightarrow\infty}{\P[\gamma\in \mathcal{I}(\tilde{T}(d,h))]}|\leq\frac{d(d-1)^{h_0}}{r\prod_{k=1}^{h_0+1}(k+\frac{1}{r})}. $$ Let $f(d,r)=\lim_{h\rightarrow\infty}{\P[\gamma\in \mathcal{I}(\tilde{T}(d,h))]}=\tilde{G}_d(0)$; then we have the required inequality~(\ref{main}). Let $u(d,r)=\lim_{h\rightarrow\infty}{\P[\gamma\in \mathcal{I}({T}(d-1,h))]}=G_d(0)$. By Lemma~\ref{Fd}, we know that $u(d,r)$ satisfies equation~(\ref{ud}). By Lemma~\ref{tFd}, we have \begin{equation*} f(d,r)=\tilde{G}_d(0)=G_d(0)-\frac{G_d(0)^{r+1}}{r+1}=u(d,r)-\frac{u(d,r)^{r+1}}{r+1}. \end{equation*} This completes the proof.
\end{proof} \section{Proof of Theorem~\ref{Thm2}} In Section 2, we observed that vertices that are far away from each other are very likely ``independent''. More precisely, if two vertices $u$, $v$ are far away from each other, then the indicator of the event that $u$ is selected and the indicator of the event that $v$ is selected by the greedy algorithm have small covariance. This phenomenon can also be used to give an upper bound for the variance of the algorithm. \begin{lem}\label{Var} For any integers $r\geq1$ and $d\geq2$, let $G$ be an $(r+1)$-uniform linear hypergraph on $n$ vertices with maximum degree $d$. Then the variance satisfies \begin{equation} \mathrm{Var}[|\mathcal{I}(G)|]\leq {3d^2r^2e^{r^2(d-1)^3}}n. \end{equation} \end{lem} \begin{proof} Let $V(G)=\{v_1, v_2,\dots,v_n\}$ and $X_i=I(v_i\in\mathcal{I}(G))$. Then \begin{align*} \mathrm{Var}(|\mathcal{I}(G)|)&=\mathrm{Var}(\sum_{i=1}^n X_i)\\ &=\sum_{i=1}^n(\mathbb{E}[X_i^2]-\mathbb{E}[X_i]^2)+\sum_{1\leq i\not=j\leq n}(\mathbb{E}[X_iX_j]-\mathbb{E}[X_i]\mathbb{E}[X_j])\\ &\leq n+\sum_{1\leq i\leq n}\sum_{\delta\geq1}\sum_{v_j\in N_{\delta}(v_i)\backslash N_{\delta-1}(v_i)}(\mathbb{E}[X_iX_j]-\mathbb{E}[X_i]\mathbb{E}[X_j]), \end{align*} where the inequality uses the bound $(\mathbb{E}[X_i^2]-\mathbb{E}[X_i]^2)\leq1$. For any $1\leq i\leq n$, we consider the sum $$\sum_{\delta\geq1}\sum_{v_j\in N_{\delta}(v_i)\backslash N_{\delta-1}(v_i)}(\mathbb{E}[X_iX_j]-\mathbb{E}[X_i]\mathbb{E}[X_j]).$$ First, for any $\delta\geq3$, let $h=\lfloor\frac{\delta-3}{2}\rfloor$, let $A_{i,h}$ denote the event $\{\mathcal{B}(v_i)\not\subset N_h(v_i)\}$, and let $A_{i,h}^c$ denote the complement of $A_{i,h}$, that is, $\{\mathcal{B}(v_i)\subset N_h(v_i)\}$. The event $A_{i,h}^c$ is determined only by the weights of the vertices in $N_{h+1}(v_i)$. Notice that for every $v_j\in N_{\delta}(v_i)\backslash N_{\delta-1}(v_i)$ we have $N_{h+1}(v_i)\cap N_{h+1}(v_j)=\emptyset$, since $\delta\geq 2h+3$. So $A_{i,h}^c$ and $A_{j,h}^c$ are independent.
Then we have \begin{align*} \mathbb{E}[X_iX_j]&=\P[v_i\in\mathcal{I}(G), v_j\in\mathcal{I}(G)]\\ &=\P[v_i\in\mathcal{I}(G), v_j\in\mathcal{I}(G), A_{i,h}^c\cap A_{j,h}^c]+\P[v_i\in\mathcal{I}(G), v_j\in\mathcal{I}(G), A_{i,h}\cup A_{j,h}]. \end{align*} By Lemma~\ref{POfIBG} and the independence between $A_{i,h}^c$ and $A_{j,h}^c$, we have \begin{align*} \P[v_i\in\mathcal{I}(G), v_j\in\mathcal{I}(G), A_{i,h}^c\cap A_{j,h}^c]&=\P[v_i\in\mathcal{I}(\mathcal{B}(v_i)), v_j\in\mathcal{I}(\mathcal{B}(v_j)), A_{i,h}^c\cap A_{j,h}^c]\\ &=\P[v_i\in\mathcal{I}(\mathcal{B}(v_i)), A_{i,h}^c]\P[v_j\in\mathcal{I}(\mathcal{B}(v_j)), A_{j,h}^c]\\ &\le\mathbb{E}[X_i]\mathbb{E}[X_j]. \end{align*} On the other hand, by Lemma~\ref{ProbOfE}, \begin{align*} \P[v_i\in\mathcal{I}(G), v_j\in\mathcal{I}(G), A_{i,h}\cup A_{j,h}]&\le\P[A_{i,h}]+\P[A_{j,h}]\\ &\le\frac{2d(d-1)^h}{r\prod_{k=1}^{h+1}(k+\frac{1}{r})}. \end{align*} Hence, $$ \mathbb{E}[X_iX_j]-\mathbb{E}[X_i]\mathbb{E}[X_j]\le\frac{2d(d-1)^h}{r\prod_{k=1}^{h+1}(k+\frac{1}{r})}. $$ Since $G$ has maximum degree $d$, we have $|N_{\delta}(v_i)\backslash N_{\delta-1}(v_i)|\leq d(d-1)^{\delta-1}r^{\delta}$. In particular, for odd $\delta\geq3$ we can write $\delta=2h+3$, so the sum satisfies \begin{align*} \sum_{\text{odd}\ \delta\geq3}\sum_{v_j\in N_{\delta}(v_i)\backslash N_{\delta-1}(v_i)}(\mathbb{E}[X_iX_j]-\mathbb{E}[X_i]\mathbb{E}[X_j]) &\leq\sum_{h\geq0}d(d-1)^{2h+2}r^{2h+3}\frac{2d(d-1)^h}{r(h+1)!}\\ &=\frac{2d^2}{d-1}\sum_{h\geq1}\frac{r^{2h}(d-1)^{3h}}{h!}\\ &\leq2d^2\sum_{h\geq1}\frac{r^{2h}(d-1)^{3h}}{h!}. \end{align*} For even $\delta\geq3$, we can write $\delta=2h+4$.
So the sum satisfies \begin{align*} \sum_{\text{even}\ \delta\geq3}\sum_{v_j\in N_{\delta}(v_i)\backslash N_{\delta-1}(v_i)}(\mathbb{E}[X_iX_j]-\mathbb{E}[X_i]\mathbb{E}[X_j]) &\leq\sum_{h\geq0}d(d-1)^{2h+3}r^{2h+4}\frac{2d(d-1)^h}{r(h+1)!}\\ &=2d^2r\sum_{h\geq1}\frac{r^{2h}(d-1)^{3h}}{h!}. \end{align*} For $1\leq\delta\leq2$, using the bound $\mathbb{E}[X_iX_j]-\mathbb{E}[X_i]\mathbb{E}[X_j]\leq 1$, we have $$\sum_{1\leq\delta\leq2}\sum_{v_j\in N_{\delta}(v_i)\backslash N_{\delta-1}(v_i)}(\mathbb{E}[X_iX_j]-\mathbb{E}[X_i]\mathbb{E}[X_j])\leq 2d^2r^2. $$ Combining the three inequalities above, we have $$\sum_{\delta\geq1}\sum_{v_j\in N_{\delta}(v_i)\backslash N_{\delta-1}(v_i)}(\mathbb{E}[X_iX_j]-\mathbb{E}[X_i]\mathbb{E}[X_j])\leq2d^2r^2e^{r^2(d-1)^3}.$$ So the variance satisfies $$\mathrm{Var}(|\mathcal{I}(G)|)\leq n+2d^2r^2e^{r^2(d-1)^3}n\leq3d^2r^2e^{r^2(d-1)^3}n.$$ \end{proof} \begin{proof}[Proof of Theorem \ref{Thm2}] By Lemma~\ref{Var}, we know that for fixed $d$ and $r$, there exists a constant $c$ such that $\mathrm{Var}(|\mathcal{I}(G)|)\le cn$. Hence, by Chebyshev's Inequality we have $$ \P[||\mathcal{I}(G)|-\mathbb{E}[|\mathcal{I}(G)|]|>\sqrt{n}b(n)]\le\frac{\mathrm{Var}(|\mathcal{I}(G)|)}{b(n)^2n}\rightarrow0,\ \text{as $n\rightarrow\infty$}.$$ \end{proof} \section*{Appendix A} We first collect some inequalities for real numbers: \begin{prop}\label{EandI} Let $n,r,d$ be positive integers. Then \begin{center} \begin{tabular}{lp{4in}} $1$. & For $x \geq 0$, \begin{equation}\label{E1} \int_0^xe^{t^{r}}dt=\sum_{n\geq 0}\frac{x^{rn+1}}{n!(rn+1)} \end{equation} \\ $2$. & For $n\le\sqrt{d}$, \begin{equation}\label{I1} \Bigl(1+\frac{n}{d}\Bigr)^n<e^{\frac{n^2}{d}}<e \end{equation} \\ $3$. & For $y \geq 0$, \begin{equation}\label{I2} \Bigl(\frac{y}{n}\Bigr)^n\leq e^{\frac{y}{e}} \end{equation} \end{tabular} \end{center} \end{prop} Let $u_d=\lim_{h\rightarrow\infty}\P[\gamma\in\mathcal{I}(T(d,h))]=u(d+1,r)$.
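For concrete values of $d$ and $r$, the quantity $u(d,r)$, and hence $f(d,r)$, can be computed numerically. The sketch below is ours (the function names are not from the paper): it uses the representation $H_d(u)=\int_0^u(1-t^r)^{-(d-1)}dt$ from the proof of Lemma~\ref{Fd}, solves $H_d(u)=1$ by bisection, evaluates $f(d,r)=u-u^{r+1}/(r+1)$, and cross-checks the quadrature against the series $\sum_{n\ge0}\binom{n+d-2}{d-2}u^{rn+1}/(rn+1)$.

```python
from math import comb

def H(u, d, r, m=4000):
    """H_d(u) = integral_0^u dt/(1-t^r)^(d-1), via the midpoint rule."""
    step = u / m
    return sum(step / (1.0 - ((i + 0.5) * step) ** r) ** (d - 1)
               for i in range(m))

def H_series(u, d, r, terms=200):
    """The same quantity via the series from Lemma Fd."""
    return sum(comb(n + d - 2, d - 2) * u ** (r * n + 1) / (r * n + 1)
               for n in range(terms))

def u_of(d, r):
    """Bisection for the unique root of H_d(u) = 1 in (0,1);
    H_d is increasing, H_d(0) = 0, and H_d(u) blows up as u -> 1."""
    lo, hi = 0.0, 1.0 - 1e-9
    for _ in range(80):
        mid = (lo + hi) / 2
        if H(mid, d, r) < 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def f_of(d, r):
    u = u_of(d, r)                  # u(d,r) = G_d(0)
    return u - u ** (r + 1) / (r + 1)

print("u(3,2) =", round(u_of(3, 2), 4), " f(3,2) =", round(f_of(3, 2), 4))
```

The agreement between `H` and `H_series` is a direct numerical check of the identity $H_d'(x)=(1-x^r)^{-(d-1)}$ used in the proof of Lemma~\ref{Fd}.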
Note that $u_d$ can be viewed as the probability of the root of $T(d,\infty)$ being selected by the greedy algorithm, while $f(d,r)$ can be viewed as the probability of the root of $\tilde{T}(d,\infty)$ being selected by the greedy algorithm. \begin{prop}\label{AsymAnal} $ f(d,r)\sim(\frac{\log d}{rd})^{\frac{1}{r}} $ as $d\rightarrow\infty$. \end{prop} \begin{proof} Let $g(d,u)=\sum_{n\geq 0}\binom{n+d-1}{n}\frac{u^{rn+1}}{rn+1}$. It is not hard to see that $g$ is increasing with respect to $u$. By Lemma~\ref{Fd}, we have $g(d,u_d)=1$. Now, for any $\epsilon>0$, let $u=((\frac{1}{r})^{\frac{1}{r}}+\epsilon)(\frac{\log d}{d})^{\frac{1}{r}}$; then \setlength{\jot}{12pt} \begin{align*} g(d,u)&\geq\sum_{n\geq 0}\frac{d^n}{n!}\frac{u^{rn+1}}{rn+1}\\ &=d^{-\frac{1}{r}}\sum_{n\geq 0}\frac{(ud^{\frac{1}{r}})^{rn+1}}{n!(rn+1)}\\ &=d^{-\frac{1}{r}}\int_0^{ud^{\frac{1}{r}}}e^{t^{r}}dt \tag{by~\ref{E1}}\\ &>d^{-\frac{1}{r}}\int_{(\log d)^{\frac{1}{r}}(\frac{1}{r})^{\frac{1}{r}}}^{(\log d)^{\frac{1}{r}}[(\frac{1}{r})^{\frac{1}{r}}+\epsilon]}e^{t^{r}}dt\\ &>d^{-\frac{1}{r}}(\epsilon(\log d)^{\frac{1}{r}})e^{\frac{\log d}{r}}\\ &=\epsilon(\log d)^{\frac{1}{r}}.
\end{align*} This means that $g(d,u)\rightarrow\infty$ as $d\rightarrow\infty$, hence $u_d<[(\frac{1}{r})^{\frac{1}{r}}+\epsilon](\frac{\log d}{d})^{\frac{1}{r}}$ when $d$ is large enough.\\ On the other hand, for any $\epsilon>0$, let $u=c(\frac{\log d}{d})^{\frac{1}{r}}$, where $c=(\frac{1}{r}-\epsilon)^{\frac{1}{r}}$; then \begin{align*} g(d,u)&\leq\sum_{n\geq 0}\l(\frac{e(n+d)}{n}\r)^n\frac{u^{rn+1}}{rn+1}\\ &=\sum_{n\geq 0}\frac{u}{rn+1}\l(\frac{e}{n}+\frac{e}{d}\r)^n(c^{r}\log d)^n. \end{align*} When $n\geq 4c^{r}e\log d$ and $d$ is large enough, we have \begin{align*} \setlength{\jot}{12pt} \sum_{n\geq 4c^{r}e\log d}\frac{u}{rn+1}\l(\frac{e}{n}+\frac{e}{d}\r)^n(c^{r}\log d)^n &\leq u\sum_{n\geq 4c^{r}e\log d}\l(\frac{2e}{4c^{r}e\log d}\r)^n(c^{r}\log d)^n\\ &=u\sum_{n\geq 4c^{r}e\log d}\l(\frac{1}{2}\r)^n\\ &<c\l(\frac{\log d}{d}\r)^{\frac{1}{r}}\rightarrow 0\ \text{as}\ d\rightarrow\infty. \end{align*} When $n<4c^{r}e\log d$ and $d$ is large enough, we have \begin{align*} \sum_{n< 4c^{r}e\log d}\frac{u}{rn+1}\l(\frac{e}{n}+\frac{e}{d}\r)^n(c^{r}\log d)^n &<u\sum_{n< 4c^{r}e\log d}\l(1+\frac{n}{d}\r)^n\l(\frac{c^{r}e\log d}{n}\r)^n\\ &<ue\sum_{n< 4c^{r}e\log d}\l(\frac{c^{r}e\log d}{n}\r)^n \tag{by~\ref{I1}}\\ &<c\l(\frac{\log d}{d}\r)^{\frac{1}{r}}e(4c^{r}e\log d)e^{c^{r}\log d} \tag{by~\ref{I2}}\\ &=4e^2c^{r+1}(\log d)^{\frac{r+1}{r}}d^{-\epsilon}\rightarrow 0\ \text{as}\ d\rightarrow\infty. \end{align*} This means that $g(d,u)\rightarrow 0$ as $d\rightarrow\infty$, hence $u_d>(\frac{1}{r}-\epsilon)^{\frac{1}{r}}(\frac{\log d}{d})^{\frac{1}{r}}$ when $d$ is large enough.\\ These estimates imply that $u_d\sim(\frac{\log d}{rd})^{\frac{1}{r}}$, hence $u_d\rightarrow 0$ as $d\rightarrow\infty$. Recall that by Theorem~\ref{Thm1}, \begin{equation*} f(d,r)=u(d,r)-\frac{u(d,r)^{r+1}}{r+1}=u_{d-1}-\frac{u_{d-1}^{r+1}}{r+1}. \end{equation*} Therefore, $f(d,r)\sim(\frac{\log d}{rd})^{\frac{1}{r}}$ as $d\rightarrow\infty$.
\end{proof} \section*{Acknowledgements} We would like to thank Patrick Bennett and Deepak Bal for some helpful comments on simplifying the formulation of the main theorem. We would also like to thank the anonymous referees for their careful reading of the paper and useful suggestions.
\section{Introduction} Argument systems originate from philosophy... Maximally monotone operators have proven to be a significant class of objects in both modern Optimization and Functional Analysis. They extend both the concept of subdifferentials of convex functions and that of positive semi-definite operators. Their study in the context of Banach spaces, and in particular nonreflexive ones, arises naturally in the theory of partial differential equations, equilibrium problems, and variational inequalities. For a detailed study of these operators, see, e.g., \cite{Bor1,Bor2,Bor3}, or the books \cite{BC2011,BorVan,BurIus,ph,Si,Si2,RockWets,Zalinescu,Zeidler}. A useful tool for studying or proving properties of a maximally monotone operator $A$ is the concept of the ``enlargement of $A$''. A main example of this usefulness is Rockafellar's proof of maximality of the subdifferential of a convex function (Fact \ref{SubMR} below), which uses the concept of the $\ensuremath{\varepsilon}$-subdifferential. The latter is an enlargement of the subdifferential introduced in \cite{BR}. Broadly speaking, an enlargement is a multifunction which approximates the original maximally monotone operator in a convenient way. Another useful way to study a maximally monotone operator is by associating to it a convex function called the Fitzpatrick function. The latter was introduced by Fitzpatrick in \cite{Fitz88} and its connection with enlargements, as shown in \cite{BurSva:1}, is contained in \eqref{Enl:1} below. Our first aim in the present paper is to provide further characterizations of maximally monotone operators which are not enlargeable, in the setting of possibly nonreflexive Banach spaces (see Section~\ref{secneo}). In other words, in which cases does the enlargement not change the graph of a maximally monotone mapping defined on a Banach space? We address this issue in Corollary \ref{CEl:2}, under a closedness assumption on the graph of the operator.
Our other aim is to use the Fitzpatrick function to derive new results which establish the maximality of the sum of two maximally monotone operators in nonreflexive spaces (see Section~\ref{secsumo}). First, we provide a different proof of the maximality of the sum of two maximally monotone linear relations. Second, we provide a proof of the maximality of the sum of a maximally monotone linear relation and a normal cone operator when the domain of the operator intersects the interior of the domain of the normal cone. \section{Technical Preliminaries} Throughout this paper, $X$ is a real Banach space with norm $\|\cdot\|$, and $X^*$ is the continuous dual of $X$. The spaces $X$ and $X^*$ are paired by the duality pairing, denoted by $\scal{\cdot}{\cdot}$. The space $X$ is identified with its canonical image in the bidual space $X^{**}$. Furthermore, $X\times X^*$ and $(X\times X^*)^*: = X^*\times X^{**}$ are paired via $\scal{(x,x^*)}{(y^*,y^{**})}:= \scal{x}{y^*} + \scal{x^*}{y^{**}}$, where $(x,x^*)\in X\times X^*$ and $(y^*,y^{**}) \in X^*\times X^{**}$. Let $A\colon X\ensuremath{\rightrightarrows} X^*$ be a \emph{set-valued operator} (also known as a multifunction) from $X$ to $X^*$, i.e., for every $x\in X$, $Ax\subseteq X^*$, and let $\ensuremath{\operatorname{gra}} A:= \menge{(x,x^*)\in X\times X^*}{x^*\in Ax}$ be the \emph{graph} of $A$. The \emph{domain} of $A$ is $\ensuremath{\operatorname{dom}} A:= \menge{x\in X}{Ax\neq\varnothing}$, and its \emph{range} is $\ensuremath{\operatorname{ran}} A:=A(X)$. Recall that $A$ is \emph{monotone} if \begin{equation} \scal{x-y}{x^*-y^*}\geq 0,\quad \forall (x,x^*)\in \ensuremath{\operatorname{gra}} A\; \forall (y,y^*)\in\ensuremath{\operatorname{gra}} A, \end{equation} and \emph{maximally monotone} if $A$ is monotone and $A$ has no proper monotone extension (in the sense of graph inclusion). Let $A:X\rightrightarrows X^*$ be monotone and $(x,x^*)\in X\times X^*$.
We say $(x,x^*)$ is \emph{monotonically related to} $\ensuremath{\operatorname{gra}} A$ if \begin{align*} \langle x-y,x^*-y^*\rangle\geq0,\quad \forall (y,y^*)\in\ensuremath{\operatorname{gra}} A.\end{align*} Let $A:X\rightrightarrows X^*$ be maximally monotone. We say $A$ is \emph{of type (FPV)} if for every open convex set $U\subseteq X$ such that $U\cap \ensuremath{\operatorname{dom}} A\neq\varnothing$, the implication \begin{equation*} x\in U\ \text{and}\ (x,x^*)\ \text{is monotonically related to $\ensuremath{\operatorname{gra}} A\cap (U\times X^*)$} \Rightarrow (x,x^*)\in\ensuremath{\operatorname{gra}} A \end{equation*} holds. Maximally monotone operators of type (FPV) are relevant primarily in the context of nonreflexive Banach spaces. Indeed, it follows from \cite[Theorem 44.1]{Si2} and a well-known result from \cite{Rock701} that every maximally monotone operator defined on a reflexive Banach space is of type (FPV). As mentioned in \cite[\S44]{Si2}, an example of a maximally monotone operator which is not of type (FPV) has not been found yet. Let $A:X\rightrightarrows X^*$ be monotone such that $\ensuremath{\operatorname{gra}} A\neq\varnothing$. The \emph{Fitzpatrick function} associated with $A$ is defined by \begin{equation*} F_A\colon X\times X^*\to\ensuremath{\,\left]-\infty,+\infty\right]}\colon (x,x^*)\mapsto \sup_{(a,a^*)\in\ensuremath{\operatorname{gra}} A} \big(\scal{x}{a^*}+\scal{a}{x^*}-\scal{a}{a^*}\big). \end{equation*} When $A$ is maximally monotone, a fundamental property of the Fitzpatrick function $F_A$ (see Fact \ref{FFc}) is that \begin{align} F_A(x,x^*)&\ge \scal{x}{x^*} \hbox{ for all }(x,x^*)\in X\times X^*,\label{FFa}\\ F_A(x,x^*)&= \scal{x}{x^*} \hbox{ for all }(x,x^*)\in \ensuremath{\operatorname{gra}} A.\label{FFb} \end{align} Hence, for a fixed $\varepsilon\geq0$, the set of pairs $(x,x^*)$ for which $F_A(x,x^*)\le \langle x,x^*\rangle+\varepsilon$ contains the graph of $A$.
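A concrete example may make \eqref{FFa}--\eqref{FFb} and the $\varepsilon$-sublevel remark above more tangible; the computation below is ours and is not taken from the original text.

```latex
For $X=\ensuremath{\mathbb R}$ and $A=\ensuremath{\operatorname{Id}}$, so that
$\ensuremath{\operatorname{gra}} A=\menge{(a,a)}{a\in\ensuremath{\mathbb R}}$, we get
\begin{equation*}
  F_{\ensuremath{\operatorname{Id}}}(x,x^*)
  =\sup_{a\in\ensuremath{\mathbb R}}\big(xa+ax^*-a^2\big)
  =\frac{(x+x^*)^2}{4},
\end{equation*}
the supremum being attained at $a=\tfrac{1}{2}(x+x^*)$. Consequently,
\begin{equation*}
  F_{\ensuremath{\operatorname{Id}}}(x,x^*)-\scal{x}{x^*}=\frac{(x-x^*)^2}{4}\ge 0,
\end{equation*}
so \eqref{FFa} holds everywhere, \eqref{FFb} holds exactly on
$\ensuremath{\operatorname{gra}}\ensuremath{\operatorname{Id}}$, and for each
$\varepsilon\ge 0$ the set where
$F_{\ensuremath{\operatorname{Id}}}(x,x^*)\le\scal{x}{x^*}+\varepsilon$ is the band
$\menge{(x,x^*)}{|x-x^*|\le 2\sqrt{\varepsilon}}$ around the graph.
```

In particular, for every $\varepsilon>0$ this sublevel set strictly contains the graph, illustrating how the Fitzpatrick function controls approximate monotone relatedness.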
This motivates the following definition of the enlargement of a general monotone mapping $A$. Let $\varepsilon\geq0$. We define $A_{\ensuremath{\varepsilon}}:X\rightrightarrows X^*$ by \begin{align}\ensuremath{\operatorname{gra}} A_{\ensuremath{\varepsilon}}&:=\Big\{(x,x^*)\in X\times X^*\mid\langle x^*-y^*,x-y\rangle \geq-\varepsilon,\; \forall (y, y^*)\in \ensuremath{\operatorname{gra}} A\Big\}\nonumber\\ &=\Big\{(x,x^*)\in X\times X^*\mid F_A (x,x^*)\leq \langle x,x^*\rangle+\varepsilon\Big\}.\label{Enl:1} \end{align} Let $A:X\rightrightarrows X^*$ be monotone. We say $A$ is \emph{enlargeable} if $\ensuremath{\operatorname{gra}} A\varsubsetneqq \ensuremath{\operatorname{gra}} A_{\ensuremath{\varepsilon}}$ for some $\varepsilon\geq0$, and $A$ is \emph{non-enlargeable} if $\ensuremath{\operatorname{gra}} A=\ensuremath{\operatorname{gra}} A_{\ensuremath{\varepsilon}}$ for every $\varepsilon\geq0$. Lemma 23.1 in \cite{Si2} proves that if a proper convex function verifies \eqref{FFa}, then the set of all pairs $(x,x^*)$ at which \eqref{FFb} holds is a monotone set. Therefore, if $A$ is non-enlargeable, then it must be maximally monotone. We adopt the notation used in the books \cite[Chapter 2]{BorVan} and \cite{Bor1, Si, Si2}. Given a subset $C$ of $X$, $\ensuremath{\operatorname{int}} C$ is the \emph{interior} of $C$ and $\overline{C}$ is the \emph{norm closure} of $C$. The \emph{support function} of $C$, written as $\sigma_C$, is defined by $\sigma_C(x^*):=\sup_{c\in C}\langle c,x^*\rangle$. The \emph{indicator function} of $C$, written as $\iota_C$, is defined at $x\in X$ by \begin{align} \iota_C (x):=\begin{cases}0,\,&\text{if $x\in C$;}\\ +\infty,\,&\text{otherwise}.\end{cases}\end{align} For every $x\in X$, the \emph{normal cone operator} of $C$ at $x$ is defined by $N_C(x):= \menge{x^*\in X^*}{\sup_{c\in C}\scal{c-x}{x^*}\leq 0}$, if $x\in C$; and $N_C(x):=\varnothing$, if $x\notin C$.
The \emph{closed unit ball} is $B_X:=\menge{x\in X}{\|x\|\leq 1}$, and $\ensuremath{\mathbb N}:=\{1,2,3,\ldots\}$. If $Z$ is a real Banach space with dual $Z^*$ and $S\subseteq Z$, we set $S^\bot := \{z^*\in Z^*\mid\langle z^*,s\rangle= 0,\quad \forall s\in S\}$. The \emph{adjoint} of an operator $A$, written $A^*$, is defined by \begin{equation*} \ensuremath{\operatorname{gra}} A^* := \big\{(x^{**},x^*)\in X^{**}\times X^*\mid(x^*,-x^{**})\in(\ensuremath{\operatorname{gra}} A)^{\bot}\big\}. \end{equation*} We will be interested in monotone operators which are \emph{linear relations}, i.e., such that $\ensuremath{\operatorname{gra}} A$ is a linear subspace. Note that in this situation, $A^*$ is also a linear relation. Moreover, $A$ is \emph{symmetric} if $\ensuremath{\operatorname{gra}} A \subseteq\ensuremath{\operatorname{gra}} A^*$. Equivalently, for all $(x,x^*), (y,y^*)\in\ensuremath{\operatorname{gra}} A$ it holds that \begin{equation}\label{sym} \scal{x}{y^*}=\scal{y}{x^*}. \end{equation} We say that a linear relation $A$ is \emph{skew} if $\ensuremath{\operatorname{gra}} A \subseteq \ensuremath{\operatorname{gra}} (-A^*)$. Equivalently, for all $(x,x^*)\in\ensuremath{\operatorname{gra}} A$ we have \begin{equation}\label{skew} \langle x,x^*\rangle=0. \end{equation} We define the \emph{symmetric part} of $A$ via \begin{equation} \label{Fee:1} A_+ := \ensuremath{\tfrac{1}{2}} A + \ensuremath{\tfrac{1}{2}} A^*. \end{equation} It is easy to check that $A_+$ is symmetric. Let $f\colon X\to \ensuremath{\,\left]-\infty,+\infty\right]}$. Then $\ensuremath{\operatorname{dom}} f:= f^{-1}(\ensuremath{\mathbb R})$ is the \emph{domain} of $f$, and $f^*\colon X^*\to\ensuremath{\,\left[-\infty,+\infty\right]}\colon x^*\mapsto \sup_{x\in X}(\scal{x}{x^*}-f(x))$ is the \emph{Fenchel conjugate} of $f$. We denote by $\overline{f}$ the lower semicontinuous hull of $f$. We say that $f$ is proper if $\ensuremath{\operatorname{dom}} f\neq\varnothing$.
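In $\mathbb{R}^n$ the adjoint of a linear operator reduces to the transpose, so the skew and symmetric-part definitions \eqref{skew} and \eqref{Fee:1} can be checked directly on a matrix. A minimal sketch (ours; the rotation matrix below is just a convenient example of a skew operator):

```python
# Finite-dimensional illustration (ours): in R^n the adjoint of a linear
# operator is the transpose, so A is skew iff A^T = -A; then <x, Ax> = 0
# for every x, and the symmetric part A_+ = (A + A^T)/2 vanishes.

A = [[0.0, -1.0],
     [1.0,  0.0]]                     # rotation by 90 degrees: skew

def apply(M, v):
    """Matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

At = [[A[j][i] for j in range(2)] for i in range(2)]   # the adjoint A^*
assert all(At[i][j] == -A[i][j] for i in range(2) for j in range(2))

for v in ([1.0, 0.0], [0.3, -2.0], [5.0, 4.0]):
    assert abs(dot(v, apply(A, v))) < 1e-12            # cf. (skew)

A_plus = [[0.5 * (A[i][j] + At[i][j]) for j in range(2)] for i in range(2)]
assert all(x == 0.0 for row in A_plus for x in row)    # cf. (Fee:1)
```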
Let $f$ be proper. The \emph{subdifferential} of $f$ is defined by $$\partial f\colon X\ensuremath{\rightrightarrows} X^*\colon x\mapsto \{x^*\in X^*\mid(\forall y\in X)\; \scal{y-x}{x^*} + f(x)\leq f(y)\}.$$ For $\varepsilon \geq 0$, the \emph{$\varepsilon$--subdifferential} of $f$ is defined by \begin{align*}\partial_{\varepsilon} f\colon X\ensuremath{\rightrightarrows} X^*\colon x\mapsto \menge{x^*\in X^*}{(\forall y\in X)\; \scal{y-x}{x^*} + f(x)\leq f(y)+\varepsilon}. \end{align*} Note that $\partial f = \partial_{0}f$. Relatedly, we say $A$ is of Br{\o}ndsted-Rockafellar (BR) type \cite{Si2,BorVan} if whenever $(x,x^*)\in X\times X^*$ and $\alpha,\beta>0$ are such that \begin{align*}\inf_{(a,a^*)\in\ensuremath{\operatorname{gra}} A} \langle x-a,x^*-a^*\rangle >-\alpha\beta,\end{align*} there exists $(b,b^*)\in\ensuremath{\operatorname{gra}} A$ such that $\|x-b\|<\alpha$ and $\|x^*-b^*\|<\beta$. The name is motivated by the celebrated theorem of Br{\o}ndsted and Rockafellar \cite{Si2,BorVan}, which can now be stated as saying that the subdifferential of every proper lower semicontinuous convex function is of type (BR). Let $g\colon X\to\ensuremath{\,\left]-\infty,+\infty\right]}$. The \emph{inf-convolution} of $f$ and $g$, $f\Box g$, is defined by \begin{align*}f\Box g\colon x\mapsto\inf_{y\in X} \left[f(y)+g(x-y)\right]. \end{align*} Let $Y$ be another real Banach space. We set $P_X: X\times Y\rightarrow X\colon (x,y)\mapsto x$. We denote by $\ensuremath{\operatorname{Id}}\colon X\rightarrow X$ the \emph{identity mapping}. Let $F_1, F_2\colon X\times Y\rightarrow\ensuremath{\,\left]-\infty,+\infty\right]}$. Then the \emph{partial inf-convolution} $F_1\Box_2 F_2$ is the function defined on $X\times Y$ by \begin{equation}\label{infconv} F_1\Box_2 F_2\colon (x,y)\mapsto \inf_{v\in Y}\left[ F_1(x,y-v)+F_2(x,v)\right]. \end{equation} \section{Auxiliary results}\label{s:aux} We collect in this section some facts we will use later on. These facts involve convex functions, maximally monotone operators and Fitzpatrick functions.
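Before turning to these facts, the $\varepsilon$-subdifferential defined above can be made concrete in one dimension: for $f(x)=x^2/2$ on $\mathbb{R}$, the classical computation $\inf_y\big(f(y)-f(x)-x^*(y-x)\big)=-(x^*-x)^2/2$ gives $\partial_\varepsilon f(x)=[x-\sqrt{2\varepsilon},\,x+\sqrt{2\varepsilon}]$. A quick numerical check (our illustration only; the grid of test points is an assumption of the sketch):

```python
import math

# Worked illustration (ours, not from the paper): for f(x) = x^2/2 on R,
#   d_eps f(x) = [x - sqrt(2*eps), x + sqrt(2*eps)],
# since inf_y (f(y) - f(x) - xs*(y - x)) = -(xs - x)^2 / 2.

def in_eps_subdiff(x, xs, eps, ys=None):
    """Test x* in d_eps f(x) by checking the defining inequality on a grid of y."""
    if ys is None:
        ys = [i / 100.0 for i in range(-1000, 1001)]
    return all(xs * (y - x) + x ** 2 / 2 <= y ** 2 / 2 + eps for y in ys)

x, eps = 1.0, 0.5
r = math.sqrt(2 * eps)              # radius of the interval, = 1 here
assert in_eps_subdiff(x, x + r, eps)
assert in_eps_subdiff(x, x - r, eps)
assert not in_eps_subdiff(x, x + r + 0.1, eps)
```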
\begin{fact}\emph{(See \cite[Proposition~3.3 and Proposition~1.11]{ph}.)} \label{pheps:1}Let $f:X\rightarrow\ensuremath{\,\left]-\infty,+\infty\right]}$ be a lower semicontinuous convex function with $\ensuremath{\operatorname{int}\operatorname{dom}}\, f\neq\varnothing$. Then $f$ is continuous on $\ensuremath{\operatorname{int}\operatorname{dom}}\, f$ and $\partial f(x)\neq\varnothing$ for every $x\in\ensuremath{\operatorname{int}\operatorname{dom}}\, f$. \end{fact} \begin{fact}[Rockafellar] \label{f:F4} \emph{(See {\cite[Theorem~3(a)]{Rock66}}, {\cite[Corollary~10.3 and Theorem~18.1]{Si2}}, or {\cite[Theorem~2.8.7(iii)]{Zalinescu}}.)} Let $f,g: X\rightarrow\ensuremath{\,\left]-\infty,+\infty\right]}$ be proper convex functions. Assume that there exists a point $x_0\in\ensuremath{\operatorname{dom}} f \cap \ensuremath{\operatorname{dom}} g$ such that $g$ is continuous at $x_0$. Then for every $z^*\in X^*$, there exists $y^*\in X^*$ such that \begin{equation} (f+g)^*(z^*) = f^*(y^*)+g^*(z^*-y^*). \end{equation} \end{fact} \begin{fact}[Rockafellar]\label{SubMR}\emph{(See \cite[Theorem~A]{Rock702}, \cite[Theorem~3.2.8]{Zalinescu}, \cite[Theorem~18.7]{Si2} or \cite[Theorem~2.1]{MSV})} Let $f:X\rightarrow\ensuremath{\,\left]-\infty,+\infty\right]}$ be a proper lower semicontinuous convex function. Then $\partial f$ is maximally monotone. \end{fact} \begin{fact}[Attouch-Br\'ezis]\emph{(See \cite[Theorem~1.1]{AtBrezis} or \cite[Remark~15.2]{Si2}).}\label{AttBre:1} Let $f,g: X\rightarrow\ensuremath{\,\left]-\infty,+\infty\right]}$ be proper lower semicontinuous and convex. Assume that $ \bigcup_{\lambda>0} \lambda\left[\ensuremath{\operatorname{dom}} f-\ensuremath{\operatorname{dom}} g\right]$ is a closed subspace of $X$. Then \begin{equation*} (f+g)^*(z^*) =\min_{y^*\in X^*} \left[f^*(y^*)+g^*(z^*-y^*)\right],\quad \forall z^*\in X^*. \end{equation*} \end{fact} Fact \ref{SubMR} above relates convex functions to maximal monotonicity.
Fitzpatrick functions go in the opposite way: from maximally monotone operators to convex functions. \begin{fact}[Fitzpatrick]\label{FFc} \emph{(See {\cite[Corollary~3.9]{Fitz88}} and \cite{Bor1,BorVan}.)} \label{f:Fitz} Let $A\colon X\ensuremath{\rightrightarrows} X^*$ be maximally monotone. Then for every $(x,x^*)\in X\times X^*$, the inequality $\scal{x}{x^*}\leq F_A(x,x^*)$ is true, and the equality holds if and only if $(x,x^*)\in\ensuremath{\operatorname{gra}} A$. \end{fact} It was pointed out in \cite[Problem 31.3]{Si2} that it is unknown whether $\overline{\ensuremath{\operatorname{dom}} A}$ is necessarily convex when $A$ is maximally monotone and $X$ is not reflexive. When $A$ is of type (FPV), the question was answered positively by using $F_A$. \begin{fact}[Simons] \emph{(See \cite[Theorem~44.2]{Si2}.)} \label{f:referee02c} Let $A:X\ensuremath{\rightrightarrows} X^*$ be of type (FPV). Then $\overline{\ensuremath{\operatorname{dom}} A}=\overline{P_X\left[\ensuremath{\operatorname{dom}} F_A\right]}$ and $\overline{\ensuremath{\operatorname{dom}} A}$ is convex. \end{fact} We observe that when $A$ is of type (FPV) then also $\ensuremath{\operatorname{dom}} A_{\ensuremath{\varepsilon}}$ has convex closure. \begin{remark} Let $A$ be of type (FPV) and fix $\varepsilon\geq0$. Then by \eqref{Enl:1}, Fact~\ref{f:Fitz} and Fact \ref{f:referee02c}, we have $\ensuremath{\operatorname{dom}} A\subseteq\ensuremath{\operatorname{dom}} A_{\ensuremath{\varepsilon}}\subseteq P_X\left[\ensuremath{\operatorname{dom}} F_A\right] \subseteq\overline{\ensuremath{\operatorname{dom}} A}$. Thus we obtain \[ \overline{\ensuremath{\operatorname{dom}} A}= \overline{\left[\ensuremath{\operatorname{dom}} A_{\ensuremath{\varepsilon}}\right]}=\overline{P_X\left[\ensuremath{\operatorname{dom}} F_A\right]}, \] and this set is convex because $\ensuremath{\operatorname{dom}} F_A$ is convex. 
As a result, for every $A$ of type (FPV) it holds that $\overline{\ensuremath{\operatorname{dom}} A}=\overline{\left[\ensuremath{\operatorname{dom}} A_{\ensuremath{\varepsilon}}\right]}$ and this set is convex. \end{remark} We recall below some sufficient conditions for a maximally monotone operator to be of type (FPV). \begin{fact}[Simons] \emph{(See \cite[Theorem~46.1]{Si2}.)} \label{f:referee01} Let $A:X\ensuremath{\rightrightarrows} X^*$ be a maximally monotone linear relation. Then $A$ is of type (FPV). \end{fact} \begin{fact}[Fitzpatrick-Phelps and Verona-Verona] \emph{(See \cite[Corollary~3.4]{FitzPh}, \cite[Theorem~3]{VV1} or \cite[Theorem~48.4(d)]{Si2}.)} \label{f:referee0d}\index{subdifferential operator} Let $f:X\rightarrow\ensuremath{\,\left]-\infty,+\infty\right]}$ be proper, lower semicontinuous, and convex.\index{type (FPV)} Then $\partial f$ is of type (FPV). \end{fact} \begin{fact}\emph{(See \cite[Corollary~3.3]{Yao2}.)}\label{domain:L1} Let $A:X\ensuremath{\rightrightarrows} X^*$ be a maximally monotone linear relation, and $f:X\rightarrow\ensuremath{\,\left]-\infty,+\infty\right]}$ be a proper lower semicontinuous convex function with $\ensuremath{\operatorname{dom}} A\cap\ensuremath{\operatorname{int}}\ensuremath{\operatorname{dom}}\partial f\neq\varnothing$. Then $A+\partial f$ is of type (FPV). \end{fact} \begin{fact}[Phelps-Simons]\emph{(See \cite[Corollary 2.6 and Proposition~3.2(h)]{PheSim}.)}\label{F:1} Let $A\colon X\rightarrow X^*$ be monotone and linear. Then $A$ is maximally monotone and continuous. \end{fact} \begin{fact}\emph{(See \cite[Theorem~4.2]{BWY3} or \cite[Lemma~1.5]{MarSva2}.)}\label{affine:L1} Let $A:X\ensuremath{\rightrightarrows} X^*$ be maximally monotone such that $\ensuremath{\operatorname{gra}} A$ is convex. Then $\ensuremath{\operatorname{gra}} A$ is affine.
\end{fact} \begin{fact}[Simons] \emph{(See \cite[Lemma~19.7 and Section~22]{Si2}.)} \label{f:referee} Let $A:X\rightrightarrows X^*$ be a monotone operator such that $\operatorname{gra} A$ is convex with $\operatorname{gra} A \neq\varnothing$. Then the function \begin{equation} g\colon X\times X^* \rightarrow \left]-\infty,+\infty\right]\colon (x,x^*)\mapsto \langle x, x^*\rangle + \iota_{\operatorname{gra} A}(x,x^*) \end{equation} is proper and convex. \end{fact} \begin{fact} \emph{(See \cite[Theorem~3.4 and Corollary~5.6]{Voi1}, or \cite[Theorem~24.1(b)]{Si2}.)} \label{f:referee1} Let $A, B:X\ensuremath{\rightrightarrows} X^*$ be maximally monotone operators. Assume that $\bigcup_{\lambda>0} \lambda\left[P_X(\ensuremath{\operatorname{dom}} F_A)-P_X(\ensuremath{\operatorname{dom}} F_B)\right]$ is a closed subspace. If \begin{equation} F_{A+B}\geq\langle \cdot,\,\cdot\rangle\;\text{on \; $X\times X^*$}, \end{equation} then $A+B$ is maximally monotone. \end{fact} \begin{definition}[Fitzpatrick family] Let $A\colon X \ensuremath{\rightrightarrows} X^*$ be maximally monotone. The associated \emph{Fitzpatrick family} $\mathcal{F}_A$ consists of all functions $F\colon X\times X^*\to\ensuremath{\,\left]-\infty,+\infty\right]}$ that are lower semicontinuous and convex, and that satisfy $F\geq \scal{\cdot}{\cdot} $, and $F=\scal{\cdot}{\cdot}$ on $\ensuremath{\operatorname{gra}} A$. \end{definition} \begin{fact}[Fitzpatrick]\emph{(See \cite[Theorem~3.10]{Fitz88} or \cite{BurSva:1}.)}\label{GRF:2} Let $A\colon X \ensuremath{\rightrightarrows} X^*$ be maximally monotone. Then for every $(x,x^*)\in X\times X^*$, \begin{equation*} F_A(x,x^*) = \min\menge{F(x,x^*)}{F\in \mathcal{F}_A}. \end{equation*} \end{fact} \begin{corollary}\label{GRF:3} Let $A\colon X \ensuremath{\rightrightarrows} X^*$ be a maximally monotone operator such that $\operatorname{gra} A$ is convex. 
Then for every $(x,x^*)\in X\times X^*$, \begin{equation*} F_A(x,x^*) = \min\menge{F(x,x^*)}{F\in \mathcal{F}_A}\quad\text{and}\quad g(x,x^*)= \max\menge{F(x,x^*)}{F\in \mathcal{F}_A}, \end{equation*} where $g:= \langle \cdot, \cdot\rangle + \iota_{\operatorname{gra} A}$. \end{corollary} \begin{proof} Apply Fact~\ref{f:referee} and Fact~\ref{GRF:2}. \end{proof} \begin{fact} \emph{(See \cite[Lemma~23.9]{Si2}, or \cite[Proposition~4.2]{BM}.)} \label{f:referee03} Let $A, B\colon X\ensuremath{\rightrightarrows} X^*$ be monotone operators with $\ensuremath{\operatorname{dom}} A\cap\ensuremath{\operatorname{dom}} B\neq\varnothing$. Then $F_{A+B}\leq F_A\Box_2 F_B$. \end{fact} Let $X,Y$ be two real Banach spaces and let $h:X\times Y \rightarrow\ensuremath{\,\left]-\infty,+\infty\right]}$ be a convex function. We say that $h$ is {\em separable} if there exist convex functions $h_1:X \rightarrow\ensuremath{\,\left]-\infty,+\infty\right]}$ and $h_2:Y \rightarrow\ensuremath{\,\left]-\infty,+\infty\right]}$ such that $h(x,y)= h_1(x)+h_2(y)$. In this case we write $h=h_1\oplus h_2$. We recall below some cases in which the Fitzpatrick function is separable. \begin{fact} \emph{(See \cite[Corollary~5.9]{BBBRW} or \cite[Fact~4.1]{BBWY4}.)} \label{f:referee04} Let $C$ be a nonempty closed convex subset of $X$. Then $F_{N_C}=\iota_C\oplus \iota^*_C$. \end{fact} \begin{fact} \emph{(See \cite[Theorem~5.3]{BBBRW}.)} \label{f:sub05} Let $f:X\rightarrow\ensuremath{\,\left]-\infty,+\infty\right]}$ be a proper lower semicontinuous sublinear function. Then $F_{\partial f}=f\oplus f^*$ and $\mathcal{F}_{\partial f}=\big\{ f\oplus f^*\big\}$. \end{fact} \begin{remark}\label{r:sub05} Let $f$ be as in Fact~\ref{f:sub05}. Then \begin{align}\ensuremath{\operatorname{gra}} (\partial f)_{\ensuremath{\varepsilon}}&=\big\{(x,x^*)\in X\times X^*\mid f(x)+f^*(x^*)\leq \langle x,x^*\rangle+\varepsilon\big\}\nonumber\\ &=\ensuremath{\operatorname{gra}} \partial_{\varepsilon}f,\quad \forall \varepsilon\geq0.
\label{Enl:Sub1} \end{align} \end{remark} \begin{fact}[Svaiter]\emph{(See \cite[page~312]{SV}.)}\label{CElL:1} Let $A\colon X \ensuremath{\rightrightarrows} X^*$ be maximally monotone. Then $A$ is non-enlargeable if and only if $\ensuremath{\operatorname{gra}} A=\ensuremath{\operatorname{dom}} F_A$, in which case $\ensuremath{\operatorname{gra}} A$ is convex. \end{fact} It is immediate from the definitions that: \begin{fact}\label{nEBR} Every non-enlargeable maximally monotone operator is of type (BR). \end{fact} Fact \ref{f:sub05} and the subsequent remark refer to a case in which all enlargements of $A$ coincide, or, equivalently, the Fitzpatrick family is a singleton. It is natural to expect that a non-enlargeable operator will also have a single element in its Fitzpatrick family. \begin{corollary}\label{CEl:1} Let $A\colon X \ensuremath{\rightrightarrows} X^*$ be maximally monotone. Then $A$ is non-enlargeable if and only if $F_A=\iota_{\ensuremath{\operatorname{gra}} A}+\langle\cdot,\cdot\rangle$ and hence $\mathcal{F}_A=\big\{ \iota_{\ensuremath{\operatorname{gra}} A}+\langle\cdot,\cdot\rangle\big\}$. \end{corollary} \begin{proof} ``$\Rightarrow$": By Fact~\ref{CElL:1}, $\ensuremath{\operatorname{gra}} A$ is convex. By Fact~\ref{f:Fitz} and Fact~\ref{CElL:1}, we have $F_A=\iota_{\ensuremath{\operatorname{gra}} A}+\langle \cdot,\cdot\rangle$. Then by Corollary~\ref{GRF:3}, $\mathcal{F}_A=\big\{ \iota_{\ensuremath{\operatorname{gra}} A}+\langle\cdot,\cdot\rangle\big\}$. ``$\Leftarrow$": Apply Fact~\ref{CElL:1} directly. \end{proof} \begin{remark}The condition that $\mathcal{F}_A$ is a singleton does not guarantee that $\ensuremath{\operatorname{gra}} A$ is convex. For example, let $f:X\rightarrow\ensuremath{\,\left]-\infty,+\infty\right]}$ be a proper lower semicontinuous sublinear function. Then by Fact~\ref{f:sub05}, $\mathcal{F}_{\partial f}$ is a singleton but $\ensuremath{\operatorname{gra}} \partial f$ is not necessarily convex.
\end{remark} \section{Non-Enlargeable Monotone Linear Relations} \label{secneo} We begin with a basic characterization: \begin{theorem}\label{CElC:01} Let $A\colon X \ensuremath{\rightrightarrows} X^*$ be a maximally monotone linear relation such that $\ensuremath{\operatorname{gra}} A$ is weak$\times$weak$^*$ closed. Then $A$ is non-enlargeable if and only if $\ensuremath{\operatorname{gra}} (-A^*)\cap X\times X^*\subseteq\ensuremath{\operatorname{gra}} A$. In this situation, we have that $\langle x,x^*\rangle=0, \forall (x,x^*)\in\ensuremath{\operatorname{gra}} (-A^*)\cap X\times X^*$. \end{theorem} \begin{proof} ``$\Rightarrow$": By Corollary~\ref{CEl:1}, \begin{align}F_A=\iota_{\ensuremath{\operatorname{gra}} A}+\langle\cdot,\cdot\rangle.\label{TheEF:1} \end{align} Let $(x,x^*)\in\ensuremath{\operatorname{gra}}(-A^*)\cap X\times X^*$. Then we have \begin{align} F_A(x,x^*)&=\sup_{(a,a^*)\in\ensuremath{\operatorname{gra}} A}\big\{\langle a^*,x\rangle +\langle a,x^*\rangle-\langle a,a^*\rangle\big\}\nonumber\\ &=\sup_{(a,a^*)\in\ensuremath{\operatorname{gra}} A}\big\{-\langle a,a^*\rangle\big\}\nonumber\\ &=0,\label{CRNon:1} \end{align} where the second equality uses that $(x,x^*)\in\ensuremath{\operatorname{gra}}(-A^*)$, so that $\langle a^*,x\rangle+\langle a,x^*\rangle=0$ for every $(a,a^*)\in\ensuremath{\operatorname{gra}} A$, and the third one holds because $A$ is a monotone linear relation, whence $\langle a,a^*\rangle\geq0$ for every $(a,a^*)\in\ensuremath{\operatorname{gra}} A$ and $(0,0)\in\ensuremath{\operatorname{gra}} A$. Then by \eqref{TheEF:1} and \eqref{CRNon:1}, $(x,x^*)\in\ensuremath{\operatorname{gra}} A$ and $\langle x,x^*\rangle=0$. Hence $\ensuremath{\operatorname{gra}} (-A^*)\cap X\times X^*\subseteq\ensuremath{\operatorname{gra}} A$. ``$\Leftarrow$": By the assumption that $\ensuremath{\operatorname{gra}} A$ is weak$\times$weak$^*$ closed, we have \begin{align} \left[\ensuremath{\operatorname{gra}}(-A^*)\cap X\times X^*\right]^{\bot}\cap X^*\times X =\left[\big(\ensuremath{\operatorname{gra}} A^{-1}\big)^{\bot}\cap X\times X^*\right]^{\bot}\cap X^*\times X=\ensuremath{\operatorname{gra}} A^{-1}.\label{WWSA:1} \end{align} By \cite[Lemma~2.1(2)]{SV}, we have \begin{align}\langle z,z^*\rangle=0,\quad \forall(z,z^*) \in \ensuremath{\operatorname{gra}} (-A^*)\cap X\times X^*.\label{TheEF:03} \end{align} Hence $A^*|_X$ is skew. Let $(x,x^*)\in X\times X^*$.
Then by \eqref{TheEF:03}, we have \begin{align} F_A(x,x^*)&=\sup_{(a,a^*)\in\ensuremath{\operatorname{gra}} A}\big\{\langle x, a^*\rangle+ \langle x^*, a\rangle-\langle a,a^*\rangle\big\}\nonumber\\ &\geq\sup_{(a,a^*)\in\ensuremath{\operatorname{gra}}(-A^*)\cap X\times X^*}\big\{\langle x, a^*\rangle+ \langle x^*, a\rangle-\langle a,a^*\rangle\big\}\nonumber\\ &=\sup_{(a,a^*)\in\ensuremath{\operatorname{gra}}(-A^*)\cap X\times X^*}\big\{\langle x, a^*\rangle+ \langle x^*, a\rangle\big\}\nonumber\\ &=\iota_{\big(\ensuremath{\operatorname{gra}}(-A^*)\cap X\times X^*\big)^{\bot}\cap X^*\times X}(x^*,x)\nonumber\\ &=\iota_{\ensuremath{\operatorname{gra}} A}(x,x^*)\quad\text{(by \eqref{WWSA:1})}.\label{SVCK:2} \end{align} Hence by Fact~\ref{f:Fitz} and \eqref{SVCK:2}, \begin{align} F_A(x,x^*)=\langle x,x^*\rangle+\iota_{\ensuremath{\operatorname{gra}} A}(x,x^*). \label{SVCK:3} \end{align} Hence by Corollary~\ref{CEl:1}, $A$ is non-enlargeable. \end{proof} The following corollary, which holds in a general Banach space, provides a characterization of non-enlargeable operators under a closedness assumption on the graph. A characterization of non-enlargeable linear operators for reflexive spaces (in which the closedness assumption is implicit) was established by Svaiter in \cite[Theorem~2.5]{SV}. \begin{corollary}\label{CEl:2} Let $A\colon X \ensuremath{\rightrightarrows} X^*$ be maximally monotone and suppose that $\ensuremath{\operatorname{gra}} A$ is weak$\times$weak$^*$ closed. Select $(a,a^*)\in\ensuremath{\operatorname{gra}} A$ and set $\ensuremath{\operatorname{gra}} \widetilde{A}:=\ensuremath{\operatorname{gra}} A-\{(a,a^*)\}$. Then $A$ is non-enlargeable if and only if $\ensuremath{\operatorname{gra}} A$ is convex and $\ensuremath{\operatorname{gra}} (-\widetilde{A}^*)\cap X\times X^*\subseteq\ensuremath{\operatorname{gra}} \widetilde{A}$. In particular, $\langle x,x^*\rangle=0, \forall (x,x^*)\in\ensuremath{\operatorname{gra}} \widetilde{A}^*\cap X\times X^*$.
\end{corollary} \begin{proof} ``$\Rightarrow$": Since $A$ is non-enlargeable, so is $\widetilde{A}$. By Fact~\ref{CElL:1}, $\ensuremath{\operatorname{gra}} A$ is convex and then $\ensuremath{\operatorname{gra}} A$ is affine by Fact~\ref{affine:L1}. Thus $\widetilde{A}$ is a linear relation. Now we can apply Theorem~\ref{CElC:01} to $\widetilde{A}$. ``$\Leftarrow$": Apply Fact~\ref{affine:L1} and Theorem~\ref{CElC:01} directly. \end{proof} \begin{remark} We cannot remove the condition that ``$\ensuremath{\operatorname{gra}} A$ is convex" in Corollary~\ref{CEl:2}. For example, let $X=\ensuremath{\mathbb R}^n$ with the Euclidean norm, where $n\geq3$. Set $f:=\|\cdot\|$. Then $\partial f$ is maximally monotone by Fact~\ref{SubMR}, and hence $\ensuremath{\operatorname{gra}} \partial f$ is weak$\times$weak$^*$ closed. Now we show that \begin{align} \ensuremath{\operatorname{gra}} (\partial f)^*=\{(0,0)\}.\label{IExCo:01} \end{align} Note that \begin{align} \partial f(x)&=\begin{cases}B_X,\;&\text{if}\; x=0;\label{esee:3}\\ \{\frac{x}{\|x\|}\},\;&\text{otherwise}.\end{cases} \end{align} Let $(z,z^*)\in\ensuremath{\operatorname{gra}} (\partial f)^*$. By \eqref{esee:3}, we have $\{0\}\times B_X\subseteq\ensuremath{\operatorname{gra}} \partial f$ and thus \begin{align} \langle -z, b^*\rangle=0,\quad\forall b^*\in B_X. \end{align} Thus $z=0$. Hence \begin{align} \langle z^*, a \rangle=0,\quad \forall a\in \ensuremath{\operatorname{dom}} \partial f.\label{IExCo:1} \end{align} Since $\ensuremath{\operatorname{dom}} \partial f=X$, $z^*=0$ by \eqref{IExCo:1}. Hence $(z,z^*)=(0,0)$ and thus \eqref{IExCo:01} holds. By \eqref{IExCo:01}, $\ensuremath{\operatorname{gra}} \big(-(\partial f)^*\big)\subseteq\ensuremath{\operatorname{gra}} \partial f$. However, $\ensuremath{\operatorname{gra}} \partial f$ is not convex. Indeed, let $e_k:=(0,\ldots,0,1,0,\ldots,0)$, where the $k$th entry is $1$ and all others are $0$. Take \begin{align*} a=\frac{e_1-e_2}{\sqrt{2}}\quad\text{and}\quad b=\frac{e_2-e_3}{\sqrt{2}}.
\end{align*} Then $(a, a)\in \ensuremath{\operatorname{gra}} \partial f$ and $(b, b)\in \ensuremath{\operatorname{gra}} \partial f$ by \eqref{esee:3}, but \begin{align*} \frac{1}{2}(a, a)+\frac{1}{2}(b, b)\notin\ensuremath{\operatorname{gra}} \partial f. \end{align*} Hence $\partial f$ is enlargeable by Fact~\ref{CElL:1}. \end{remark} In the case of a skew operator we can be more precise: \begin{corollary}\label{CElC:1} Let $A\colon X \ensuremath{\rightrightarrows} X^*$ be a maximally monotone and skew operator and $\varepsilon\geq0$. Then \begin{enumerate} \item\label{EN:em01} $\ensuremath{\operatorname{gra}} A_{\ensuremath{\varepsilon}}=\{(x,x^*) \in\ensuremath{\operatorname{gra}} (-A^*)\cap X\times X^*\mid \langle x,x^*\rangle\geq-\varepsilon\}.$ \item \label{EN:em02}$A$ is non-enlargeable if and only if $\ensuremath{\operatorname{gra}} A=\ensuremath{\operatorname{gra}} (-A^*)\cap X\times X^*$. \item \label{EN:em02b} $A$ is non-enlargeable if and only if $\ensuremath{\operatorname{dom}} A=\ensuremath{\operatorname{dom}} A^*\cap X$. \item \label{EN:em03}Assume that $X$ is reflexive. Then $F_{A^*}=\iota_{\ensuremath{\operatorname{gra}} A^*}+\scal{\cdot}{\cdot}$ and hence $A^*$ is non-enlargeable. \end{enumerate} \end{corollary} \begin{proof} \ref{EN:em01}: By \cite[Lemma~3.1]{BBWY3}, we have \begin{align}F_A=\iota_{\ensuremath{\operatorname{gra}} (-A^*)\cap X\times X^*}.\label{FPaSL:1}\end{align} Hence $(x,x^*)\in\ensuremath{\operatorname{gra}} A_{\ensuremath{\varepsilon}}$ if and only if $F_A(x,x^*)\le \langle x,x^*\rangle + \ensuremath{\varepsilon}$. By \eqref{FPaSL:1}, this holds if and only if $(x,x^*)\in \ensuremath{\operatorname{gra}} (-A^*)\cap X\times X^*$ and $0\le \langle x,x^*\rangle + \ensuremath{\varepsilon}$. \ref{EN:em02}: By Fact~\ref{CElL:1}, $A$ is non-enlargeable if and only if $\ensuremath{\operatorname{gra}} A=\ensuremath{\operatorname{dom}} F_A$. The claim now follows by combining this equivalence with \eqref{FPaSL:1}. \ref{EN:em02b}: For ``$\Rightarrow$": use~\ref{EN:em02}.
``$\Leftarrow$": Since $A$ is skew, we have $\ensuremath{\operatorname{gra}} (-A^*)\cap X\times X^*\supseteq \ensuremath{\operatorname{gra}} A$. Using this and \ref{EN:em02}, it suffices to show that $\ensuremath{\operatorname{gra}} (-A^*)\cap X\times X^*\subseteq \ensuremath{\operatorname{gra}} A$. Let $(x,x^*)\in\ensuremath{\operatorname{gra}} (-A^*)\cap X\times X^*$. By the assumption, $x\in\ensuremath{\operatorname{dom}} A$. Let $y^*\in Ax$. Note that $\langle x,-x^*\rangle=\langle x,y^*\rangle=0$, where the first equality follows from the definition of $A^*$ and the second one from the fact that $A$ is skew. In this case we claim that $(x,x^*)$ is monotonically related to $\ensuremath{\operatorname{gra}} A$. Indeed, let $(a,a^*)\in\ensuremath{\operatorname{gra}} A$. Since $A$ is skew we have $\langle a, a^*\rangle= 0$. Thus \[ \langle x-a,x^*-a^*\rangle=\langle x,x^*\rangle-\langle (x^*,x), (a,a^*)\rangle+\langle a, a^*\rangle=0 \] since $(x^*,x)\in (\ensuremath{\operatorname{gra}} A)^{\bot}$ and $\langle x,x^*\rangle=\langle a, a^*\rangle=0$. Hence $(x,x^*)$ is monotonically related to $\ensuremath{\operatorname{gra}} A$. By maximality we conclude that $(x,x^*)\in\ensuremath{\operatorname{gra}} A$. Hence $\ensuremath{\operatorname{gra}} (-A^*)\cap X\times X^*\subseteq \ensuremath{\operatorname{gra}} A$. \ref{EN:em03}: Now assume that $X$ is reflexive. By \cite[Theorem~2]{Brezis-Browder} (or see \cite{Yao, Si04}), $A^*$ is maximally monotone. Since $\ensuremath{\operatorname{gra}} A\subseteq \ensuremath{\operatorname{gra}} (-A^*)$ we deduce that $\ensuremath{\operatorname{gra}} (-A^{**})=\ensuremath{\operatorname{gra}} (-A)\subseteq \ensuremath{\operatorname{gra}} A^*$. The latter inclusion and Theorem~\ref{CElC:01} applied to the operator $A^*$ yield that $A^*$ is non-enlargeable. The conclusion now follows by applying Corollary~\ref{CEl:1} to $A^*$.
\end{proof} \subsection{Limiting examples and remarks} It is possible for a non-enlargeable maximally monotone operator to be non-skew. This is the case for the operator $A^*$ in Example~\ref{Exam:eL1}. \begin{example} Let $A\colon X \ensuremath{\rightrightarrows} X^*$ be a non-enlargeable maximally monotone operator. By Fact~\ref{CElL:1} and Fact~\ref{affine:L1}, $\ensuremath{\operatorname{gra}} A$ is affine. Let $f:X\rightarrow \ensuremath{\,\left]-\infty,+\infty\right]}$ be a proper lower semicontinuous convex function with $\ensuremath{\operatorname{dom}} A\cap\ensuremath{\operatorname{int}}\ensuremath{\operatorname{dom}} \partial f\neq\varnothing$ such that $\ensuremath{\operatorname{dom}} A\cap\ensuremath{\operatorname{dom}} \partial f$ is not an affine set. By Fact~\ref{domain:L1}, $A+\partial f$ is maximally monotone. Since $\ensuremath{\operatorname{dom}}(A+\partial f)=\ensuremath{\operatorname{dom}} A\cap\ensuremath{\operatorname{dom}} \partial f$ is not affine, $\ensuremath{\operatorname{gra}} (A+\partial f)$ is not affine; hence $A+\partial f$ is enlargeable by Fact~\ref{CElL:1} and Fact~\ref{affine:L1}.\ensuremath{\quad \hfill \blacksquare} \end{example} The operator in the following fact was studied in detail in \cite{BWY7}. \begin{fact}\label{FE:1} Suppose that $X=\ell^2$, and that $A:\ell^2\rightrightarrows \ell^2$ is given by \begin{align}Ax:=\frac{\bigg(\sum_{i< n}x_{i}-\sum_{i> n}x_{i}\bigg)_{n\in\ensuremath{\mathbb N}}}{2} =\bigg(\sum_{i< n}x_{i}+\tfrac{1}{2}x_n\bigg)_{n\in\ensuremath{\mathbb N}}, \quad \forall x=(x_n)_{n\in\ensuremath{\mathbb N}}\in\ensuremath{\operatorname{dom}} A,\label{EL:1}\end{align} where $\ensuremath{\operatorname{dom}} A:=\Big\{ x:=(x_n)_{n\in\ensuremath{\mathbb N}}\in \ell^{2}\mid \sum_{i\geq 1}x_{i}=0, \bigg(\sum_{i\leq n}x_{i}\bigg)_{n\in\ensuremath{\mathbb N}}\in\ell^2\Big\}$ and $\sum_{i<1}x_{i}:=0$.
Now \cite[Proposition~3.6]{BWY7} states that \begin{align} \label{PF:a2} A^*x= \bigg(\ensuremath{\tfrac{1}{2}} x_n + \sum_{i> n}x_{i}\bigg)_{n\in\ensuremath{\mathbb N}}, \end{align} where \begin{equation*} x=(x_n)_{n\in\ensuremath{\mathbb N}}\in\ensuremath{\operatorname{dom}} A^*=\bigg\{ x=(x_n)_{n\in\ensuremath{\mathbb N}}\in \ell^{2}\;\; \bigg|\;\; \bigg(\sum_{i> n}x_{i}\bigg)_{n\in\ensuremath{\mathbb N}}\in \ell^{2}\bigg\}. \end{equation*} Then $A$ is an at most single-valued linear relation such that the following hold (references for each claim are given in parentheses). \begin{enumerate} \item $A$\label{NEC:1} is maximally monotone and skew (\cite[Propositions 3.5 and 3.2]{BWY7}). \item\label{NEC:2} $A^*$ is maximally monotone but not skew (\cite[Theorem 3.9 and Proposition 3.6]{BWY7}). \item \label{NEC:3}$\ensuremath{\operatorname{dom}} A$ is dense in $\ell^2$ (\cite[Theorem 2.5]{PheSim}), and $\ensuremath{\operatorname{dom}} A\subsetneqq\ensuremath{\operatorname{dom}} A^*$ (\cite[Proposition 3.6]{BWY7}). \item\label{NEC:5} $\langle A^*x, x\rangle=\tfrac{1}{2}s^2, \quad \forall x=(x_n)_{n\in\ensuremath{\mathbb N}}\in\ensuremath{\operatorname{dom}} A^*\ \text{with}\quad s:=\sum_{i\geq1} x_i$ (\cite[Proposition 3.7]{BWY7}). \end{enumerate} \end{fact} \begin{example}\label{Exam:eL1} Suppose that $X$ and $A$ are as in Fact~\ref{FE:1}. Then $A$ is enlargeable but $A^*$ is non-enlargeable and is not skew. Moreover, \begin{align*}\ensuremath{\operatorname{gra}} A_{\ensuremath{\varepsilon}}=\big\{(x,x^*) \in\ensuremath{\operatorname{gra}} (-A^*)\mid\big|\sum_{i\geq1} x_i\big|\leq\sqrt{2\varepsilon},\ x=(x_n)_{n\in\ensuremath{\mathbb N}}\big\}, \end{align*} where $\varepsilon\geq0$. \end{example} \begin{proof} By Corollary~\ref{CElC:1}\ref{EN:em02b} and Fact~\ref{FE:1}\ref{NEC:3}, $A$ must be enlargeable.
For the second claim, note that $X=\ell^2$ is reflexive; hence, by Fact~\ref{FE:1}\ref{NEC:1} and Corollary~\ref{CElC:1}\ref{EN:em03}, $A^*$ is non-enlargeable, while $A^*$ is not skew by Fact~\ref{FE:1}\ref{NEC:2}. For the last statement, apply Corollary~\ref{CElC:1}\ref{EN:em01} and Fact~\ref{FE:1}\ref{NEC:5} directly to obtain $\ensuremath{\operatorname{gra}} A_{\ensuremath{\varepsilon}}$. \end{proof} \begin{example}\label{Exam:eN1} Let $C$ be a nonempty closed convex subset of $X$ and $\varepsilon\geq 0$. Then \begin{align*} \ensuremath{\operatorname{gra}} (N_C)_{\ensuremath{\varepsilon}}= \big\{(x,x^*)\in C\times X^*\mid \sigma_C (x^*)\leq \langle x,x^*\rangle+ \varepsilon\big\}. \end{align*} \end{example} \begin{proof} By Fact~\ref{f:referee04}, we have \begin{align*} (x,x^*)\in\ensuremath{\operatorname{gra}} (N_C)_{\ensuremath{\varepsilon}} &\Leftrightarrow F_{N_C}(x,x^*)=\iota_C(x)+\sigma_C(x^*) \leq \langle x,x^*\rangle+\varepsilon\\ &\Leftrightarrow x\in C,\ \sigma_C(x^*) \leq \langle x,x^*\rangle+\varepsilon. \end{align*} \end{proof} \begin{example}\label{Exam:eL02} Let $f(x):=\|x\|,\;\forall x\in X$ and $\varepsilon\geq 0$. Then $$\ensuremath{\operatorname{gra}} (\partial f)_{\ensuremath{\varepsilon}}= \big\{(x,x^*)\in X\times B_{X^*} \mid \|x\|\leq \langle x,x^*\rangle+\varepsilon\big\}.$$ In particular, $(\partial f)_{\ensuremath{\varepsilon}}(0)=B_{X^*}$. \end{example} \begin{proof} Note that $f$ is sublinear, and hence by Fact \ref{f:sub05} and Remark \ref{r:sub05} we can write \begin{align*}(x,x^*)\in\ensuremath{\operatorname{gra}} (\partial f)_{\ensuremath{\varepsilon}} &\Leftrightarrow F_{\partial f}(x,x^*)=f(x)+f^*(x^*) \leq \langle x,x^*\rangle+\varepsilon\quad\text{(by \eqref{Enl:Sub1})}\\ &\Leftrightarrow \|x\|+\iota_{B_{X^*}}(x^*) \leq \langle x,x^*\rangle+\varepsilon\quad\text{(by \cite[Corollary~2.4.16]{Zalinescu})}\\ &\Leftrightarrow x^*\in B_{X^*},\ \|x\|\leq \langle x,x^*\rangle+\varepsilon.
\end{align*} Hence $(\partial f)_{\ensuremath{\varepsilon}}(0)=B_{X^*}$. \end{proof} \begin{example}\label{Exam:eL2} Let $p>1$ and $f(x):=\tfrac{1}{p} \|x\|^p,\;\forall x\in X$. Then $$(\partial f)_{\ensuremath{\varepsilon}}(0)= p^{\tfrac{1}{p}}(q\varepsilon)^{\tfrac{1}{q}}B_{X^*},$$ where $\tfrac{1}{p}+\tfrac{1}{q}=1$ and $\varepsilon\geq 0$. \end{example} \begin{proof} We have \begin{align*}x^*\in(\partial f)_{\ensuremath{\varepsilon}}(0) &\Leftrightarrow\langle x^*-y^*,-y\rangle\geq-\varepsilon,\quad\forall y^*\in\partial f(y)\\ &\Leftrightarrow \langle x^*,-y\rangle+\|y\|^{p}\geq-\varepsilon, \quad\forall y\in X\\ &\Leftrightarrow \langle x^*,y\rangle-\|y\|^{p} \leq\varepsilon,\quad\forall y\in X\\&\Leftrightarrow p\sup_{y\in X} \Big[ \langle \tfrac{1}{p}x^*,y\rangle-\tfrac{1}{p}\|y\|^{p}\Big]\leq\varepsilon\\ &\Leftrightarrow p\cdot \tfrac{1}{q}\|\tfrac{1}{p} x^*\|^{q}\leq\varepsilon\\ &\Leftrightarrow \|x^*\|^{q}\leq q\varepsilon p^{q-1} = q\varepsilon p^{\tfrac{q}{p}}\\ &\Leftrightarrow x^*\in p^{\tfrac{1}{p}}(q\varepsilon)^{\tfrac{1}{q}} B_{X^*}. \end{align*} \end{proof} \subsection{Applications of Fitzpatrick's last function} For a monotone linear operator $A\colon X \rightarrow X^*$ it will be very useful to define the following quadratic function (which is actually a special case of \emph{Fitzpatrick's last function} \cite{BorVan} for the linear relation $A$): \begin{equation*} q_A \colon x\mapsto \ensuremath{\tfrac{1}{2}} \scal{x}{Ax}. \end{equation*} Then $q_A=q_{A_+}$. We shall use the well known fact (see, e.g., \cite{PheSim}) that \begin{equation} \label{e:gradq} \nabla q_A = A_+, \end{equation} where the gradient operator $\nabla$ is understood in the G\^ateaux sense. The next result was first given in \cite[Proposition~2.2]{BWY2} for a reflexive space. The proof is easily adapted to a general Banach space. \begin{fact}\label{better} Let $A\colon X\to X^*$ be linear continuous, symmetric and monotone. 
Then \begin{equation}\label{qa} \big(\forall (x,x^*)\in X\times X^*\big)\quad q_{A}^*(x^*+Ax)=q_{A}(x)+\scal{x}{x^*} +q_{A}^*(x^*) \end{equation} and $q_{A}^*\circ A=q_{A}$. \end{fact} The next result was first proven in \cite[Proposition~2.2(v)]{BBW} in Hilbert space. We now extend it to a general Banach space. \begin{proposition}\label{f1:Fitz} Let $A\colon X\to X^*$ be linear and monotone. Then \begin{equation} F_A(x,x^*)=2 q_{A_+}^*(\tfrac{1}{2}x^*+\tfrac{1}{2}A^*x) =\tfrac{1}{2}q_{A_{+}}^*(x^*+A^*x),\quad \forall(x,x^*)\in X\times X^*, \end{equation} and $\ensuremath{\operatorname{ran}} A_+\subseteq\ensuremath{\operatorname{dom}}\partial q_{A_{+}}^* \subseteq\ensuremath{\operatorname{dom}} q_{A_{+}}^*\subseteq\overline{\ensuremath{\operatorname{ran}} A_+}$. If $\ensuremath{\operatorname{ran}} A_{+}$ is closed, then $\ensuremath{\operatorname{dom}} q_{A_{+}}^*=\ensuremath{\operatorname{dom}}\partial q_{A_{+}}^*=\ensuremath{\operatorname{ran}} A_{+}$. \end{proposition} \begin{proof} By Fact~\ref{F:1}, $\ensuremath{\operatorname{dom}} A^*\cap X=X$, so for every $x,y\in X$ we have $x,y\in \ensuremath{\operatorname{dom}} A^*\cap \ensuremath{\operatorname{dom}} A$. The latter fact and the definition of $A^*$ yield $\scal{y}{A^*x}=\scal{x}{Ay}$. Hence for every $(x,x^*)\in X\times X^*$, \begin{align} F_A(x,x^*) &= \sup_{y\in X} \scal{x}{Ay} +\scal{y}{x^*} - \scal{y}{Ay}\notag\\ &= 2\sup_{y\in X} \scal{y}{\ensuremath{\tfrac{1}{2}} x^* + \ensuremath{\tfrac{1}{2}} A^*x} - q_{A_+}(y)\notag\\ &= 2q_{A_+}^*(\ensuremath{\tfrac{1}{2}} x^*+\ensuremath{\tfrac{1}{2}} A^*x)\notag\\ &= \ensuremath{\tfrac{1}{2}} q_{A_+}^*(x^* + A^*x), \end{align} where we also used the fact that $q_A=q_{A_+}$ in the second equality. The third equality follows from the definition of the Fenchel conjugate, and the last one holds because $q_{A_+}$, and hence $q_{A_+}^*$, is positively homogeneous of degree $2$.
By \cite[Proposition~2.4.4(iv)]{Zalinescu}, \begin{align} \ensuremath{\operatorname{ran}} \partial q_{A_{+}}\subseteq \ensuremath{\operatorname{dom}}\partial q_{A_{+}}^*.\label{suu:fix1}\end{align} By \eqref{e:gradq}, $\ensuremath{\operatorname{ran}} \partial q_{A_{+}}=\ensuremath{\operatorname{ran}} A_+$. Then by \eqref{suu:fix1}, \begin{align}\ensuremath{\operatorname{ran}} A_+\subseteq\ensuremath{\operatorname{dom}}\partial q_{A_{+}}^*\subseteq\ensuremath{\operatorname{dom}} q_{A_{+}}^*.\end{align} Then by the Br{\o}ndsted--Rockafellar Theorem (see \cite[Theorem~3.1.2]{Zalinescu}), \begin{align*}\ensuremath{\operatorname{ran}} A_+\subseteq\ensuremath{\operatorname{dom}}\partial q_{A_{+}}^* \subseteq\ensuremath{\operatorname{dom}} q_{A_{+}}^*\subseteq\overline{\ensuremath{\operatorname{ran}} A_+}. \end{align*} Hence, under the assumption that $\ensuremath{\operatorname{ran}} A_+$ is closed, we have $\ensuremath{\operatorname{ran}} A_+=\ensuremath{\operatorname{dom}}\partial q_{A_{+}}^*=\ensuremath{\operatorname{dom}} q_{A_{+}}^*$. \end{proof} We can now apply the last proposition to obtain a formula for the enlargement of a single-valued operator. \begin{proposition}[Enlargement of a monotone linear operator]\label{Bu:1} Let $A:X\rightarrow X^*$ be a linear and monotone operator, and $\varepsilon\geq0$. Then \begin{align}A_{\ensuremath{\varepsilon}}(x)=\Big\{Ax+z^*\mid q^*_{A}(z^*)\leq 2\varepsilon\Big\},\quad \forall x\in X.\label{LSNE:1}\end{align} Moreover, $A$ is non-enlargeable if and only if $A$ is skew. \end{proposition} \begin{proof} Fix $x\in X$, $z^*\in X^*$ and $x^*=Ax+z^*$.
Then by Proposition~\ref{f1:Fitz} and Fact~\ref{better}, \begin{align*}&x^*\in A_{\ensuremath{\varepsilon}}(x) \Leftrightarrow F_A(x,Ax+z^*)\leq\langle x,Ax+z^*\rangle+\varepsilon\\ &\Leftrightarrow \tfrac{1}{2}q^*_{A_+}(Ax+z^*+A^*x) \leq\langle x,Ax+z^*\rangle+\varepsilon\\ &\Leftrightarrow \tfrac{1}{2}q^*_{A_+}\big(A_+ (2x)+z^*\big) \leq\langle x,Ax+z^*\rangle+\varepsilon\\ &\Leftrightarrow \tfrac{1}{2}\left[q^*_{A_+}(z^*)+2\langle x,z^*\rangle+2\langle x, Ax\rangle\right] \leq\langle x,Ax+z^*\rangle+\varepsilon\\ &\Leftrightarrow q^*_{A}(z^*)\leq 2\varepsilon, \end{align*} where the last equivalence also uses the fact that $q_A=q_{A_+}$. Now we show the second statement. By Fact~\ref{F:1}, $\ensuremath{\operatorname{dom}} A^*\cap X=X$. Then by Theorem~\ref{CElC:01} and Corollary~\ref{CElC:1}\ref{EN:em02b}, we have that $A$ is non-enlargeable if and only if $A$ is skew. \end{proof} A result similar to Corollary~\ref{Tu:1} below was proved in \cite[Proposition~2.2]{BurIus:1} in a reflexive space. Their proof still requires the constraint that $\ensuremath{\operatorname{ran}} (A+A^*)$ is closed. \begin{corollary}\label{Tu:1} Let $A:X\rightarrow X^*$ be a linear, continuous and monotone operator such that $\ensuremath{\operatorname{ran}} (A+A^*)$ is closed. Then $$A_{\ensuremath{\varepsilon}}(x)=\Big\{Ax+(A+A^*)z\mid q_{A}(z)\leq \tfrac{1}{2}\varepsilon\Big\},\quad \forall x\in X.$$ \end{corollary} \begin{proof} Proposition~\ref{Bu:1} yields \begin{equation}\label{Tu:eq1} x^*\in A_{\ensuremath{\varepsilon}}(x)\Leftrightarrow x^*=Ax+z^*,\; q^*_{A}(z^*)\leq 2\varepsilon. \end{equation} In particular, $z^*\in \ensuremath{\operatorname{dom}} q^*_{A}$. Since $\ensuremath{\operatorname{ran}} (A_{+})$ is closed, Proposition~\ref{f1:Fitz} yields \[ \ensuremath{\operatorname{ran}} (A_{+}) = \ensuremath{\operatorname{ran}} (A+A^*)=\ensuremath{\operatorname{dom}} q^*_{A_+}=\ensuremath{\operatorname{dom}} q^*_{A}.
\] The above expression and the fact that $z^*\in \ensuremath{\operatorname{dom}} q^*_{A}$ imply that there exists $z\in X$ such that $z^*=(A+A^*)z$. Note also that \[ q^*_{A}(z^*)=q^*_{A_+}(z^*)=q^*_{A_+}(A_+(2z))=q_{A_+}(2z)=4q_A(z), \] where the third equality follows from Fact~\ref{better} and the last one from the degree-$2$ homogeneity of $q_{A_+}$. Using this in \eqref{Tu:eq1} gives \begin{align*} x^*\in A_{\ensuremath{\varepsilon}}(x)&\Leftrightarrow x^*=Ax+(A+A^*)z,\;4q_{A}(z)\leq 2\varepsilon\\ &\Leftrightarrow x^*=Ax+(A+A^*)z,\;q_{A}(z)\leq \tfrac{1}{2}\varepsilon,\end{align*} establishing the claim. \end{proof} We conclude the section with two examples. \begin{example}[Rotation]\label{Exam:eL3} Assume that $X$ is the Euclidean plane $\ensuremath{\mathbb R}^2$, let $\theta \in\left[0,\tfrac{\pi}{2}\right]$, and set \begin{align} A:=\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin \theta & \cos\theta \end{pmatrix}. \end{align} Then for every $(\varepsilon,x)\in\ensuremath{\mathbb R}_+\times\ensuremath{\mathbb R}^2$, \begin{align}A_{\ensuremath{\varepsilon}}(x)=\Big\{Ax+v\mid v\in 2\sqrt{(\cos{\theta})\varepsilon\,}B_X\Big\}.\label{ComSE:1} \end{align} \end{example} \begin{proof} We consider two cases. \emph{Case 1}: $\theta=\tfrac{\pi}{2}$. Then $A$ is a skew operator. By Corollary~\ref{CElC:1}, $A_{\ensuremath{\varepsilon}}=A$ and hence \eqref{ComSE:1} holds. \emph{Case 2}: $\theta \in\left[0,\tfrac{\pi}{2}\right[$. Let $x\in\ensuremath{\mathbb R}^2$. Note that $\tfrac{A+A^*}{2}= (\cos\theta)\ensuremath{\operatorname{Id}}$ and $q_{A} = \tfrac{\cos\theta}{2}\|\cdot\|^2$. Then by Corollary~\ref{Tu:1}, $$A_{\ensuremath{\varepsilon}}(x)=\Big\{Ax+2(\cos{\theta}) z\mid q_{A}(z) =\tfrac{\cos\theta}{2}\|z\|^2\leq \tfrac{1}{2}\varepsilon\Big\}.$$ Thus, \begin{align*}A_{\ensuremath{\varepsilon}}(x)&=\Big\{Ax+v\mid \|v\|\leq 2\sqrt{(\cos\theta)\varepsilon\,}\Big\} =\Big\{Ax+v\mid v\in 2\sqrt{(\cos\theta)\varepsilon\,}B_X\Big\}.
\end{align*} \end{proof} \begin{example}[Identity]\label{Exam:eL4} Assume that $X$ is a Hilbert space, and $A:=\ensuremath{\operatorname{Id}}$. Let $\varepsilon\geq0$. Then \begin{align*}\ensuremath{\operatorname{gra}} A_{\ensuremath{\varepsilon}} =\Big\{(x,x^*)\in X\times X\mid x^*\in x+ 2\sqrt{\varepsilon}B_X\Big\}. \end{align*} \end{example} \begin{proof} By \cite[Example~3.10]{BM}, we have \begin{align*} (x,x^*)\in\ensuremath{\operatorname{gra}} A_{\ensuremath{\varepsilon}} &\Leftrightarrow \tfrac{1}{4}\|x+x^*\|^2\leq\langle x,x^*\rangle+\varepsilon\\ &\Leftrightarrow \tfrac{1}{4}\|x-x^*\|^2\leq\varepsilon\\ &\Leftrightarrow\|x-x^*\|\leq 2\sqrt{\varepsilon}\\ &\Leftrightarrow x^*\in x+ 2\sqrt{\varepsilon}B_X. \end{align*} \end{proof} \section{Sums of operators} \label{secsumo} The conclusion of the lemma below has been established for reflexive Banach spaces in \cite[Lemma~5.8]{BWY3}. Our proof for a general Banach space assumes the operators to be of type (FPV) and follows closely that of \cite[Lemma~5.8]{BWY3}. \begin{lemma}\label{Co:1} Let $A, B\colon X\ensuremath{\rightrightarrows} X^*$ be maximally monotone of type (FPV), and suppose that $\bigcup_{\lambda>0}\lambda\left[\ensuremath{\operatorname{dom}} A-\ensuremath{\operatorname{dom}} B\right]$ is a closed subspace of $X$. 
Then we have \begin{align*}\bigcup_{\lambda>0}\lambda\left[\ensuremath{\operatorname{dom}} A-\ensuremath{\operatorname{dom}} B\right]= \bigcup_{\lambda>0}\lambda\left[P_{X}\ensuremath{\operatorname{dom}} F_A-P_{X}\ensuremath{\operatorname{dom}} F_B\right].\end{align*} \end{lemma} \begin{proof} By Fact~\ref{f:Fitz} and Fact~\ref{f:referee02c}, we have \begin{align*} &\bigcup_{\lambda>0} \lambda\left[\ensuremath{\operatorname{dom}} A-\ensuremath{\operatorname{dom}} B\right]\subseteq \bigcup_{\lambda>0} \lambda\left[P_{X}\ensuremath{\operatorname{dom}} F_A-P_{X}\ensuremath{\operatorname{dom}} F_B\right] \subseteq\bigcup_{\lambda>0} \lambda\left[\overline{\ensuremath{\operatorname{dom}} A}-\overline{\ensuremath{\operatorname{dom}} B}\right]\\ &\subseteq\bigcup_{\lambda>0} \lambda\left[\overline{\ensuremath{\operatorname{dom}} A-\ensuremath{\operatorname{dom}} B}\right]\subseteq \overline{\bigcup_{\lambda>0} \lambda\left[\ensuremath{\operatorname{dom}} A-\ensuremath{\operatorname{dom}} B\right]}\\ &=\bigcup_{\lambda>0} \lambda\left[\ensuremath{\operatorname{dom}} A-\ensuremath{\operatorname{dom}} B\right]\quad \text{(by the assumption)}. \end{align*} \end{proof} \begin{corollary}\label{Co:01} Let $A, B\colon X\ensuremath{\rightrightarrows} X^*$ be maximally monotone linear relations, and suppose that $\ensuremath{\operatorname{dom}} A-\ensuremath{\operatorname{dom}} B$ is a closed subspace. Then \begin{align*}\left[\ensuremath{\operatorname{dom}} A-\ensuremath{\operatorname{dom}} B\right]=\bigcup_{\lambda>0} \lambda\left[P_{X}\ensuremath{\operatorname{dom}} F_A-P_{X}\ensuremath{\operatorname{dom}} F_B\right]. \end{align*} \end{corollary} \begin{proof} Directly apply Fact~\ref{f:referee01} and Lemma~\ref{Co:1}. \end{proof} \begin{corollary}\label{Co:01sd} Let $A\colon X\ensuremath{\rightrightarrows} X^*$ be a maximally monotone linear relation and let $C\subseteq X$ be a nonempty and closed convex set. 
Assume that $\bigcup_{\lambda>0} \lambda\left[\ensuremath{\operatorname{dom}} A-C\right]$ is a closed subspace. Then \begin{align*}\bigcup_{\lambda>0} \lambda\left[P_{X}\ensuremath{\operatorname{dom}} F_A-P_{X}\ensuremath{\operatorname{dom}} F_{N_C}\right] &=\bigcup_{\lambda>0} \lambda\left[\ensuremath{\operatorname{dom}} A-C\right]. \end{align*} \end{corollary} \begin{proof} Let $B=N_C$. Then apply directly Fact~\ref{f:referee01}, Fact~\ref{f:referee0d} and Lemma~\ref{Co:1}. \end{proof} Theorem~\ref{FS6} below was proved in \cite[Theorem~5.10]{BWY3} for a reflexive space. We extend it to a general Banach space. \begin{theorem}[Fitzpatrick function of the sum]\label{FS6} Let $A,B\colon X\ensuremath{\rightrightarrows} X^*$ be maximally monotone linear relations, and suppose that $\ensuremath{\operatorname{dom}} A-\ensuremath{\operatorname{dom}} B$ is closed. Then $$F_{A+B}= F_A\Box_2F_B,$$ and the partial infimal convolution is exact everywhere. \end{theorem} \begin{proof}Let $(z,z^*)\in X\times X^*$. By Fact~\ref{f:referee03}, it suffices to show that there exists $v^*\in X^*$ such that \begin{equation} \label{elrl:ourgoal} F_{A+B}(z,z^*)\geq F_A (z,z^*-v^*)+ F_{B}(z,v^*). \end{equation} If $(z,z^*)\notin \ensuremath{\operatorname{dom}} F_{A+B}$, clearly, \eqref{elrl:ourgoal} holds. Now assume that $(z,z^*)\in \ensuremath{\operatorname{dom}} F_{A+B}$. 
Then \begin{align} &F_{A+B}(z,z^*)\nonumber\\ &=\sup_{\{x,x^*,y^*\}}\big[\langle x,z^*\rangle+\langle z,x^*\rangle-\langle x,x^*\rangle +\langle z-x, y^*\rangle -\iota_{\ensuremath{\operatorname{gra}} A}(x,x^*)-\iota_{\ensuremath{\operatorname{gra}} B}(x,y^*)\big].\label{lrsee:1} \end{align} Let $Y=X^*$ and define $F,K:X\times X^*\times Y\rightarrow\ensuremath{\,\left]-\infty,+\infty\right]}$ respectively by \begin{align*} F: &(x,x^*,y^*)\in X\times X^*\times Y\mapsto\langle x,x^*\rangle+\iota_{\ensuremath{\operatorname{gra}} A}(x,x^*),\\ K: &(x,x^*, y^*)\in X\times X^*\times Y\mapsto\langle x,y^*\rangle+\iota_{\ensuremath{\operatorname{gra}} B}(x,y^*). \end{align*} Then by \eqref{lrsee:1}, \begin{align}F_{A+B}(z,z^*)=(F+K)^*(z^*,z,z).\label{lrsee:2}\end{align} By Fact~\ref{f:referee} and the assumptions, $F$ and $K$ are proper, lower semicontinuous and convex. The definitions of $F$ and $K$ yield \begin{align*} \ensuremath{\operatorname{dom}} F-\ensuremath{\operatorname{dom}} K=\left[\ensuremath{\operatorname{dom}} A-\ensuremath{\operatorname{dom}} B\right]\times X^*\times Y,\ \text{ which is a closed subspace}. \end{align*} Thus by Fact~\ref{AttBre:1} and \eqref{lrsee:2}, there exists $(z^*_0 ,z^{**}_0,z^{**}_1)\in X^*\times X^{**}\times Y^*$ such that \begin{align*}F_{A+B}(z,z^*)&= F^*(z^*-z^*_0,z-z^{**}_0,z-z^{**}_1)+K^*(z^*_0 ,z^{**}_0,z^{**}_1)\\ &= F^*(z^*-z^*_0,z,0)+K^*(z^*_0,0,z)\quad\text{(by $(z,z^*)\in\ensuremath{\operatorname{dom}} F_{A+B}$)}\\ &=F_A(z,z^*-z^*_0)+F_B(z,z^*_0). \end{align*} Thus \eqref{elrl:ourgoal} holds by taking $v^*=z_0^*$ and hence $F_{A+B}=F_A\Box_2 F_B$. \end{proof} The next result was first obtained by Voisei in \cite{Voisei06}, while Simons gave a different proof in \cite[Theorem~46.3]{Si2}. We are now in a position to provide a third approach.
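Before stating it, we record a quick finite-dimensional sanity check of Theorem~\ref{FS6}. For everywhere-defined monotone matrices, both sides of $F_{A+B}=F_A\Box_2F_B$ can be evaluated in closed form via the quadratic formula of Proposition~\ref{f1:Fitz}; the matrices and points below are illustrative choices of ours, not taken from the text.

```python
import numpy as np

def fitz(A, x, xs):
    # F_A(x, x*) = (1/4) <x* + A^T x, A_+^{-1}(x* + A^T x)> for a monotone
    # matrix A whose symmetric part A_+ = (A + A^T)/2 is invertible
    # (the matrix case of Proposition f1:Fitz).
    w = xs + A.T @ x
    return 0.25 * w @ np.linalg.solve((A + A.T) / 2, w)

def inf_conv(A, B, z, zs):
    # Exact value of min_v [F_A(z, zs - v) + F_B(z, v)]: both summands are
    # quadratics in v, so the minimizer solves (P + Q) v = P a - Q b with
    # P = A_+^{-1}, Q = B_+^{-1}, a = A^T z + zs, b = B^T z.
    P = np.linalg.inv((A + A.T) / 2)
    Q = np.linalg.inv((B + B.T) / 2)
    a = A.T @ z + zs
    b = B.T @ z
    v = np.linalg.solve(P + Q, P @ a - Q @ b)
    return fitz(A, z, zs - v) + fitz(B, z, v)

A = np.array([[2.0, 1.0], [-1.0, 3.0]])   # monotone: symmetric part diag(2, 3)
B = np.array([[1.0, 0.5], [-0.2, 2.0]])   # monotone: symmetric part pos. def.
z, zs = np.array([0.3, -1.2]), np.array([1.1, 0.7])

lhs = fitz(A + B, z, zs)       # F_{A+B}(z, z*)
rhs = inf_conv(A, B, z, zs)    # (F_A box_2 F_B)(z, z*), exact minimum
```

The two values agree up to rounding, and both dominate the duality pairing $\langle z,z^*\rangle$, as Fact~\ref{f:Fitz} requires.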
\begin{theorem}\label{lisum:1} Let $A,B\colon X\ensuremath{\rightrightarrows} X^*$ be maximally monotone linear relations, and suppose that $\ensuremath{\operatorname{dom}} A-\ensuremath{\operatorname{dom}} B$ is closed. Then $A+B$ is maximally monotone. \end{theorem} \begin{proof} By Fact~\ref{f:Fitz}, we have that $F_A\geq \langle\cdot,\cdot\rangle$ and $F_B\geq \langle\cdot,\cdot\rangle$. Theorem~\ref{FS6} and \eqref{infconv} now imply that $F_{A+B}\geq\langle\cdot,\cdot\rangle$. Combining the last inequality with Corollary~\ref{Co:01} and Fact~\ref{f:referee1}, we conclude that $A+B$ is maximally monotone. \end{proof} \begin{theorem}\label{lisum:e1} Let $A,B\colon X\ensuremath{\rightrightarrows} X^*$ be maximally monotone linear relations, and suppose that $\ensuremath{\operatorname{dom}} A-\ensuremath{\operatorname{dom}} B$ is closed. Assume that $A$ and $B$ are non-enlargeable. Then $$F_{A+B}=\iota_{\ensuremath{\operatorname{gra}}(A+B)}+\langle\cdot,\cdot\rangle$$ and hence $A+B$ is non-enlargeable. \end{theorem} \begin{proof} By Corollary~\ref{CEl:1}, we have \begin{align} F_A=\iota_{\ensuremath{\operatorname{gra}} A}+\langle\cdot,\cdot\rangle\quad\text{and}\quad F_B=\iota_{\ensuremath{\operatorname{gra}} B}+\langle\cdot,\cdot\rangle.\label{SumEL:1} \end{align} Let $(x,x^*)\in X\times X^*$. Then by \eqref{SumEL:1} and Theorem~\ref{FS6}, we have \begin{align*} F_{A+B} (x,x^*)&=\min_{y^*\in X^*}\big\{\iota_{\ensuremath{\operatorname{gra}} A}(x,x^*-y^*)+\langle x^*-y^*,x\rangle+\iota_{\ensuremath{\operatorname{gra}} B}(x,y^*)+\langle y^*,x\rangle\big\}\\ &=\iota_{\ensuremath{\operatorname{gra}} (A+B)}(x,x^*)+\langle x^*,x\rangle. \end{align*} By Theorem~\ref{lisum:1} we have that $A+B$ is maximally monotone. Now we can apply Corollary~\ref{CEl:1} to $A+B$ to conclude that $A+B$ is non-enlargeable. \end{proof} The proof of Theorem~\ref{tf:main} in part follows that of \cite[Theorem~3.1]{BWY4}.
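Before doing so, we note that formula \eqref{LSNE:1} of Proposition~\ref{Bu:1} is easy to test for matrices: for a symmetric positive definite $A$ one has $q_A^*(z^*)=\tfrac{1}{2}\langle z^*,A^{-1}z^*\rangle$, and the gap $F_A(x,Ax+z^*)-\langle x,Ax+z^*\rangle$ should equal $\tfrac{1}{2}q_A^*(z^*)$. The matrix and the random samples in this sketch are ad hoc choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5], [0.5, 1.0]])  # symmetric positive definite, so A_+ = A

def F(x, xs):
    # F_A(x, x*) = (1/4) <x* + A x, A^{-1}(x* + A x)>  (Proposition f1:Fitz,
    # valid here because A is symmetric with A_+ = A invertible)
    w = xs + A @ x
    return 0.25 * w @ np.linalg.solve(A, w)

def q_star(zs):
    # Fenchel conjugate of q_A(x) = (1/2) <x, A x>
    return 0.5 * zs @ np.linalg.solve(A, zs)

# The gap F_A(x, Ax + z) - <x, Ax + z> equals (1/2) q_A^*(z); hence
# x* = Ax + z lies in A_eps(x) exactly when q_A^*(z) <= 2 eps.
max_dev = 0.0
for _ in range(100):
    x, z = rng.standard_normal((2, 2))
    gap = F(x, A @ x + z) - x @ (A @ x + z)
    max_dev = max(max_dev, abs(gap - 0.5 * q_star(z)))
print(max_dev)  # tiny (rounding only)
```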
\begin{theorem}\label{tf:main} Let $A:X\ensuremath{\rightrightarrows} X^*$ be a maximally monotone linear relation. Suppose $C$ is a nonempty closed convex subset of $X$, and that $\ensuremath{\operatorname{dom}} A \cap \ensuremath{\operatorname{int}} C\neq \varnothing$. Then $F_{A+N_C}= F_A\Box_2F_{N_C}$, and the partial infimal convolution is exact everywhere. \end{theorem} \begin{proof} Let $(z,z^*)\in X\times X^*$. By Fact~\ref{f:referee03}, it suffices to show that there exists $v^*\in X^*$ such that \begin{equation} \label{e:ourgoal} F_{A+N_C}(z,z^*)\geq F_A (z,v^*)+ F_{N_C}(z,z^*-v^*). \end{equation} If $(z,z^*)\notin \ensuremath{\operatorname{dom}} F_{A+N_C}$, clearly, \eqref{e:ourgoal} holds. Now assume that \begin{align}(z,z^*)\in \ensuremath{\operatorname{dom}} F_{A+N_C}.\label{EacInm} \end{align} By Fact~\ref{domain:L1} and Fact~\ref{f:referee02c}, \begin{align*}P_X\left[\ensuremath{\operatorname{dom}} F_{A+N_C}\right]\subseteq \overline{\left[\ensuremath{\operatorname{dom}} (A+N_C)\right]}\subseteq C.\end{align*} Thus, by \eqref{EacInm}, we have \begin{align}z\in C.\label{EacInm:1} \end{align} Set \begin{equation}\label{e:defofg} g\colon X\times X^* \to \ensuremath{\,\left]-\infty,+\infty\right]}\colon (x,x^*)\mapsto \scal{x}{x^*} + \iota_{\ensuremath{\operatorname{gra}} A}(x,x^*). \end{equation} By Fact~\ref{f:referee}, $g$ is convex. Hence, \begin{equation} \label{e:defofh} h = g + \iota_{C\times X^*} \end{equation} is convex as well. Let \begin{equation} \label{e:defofc0} c_0 \in \ensuremath{\operatorname{dom}} A \cap \ensuremath{\operatorname{int}} C, \end{equation} and let $c_0^*\in Ac_0$. Then $(c_0,c_0^*)\in \ensuremath{\operatorname{gra}} A \cap (\ensuremath{\operatorname{int}} C \times X^*) = \ensuremath{\operatorname{dom}} g \cap \ensuremath{\operatorname{int}\operatorname{dom}}\,\iota_{C\times X^*}$. Let us compute $F_{A+N_C}(z,z^*)$. 
As in \eqref{lrsee:1} we can write \begin{align} &F_{A+N_C}(z,z^*)\nonumber\\ &=\sup_{(x,x^*,c^*)}\big[\langle x,z^*\rangle+\langle z,x^*\rangle-\langle x,x^*\rangle +\langle z-x, c^*\rangle -\iota_{\ensuremath{\operatorname{gra}} A}(x,x^*)-\iota_{\ensuremath{\operatorname{gra}} N_C}(x,c^*)\big]\nonumber\\ &\geq \sup_{(x,x^*)}\big[\langle x,z^*\rangle+\langle z,x^*\rangle-\langle x,x^*\rangle -\iota_{\ensuremath{\operatorname{gra}} A}(x,x^*)-\iota_{C\times X^*}(x,x^*)\big]\nonumber\\ &=\sup_{(x,x^*)}\left[\langle x,z^*\rangle+\langle z,x^*\rangle-h(x,x^*)\right]\nonumber\\ &=h^*(z^*,z),\nonumber \end{align} where we took $c^*=0$ in the inequality. By Fact~\ref{pheps:1}, $\iota_{C\times X^*}$ is continuous at $(c_0,c_0^*)\in \ensuremath{\operatorname{int}\operatorname{dom}}\,\iota_{C\times X^*}$. Since $(c_0,c_0^*)\in \ensuremath{\operatorname{dom}} g \cap \ensuremath{\operatorname{int}\operatorname{dom}}\,\iota_{C\times X^*}$ we can use Fact~\ref{f:F4} to conclude the existence of $(y^*,y^{**})\in X^{*}\times X^{**}$ such that \begin{align} h^*(z^*,z)&=g^*(y^*,y^{**}) + \iota_{C\times X^*}^*(z^*-y^*,z-y^{**})\nonumber\\ &=g^*(y^*,y^{**}) + \iota_{C}^*(z^*-y^*) + \iota_{\{0\}}(z-y^{**}).\label{EacInm:2} \end{align} Then by \eqref{EacInm} and \eqref{EacInm:2} we must have $z=y^{**}$. Thus by \eqref{EacInm:2} and the definition of $g$ we have \begin{align*}&F_{A+N_C}(z,z^*)\geq g^*(y^*,z) + \iota_{C}^*(z^*-y^*) =F_A(z,y^*) + \iota_{C}^*(z^*-y^*)\\ &=F_A(z,y^*) + \iota_{C}^*(z^*-y^*)+\iota_C(z)\quad\text{(by \eqref{EacInm:1})}\\ &=F_A(z,y^*) + F_{N_C}(z,z^*-y^*)\quad\text{(by Fact~\ref{f:referee04})}. \end{align*} Hence \eqref{e:ourgoal} holds by taking $v^*=y^*$ and thus $F_{A+N_C}= F_A\Box_2F_{N_C}$. \end{proof} We decode the prior result as follows: \begin{corollary}[Normal cone]\label{FLiN:1} Let $A:X\ensuremath{\rightrightarrows} X^*$ be a maximally monotone linear relation. 
Suppose $C$ is a nonempty closed convex subset of $X$, and that $\ensuremath{\operatorname{dom}} A \cap \ensuremath{\operatorname{int}} C\neq \varnothing$. Then $A+N_C$ is maximally monotone. \end{corollary} \begin{proof} By Fact~\ref{f:Fitz}, we have that $F_A\geq \langle\cdot,\cdot\rangle$ and $F_{N_C}\geq \langle\cdot,\cdot\rangle$. Theorem~\ref{tf:main} and \eqref{infconv} now imply that $F_{A+N_C}\geq\langle\cdot,\cdot\rangle$. Combining the last inequality with Corollary~\ref{Co:01} and Fact~\ref{f:referee1}, we conclude that $A+N_C$ is maximally monotone. \end{proof} To conclude, we revisit a quite subtle example. All statements in the fact below have been proved in \cite[Example~4.1 and Theorem~3.6(vii)]{BBWY3}. \begin{fact}\label{FCPEX:1} Consider $X:=c_0$, with norm $\|\cdot\|_{\infty}$ so that $X^* = \ell^1$ with norm $\|\cdot\|_{1}$, and $X^{**}=\ell^{\infty}$ with second dual norm $\|\cdot\|_{*}$. Fix $\alpha:=(\alpha_n)_{n\in\ensuremath{\mathbb N}}\in\ell^{\infty}$ with $\limsup \alpha_n\neq0$, and define $A_{\alpha}:\ell^1\rightarrow\ell^{\infty}$ by \begin{align}\label{def:Aa} (A_{\alpha}x^*)_n:=\alpha^2_nx^*_n+2\sum_{i>n}\alpha_n \alpha_ix^*_i, \quad \forall x^*=(x^*_n)_{n\in\ensuremath{\mathbb N}}\in\ell^1.\end{align} \allowdisplaybreaks Finally, let $T_{\alpha}:c_{0}\rightrightarrows X^*$ be defined by \begin{align}\ensuremath{\operatorname{gra}} T_{\alpha}&: =\big\{(-A_{\alpha} x^*,x^*)\mid x^*\in X^*, \langle \alpha, x^*\rangle=0\big\}\nonumber\\ &=\Big\{\big((-\sum_{i>n} \alpha_n \alpha_ix^*_i+\sum_{i<n}\alpha_n \alpha_ix^*_i)_n, x^*\big) \mid x^*\in X^*, \langle \alpha, x^*\rangle=0\Big\}.\label{PBABA:Ea1} \end{align} Then \begin{enumerate} \item\label{BCCE:A01} $\langle A_{\alpha}x^*,x^*\rangle=\langle \alpha , x^*\rangle^2, \quad \forall x^*=(x^*_n)_{n\in\ensuremath{\mathbb N}}\in\ell^1$ and so \eqref{PBABA:Ea1} is well defined. \item \label{BCCE:SA01} $A_{\alpha}$ is a maximally monotone operator on $\ell^1$.
\item\label{BCCE:A1} $T_{\alpha}$ is a maximally monotone and skew operator on $c_0$. \item\label{BCCE:A1F} $F_{T_{\alpha}}=\iota_C$, where $ C:=\{(-A_{\alpha}x^*,x^*)\mid x^*\in X^*\}. $ \end{enumerate} \end{fact} This state of affairs allows us to show the following: \begin{example} Let $X=c_0$, $A_{\alpha}$, $C$, and $T_{\alpha}$ be defined as in Fact~\ref{FCPEX:1}. Then $T_{\alpha}:c_{0}\rightrightarrows \ell^1$ is a maximally monotone enlargeable skew linear relation. Indeed \begin{align*} \ensuremath{\operatorname{gra}} (T_{\alpha}+N_{B_X})_{\ensuremath{\varepsilon}}= \Big\{(-A_{\alpha}x^*,z^*)\in B_X\times X^*\mid x^*\in X^*, \|z^*-x^*\|_1\leq\langle -A_{\alpha}x^*,z^*\rangle+\varepsilon\Big\}. \end{align*} \end{example} \allowdisplaybreaks \begin{proof} From \eqref{PBABA:Ea1}, we have that $\ensuremath{\operatorname{gra}} T_{\alpha}\subsetneqq C$; therefore Fact~\ref{FCPEX:1}\ref{BCCE:A1F} yields $F_{T_{\alpha}}\neq \iota_{\ensuremath{\operatorname{gra}} T_{\alpha}} +\langle\cdot,\cdot\rangle$. Using now Fact~\ref{FCPEX:1}\ref{BCCE:A1} and Corollary~\ref{CEl:1}, we conclude that $T_{\alpha}$ is enlargeable. Now we determine $\ensuremath{\operatorname{gra}} (T_{\alpha}+N_{B_X})_{\ensuremath{\varepsilon}}$.
By Fact~\ref{FCPEX:1}\ref{BCCE:A1}, Theorem~\ref{tf:main} and \eqref{Enl:1}, we have \begin{align} &(z,z^*)\in \ensuremath{\operatorname{gra}} (T_{\alpha}+N_{B_X})_{\ensuremath{\varepsilon}}\nonumber\\ &\Leftrightarrow F_{T_{\alpha}}\Box_2F_{N_{B_X}}(z,z^*)\leq\langle z,z^*\rangle+\varepsilon \nonumber\\ &\Leftrightarrow F_{T_{\alpha}}(z,x^*)+\iota_{B_X}(z)+\iota^*_{B_X}(z^*-x^*) \leq\langle z,z^*\rangle+\varepsilon, \;\exists x^*\in X^* \quad\text{(by Fact~\ref{f:referee04})}\nonumber\\ &\Leftrightarrow z\in B_X,\, \iota_C(z,x^*)+\|z^*-x^*\|_1 \leq\langle z,z^*\rangle+\varepsilon, \;\exists x^*\in X^*\;\text{(by Fact~\ref{FCPEX:1}\ref{BCCE:A1F})}\nonumber\\ &\Leftrightarrow z=-A_{\alpha}x^*\in B_X,\ \|z^*-x^*\|_1 \leq\langle z,z^*\rangle+\varepsilon, \;\exists x^*\in X^*\nonumber\\ &\Leftrightarrow z=-A_{\alpha}x^*\in B_X,\; \|z^*-x^*\|_1\leq\langle -A_{\alpha}x^*,z^*\rangle+\varepsilon , \;\exists x^*\in X^*.\nonumber \end{align} This is the desired result. \end{proof} {\small
\section{Introduction} In 1968 Simons \cite{Sim68} (see also \cite[$\S$7 or $\S$9]{Che68} and \cite[$\S$1.6]{Xin03}) showed that the second fundamental form of an immersed minimal hypersurface in the sphere or in the Euclidean space satisfies a second-order elliptic partial differential equation, which yields the famous Simons' inequality. The Simons' inequality enabled him to prove a gap phenomenon for minimal submanifolds in the sphere and to settle the Bernstein problem in $\mathbb{R}^n$ for $n\leq{}7$. Since then, the Simons' inequality has been used by various authors to study minimal immersions. In this paper, we shall classify all complete minimal hypersurfaces immersed in the space form $\Mbar^{n+1}(c)$ which satisfy the Simons' equation \eqref{eq:Simons equation}, where $n\geq{}3$. Roughly speaking, a \emph{catenoid} is a minimal rotation hypersurface immersed in $\Mbar^{n+1}(c)$. In the case when $c=0$, Tam and Zhou \cite[Theorem 3.1]{TZ09} proved that if a non-flat complete minimal hypersurface $\Sigma^n$ immersed in $\mathbb{R}^{n+1}$ satisfies the Simons' equation \eqref{eq:Simons equation} at all nonvanishing points of $|A|$, then $\Sigma^n$ must be a catenoid. Motivated by the ideas of Tam and Zhou, we generalize the result to the cases when $c\ne{}0$. More precisely, we will prove the following theorem. \begin{theorem} \label{thm:catenoid charaterization} Let $\Mbar^{n+1}(c)$ be the space form of dimension $n+1$, where $n\geq{}3$. Suppose that $\Sigma^n$ is a non-totally-geodesic complete minimal hypersurface immersed in $\Mbar^{n+1}(c)$. If the Simons' equation \eqref{eq:Simons equation} holds as an equation at all nonvanishing points of $|A|$ in $\Sigma^n$, then $\Sigma^n\subset\Mbar^{n+1}(c)$ is either \begin{enumerate} \item a catenoid if $c\leq{}0$, or \item a Clifford minimal hypersurface or a compact Otsuki minimal hypersurface if $c>0$.
\end{enumerate} \end{theorem} \begin{remark} The compact minimal rotation hypersurfaces in Theorem \ref{thm:otsuki} are called the \emph{Otsuki minimal hypersurfaces} (see \cite{Ots70}). The Otsuki minimal hypersurfaces are \emph{immersed} catenoids. \end{remark} \begin{remark} Combining Proposition \ref{prop:Clifford_Simons_equation} and Proposition \ref{prop:catenoid_Simons identity} with Theorem \ref{thm:catenoid charaterization}, we can see that the Clifford minimal hypersurfaces and the catenoids are the \emph{only} complete minimal hypersurfaces satisfying the Simons' equation \eqref{eq:Simons equation}. \end{remark} \begin{remark}When we say that $\Sigma^n$ is a \emph{complete} minimal hypersurface in the space form $\Mbar^{n+1}(c)$, we actually mean one of the following cases: \begin{enumerate} \item if $c\leq{}0$, then $\Sigma^n$ is a noncompact hypersurface without boundary, that is, an open hypersurface, or \item if $c>0$, then $\Sigma^n$ is a compact hypersurface without boundary, that is, a closed hypersurface. \end{enumerate} \end{remark} \noindent\textbf{Plan of the paper.} This paper is organized as follows: In $\S$\ref{sec:prelim} we define the catenoids and their generating curves in the space forms. In $\S$\ref{sec:Simons_equation} we derive the Simons' identity \eqref{eq:Simons identity}, and we show that the Clifford minimal hypersurfaces and the catenoids satisfy \eqref{eq:Simons equation}. In $\S$\ref{sec:Proof_Main_Theorem} we prove Theorem \ref{thm:catenoid charaterization}. In $\S$\ref{sec:appendix} we offer some figures of the generating curves of the catenoids in the space forms. \section{Preliminaries}\label{sec:prelim} A simply connected $(n+1)$-dimensional complete Riemannian manifold whose sectional curvature is equal to a constant $c$, denoted by $\Mbar^{n+1}(c)$, is called a \emph{space form}.
There are three types of space forms: (i) If $c>0$, let \begin{equation} \Mbar^{n+1}(c)=\SS^{n+1}(c)=\{x\in\mathbb{R}^{n+2}\ |\ x_{1}^{2}+\cdots+x_{n+2}^{2}=1/c\}\ . \end{equation} (ii) If $c<0$, let \begin{equation} \Mbar^{n+1}(c)=\mathbb{B}^{n+1}(c)= \{x\in\mathbb{R}^{n+1}\ |\ x_{1}^{2}+\cdots+x_{n+1}^{2}<-1/c\}\ , \end{equation} with the metric \begin{equation} ds^{2}=\frac{4|dx|^{2}}{(1+c|x|^2)^{2}}\ , \end{equation} where $x=(x_{1},\ldots,x_{n+1})$ and $|x|^{2}=x_{1}^{2}+\cdots+x_{n+1}^{2}$. (iii) If $c=0$, let \begin{equation} \Mbar^{n+1}(0)=\mathbb{R}^{n+1} \end{equation} be the $(n+1)$-dimensional Euclidean space. In Theorem \ref{Thm:Clifford_minimal_hypersurface} and Theorem \ref{thm:otsuki}, the results that we quote are about minimal hypersurfaces immersed in the unit sphere $\SS^{n+1}$, but these results can be generalized to the space forms $\SS^{n+1}(c)$ for $c>0$. In fact, consider both $\SS^{n+1}$ and $\SS^{n+1}(c)$ ($c>0$) as the subsets of $\mathbb{R}^{n+2}$. Define the map $f:\mathbb{R}^{n+2}\to\mathbb{R}^{n+2}$ by $f(x)={x}/{\sqrt{c}}$ for any $x\in\mathbb{R}^{n+2}$, where $c>0$. It's easy to verify $\SS^{n+1}(c)=f(\SS^{n+1})$. For any hypersurface $\Sigma^{n}$ immersed in $\SS^{n+1}$, let $\Sigma^{n}(c)=f(\Sigma^{n})$, then $\Sigma^{n}(c)$ is a hypersurface immersed in $\SS^{n+1}(c)$. \begin{lemma}\label{lem:conformal_minimal} If $\Sigma^{n}$ is a minimal hypersurface immersed in $\SS^{n+1}$, then $\Sigma^{n}(c)$ is a minimal hypersurface immersed in $\SS^{n+1}(c)$. \end{lemma} \begin{proof}Let $g_{ij}$ and $\tilde{g}_{ij}$ be the first fundamental forms of the hypersurfaces $\Sigma^{n}\subset\SS^{n+1}$ and $\Sigma^{n}(c)\subset\SS^{n+1}(c)$ respectively, then $\tilde{g}_{ij}=g_{ij}/c$ and $\tilde{g}^{ij}=cg^{ij}$ for $1\leq{}i,j\leq{}n$. It's easy to verify that the Laplacians on $\Sigma^{n}$ and $\Sigma^{n}(c)$ satisfy the equation $\Delta_{\Sigma^{n}(c)}=c\Delta_{\Sigma^{n}}$. 
Let $x$ and $\tilde{x}$ be the position functions of $\Sigma^{n}$ and $\Sigma^{n}(c)$ in $\mathbb{R}^{n+2}$ respectively; then $\tilde{x}=x/\sqrt{c}$. If $\Sigma^{n}$ is a minimal hypersurface immersed in $\SS^{n+1}$, then by Theorem 3 in \cite{Tak66} (see also \cite[p.101]{dCW70} or \cite[Theorem 3.10.2]{Ji04}) we have $\Delta_{\Sigma^{n}}x=-nx$. On the other hand, we have the following identities \begin{equation*} \Delta_{\Sigma^{n}(c)}\tilde{x}=c\Delta_{\Sigma^{n}}\left(\frac{x}{\sqrt{c}}\right) =\sqrt{c}\,\Delta_{\Sigma^{n}}x=\sqrt{c}\,(-nx)=-nc\tilde{x}\ , \end{equation*} which implies that $\Sigma^{n}(c)$ is a minimal hypersurface immersed in $\SS^{n+1}(c)$ by applying \cite[Theorem 3]{Tak66} again. \end{proof} As we will see, any complete minimal hypersurface $\Sigma^n$ immersed in $\Mbar^{n+1}(c)$ satisfying the equation \eqref{eq:Simons equation} at all nonvanishing points of $|A|$ is a minimal rotation hypersurface unless $\Sigma^n$ is a Clifford minimal hypersurface in the case when $c>0$, so we shall first study the Clifford minimal hypersurfaces and minimal rotation hypersurfaces in the space forms. \iffalse In $\S$\ref{subsec:Clifford_torus} we define the Clifford minimal hypersurfaces in the space form $\SS^{n+1}(c)$ for $c>0$. In $\S$\ref{subsec:rotation_hypersurface} we follow Hsiang \cite{Hsi82,Hsi83} to derive the equations of the generating curves of the minimal rotation hypersurfaces in $\Mbar^{n+1}(c)$ for $c\leq{}0$. For the case when $c>0$, the situation is more complicated, hence in $\S$~\ref{subsec:Otsuki's curves} we will follow Otsuki \cite{Ots70,Ots72} to define the generating curves of the minimal rotation hypersurfaces immersed in $\SS^{n+1}$ (see also \cite{BL90} in \emph{equivariant language}). \fi \subsection{Clifford minimal hypersurfaces in $\SS^{n+1}(c)$} \label{subsec:Clifford_torus} In this subsection, we shall define the Clifford minimal hypersurfaces in the space forms $\SS^{n+1}(c)$ for $c>0$.
Let $S^q(r)$ be a $q$-dimensional sphere in $\mathbb{R}^{q+1}$ with radius $r$. In particular $\SS^{q}(c)=S^{q}(1/\sqrt{c})$ for $c>0$. For $c>0$ and $m=1,\ldots,n-1$, a \emph{Clifford minimal hypersurface} embedded in $\SS^{n+1}(c)$ is defined as follows \begin{equation}\label{eq:Clifford hypersurfaces} \mathscr{M}_{m,n-m}(c)=S^{m}\left(\sqrt{\frac{m}{cn}}\right)\times S^{n-m}\left(\sqrt{\frac{n-m}{cn}}\right)\ . \end{equation} In particular, $\mathscr{M}_{m,n-m}=\mathscr{M}_{m,n-m}(1)$ is a Clifford minimal hypersurface embedded in $\SS^{n+1}$ (see also \cite{Che68} and \cite[pp.229--230]{Ji04}). The following result is well known. \begin{theorem}[{\cite{CdCK70,Law69}}] \label{Thm:Clifford_minimal_hypersurface} The Clifford minimal hypersurfaces $\mathscr{M}_{m,n-m}$ are the only compact minimal hypersurfaces in $\SS^{n+1}$ with $|A|^2=n$. Furthermore the second fundamental form $A$ has two distinct constant eigenvalues with multiplicities $m$ and $n-m$, respectively. \end{theorem} \subsection{Catenoids in space forms} \label{subsec:rotation_hypersurface} In this subsection we shall follow Hsiang \cite{Hsi82,Hsi83} to derive the differential equations of the generating curves of catenoids in the space form $\Mbar^{n+1}(c)$, and solve the differential equations in the case when $c\leq{}0$. Let $G=\SO(n)$ be a subgroup of the orientation-preserving isometry group of $\Mbar^{n+1}(c)$ which pointwise fixes a given geodesic $M^1\subset{}\Mbar^{n+1}(c)$. We call $G$ the \emph{spherical group} of $\Mbar^{n+1}(c)$ and $M^1$ the \emph{rotation axis} of $G$. A hypersurface in $\Mbar^{n+1}(c)$ that is invariant under $G$ is called a \emph{rotation hypersurface}; if the rotation hypersurface in $\Mbar^{n+1}(c)$ is a complete minimal hypersurface, then it is called a \emph{spherical catenoid} or just \emph{catenoid}, denoted by $\mathcal{C}$. The following $2$-dimensional half space is well defined \begin{equation}\label{eq:half space} M_{+}^2(c)=\Mbar^{n+1}(c)/G \ .
\end{equation} There are three types of half spaces: \begin{enumerate} \item $\SS_{+}^{2}(c)=\{x_{1}^{2}+x_{n+1}^{2}+x_{n+2}^{2}=1/c\ |\ x_{1}\geq{}0\}=M_{+}^{2}(c)$ if $c>0$. \item $\mathbb{B}_{+}^{2}(c)=\{x_{1}^{2}+x_{n+1}^{2}<-1/c\ |\ x_{1}\geq{}0\}=M_{+}^{2}(c)$ if $c<0$. \item $\mathbb{R}_{+}^{2}=\{(x_{1},x_{n+1})\in\mathbb{R}^2\ |\ x_1\geq{}0\}=M_{+}^{2}(0)$ if $c=0$. \end{enumerate} It's easy to see that the rotation axis $M^1$ is the boundary of $M_{+}^2(c)$. The orbital distance metric on $\Mbar^{n+1}(c)/G$ is the same as the restriction metric of $M_{+}^2(c)$. Let $d(\cdot,\cdot)$ be the distance function defined on $M_{+}^2(c)$. We shall parametrize $M_{+}^2(c)$ by the following coordinate system: Choose a base point $O\in{}M^1$ and let $x$ be the arc length on $M^1$ travelling in the positive orientation of $M^1= \partial{}M_{+}^2(c)$. To each point $p\in{}M_{+}^2(c)$, there is a (unique) point $q\in{}M^1$ such that the length of the geodesic arc connecting $p$ and $q$, denoted by $\overline{pq}$, is equal to $d(p,M^1)$. We shall assign to the point $p$ the coordinates $(x,y)$, where $x=d(O,q)$ and $y=d(p,q)=$ the length of the geodesic arc $\overline{pq}$ (see Figure \ref{fig:warped product metric}). \begin{figure}[htbp] \begin{center} \begin{minipage}{.45\textwidth} \centering \includegraphics[scale=0.8]{half_2sphere.pdf} \end{minipage}% \begin{minipage}{.55\textwidth} \centering \includegraphics[scale=0.85]{half_hyperbolic_disk} \end{minipage} \end{center} \caption{The warped product metric on the half space $M_{+}^2(c)$ for $c=\pm{}1$.
In each half space, $x=d(O,q)$ and $y=d(p,q)=$ the length of the geodesic arc $\overline{pq}$.}\label{fig:warped product metric} \end{figure} According to the above definition of $x$ and $y$, we have \begin{equation*} \begin{cases} -\infty<x<\infty\ \text{and}\ 0\leq{}y<\infty\ , & \text{if}\ c\leq{}0\ , \\ -\dfrac{\pi}{\sqrt{c}}\leq{}x<\dfrac{\pi}{\sqrt{c}} \ \text{and}\ 0\leq{}y\leq\dfrac{\pi}{2\sqrt{c}}\ , & \text{if}\ c>0\ . \end{cases} \end{equation*} In the case $c>0$, the coordinate of the center is $\left(x,\dfrac{\pi}{2\sqrt{c}}\right)$, where $x$ is arbitrary. The \emph{warped product metric} on $M_{+}^2(c)$ is written in the form \begin{equation}\label{eq:warped product metric} ds^2=(f'(y))^{2}\cdot{}dx^2+dy^2\ , \end{equation} where $f'=df/dy$ and \begin{equation} f(y)= \begin{cases} \dfrac{1}{\sqrt{-c}}\,\sinh(\sqrt{-c}\,y)\ , & \text{if}\ c<0\ (\text{hyperbolic case})\ ,\\ y\ , & \text{if}\ c=0\ (\text{Euclidean case})\ ,\\ \dfrac{1}{\sqrt{c}}\,\sin(\sqrt{c}\,y)\ , & \text{if}\ c>0\ (\text{spherical case})\ . \end{cases} \end{equation} Let $\Sigma^n$ be a rotation hypersurface in $\Mbar^{n+1}(c)$ with respect to the geodesic $M^1$, then the curve $\gamma=\Sigma^n\cap{}M_{+}^2(c)$ is called the \emph{generating curve} of $\Sigma^n$. Suppose that $\gamma$ is given by the parametric equations: $x = x(s)$ and $y = y(s)$, $a\leq{}s\leq{}b$, where $s$ is the arc length of $\gamma$ and $y(s)>0$. Let $\alpha$ be the angle between the unit tangent vector of $\gamma(s)$ and $\partial{}/\partial{}y$ (see Figure \ref{fig:diff eq of gamma}). 
\begin{figure}[htbp] \begin{center} \includegraphics[scale=0.8]{catenary_curve.pdf} \end{center} \caption{In the hyperbolic half space $\mathbb{B}_{+}^2$, $\alpha$ is the angle between the parametrized curve $\gamma$ and the geodesic $\sigma$ at the point $(x,y)$, where $\sigma$ is perpendicular to the $x_{n+1}$-axis.}\label{fig:diff eq of gamma} \end{figure} Now suppose that the mean curvature of the rotation hypersurface $\Sigma^n\subset\Mbar^{n+1}(c)$ generated by the curve $\gamma\subset{}M_{+}^2(c)$ is zero; then we get the following differential equation for $\gamma$ (see \cite[pp. 487--488]{Hsi82} for the details) \begin{equation}\label{eq:diff equation of gamma} \frac{f^{n-1}\cdot{}(f')^2}{\sqrt{(f')^2+(y')^2}}= f^{n-1}\cdot{}f'\cdot{}\sin\alpha=k\ (\text{constant})\ , \end{equation} where $y'=dy/dx$. Without loss of generality we may consider the differential equation \eqref{eq:diff equation of gamma} with initial data $y(0)=a>0$ and $y'(0)=0$. Plugging the initial conditions into \eqref{eq:diff equation of gamma}, we get $k = f^{n-1}(a)f'(a)$, which implies the following equation \begin{equation}\label{eq:bounded condition} \sin\alpha=\frac{f^{n-1}(a)f'(a)}{f^{n-1}(y)f'(y)}\ . \end{equation} Now assume $c\leq{}0$; then we can write $dx/dy$ as follows \begin{equation*} \ddl{x}{y}=\frac{1}{f'(y)}\cdot \frac{1}{\sqrt{\left(\dfrac{f(y)}{f(a)}\right)^{2n-2} \cdot\left(\dfrac{f'(y)}{f'(a)}\right)^2-1}}\ . \end{equation*} Integrating both sides with respect to $y$, we obtain the function \begin{equation}\label{eq:generating_curve_function} x(y)=\int_{a}^{y}\frac{1}{f'(t)}\cdot \frac{dt}{\sqrt{\left(\dfrac{f(t)}{f(a)}\right)^{2n-2} \cdot\left(\dfrac{f'(t)}{f'(a)}\right)^2-1}}\ , \end{equation} where $a\leq{}y<\infty$. \subsection{Catenoids in $\SS^{n+1}(c)$}\label{subsec:Otsuki's curves} In this subsection we follow Otsuki \cite{Ots70} to study the generating curves of the compact immersed minimal rotation hypersurfaces in $\SS^{n+1}$.
See also \cite{BL90} in \emph{equivariant language}. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.8]{otsuki_4_half_1T.pdf} \end{center} \caption{The generating curve for the spherical catenoid in $\SS^{n+1}$; its support function is $h(\theta)=d(O,Q)$ and $h'(\theta)=d(P,Q)$. The coordinates $(x_{n+1},x_{n+2})$ of the point $P$ (inside the unit disk) are given by $x_{n+1}=h\sin\theta+h'\cos\theta$ and $x_{n+2}=-h\cos\theta+h'\sin\theta$ (see equation (4.2) in \cite{Ots70}).} \label{Fig: support function} \end{figure} Suppose that $\gamma\subset\SS_{+}^2$ is the generating curve of a minimal rotation hypersurface in $\SS^{n+1}$. Project the curve $\gamma$ orthogonally onto the $x_{n+1}x_{n+2}$-plane, and denote the projection by $\sigma$. Let $h(\theta)$ be the support function of the curve $\sigma$. Otsuki \cite{Ots70} proved that $h$ satisfies the differential equation \begin{equation}\label{eq:ODE of the support function} nh(1-h^2)\ddz{h}{\theta}+\left(\ddl{h}{\theta}\right)^2+ (1-h^2)(nh^2-1)=0 \end{equation} with the initial conditions $h(0)=a\leq{1}/\sqrt{n}$ and $h'(0)=0$ (see Figure~\ref{Fig: support function}). If $a=1/\sqrt{n}$, then the minimal rotation hypersurface in $\SS^{n+1}$ generated by the curve $\sigma$ satisfying \eqref{eq:ODE of the support function} with this initial data is $\mathscr{M}_{n-1,1}$, one of the Clifford minimal hypersurfaces (see \cite[p.~160]{Ots70}). From now on we may assume that $0<a<1/\sqrt{n}$, and set \begin{equation}\label{eq:constant in ODE} C(a)=(a^2)^{1/n}(1-a^2)^{1-1/n}=a^{2/n}(1-a^2)^{1-1/n} \end{equation} for $0<a<1/\sqrt{n}$.
Otsuki \cite{Ots70,Ots72} (see also \cite{LW07}) proved that the support function $h(\theta)$, which is the solution to the differential equation \eqref{eq:ODE of the support function} with the initial conditions $h(0)=a\leq{1}/\sqrt{n}$ and $h'(0)=0$, is a periodic function whose period is \begin{equation}\label{eq:period of support function} T(a)=2\int_{a_0}^{a_1}\frac{dx} {\sqrt{1-x^2-C(a)\left(\dfrac{1}{x^2}-1\right)^{1/n}}}\ , \end{equation} where $a_0=a\in(0,1/\sqrt{n})$, and $a_1\in(1/\sqrt{n},1)$ is a solution to the equation \begin{equation*} 1-x^2-C(a)\left(\frac{1}{x^2}-1\right)^{1/n}=0\ . \end{equation*} It was proved in \cite{Ots70,Ots72,LW07} that the period $T$ satisfies the following conditions: \begin{enumerate} \item $T(a)\in(\pi,2\pi)$ is differentiable on $(0,1/\sqrt{n})$, \item $\displaystyle\lim_{a\to{}0^{+}}T(a)=\pi$ and $\displaystyle\lim_{a\to\left(1/\sqrt{n}\right)^{-}}T(a)=\sqrt{2}\,\pi$. \end{enumerate} Moreover the generating curve $\sigma$ is a simple closed curve if and only if $T(a)=2\pi/k$ for $k=1,2,\ldots$, and $\sigma$ is a closed curve (not necessarily simple) if and only if $T(a)$ is a (positive) rational multiple of $\pi$. In conclusion we have the following results: \begin{theorem}[\cite{Ots70,Ots72,BL90,LW07}]\label{thm:otsuki} Let $n\geq{}3$ be an integer. \begin{enumerate} \item There is no closed minimally embedded rotation hypersurface of $\SS^{n+1}$ other than the Clifford minimal hypersurface $\mathscr{M}_{n-1,1}$ and the round geodesic sphere $\SS^{n}$. \item There are countably infinitely many closed minimal rotation hypersurfaces immersed in $\SS^{n+1}$ {\rm(}see also \cite{Hsi83}{\rm)}. \end{enumerate} \end{theorem} \section{Simons' equation and catenoids in space forms}\label{sec:Simons_equation} Suppose that $\Sigma^n$ is a hypersurface immersed in the $(n+1)$-dimensional space form $\Mbar^{n+1}(c)$. 
Let $A$ be the second fundamental form of $\Sigma^n$ and $\nabla{}A$ be the covariant derivative of $A$, and let $h_{ij}$ and $h_{ijk}$ be the components of $A$ and $\nabla{}A$ in an orthonormal frame respectively. The following lemma was proved by Tam and Zhou \cite[Lemma 3.1]{TZ09} for the case when $c=0$, but it's also true for the case when $c\ne{}0$. \begin{lemma}Let $\Sigma^n$ be a minimal hypersurface immersed in the space form $\Mbar^{n+1}(c)$. At a point where $|A|>0$, we have \begin{equation}\label{eq:Simons identity} |A|{}\Delta|A|+|A|^4=\frac{2}{n}\,|\nabla|A|{}|^2 +nc|A|^2+E\ , \end{equation} with $E\geq{}0$. Moreover, in an orthonormal frame such that $h_{ij} = \lambda_{i}\delta_{ij}$, then $E = E_1 + E_2 + E_3$, where \begin{equation}\label{eq:error term of Simons equation} \begin{aligned} E_1 & = \sum_{j\ne{}i,k\ne{}i,k\ne{}j}h_{ijk}^2\ ,\\ E_2 & = \frac{2}{n} \sum_{j\ne{}i,k\ne{}i,k\ne{}j} (h_{kki}-h_{jji})^2\ ,\\ E_3 & = \left(1+\frac{2}{n}\right)|A|^{-2}\sum_{k} \sum_{i\ne{}j}(h_{ii}h_{jjk}-h_{jj}h_{iik})^2\ . \end{aligned} \end{equation} \end{lemma} \begin{proof}For any point $p\in{}\Sigma^n$, we choose an orthonormal frame field $e_1,\ldots$, $e_{n+1}$ such that, restricted to $\Sigma^n$, the vectors $e_1,\ldots,e_n$ are tangent to $\Sigma^n$ and the vector $e_{n+1}$ is perpendicular to $\Sigma^n$, and the second fundamental form of $\Sigma^n$ is diagonalized by $h_{ij}=\lambda_{i}\delta_{ij}$, where $1\leq{}i,j\leq{}n$. Recall that the curvature tensor $\widebar{R}_{ABCD}$ of $\Mbar^{n+1}(c)$ is given by \begin{equation}\label{eq:space_form_curvature} \widebar{R}_{ABCD}=c(\delta_{AC}\delta_{BD}-\delta_{AD}\delta_{BC})\ , \quad{}1\leq{}A,B,C,D\leq{}n+1\ , \end{equation} where $\delta_{AB}$ is the Kronecker delta. 
According to \cite[(3.1)]{CdCK70} and \cite[(1.21) and (1.27)]{SSY75}, we have \begin{equation} \sum_{i,j}h_{ij}\Delta{}h_{ij}=-|A|^{4}+nc{}|A|^2\ , \end{equation} and \begin{equation} |A|\Delta{}|A|+|\nabla{}|A||^2=\frac{1}{2}\Delta|A|^2= \sum_{i,j,k}h_{ijk}^2+\sum_{i,j}h_{ij}\Delta{}h_{ij}\ , \end{equation} where $h_{ijk}$ denotes the component of the covariant derivative of the second fundamental form $A$ of $\Sigma^n$ for $1\leq{}i,j,k\leq{}n$. Therefore we have \begin{equation}\label{eq:simons01} |A|\Delta{}|A|+|\nabla{}|A||^2=|\nabla{}A|^2-|A|^{4}+nc{}|A|^2\ , \end{equation} where $|\nabla{}A|^2=\sum\limits_{i,j,k}h_{ijk}^2$. We claim that \begin{equation}\label{eq:simons02} |\nabla{}A|^2=\left(1+\frac{2}{n}\right)|\nabla{}|A||^2+E\ . \end{equation} In fact, according to the computation in \cite[{pp. 3456--3457}]{TZ09}, we have \begin{align*} |\nabla{}A|^2-|\nabla{}|A||^2 & = E_1+2\sum_{i\ne{}k}h_{iik}^2+\frac{n}{n+2}\,E_3\\ & = E_1+\frac{2}{n}\left(|\nabla{}|A||^2+\frac{n}{n+2}\,E_3+ \frac{n}{2}\,E_2\right)+\frac{n}{n+2}\,E_3\\ & = E_1+\frac{2}{n}|\nabla{}|A||^2+E_2+E_3\ , \end{align*} where we use the fact $h_{ijk}=h_{ikj}$ since the sectional curvature of $\Mbar^{n+1}(c)$ is constant (see \cite[(2.12)]{CdCK70} or \cite[(1.10)]{SSY75}). Combining \eqref{eq:simons01} and \eqref{eq:simons02} together, we have \eqref{eq:Simons identity}. \end{proof} Obviously, the term $E$ in \eqref{eq:Simons identity} is always nonnegative, which implies the famous Simons' inequality \begin{equation}\label{eq:Simons_inequality} |A|{}\Delta|A|+|A|^4\geq\frac{2}{n}\,|\nabla|A|{}|^{2}+nc|A|^2\ . \end{equation} If $E\equiv{}0$ in \eqref{eq:Simons identity}, we get the \emph{Simons' equation} \begin{equation}\label{eq:Simons equation} |A|{}\Delta|A|+|A|^4=\frac{2}{n}\,|\nabla|A|{}|^2+nc|A|^2\ . \end{equation} If $n=2$, then $E\equiv{}0$ in \eqref{eq:Simons identity}, so we have the following corollary. 
\begin{corollary}If $\Sigma^2$ is a minimal surface immersed in the $3$-dimensional space form $\Mbar^{3}(c)$, then $\Sigma^2$ satisfies the following Simons' equation \begin{equation}\label{eq:Simons equation (n=2)} |A|{}\Delta|A|+|A|^4=|\nabla|A|{}|^2+2c|A|^2\ . \end{equation} \end{corollary} We may ask what kinds of non totally geodesic minimal hypersurfaces in the space form $\Mbar^{n+1}(c)$ satisfy the Simons' equation \eqref{eq:Simons equation}? Proposition \ref{prop:Clifford_Simons_equation} and Proposition \ref{prop:catenoid_Simons identity} show that the Clifford minimal hypersurfaces \eqref{eq:Clifford hypersurfaces} and the minimal rotation hypersurfaces (i.e., the catenoids) satisfy \eqref{eq:Simons equation}. On the other hand, Theorem \ref{thm:catenoid charaterization} shows that the Clifford minimal hypersurfaces and catenoids are the only non totally geodesic complete minimal hypersurfaces satisfying \eqref{eq:Simons equation}. \begin{proposition}\label{prop:Clifford_Simons_equation} When $c>0$, the second fundamental form $|A|$ of each Clifford minimal hypersurface \eqref{eq:Clifford hypersurfaces} in $\SS^{n+1}(c)$ satisfies \eqref{eq:Simons equation}. \end{proposition} \begin{proof} We can prove the statement by direct computation (see \cite[pp.68-70]{CdCK70} or \cite[pp.229--230]{Ji04} for the details). Because of Lemma \ref{lem:conformal_minimal}, we will just prove the statement for the case when $c=1$. For $m=1,\ldots,n-1$, we may embed the Clifford minimal hypersurface $\mathscr{M}_{m,n-m}$ into $\SS^{n+1}$ as follows. Let $(u, v)$ be a point of $\mathscr{M}_{m,n-m}$ where $u$ is a vector in $\mathbb{R}^{m+1}$ of length $\sqrt{m/n}$, and $v$ is a vector in $\mathbb{R}^{n-m+1}$ of length $\sqrt{(n-m)/n}$. We can consider $(u, v)$ as a vector in $\mathbb{R}^{n+2} =\mathbb{R}^{m+1}\times\mathbb{R}^{n-m+1}$ of length $1$. 
Then we may choose an orthonormal basis on $\mathscr{M}_{m,n-m}$ such that the second fundamental form of $\mathscr{M}_{m,n-m}$ can be written as follows \begin{equation*} h_{ij}=\diag\bigg( \underbrace{\sqrt{\frac{n-m}{m}},\ldots,\sqrt{\frac{n-m}{m}}}_{m}, \underbrace{-\sqrt{\frac{m}{n-m}},\ldots,-\sqrt{\frac{m}{n-m}}}_{n-m} \bigg)\ . \end{equation*} Since each component of $h_{ij}$ is a constant, we have $E=0$ in \eqref{eq:Simons identity}. On the other hand, it's easy to get \begin{equation*} |A|^{2}=m\cdot\frac{n-m}{m}+(n-m)\cdot\frac{m}{n-m}=n\ , \end{equation*} so the Clifford minimal hypersurfaces satisfy \eqref{eq:Simons equation}. \end{proof} Next we shall verify that any catenoid in the space form $\Mbar^{n+1}(c)$ satisfies the Simons' equation \eqref{eq:Simons equation}. In the case when $c=0$, Proposition \ref{prop:catenoid_Simons identity} was proved by Tam and Zhou \cite[Proposition 2.1 (iv)]{TZ09}. \begin{proposition}\label{prop:catenoid_Simons identity} The second fundamental form $|A|$ of each catenoid $\mathcal{C}$ in the space form $\Mbar^{n+1}(c)$ satisfies the Simons' equation \eqref{eq:Simons equation}. \end{proposition} \begin{proof}We shall prove the statement in a unified way by using the argument in \cite[$\S$ 2 and $\S$3]{dCD83}. Consider the space form $\Mbar^{n+1}(c)$ as a subset of $\mathbb{R}^{n+2}$ as follows: (i) If $c>0$, let \begin{equation*} \SS^{n+1}(c)=\{x\in\mathbb{R}^{n+2}\ |\ g_{1}(x,x)=1/c\}=\Mbar^{n+1}(c)\ , \end{equation*} where $g_{1}(x,y)=x_{1}y_{1}+\cdots+x_{n+1}y_{n+1}+x_{n+2}y_{n+2}$ for $x,y\in\mathbb{R}^{n+2}$. (ii) If $c<0$, let \begin{equation*} \H^{n+1}(c)=\{x\in\mathbb{R}^{n+2}\ |\ g_{-1}(x,x)=1/c, x_{n+2}>0\} =\Mbar^{n+1}(c)\ , \end{equation*} where $g_{-1}(x,y)=x_{1}y_{1}+\cdots+x_{n+1}y_{n+1}-x_{n+2}y_{n+2}$ for $x,y\in\mathbb{R}^{n+2}$. (iii) If $c=0$, let \begin{equation*} \mathbb{R}^{n+1}=\{x\in\mathbb{R}^{n+2}\ |\ x_{n+2}=0\}=\Mbar^{n+1}(0)\ .
\end{equation*} Let $e_{i}=(0,\cdots,0,\underset{i^{\rm{th}}}{1},0,\cdots,0)$ be the $i$-th vector in the space $\mathbb{R}^{n+2}$ for $i=1,\ldots,n+2$. Let $P^{2}$ be a subspace of $\mathbb{R}^{n+2}$ spanned by either $e_{n+1}$ and $e_{n+2}$ if $c\ne{}0$ or $e_{n+1}$ if $c=0$, and let $\O(P^2)$ be the set of metric-preserving transformations of $(\mathbb{R}^{n+2},g_{1})$ if $c>0$, $(\mathbb{R}^{n+2},g_{-1})$ if $c<0$ or $\mathbb{R}^{n+1}$ if $c=0$, which leave $P^2$ pointwise fixed. Let $P^{3}$ be a subspace of $\mathbb{R}^{n+2}$ spanned by either $e_{1}$, $e_{n+1}$ and $e_{n+2}$ if $c\ne{}0$ or $e_{1}$ and $e_{n+1}$ if $c=0$. Let $M^2(c)=\Mbar^{n+1}(c)\cap{}P^{3}$, and let $\gamma$ be a smooth curve in $M^2(c)$ that does not meet $P^2$. The orbit of $\gamma$ under the action of $\O(P^2)$ is a rotation hypersurface generated by $\gamma$, and the curve $\gamma$ is the generating curve of the rotation hypersurface. Suppose that the generating curve $\gamma$ is parametrized by either $x_{1}=x_{1}(s)$, $x_{n+1}=x_{n+1}(s)$ and $x_{n+2}=x_{n+2}(s)$ if $c\ne{}0$ or $x_{1}=x_{1}(s)$ and $x_{n+1}=x_{n+1}(s)$ if $c=0$, where $s$ is the arc length parameter of the curve $\gamma$. Let $\mathbf{I}$ be either a straight line if $c\leq{}0$ or a closed curve immersed in a plane if $c>0$. 
Let $f:\SS^{n-1}\times\mathbf{I} \to \Mbar^{n+1}(c)\subset\mathbb{R}^{n+2}$ be the minimal spherical rotation hypersurface generated by $\gamma$, which is parametrized as follows \begin{equation}\label{eq:rotation_hypersurface_1} f(t_1,\ldots,t_{n-1},s)= (x_{1}(s)\varphi_{1},\ldots,x_{1}(s)\varphi_{n},x_{n+1}(s),x_{n+2}(s)) \end{equation} if $c\ne{}0$, or \begin{equation}\label{eq:rotation_hypersurface_2} f(t_1,\ldots,t_{n-1},s)= (x_{1}(s)\varphi_{1},\ldots,x_{1}(s)\varphi_{n},x_{n+1}(s)) \end{equation} if $c=0$, where $\varphi(t_1,\ldots,t_{n-1})=(\varphi_{1},\ldots,\varphi_{n})$ is the orthogonal parametrization of the unit $(n-1)$-sphere of the subspace of $\mathbb{R}^{n+2}$ spanned by $e_1,\ldots,e_{n}$. Let $\mathcal{C}=f(\SS^{n-1}\times\mathbf{I})$ be the minimal rotation hypersurface immersed in $\Mbar^{n+1}(c)\subset\mathbb{R}^{n+2}$. According to the computation in \cite[$\S$3]{dCD83}, we have the first fundamental form of $\mathcal{C}$: \begin{equation}\label{eq:first_fundamental_form} g_{ij}=\begin{cases} \alpha_{ij}x_{1}^{2}(s), & 1\leq{}i,j\leq{}n-1\ ,\\ 0, & i=n,j\ne{}n\ \text{or}\ i\ne{}n, j=n\ ,\\ 1, & i=j=n\ , \end{cases} \end{equation} where $\alpha_{ij}=\sum\limits_{k=1}^{n}\dfrac{\partial\varphi_{k}}{\partial{}t_{i}} \dfrac{\partial\varphi_{k}}{\partial{}t_{j}}$ for $1\leq{}i,j\leq{}n-1$. According to Proposition 3.2 in \cite{dCD83}, the principal curvatures of $\mathcal{C}$ are \begin{equation}\label{eq:principal_curvatures} \lambda_{1}=\cdots=\lambda_{n-1}= -\frac{\sqrt{1-cx_{1}^{2}-\dot{x}_{1}^{2}}}{x_{1}} \quad\text{and}\quad \mu=\frac{\ddot{x}_{1}+cx_{1}}{\sqrt{1-cx_{1}^{2}-\dot{x}_{1}^{2}}}\ , \end{equation} where $\dot{x}_{1}$ and $\ddot{x}_{1}$ are the first and second derivatives of $x_{1}$ with respect to $s$, respectively.
Since $\mathcal{C}$ is a minimal rotation hypersurface, i.e., a catenoid, in $\Mbar^{n+1}(c)$, by \cite[(3.13) and (3.16)]{dCD83} we have \begin{equation}\label{eq:dot_x} \dot{x}_{1}^{2} = 1-cx_{1}^{2}-a^2{}x_{1}^{2-2n}\ , \end{equation} and \begin{equation}\label{eq:ddot_x} \ddot{x}_{1} = -cx_{1}+a^{2}(n-1)x_{1}^{1-2n}\ , \end{equation} where $a>0$ is a constant. Therefore we have the following identities \begin{equation}\label{eq:square_norm_of_A} |A|^2 = (n-1)\cdot\frac{1-cx_{1}^{2}-\dot{x}_{1}^{2}}{x_{1}^{2}}+ \frac{(\ddot{x}_{1}+cx_{1})^2}{1-cx_{1}^{2}-\dot{x}_{1}^{2}} =a^{2}n(n-1)x_{1}^{-2n}\ , \end{equation} where we use the equations \eqref{eq:dot_x} and \eqref{eq:ddot_x} for the last equality. If $\phi=\phi(s)$ is a function on $\mathcal{C}\subset\Mbar^{n+1}(c)$ depending only on the variable $s$, then the Laplacian and the square norm of the covariant derivative of $\phi$ with respect to the metric \eqref{eq:first_fundamental_form} on $\mathcal{C}$ respectively are \begin{equation}\label{eq:Laplacian} \Delta{}\phi=\ddot{\phi}+(n-1)\,\frac{\dot{x}_{1}}{x_{1}}\,\dot{\phi} \end{equation} and \begin{equation}\label{eq:covariant_derivative} |\nabla{}\phi|^2=\dot{\phi}^{2}\ . \end{equation} Since $|A|=a\sqrt{n(n-1)}\,x_{1}^{-n}$ is a function on $\mathcal{C}$ that depends only on $s$, we have the following equalities \begin{align*} |A|{}\Delta|A|+|A|^4 & = a^{2}n^{2}(n-1)x_{1}^{-4n}(-2a^{2}+2x_{1}^{2n-2}-cx_{1}^{2n})\\ & = \frac{2}{n}\,|\nabla|A|{}|^2+nc|A|^2 \ , \end{align*} where we use the equations \eqref{eq:dot_x} and \eqref{eq:ddot_x} again. \end{proof} \section{Proof of Theorem \ref{thm:catenoid charaterization}} \label{sec:Proof_Main_Theorem} \begin{maintheorem} Let $\Mbar^{n+1}(c)$ be the space form of dimension $n+1$, where $n\geq{}3$. Suppose that $\Sigma^n$ is a non totally geodesic complete minimal hypersurface immersed in $\Mbar^{n+1}(c)$.
If the Simons' equation \eqref{eq:Simons equation} holds at all nonvanishing points of $|A|$ in $\Sigma^n$, then $\Sigma^n\subset\Mbar^{n+1}(c)$ is either \begin{enumerate} \item a catenoid if $c\leq{}0$, or \item a Clifford minimal hypersurface or a compact Otsuki minimal hypersurface if $c>0$. \end{enumerate} \end{maintheorem} \begin{proof} By assumption $\Sigma^{n}$ is not totally geodesic, so $|A|$ is a nonnegative continuous function which does not vanish identically. Let $p$ be a point such that $|A|(p) > 0$; then there exists an open neighborhood $U$ of $p$ such that $|A| > 0$ in $U$. We claim that $|\nabla|A||\not\equiv{}0$ in $U$ unless $\Sigma^{n}$ is a Clifford minimal hypersurface in the case when $c>0$. We need to deal with two cases: \underline{\emph{Case one}}: $c\leq{}0$. Assume $|\nabla|A||\equiv{}0$ in $U$. Since $\Sigma^{n}$ satisfies the Simons' equation \eqref{eq:Simons equation}, and $|A|$ is a positive constant in $U$, we have $0<|A|^2=nc\leq{}0$ in the open set $U$, which is a contradiction. \underline{\emph{Case two}}: $c>0$. In this case, if $|\nabla|A||\equiv{}0$ in $U$, then $|A|$ is a nonzero constant in $U$, so $|A|^2=nc$ in $U$ according to \eqref{eq:Simons equation} and the assumption $|A|\ne{}0$ in $U$; hence $\Sigma^{n}$ is a Clifford minimal hypersurface according to \cite{CdCK70,Law69}. In this case, we have $|\nabla|A||\not\equiv{}0$ in $U$ if $\Sigma^{n}$ is not a Clifford minimal hypersurface. Therefore in each case, there is a point in $U$ such that $|\nabla|A||\ne{}0$ if $\Sigma^{n}$ is not a Clifford minimal hypersurface in the case when $c>0$. By shrinking $U$, we may assume that $|A| > 0$ and $|\nabla|A|| > 0$ in $U$ if $\Sigma^{n}$ is not a Clifford minimal hypersurface in the case when $c>0$. By \eqref{eq:Simons identity} and the fact that $\Sigma^{n}$ satisfies \eqref{eq:Simons equation} in $U$, we conclude that $E \equiv{} 0$ in $U$. According to the argument in \cite[pp.
3457--3458]{TZ09}, the eigenvalues of $A=(h_{ij})_{n\times{}n}$ are $\lambda$ with multiplicity $n-1$ and $\mu=-(n-1)\lambda$ with $\lambda>0$ since $|A|>0$. According to \cite[Theorem 5]{Ots70} and \cite[Corollary 4.4]{dCD83}, $U$ is part of a catenoid $\mathcal{C}$ in $\Mbar^{n+1}(c)$. According to the maximum principle of minimal submanifolds, $\Sigma^n$ is part of a catenoid $\mathcal{C}$ in $\Mbar^{n+1}(c)$. We claim that $\Sigma^{n}$ must coincide with the catenoid $\mathcal{C}$. Let $f:\Sigma^{n}\to\mathcal{C}$ be the inclusion. There are two cases: \underline{\emph{Case one}}: $c\leq{}0$. In this case, each catenoid is a simply connected (since $n\geq{}3$) complete minimal rotation hypersurface embedded in $\Mbar^{n+1}(c)$. Since $f$ is a local isometry, the inclusion $f$ is a covering map by Lemma 8.14 in \cite[p.224]{Spi99}. Therefore $f$ must be the identity map, i.e., $\Sigma^{n}=\mathcal{C}$. \underline{\emph{Case two}}: $c>0$. In this case, since $\Sigma^n$ is a closed minimal hypersurface immersed in $\SS^{n+1}(c)$, we then have $\Sigma^{n}=\mathcal{C}$ according to Theorem 4 and Theorem 5 in \cite{Ots70}. \end{proof} \section{Appendix}\label{sec:appendix} In the appendix, with the help of Mathematica and Ti\emph{k}Z/PGF, we shall draw some figures of the generating curves of three-dimensional catenoids in the space form $\Mbar^{4}(c)$. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.7]{otsuki_torus_n3_14Pi.pdf} \end{center} \caption{Generating curve of a compact Otsuki minimal hypersurface in the unit sphere $\SS^4$. In this figure, the support function $h$ has initial conditions $h(0)=a_{0}=0.42231$ and $h'(0)=0$, and the period of $h$ is $T=1.4\pi$.}\label{fig:closed generating curve n3} \end{figure} \subsection{Catenoids in $\SS^{n+1}$} In this case we draw the generating curve for a compact Otsuki minimal hypersurface in $\SS^{4}$.
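The parameter values quoted in Figure~\ref{fig:closed generating curve n3} can be checked numerically. The following Python sketch (our own illustration; the function name \texttt{otsuki\_period} and the quadrature scheme are not part of the text) evaluates the period integral \eqref{eq:period of support function} for $n=3$: the upper turning point $a_1$ is located by bisection, and the substitution $x=m+r\sin u$ removes the inverse-square-root singularities at both endpoints, after which a plain midpoint rule converges quickly.

```python
import math

def otsuki_period(a0, n=3, N=20000):
    """Evaluate T(a0) = 2 * int_{a0}^{a1} dx / sqrt(g(x)), where
    g(x) = 1 - x^2 - C(a0) * (1/x^2 - 1)^(1/n) and a1 is the upper
    turning point, i.e. the zero of g in (1/sqrt(n), 1)."""
    C = a0 ** (2.0 / n) * (1.0 - a0 * a0) ** (1.0 - 1.0 / n)

    def g(x):
        return 1.0 - x * x - C * (1.0 / (x * x) - 1.0) ** (1.0 / n)

    # bisection for the upper turning point a1 (g > 0 below it, g < 0 above)
    lo, hi = 1.0 / math.sqrt(n), 1.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    a1 = 0.5 * (lo + hi)

    # x = m + r*sin(u) makes the integrand a smooth function of u on
    # (-pi/2, pi/2); integrate it with the composite midpoint rule
    m, r = 0.5 * (a0 + a1), 0.5 * (a1 - a0)
    du = math.pi / N
    total = 0.0
    for k in range(N):
        u = -0.5 * math.pi + (k + 0.5) * du
        total += r * math.cos(u) / math.sqrt(g(m + r * math.sin(u))) * du
    return a1, 2.0 * total

a1, T = otsuki_period(0.42231)
```

For $a_0=0.42231$ this reproduces $a_1\approx 0.71957$ and $T\approx 4.39823=1.4\pi$, in agreement with the values used for the figure.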
The function $C(a)$ is given by \begin{equation}\label{eq:constant C for n=3} C(a)=a^{2/3}(1-a^2)^{2/3}\ , \end{equation} where $0<a<1/\sqrt{3}\approx{}0.57735$. Then the following equation \begin{equation}\label{eq:upper_lower_limits} 1-x^2-C(a)\left(\frac{1}{x^2}-1\right)^{1/3}=0 \end{equation} has two solutions $a_0=a$ and $a_1=(-a + \sqrt{4 - 3 a^2})/2\in(1/\sqrt{3},1)$. Let $a_0=0.42231$; then $a_1=0.71957$, and the period of $h(\theta)$ is \begin{equation*} T=2\int_{a_0}^{a_1}\frac{dx} {\sqrt{1-x^2-C(a_0)\left(\dfrac{1}{x^2}-1\right)^{1/3}}}=4.39823\ , \end{equation*} i.e., $T=1.4\pi$. Therefore we have a closed immersed generating curve as shown in Figure~\ref{fig:closed generating curve n3}; the corresponding rotation hypersurface in $\SS^4$ is a compact immersed minimal hypersurface. \subsection{Catenoids in $\mathbb{B}^{n+1}$}\label{subsubsec:Hyperbolic_catenoids} In this case, $c=-1$ and $f(y)=\sinh{}y$, so $f'(y)=\cosh{}y$. Equation \eqref{eq:generating_curve_function} becomes \begin{equation}\label{eq:hyperbolic_catenary} x(y)=\int_{a}^{y}\frac{1}{\cosh{}t}\cdot \frac{dt}{\sqrt{\left(\dfrac{\sinh{}t}{\sinh{}a}\right)^{2n-2} \cdot\left(\dfrac{\cosh{}t}{\cosh{}a}\right)^2-1}}\ , \end{equation} where $a\leq{}y<\infty$. Now let $n=3$, and let $a=0.2$ and $a=1$ respectively in \eqref{eq:hyperbolic_catenary}; then we have two generating curves as shown in Figure \ref{fig:generating curves in hyperbolic space}. \begin{figure}[htbp] \begin{center} \begin{minipage}{.5\textwidth} \centering \includegraphics[scale=0.85]{cat_n3_o2.pdf} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[scale=0.85]{cat_n3_1.pdf} \end{minipage}% \end{center} \caption{Two generating curves for the catenoids in the hyperbolic space $\mathbb{B}^4$. In these figures, $a=0.2$ and $a=1$ respectively.
The rotation axis is the $x_{n+1}$-axis.}\label{fig:generating curves in hyperbolic space} \end{figure} \subsection{Catenoids in $\mathbb{R}^{n+1}$}\label{subsubsec:Euclidean_catenoids} In this case, $c=0$ and $f(y)=y$, so $f'(y)=1$. Equation \eqref{eq:generating_curve_function} becomes \begin{equation}\label{eq:Euclidean_generating_curve} x(y)=\int_{a}^{y} \frac{dt}{\sqrt{(t/a)^{2n-2}-1}}\ , \end{equation} where $a\leq{}y<\infty$. It's easy to see that the integral \begin{equation}\label{eq:Euclidean_catenary} x(a,\infty) =\int_{a}^{\infty} \frac{dt}{\sqrt{(t/a)^{2n-2}-1}} =a\int_{1}^{\infty}\frac{dt}{\sqrt{t^{2n-2}-1}} \end{equation} is always finite if $n\geq{}3$, where $a>0$ is a constant. Actually if $n\geq{}3$, then \begin{equation*} \int_{1}^{\infty}\frac{dt}{\sqrt{t^{2n-2}-1}}\leq \int_{1}^{\infty}\frac{dt}{\sqrt{t^{4}-1}}< \int_{1}^{\infty}\frac{dt}{t\sqrt{t^{2}-1}}= \int_{0}^{\infty}\frac{dx}{\cosh{}x}=\frac{\pi}{2}\ , \end{equation*} where we use the substitution $t=\cosh{}x$. Now let $n=3$, and let $a=0.5$ and $a=1$ respectively in \eqref{eq:Euclidean_generating_curve}, then we have two generating curves as shown in Figure \ref{fig:generating curves in Euclidean space}. \begin{figure}[htbp] \begin{center} \begin{minipage}{.5\textwidth} \centering \includegraphics[scale=0.75]{catE_n3_o5.pdf} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[scale=0.75]{catE_n3_1.pdf} \end{minipage}% \end{center} \caption{Two generating curves for the catenoids in the Euclidean space $\mathbb{R}^4$. In these figures, $a=0.5$ and $a=1$ respectively. The rotation axis is the $x_{n+1}$-axis.}\label{fig:generating curves in Euclidean space} \end{figure} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{% \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\section{ Introduction } We present a direct calculation of the $\Delta F=2$ heavy-light mixing matrix element, \begin{eqnarray} \label{matrix hl} M_{hl}(\mu) \equiv \langle \bar P_{hl}|\bar h \gamma_\rho (1-\gamma_5)l \bar h \gamma_\rho(1-\gamma_5)l|P_{hl}\rangle,\!\! \end{eqnarray} where we have suppressed the scale dependence of the local four quark operator. As is well known, these matrix elements govern $B^0-\barB^0$ and $B^0_s-\barB^0_s$ oscillations~\cite{FLYNN,SONI}. In the above $h$ and $l$ denote heavy and light quark fields, $P_{hl}$ the corresponding pseudoscalar meson, and $\mu$ is the energy scale appropriate to the calculation. In particular, we present a first direct calculation of the SU(3) flavor breaking ratio, \begin{eqnarray} \label{ratio} r_{sd}=M_{bs}(\mu)/M_{bd}(\mu) \end{eqnarray} Our preliminary result is $r_{sd}=1.54(13)(32)$ where the first error is statistical and the second is from uncertainty in extrapolating to the $B$ mass and to $a\to 0$. The importance of this ratio is that, in conjunction with the experimental measurement of $B^0_s-\barB^0_s$ oscillation (when that becomes available), it should allow the cleanest extraction of the crucial CKM parameter $V_{td}$. We note that the above value of $r_{sd}$ is lower than the result reported at the conference (1.81) which is due primarily to a change in how we extrapolate to the $B$; the difference gives a systematic error. Presently, $V_{td}$ is deduced from $B^0-\barB^0$ oscillation via\cite{BURAS} \begin{eqnarray} \label{mixing par} x_{bd}=\frac{(\Delta M)_{bd}}{\Gamma_{bd}}\propto m_{bd}^2 B_{bd}(\mu)f_{bd}^2 |V_{td}|^2 \end{eqnarray} where $m_{bd}$, $\Gamma^{-1}_{bd}\equiv\tau_{bd}$, and $f_{bd}$ are the mass, life time, and decay constant of the $B^0$ meson, and $(\Delta M)_{bd}$ is the mass difference of the two mass eigenstates of the $B^0-\barB^0$ system. 
$x_{bd}$ is the mixing parameter characterizing the oscillation and has been determined experimentally, $x_{bd}=0.71(6)$~\cite{PDB}. $B_{bd}$ is the so-called bag, or $B$, parameter. To extract $V_{td}$ from Eq.~(\ref{mixing par}) requires knowledge of two hadronic matrix elements, $f_{bd}$ and $B_{bd}$. These are being calculated using lattice and other methods. $f_{bd}$ may eventually even be measured experimentally through, for example, the decay $B\to \tau \nu_\tau$. However, $B_{bd}$ is a purely theoretical construct which is inaccessible to experiment. Thus determination of $V_{td}$ from experiment through use of Eq.~(\ref{mixing par}) will ultimately be limited by the precision of the nonperturbative quantity $f^2_{bd}B_{bd}$. These parameters are related to the matrix element, Eq.~(\ref{matrix hl}), via \begin{eqnarray} \label{B param} M_{bd}(\mu)=\frac{8}{3}f_{bd}^2 m_{bd}^2 B_{bd}. \end{eqnarray} Now making the replacement $d\to s$ in Eq.~(\ref{mixing par}) and taking the ratio of the result with Eq.~(\ref{mixing par}), we arrive at an alternate way to extract $V_{td}$, \begin{eqnarray} \label{ckm ratio} \frac{|V_{td}|^2}{|V_{ts}|^2}&=&r_{sd} \frac{\tau_{bs}}{\tau_{bd}} \frac{x_{bd}}{x_{bs}} \end{eqnarray} Note that $V_{ts}$ in Eq.~(\ref{ckm ratio}) is related by three generation unitarity to $V_{cb}$ and is therefore already quite well determined, $|V_{ts}|\approx|V_{cb}|=0.041 \pm 0.003$~\cite{PDB}. The important distinction between using Eq.~(\ref{ckm ratio}) instead of Eq.~(\ref{mixing par}) is that the former requires only knowledge of {\it corrections} to SU(3) flavor symmetry while the latter requires the {\it absolute} value of the matrix element $M_{bd}$. It is also important to realize that since $r_{sd}$ is a ratio of two very similar hadronic matrix elements, it is less susceptible to common systematic errors in lattice calculations, including scale dependence, matching of continuum and lattice operators, and heavy quark mass dependence.
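To make the use of Eq.~(\ref{ckm ratio}) concrete, here is a minimal Python sketch of the central-value arithmetic. The function name and the value of $x_{bs}$ are placeholders of ours ($B^0_s-\barB^0_s$ oscillation had not yet been measured), and the lifetime ratio is set to $1$ for illustration; only $r_{sd}=1.54$ and $x_{bd}=0.71$ are values quoted above.

```python
import math

def vtd_over_vts(r_sd, tau_bs_over_tau_bd, x_bd, x_bs):
    """Central value of |V_td| / |V_ts| from the ratio relation
    |V_td|^2 / |V_ts|^2 = r_sd * (tau_bs / tau_bd) * (x_bd / x_bs)."""
    return math.sqrt(r_sd * tau_bs_over_tau_bd * x_bd / x_bs)

# r_sd and x_bd are the values quoted in the text; the lifetime ratio
# and x_bs are purely illustrative placeholders
ratio = vtd_over_vts(r_sd=1.54, tau_bs_over_tau_bd=1.0, x_bd=0.71, x_bs=15.0)
```

With these illustrative inputs the central value comes out near $0.27$; a real determination would of course propagate the quoted uncertainties as well.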
Indeed, the ratio $r_{sd}$ is, to an excellent approximation, renormalization group invariant, even though the individual matrix elements $M_{bs}$ and $M_{bd}$ are scale dependent. \section{Simulations and Results} Table~\ref{lat table} summarizes our quenched lattices and the valence Wilson quark hopping parameters, $\kappa_{l}$ and $\kappa_{h}$, used to construct quark propagators. For each $\kappa_{l}$ and $\kappa_{h}$ in Table~\ref{lat table} we calculate a quark propagator using a single point source at the center of the lattice and a point sink. \begin{table*}[hbt] \caption{Summary of simulation parameters.} \label{lat table} \begin{tabular}{ccccc} \hline $6/g^2$ & conf. & size & $\kappa_{l}$ &$\kappa_{h}$ \\ \hline 5.7& 83&$ 16^3\times 33$& 0.160 0.164 0.166&0.115 0.125 0.135 0.145\\ 5.85& 100&$ 20^3\times 61$& 0.158 0.159 0.160&0.107 0.122 0.130 0.138 0.143\\ 6.0& 60&$ 16^3\times 39$& 0.152 0.154 0.155&0.103 0.118 0.130 0.135 0.142\\ 6.0& 100&$ 24^3\times 39$& 0.152 0.154 0.155&0.103 0.118 0.130 0.135 0.142\\ 6.3& 60&$ 24^3\times 61$& 0.148 0.149 0.150 0.1507&0.100 0.110 0.125 0.133 0.140\\ \hline \end{tabular} \end{table*} In Fig.~\ref{mll b63} we show example results at $6/g^2=6.3$ for $M_{hl}$ versus $\kappa_{h}$ for each value of $\kappa_{l}$. The lattice matrix elements are found through simultaneous fits to the three-point function corresponding to Eq.~(\ref{matrix hl}) and the two-point function of the corresponding heavy-light interpolating operator. In Fig.~\ref{mll b63} and the following, we have already matched the lattice operator to the continuum one by using the one-loop perturbative result from Refs.~\cite{MART,BDS}. The scale-dependent renormalization $Z$ factor is calculated in the NDR scheme at scale $\mu=2.0$ GeV. The coupling is tadpole improved and evaluated at scale $1/a$, and we include the KLM normalization for heavy quarks.
Results for the physical $B$ and $B_s$ meson systems follow from a series of fits to the lattice data which we use to extrapolate in the two parameters $\kappa_{h}$ and $\kappa_{l}$. Since the data are highly correlated, we use covariant fits and a jackknife procedure at each step to account for the correlations. We take the form of our fits from chiral perturbation theory and expectations based on heavy quark effective theory (HQET). \begin{figure}[hbt] \vspace{-0.2in} \vbox{ \hskip-.2in\epsfxsize=3.0in \epsfbox[0 0 4096 4096]{ll_corr.ps} } \vskip -0.5in \caption{ The matrix element $M_{hl}$ at $6/g^2=6.3$.} \label{mll b63} \vspace{-0.2in} \end{figure} $\kappa_c(6/g^2)$ and $\kappa_s(6/g^2)$ are determined from either linear or quadratic fits to $m_{ll^\prime}^2$ as a function of $\kappa_{l}^{-1}$ and $\kappa_{l^\prime}^{-1}$ ($l^\prime$ refers to the strange quark). The values for $\kappa_c$ and $\kappa_s$ are summarized in Table~\ref{kctable}. Finding $\kappa_s$ requires the scale $a$, which we set from $af_\pi$, to determine the lattice value of the kaon mass $a m_K$ ($a^{-1}$ is tabulated in Table~\ref{kctable}). We note that at $6/g^2=5.7$ the choice of the coupling constant scale for $Z_A$, the axial current renormalization, has a significant effect on the lattice spacing determination; $Z_A$ differs by $\sim 7\%$ when the scale changes from $1/a$ to $\pi/a$. Next, using chiral perturbation theory for heavy-light mesons~\cite{BOOTH,SY}, we extrapolate $M_{hl}$ to $\kappa_{l}=\kappa_c$. We do not include chiral logarithms in our fits since the light quark masses used in the extrapolations are relatively heavy, $\kappa_{l}\approx\kappa_s$. The results for $M_{hl}$ at $6/g^2=6.3$ (see Fig.~\ref{mllc}) show a smooth linear behavior.
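The determination of $\kappa_c$ described above amounts to the zero crossing of a fit of $m^2$ against $\kappa_l^{-1}$. The following Python sketch illustrates the linear case with synthetic data (the hopping parameters match our $6/g^2=6.3$ values, but the masses and slope are invented for the example):

```python
import numpy as np

# Synthetic illustration (not our data): (a*m)^2 is assumed linear in
# 1/kappa_l and vanishes at the critical hopping parameter kappa_c.
kappa_l = np.array([0.148, 0.149, 0.150, 0.1507])
kappa_c_true = 0.15226            # target for this toy example
slope = 2.5                       # arbitrary slope in lattice units
m2 = slope * (1.0 / kappa_l - 1.0 / kappa_c_true)

# Linear fit of m^2 against 1/kappa; kappa_c is where the fit crosses zero.
p1, p0 = np.polyfit(1.0 / kappa_l, m2, 1)
kappa_c_fit = 1.0 / (-p0 / p1)
```

In the actual analysis the fits are covariant and may be quadratic, but the extraction of the zero crossing is the same.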
The confidence levels for the extrapolations are much lower for $6/g^2=6.0(24^3)$ than for the other points; in addition, the effective values of $M_{hl}$ show poor plateaus, so we exclude this point in our final determination of $r_{sd}$. \begin{figure}[hbt] \vspace{-0.2in} \vbox{ \hskip-.2in\epsfxsize=3.0in \epsfbox[0 0 4096 4096]{mllc.ps} } \vskip -0.5in \caption{\label{mllc} $M_{hl}$ at $6/g^2=6.3$ extrapolated to $\kappa_{l}=\kappa_c$. Curves for the other values of $6/g^2$ are similar.} \vspace{-0.2in} \end{figure} \begin{table}[hbt] \caption{Inverse lattice spacing and critical and strange hopping parameters.} \label{kctable} \begin{tabular}{cccc} \hline $6/g^2$& $a^{-1}$(GeV)&$\kappa_c$&$\kappa_s$ \\ \hline 5.7& 1.45(10)& 0.16969(10)& 0.1642(7) \\ 5.85& 1.64(20)& 0.16163(9) & 0.1577(9)\\ 6.0& 2.06(17)& 0.15711(7) & 0.1548(4)\\ 6.0& 2.08(13)& 0.15714(4) & 0.1543(4)\\ 6.3& 3.37(47)& 0.15226(16)& 0.1506(4)\\ \hline \end{tabular} \vspace{-0.2in} \end{table} Inspired by HQET, we continue by fitting $M_{hc}$ to a polynomial in the inverse heavy meson mass, $m_{hc}^{-1}$, and then extrapolating to the $B$ meson mass (see Fig.~\ref{Mvsminv}). Again, we use $f_\pi$ to set the scale. Of course, the same procedure can be carried through for the heavy-strange mesons by first extrapolating the data to $\kappa_s$ instead of $\kappa_c$. The resulting curve is also shown in Fig.~\ref{Mvsminv}. Fits which include all mass values (dashed line) generally have low confidence levels. Moreover, the lighter points have smaller statistical errors and dominate the fits, yet they are far from the $B$ mass. We therefore omit the lightest two points at each value of $6/g^2$ and use a completely constrained fit to extrapolate to the $B$. Compared with fitting all points, this results in systematically lower values of $r_{sd}$, changing the central value from 1.81 (which was reported at the conference) to 1.54. We take the 0.27 difference as the systematic error of the extrapolation. 
\begin{figure}[hbt] \vbox{ \hskip-.2in \epsfxsize=3.0in \epsfbox[0 0 4096 4096]{M_vs_minv.ps} } \vskip -0.5in \caption{\label{Mvsminv} $M_{hc}$ (octagons) and $M_{hs}$ (squares) as a function of the inverse heavy-down(strange) meson mass, at $6/g^2=6.0$. The dashed line shows the effect of the lightest points on the fit.} \vspace{-0.25in} \end{figure} The ratio $M_{bs}/M_{bd}$ is shown as a function of $a(6/g^2)$ in Fig.~\ref{Mratio}. At $6/g^2=5.7$, our heaviest mass is still quite far from the $B$ mass, so we also ignore this point in the extrapolation to the continuum limit. Generally, we expect the Wilson quark action to introduce discretization errors of order $a$ in all observables. However, in a ratio of two similar quantities, we might expect a large cancellation of the lowest order discretization errors. A constant fit gives $M_{bs}/M_{bd}= 1.54(13)$ while a linear fit gives 1.72(67). Since the coefficient of the linear term is only 0.3 sigma from 0, we quote the constant fit result as our central value and use the difference as an estimate of the systematic error of the continuum extrapolation. Adding that error in quadrature with the systematic error from the extrapolation to the $B$ mass, we get $r_{sd}=1.54(13)(32)$. \begin{figure}[hbt] \vbox{ \hskip-.2in \epsfxsize=3.0in \epsfbox[0 0 4096 4096]{r_vs_a.ps} } \vskip -0.5in \caption{\label{Mratio} The SU(3) flavor breaking ratio $M_{bs}/M_{bd}$ versus the lattice spacing $a$. The points denoted by crosses were used in the fit (solid line). The circles ($\beta=5.7$ and $\beta=6.0, 24^3$) were omitted, for reasons explained in the text. The burst shows $r_{sd}$ extrapolated to $a=0$. } \vspace{-0.25in} \end{figure} The extraction of the individual values of $M_{bd}$ and $M_{bs}$ is clearly expected to have larger errors. Conventionally~\cite{CB88,FLYNN,SONI} these matrix elements are given in terms of the corresponding B parameter defined in Eq.~(\ref{B param}). 
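The constant-versus-linear comparison used in the continuum extrapolation can be sketched as two weighted least-squares fits. The $(a, r, \sigma)$ values below are hypothetical stand-ins for the three retained lattice spacings, not our measured points:

```python
import numpy as np

# Hypothetical data points standing in for M_bs/M_bd at the three
# lattice spacings kept in the fit (values invented for illustration).
a_lat = np.array([0.12, 0.10, 0.06])   # lattice spacing in fm
r     = np.array([1.58, 1.50, 1.55])   # ratio M_bs/M_bd
err   = np.array([0.20, 0.18, 0.22])   # statistical errors
w = 1.0 / err**2                       # least-squares weights

# Constant fit: simply the weighted mean of the points.
r_const = np.sum(w * r) / np.sum(w)

# Linear fit r(a) = r0 + s*a via weighted normal equations;
# r0 is the extrapolated a -> 0 value.
A = np.vstack([np.ones_like(a_lat), a_lat]).T
cov = np.linalg.inv(A.T @ (w[:, None] * A))
r0, s = cov @ A.T @ (w * r)
```

As in the text, the spread between `r_const` and `r0` would serve as an estimate of the systematic error of the continuum extrapolation.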
Carrying out the above fitting procedure for $B_{bd}(\mu)$, we find that a constant fit yields $B_{bd}(2 \,{\rm GeV})=0.97(3)$ while linear extrapolation gives 1.02(13). We cannot, however, distinguish $B_{bs}(2 \,{\rm GeV})$ from $B_{bd}(2 \,{\rm GeV})$ since our data for $B_{hl}$ versus $\kappa_{l}$ are fit equally well to constant or linear forms. This was not true for $M_{hl}$ as is evident from Fig.~\ref{mllc}. Taking the linear result, we quote $B_{bd}(2 \,{\rm GeV})=B_{bs}(2 \,{\rm GeV})=1.02(13)$. We recall that until now~\cite{FLYNN,SONI}, lattice results for the SU(3) breaking ratio $r_{sd}$ have been obtained by using Eqs.~(\ref{ratio}) and (\ref{B param}) and the lattice measurements of $f_{bd(s)}$ and $B_{bd(s)}$. A reasonable estimate is $f_{bs}/f_{bd}\approx 1.13\pm 0.10$~\cite{FLYNN,SONI,BLS,UKQCD} (we are presently calculating this ratio on our lattices). As indicated above, the ratio of B parameters is consistent with unity, and the ratio of masses is 1.017~\cite{PDB}. Therefore, the conventional method leads to $r_{sd}\approx 1.32\pm 0.23$ compared to $1.54\pm 0.13\pm 0.32$ obtained with our direct method. Thus the two methods are quite consistent. However, as we have emphasized, the direct method offers many distinct advantages, and future lattice computations should be able to improve the precision in our determination of the ratio $r_{sd}$. This research was supported by US DOE grants 2FG02-91ER4D628 and DE-AC0276CH0016. The numerical computations were carried out at the NERSC supercomputing center.
\section{Introduction} A martensitic transformation (MT)~\cite{Ashby1998} is a diffusionless phase transition, triggered by temperature or stress, that changes the symmetry of a high-temperature phase (austenite) and forms variants of a low-temperature phase (martensite). Most MTs are irreversible, as dislocations, shear, and plastic deformation accumulate during the transformation. However, if the symmetry of martensite is lower than that of austenite and if the variations in lattice parameters and atomic volumes are small, the MT can be reversed, that is, the system can be switched between the two phases with small latent heat~\cite{Bhattacharya2004,Bhattacharya2005,James2005,Cui2006}. Reversible MTs in metals or polymers are appealing as they often result in the shape memory effect, the ability to recover a predetermined shape upon heating, and pseudoelasticity, the capacity to accommodate large deformations without plasticity~\cite{Chang1951,Lendlein2001,Jani2014}. Other examples in which reversible MTs are important include the recently discovered gum metals~\cite{Hao2013}, where metastable phases have been observed to form via reversible transformations~\cite{Zhang2017}. An urgent technological challenge for actuator and biomedical applications is to identify alloys that exhibit reversible MTs that are stable during operational cycles. With very few exceptions~\cite{Haskins2016}, first principles investigations aiming to clarify the mechanisms underlying a MT generally rely on static, $T=0$~K calculations. These, however, are often inadequate to describe the atomistic processes responsible for the dynamic and/or thermodynamic stabilization of the austenite phase at finite temperatures, as well as the interval of temperatures in which austenite and martensite are metastable (metastability region), the free energy barrier, the latent heat, and even the Ehrenfest order of a MT.
To overcome these limitations we have employed \textit{ab initio} molecular dynamics (aiMD) simulations to access structural properties at finite temperature, and combined our \textit{ab initio} data with a 2-4-6 Landau-Falk expansion of the free energy~\cite{Falk1980,Khalil-Allafi2005} to characterize the nature of reversible MTs and suggest necessary conditions to distinguish them from irreversible ones. We have applied our method to the shape memory alloy Ti-Ta~\cite{Bagaryatskii1958,Bywater1972,Fedotov1985,Buenconsejo2009,Niendorf2015,Chakraborty2015,Chakraborty2016,Kadletz2017,Kadletz2018,Ferrari2018} that features a reversible MT with a high ($>100\,^{\circ}$C) transition temperature. Our key findings include that, in this system, there is only a small interval of temperatures where austenite and martensite are both dynamically stable and that, in this interval, the two phases are separated by an extremely small free energy barrier. Any first-order phase transition, like the reversible MT described here, involves the nucleation and growth of a new phase inside the other; the consideration of this mechanism is beyond the scope of this work. Nevertheless, even for a homogeneous transition, small metastability regions, energy barriers and latent heats generally distinguish reversible MTs from ordinary MTs; with our approach we provide a fully \textit{ab initio} strategy to identify these fundamental characteristics of a MT. \begin{figure} \begin{centering} \includegraphics[scale=0.15]{images/Martensitic.png} \par\end{centering} \caption{The MT in Ti-Ta. The austenitic phase (\textbf{left}) is a bcc structure. The martensitic phase (\textbf{right}) is orthorhombic, and it is obtained from the austenitic phase by cell distortion and gliding of alternating $\left\{ 110\right\}$ planes (in brown) along the $\left\langle -110\right\rangle$ direction, described by the parameter $\Delta y$.
} \label{bet_alp} \end{figure} \begin{figure} \begin{centering} \includegraphics[scale=0.20]{images/Lattice_param.png} \par\end{centering} \caption{Lattice parameters of Ti-25Ta (red) and Ti-31.25Ta (blue) as a function of temperature. Circles are experimental data on bulk samples and thin films at room temperature. Empty squares are DFT calculations from Ref.~\cite{Chakraborty2016}. Broken lines are guides to the eye.} \label{lat} \end{figure} \begin{figure*}[t] \begin{centering} \includegraphics[scale=0.20]{images/SLS_y.png} \par\end{centering} \caption{ \label{sls_and_y} a) Spontaneous lattice strain of $\alpha''$ for Ti-25Ta (red) and Ti-31.25Ta (blue) as a function of temperature. b) The time- and atom-averaged $\Delta y$ parameter as a function of temperature for Ti-25Ta (red) and Ti-31.25Ta (blue). The inset shows the directions of the average atomic displacements observed in the aiMD simulations. For both order parameters, squares are extracted from the aiMD simulations, circles are experimental data for bulk and thin film samples at room temperature, and solid lines are predictions from the Landau-Falk expansion (no fit). } \end{figure*} \begin{figure}[b] \begin{centering} \includegraphics[scale=0.20]{images/free_en_1.png} \par\end{centering} \caption{The free energy profiles as a function of the order parameter for Ti-25Ta (\textbf{left}) and Ti-31.25Ta (\textbf{right}) for different temperatures. The inset shows the same profiles for an interval of temperatures around the transition temperature $T_0$.} \label{free_en} \end{figure} The austenitic phase in Ti-Ta is a solid solution of Ti and Ta with body-centered cubic (bcc) symmetry, called the $\beta$ phase. At lower temperatures the $\beta$ phase breaks its cubic symmetry and transforms into one of the twelve-fold degenerate orthorhombic martensitic variants, called $\alpha''$.
As depicted in Fig.~\ref{bet_alp}, $\alpha''$ (right panel) is obtained from $\beta$ (left panel) by an orthorhombic cell distortion and a displacement of alternating $\left\{ 110\right\}$ atomic planes along $\left\langle -110\right\rangle$ directions. The lattice vectors of the martensitic phase are ${(a_{\alpha''},0,0), (0,b_{\alpha''},0), (0,0,c_{\alpha''})}$, with $a_{\alpha''}<c_{\alpha''}/\sqrt{2}<b_{\alpha''}/\sqrt{2}$. The MT in Ti-Ta can be described by two order parameters that change together: the spontaneous lattice strain (SLS) of martensite, which accounts for the respective elongation and shrinkage of the lattice parameters, and the average displacement from ideal bcc positions $\Delta y$. The SLS is given by~\cite{Kadletz2018}: \begin{equation} \text{SLS}= 2 \cdot \frac{b_{\alpha''}/\sqrt{2}-a_{\alpha''}}{b_{\alpha''}/\sqrt{2}+a_{\alpha''}} \quad, \label{sls} \end{equation} and $\Delta y$ is the average relative distance of the atoms in the gliding planes from the ideal bcc positions. We have performed Parrinello-Rahman~\cite{Parrinello1980} aiMD simulations in the $NPT$ ensemble using special quasirandom structures (SQS)~\cite{Zunger1990} for two compositions with 25~at.\% and 31.25~at.\%~Ta (Ti-25Ta and Ti-31.25Ta, see the Supplemental Material \cite{SuppMat} for the details of the calculations). SQS arrangements mimic solid solutions by minimizing geometrical $n$-body correlations. For Ti-25Ta we have carried out aiMD simulations at 500~K, 600~K, 650~K, and 700~K, whereas for Ti-31.25Ta at 230~K, 415~K, 500~K, and 600~K. In Fig.~\ref{lat} the average lattice parameters $a$, $b/\sqrt{2}$, and $c/\sqrt{2}$ extracted from the aiMD simulations are presented as a function of temperature, and compared to previous $T=0$~K calculations~\cite{Chakraborty2016} and experimental data on bulk samples~\cite{Kadletz2017} and thin films~\cite{Kadletz2018}.
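The SLS of Eq.~(\ref{sls}) is straightforward to evaluate from the orthorhombic lattice parameters; the following Python sketch uses illustrative numbers (not measured values) and shows the cubic limit in which the strain vanishes:

```python
import math

def spontaneous_lattice_strain(a_orth, b_orth):
    """SLS = 2 * (b/sqrt(2) - a) / (b/sqrt(2) + a), cf. the SLS equation."""
    b_red = b_orth / math.sqrt(2.0)
    return 2.0 * (b_red - a_orth) / (b_red + a_orth)

# Illustrative cell (hypothetical values): a = 3.20 A, b = 4.80 A
# gives an SLS of about 6%, the order of magnitude seen for alpha''.
sls = spontaneous_lattice_strain(3.20, 4.80)

# In the cubic (austenitic) limit, b = a * sqrt(2) and the SLS vanishes.
sls_cubic = spontaneous_lattice_strain(3.20, 3.20 * math.sqrt(2.0))
```
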
At low temperature the structures correspond to the orthorhombic $\alpha''$ phase, as $a<c/\sqrt{2}<b/\sqrt{2}$ for both compositions. Our 0~K relaxed lattice constants are generally in very good agreement with the values by Chakraborty~\textit{et al.}~\cite{Chakraborty2016}, and the aiMD simulation results compare very well with the experimental data at room temperature by Kadletz~\textit{et al.}~\cite{Kadletz2017, Kadletz2018}. At $T>600$ K and $T\geq500$ K for Ti-25Ta and Ti-31.25Ta, respectively, $b$ and $c$ become equal, indicating that the austenitic phase forms. The fact that the lattice parameter $a$ is slightly smaller than $b/\sqrt{2}$ and $c/\sqrt{2}$ even at high temperatures, when the system is in the austenitic phase, is due to finite size effects. The results for the SLS from the numerical simulations are shown as square symbols in Fig.~\ref{sls_and_y}a. The values of the calculated SLS are consistent with the experimental data. At high temperatures the residual SLS is around 1\%, suggesting that the mentioned size effects are small. The square symbols in Fig.~\ref{sls_and_y}b represent the atomic displacements $\Delta y$, averaged over time and over all atoms in the supercell, as a function of temperature. For both compositions, at low temperature the value of $\Delta y$ is approximately 0.1. At $T>600$ K and $T\geq500$ K for Ti-25Ta and Ti-31.25Ta, respectively, $\Delta y$ drops to zero, which indicates that the average atomic positions coincide with those of an ideal bcc lattice. The inset of Fig.~\ref{sls_and_y}b shows that the displacements for both compositions are in the $\left\langle1 1 0\right\rangle$ direction, consistent with the mechanism depicted in Fig.~\ref{bet_alp}. The deviation of the theoretical $\Delta y$ values from the experimental data is attributed to the presence of phase separation in both the bulk and thin film samples in the experiments~\cite{Ferrari2018a}. 
Phase separation implies that the Ta content in the $\alpha''$ phase is considerably higher than the nominal composition of the samples and leads to a severe underestimation of the $\Delta y$ value. From the temperature dependence of the two order parameters in our aiMD simulations the transition temperatures $T_0$ for Ti-25Ta and Ti-31.25Ta have been determined~\cite{SuppMat} to be approximately 625~K and 500~K, respectively, slightly overestimated in comparison to the experimental data (560~K and 420~K, respectively)~\cite{Ferrari2018}. An even more severe overestimation has been noted before in aiMD simulations of the shape memory alloy NiTi~\cite{Haskins2016} and imputed to the absence of crystal defects and internal stresses in the calculations. Our values should therefore be considered as an upper limit for $T_0$ in an ideal, defect free crystal. As an additional possible source of error, the finite size of the simulation cell may induce artificial correlations. To fully characterize the MT $\alpha''\rightleftharpoons\beta$ we can parametrize the free energy $F(V,T)$, which, at zero pressure, governs the thermodynamics of the phase transition. For reversible MTs, Falk~\cite{Falk1980} has suggested a 2-4-6 Landau expansion of $F(V,T)$ as a function of a one dimensional order parameter $\eta$ \begin{equation} F(\eta,T)=a \eta^6 - b \eta^4+ c (T-T_c) \eta^2, \label{falk_free_en} \end{equation} where $a$, $b$, and $c$ are material-dependent parameters, and $T_c<T_0$ is the temperature at which the austenitic phase becomes metastable. In this picture, $T_0$ is the temperature at which the free energies of austenite and martensite are equal. 
\newline In the case of the MT in Ti-Ta, Eq.~\eqref{falk_free_en} provides a one-dimensional description of the relative stability of austenite and one of the twelve-fold degenerate martensitic variants; $\eta$ can be either the SLS or $\Delta y$, as in the MT the lattice constants and atomic positions are observed to change together. Traditionally, Eq.~\eqref{falk_free_en} has been used to fit order parameters and latent heats measured experimentally. Here, we determine the parameters $a$, $b$, $c$, and $T_c$ exclusively from first principles simulation data. Specifically, we have parametrized the free energy to reproduce the energy difference between $\beta$ and $\alpha''$ at 0~K, the transition temperature $T_0$, and the values of the order parameters at 0~K and at $T_0$ (see the Supplemental Material~\cite{SuppMat} for details). The obtained free energy curves as a function of $\eta$ are presented in Fig.~\ref{free_en} for Ti-25Ta and Ti-31.25Ta at different temperatures. At 0~K the austenitic phase (corresponding to $\eta=0$) is a maximum of the energy, whereas the martensitic phase (corresponding to $\eta=\pm 1$) is a minimum. At this temperature there is no barrier separating the two states, meaning that austenite is unstable, in agreement with previous 0~K static calculations~\cite{Chakraborty2015}. As the temperature increases, the martensitic minimum shifts towards smaller values of $\eta$. At high temperature the free energy has only one minimum at the austenitic phase, hence the martensite is unstable. The martensitic and austenitic phases are therefore found to be unstable in a very wide range of temperatures. This is confirmed by our aiMD simulations: as initial configurations we used both the $\alpha''$ as well as the $\beta$ phase and apart from the simulations for Ti-31.25Ta at $T=500$~K the structure immediately transformed to the thermodynamically stable one, reflecting the instability of the corresponding other phase. 
Within the Landau-Falk expansion, however, a small interval of temperatures around $T_0$ is predicted in which both phases are metastable, separated by a very small free energy barrier, as shown in the inset of Fig.~\ref{free_en}. Consequently, the phase transition $\alpha''\rightleftharpoons\beta$ is of first order, in agreement with experiments~\cite{Kadletz2018}. This is also supported by the numerical data: for Ti-31.25Ta at $T\sim$ 500 K we have found that the martensitic and austenitic phases coexist. The presence of this free energy barrier is due to entropy contributions to the free energy and cannot be detected with 0 K calculations. Finite temperature simulations are thus essential to capture the correct mechanism of stabilization of the austenitic phase. In particular, the entropy difference $\Delta S$ between austenite and martensite induces a finite latent heat $T_0 \Delta S$ of the MT. We obtain from the Falk-Landau model values of $T_0 \Delta S=$ 19$\pm$3 meV/at.~ and 11$\pm$3 meV/at. for Ti-25Ta and Ti-31.25Ta, respectively. Most notably, we extract from the analytical expansion metastability regions of only 70$\pm$30~K and 30$\pm$10~K, and free energy barriers of only 200$\pm$70 $\mu$eV/at.~and 100$\pm$30 $\mu$eV/at.~for Ti-25Ta and Ti-31.25Ta, respectively. These exceptionally small values indicate that the MT in Ti-Ta is highly reversible. In fact, such small metastability regions and energy barriers for bulk material are necessary properties that distinguish reversible MTs from irreversible MTs. For comparison, the energy barriers for the MTs in Fe-C alloys range between $20-50$~meV/at.~\cite{Zhang2015,Zhang2016}, which is approximately 2~orders of magnitude larger than the barriers we observe in Ti-Ta. A very small free energy barrier is also consistent with our numerical calculations, as for Ti-31.25Ta we have captured a MT within one aiMD run (see Fig.~5 in the Supplemental Material~\cite{SuppMat}). 
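The barrier quoted above follows directly from Eq.~\eqref{falk_free_en} once the coefficients are fixed. As an independent check (a sketch, not the code used for the analysis), the Python snippet below evaluates the Ti-25Ta free energy at $T_0$ using the coefficients tabulated in the Supplemental Material and locates the saddle point analytically:

```python
import numpy as np

# Landau-Falk coefficients for Ti-25Ta from the supplemental table,
# in meV/at. and K: F(eta, T) = a*eta^6 - b*eta^4 + c*(T - Tc)*eta^2.
a, b, c, Tc, T0 = 31.6, 23.2, 84.0e-3, 576.0, 627.0

def free_energy(eta, T):
    return a * eta**6 - b * eta**4 + c * (T - Tc) * eta**2

# Nonzero stationary points at T0: dF/deta = 0 reduces to a quadratic
# in x = eta^2:  6a x^2 - 4b x + 2c(T0 - Tc) = 0.
dT = T0 - Tc
disc = (4.0 * b) ** 2 - 48.0 * a * c * dT
x_saddle = (4.0 * b - np.sqrt(disc)) / (12.0 * a)  # barrier top
x_min = (4.0 * b + np.sqrt(disc)) / (12.0 * a)     # martensitic minimum

# At T0 the austenitic and martensitic minima are degenerate (F = 0),
# so the barrier is just F at the saddle point.
barrier = free_energy(np.sqrt(x_saddle), T0) - free_energy(0.0, T0)
```

This reproduces a minimum near $\eta_0\approx 0.61$ and a barrier of roughly 0.23 meV/at., consistent with the quoted 200$\pm$70 $\mu$eV/at.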
Another factor that favors the reversibility of the MT is a small difference in atomic volume between the martensite and austenite~\cite{Cui2006}, which is also fulfilled in Ti-Ta (details are given in the Supplemental Material~\cite{SuppMat}). The analytical expansion in Eq.~\eqref{falk_free_en} can further be used to extract the temperature dependence of the order parameters SLS and $\Delta y$: the value of the order parameter at each temperature is the one that minimizes the free energy at that particular temperature~\cite{SuppMat}. The corresponding trends in SLS and $\Delta y$ predicted by the Landau-Falk expansion are presented in Fig.~\ref{sls_and_y} as solid lines. The agreement between the aiMD data and the analytical predictions is remarkable. We would like to stress that the parameters entering Eq.~\eqref{falk_free_en} have not been obtained by fitting the temperature dependence of the order parameters SLS and $\Delta y$, but have been extracted from our first principles data at 0~K and $T_0$. Furthermore, within the Landau-Falk expansion the two order parameters are predicted to be discontinuous at the transition temperature, confirming the first-order character of the MT. In conclusion, we have successfully applied a combination of \textit{ab initio} molecular dynamics simulations with an analytical expansion of the free energy to characterize the most significant properties of martensitic transformations, which often cannot be captured by 0~K calculations. The methodology presented in this work is based entirely on first principles data and is very well suited to study MTs in a variety of compounds. In particular, we have applied this formalism to the technologically relevant Ti-Ta alloy, for which we have predicted for bulk transformations very small metastability regions (tens of K) and very small free energy barriers (hundreds of $\mu$eV). 
These two quantities are decisive in specifying reversible MTs and have to be considered as the fundamental origin of the shape memory effect. The work presented in this letter has been financially supported by the Deutsche Forschungsgemeinschaft (DFG) within the research unit FOR 1766 (High Temperature Shape Memory Alloys, http://www.for1766.de), under the grant number RO3073/4-2 (sub-project 3). D.G.S.~acknowledges financial support from the Olle Engkvist Foundation. The computations have been performed using the Gamma and Triolith clusters, managed by the Swedish National Infrastructure for Computing (SNIC) at the National Supercomputer Centre (NSC) in Link{\"o}ping, the Kebnekaise cluster at the High Performance Computing Center North (HPC2N) in Ume\r{a}, and the Beskow cluster at the Center for High Performance Computing (PDC) in Stockholm. \section{Computational Details} We have performed Parrinello-Rahman $NPT$ molecular dynamics~\cite{Parrinello1980,Parrinello1981} with a Langevin thermostat and an Andersen barostat with Langevin friction~\cite{Allen1991}, as implemented in the Vienna \textit{Ab initio} Simulation Package (VASP 5.4)~\cite{Kresse1993,Kresse1996,Kresse1996a}. The friction coefficients of both the Langevin thermostat and the barostat have been set to $\gamma=0.1$ ps$^{-1}$, while a value of $M=1$ a.m.u.~has been used for the mass of the extended particle in the Andersen barostat. With these settings, the root mean squared deviation of the instantaneous $T$ and $P$ from their average values was of the order of 20~K and 100~MPa, respectively. A timestep of 1 fs has been employed for all simulations. The sampling has always been started after complete equilibration of both temperature and pressure, and thermodynamic averages have been performed on trajectories with a duration of at least 7 ps.
\newline Total energies and forces have been computed using density functional theory (DFT) with projector-augmented wave (PAW)~\cite{Bloechl1994,Kresse1999} pseudopotentials including $s$, $p$, and $d$ electrons for Ti and Ta. The generalized gradient approximation (GGA) functional parametrized by Perdew, Burke, and Ernzerhof (PBE)~\cite{Perdew1996} has been utilized for the exchange-correlation term. To integrate the Brillouin zone, we have employed the Monkhorst-Pack scheme~\cite{Baldereschi1973,Monkhorst1976} with a k-point mesh with a linear density of $0.3\times 2\pi/$\AA. The electronic occupations have been smeared with the Methfessel-Paxton method~\cite{Methfessel1989} with a width of 0.05~eV. The energy cutoff has been fixed to 400~eV. These settings have been found to ensure an accuracy of approximately 4~meV/at.~on total energy differences. $NPT$ simulations change the volume of the supercell and therefore imply the presence of Pulay stresses if plane-wave basis sets are used~\cite{DaCosta1986}. In our calculations, we estimate that the absolute value of the volume is systematically underestimated by roughly 0.5\% with respect to static calculations of the equilibrium volume with the Birch-Murnaghan equation of state~\cite{Murnaghan1944,Birch1947}. The structural relaxations at 0~K have been performed on both the atomic and lattice degrees of freedom until all the forces were less than 0.01~eV/\AA~and all the components of the stress tensor were less than 100~MPa. \newline The simulations have been carried out in $(4\times4\times4)$ supercells of the conventional orthorhombic cell of $\alpha''$ containing 256 atoms (see Fig.~\ref{SQS}). The occupations of lattice sites have been determined according to special quasirandom structures (SQS) configurations~\cite{Zunger1990,Wei1990} generated with the Monte Carlo algorithm of a modified version~\cite{Pezold2010,Kossmann2015} of the ATAT package~\cite{VanDeWalle2002}.
In the minimization algorithm, geometrical correlations of pair, 3-body, 4-body, and 5-body figures have been considered up to the 9th, 5th, 4th, and 2nd neighbor shells, respectively. \begin{figure} \begin{centering} \includegraphics[scale=0.17]{images/SQS.png} \par\end{centering} \caption{The SQS employed for the simulations of Ti-25Ta (\textbf{left}) and Ti-31.25Ta (\textbf{right}). Ti atoms are blue, and Ta atoms are red.} \label{SQS} \end{figure} \section{0 K Minimum Energy Path} Fig.~\ref{neb} shows the minimum energy path for the MT in Ti-31.25Ta at 0 K obtained using the solid state nudged elastic band (SSNEB) method~\cite{Sheppard2012} as implemented in the VTST package~\cite{ssneb}. The atomic positions in the austenitic phase have been determined using the average positions of an aiMD run at 600 K. In agreement with previous calculations~\cite{Chakraborty2015}, the minimum energy path at 0 K does not display any barrier, meaning that static calculations are unable to capture even the first order nature of the MT. \begin{figure}[b] \begin{centering} \includegraphics[scale=0.12]{images/neb.png} \par\end{centering} \caption{Minimum energy path at 0 K for the MT between austenite ($\beta$) and martensite ($\alpha''$) for Ti-31.25Ta.} \label{neb} \end{figure} \section{Transition Temperatures} The transition temperatures have been determined from the simulations by considering the temperature dependence of the spontaneous lattice strain of martensite (SLS) and the average atomic displacement ($\Delta y$) as a function of temperature. For Ti-25Ta both order parameters drop to zero between 600 K and 650 K, while for Ti-31.25Ta at roughly 500 K. By averaging the actual temperatures of the MD runs, the values for the transition temperature have been calculated to be $T_0=$ 627 K and 496 K for the two compositions, respectively. 
\newline The experimental transition temperatures have been evaluated as \begin{equation} T_0= \frac{M_\text{s}+A_\text{s}}{2} \label{T0} \end{equation} \noindent where $M_\text{s}$ and $A_\text{s}$ are the martensitic and austenitic start temperatures, respectively. We have taken the measured temperatures for Ti-Ta from Ref.~\cite{Ferrari2018} and linearly interpolated them to obtain $T_0$ values for the compositions Ti-25Ta and Ti-31.25Ta yielding 560~K and 420~K, respectively. \section{Details on the Landau-Falk expansion} \begin{table*}[t] \caption{ \label{coeff_free_en} \textbf{Columns 2-4}: input coefficients for the parametrization of the free energy. \textbf{Columns 5-8}: coefficients of the Landau-Falk expansion. \textbf{Columns 9-10}: conversion factors for the two order parameters.} \begin{ruledtabular} \begin{tabular}{cccc|cccc|cc} & $\Delta E^{(\beta - \alpha'')}$ (meV/at.) & $T_{0}$ (K) & $\eta_{0}$ & $a$ (meV/at.) & $b$ (meV/at.) & $c$ ($\mu$eV/at./K) & $T_{c}$ (K) & SLS$_{0}$ & $\Delta y_{0}$\tabularnewline \hline Ti-25Ta & 40 & 627 & 0.606 & 31.6 & 23.2 & 84.0 & 576 & 7.5 & 0.099\tabularnewline Ti-31.25Ta & 29 & 496 & 0.536 & 20.3 & 11.7 & 79.3 & 475 & 6.0 & 0.093\tabularnewline \end{tabular} \end{ruledtabular} \end{table*} \begin{figure}[t] \begin{centering} \includegraphics[scale=0.20]{images/Energy.png} \par\end{centering} \caption{``Static'' contribution to the energy as a function of the order parameter for Ti-25Ta (red) and Ti-31.25Ta (blue). The values of the order parameter are those which minimize the free energy at a given temperature and are obtained from the spontaneous lattice strain (circles), or from the average atomic displacements (squares). Solid lines are predictions from the Landau-Falk expansion (no fit). 
Since $\eta$ is normalized to 1, the numerical data for SLS and $\Delta y$ have been divided by the factors reported in the last two columns of Tab.~\ref{coeff_free_en}.} \label{energy} \end{figure} \begin{figure}[b] \begin{centering} \includegraphics[scale=0.16]{images/entropy.png} \par\end{centering} \caption{Entropy as a function of the order parameter for Ti-25Ta (red) and Ti-31.25Ta (blue) obtained from the Landau-Falk expansion.} \label{entropy} \end{figure} \begin{figure}[b] \begin{centering} \includegraphics[scale=0.20]{images/Lattice.png} \par\end{centering} \caption{A 10 ps aiMD trajectory of Ti-31.25Ta at 500~K. The arrow points out the instant at which the MT takes place.} \label{traj} \end{figure} In 1980, Falk \cite{Falk1980} proposed that the first order martensitic transformation (MT) in shape memory alloys can be described by the free energy \begin{equation} F(\eta,T)=a \eta^6 - b \eta^4+ c (T-T_c) \eta^2+ F_0(T) \label{falk_free_en} \end{equation} \noindent where $\eta$ is an order parameter, $a$, $b$, $c$, and $T_c$ are positive, material-dependent constants, and $F_0(T)$ describes the temperature dependence of the absolute free energy of austenite. Without loss of generality, to treat the relative free energy difference between austenite and martensite, we have chosen $F_0(T)=0$. 
\newline To determine the values of $a$, $b$, $c$, and $T_c$ for Ti-25Ta and Ti-31.25Ta we have imposed the following conditions: \begin{itemize} \item at 0 K, $F(\eta,T)$ has two minima at $\eta=-1$ and $\eta=+1$, respectively; \item at 0 K, $F(0,0)-F(1,0)=\Delta E^{(\beta - \alpha'')}$, where $\Delta E^{(\beta - \alpha'')}$ is the 0~K energy difference between austenite and martensite; \item at the transition temperature $T_0$, $F(\eta,T_0)$ has two minima at $\eta=-\eta_0$ and $\eta=+\eta_0$, respectively, where $\eta_0$ is the value of the order parameter at $T_0$ extracted from the $NPT$ simulations; \item at the transition temperature $T_0$, $F(0,T_0)-F(\eta_0,T_0)=0$. \end{itemize} \noindent To compute the total energy of austenite at 0 K, we have employed the average positions of the aiMD run at 700~K for Ti-25Ta and at 600~K for Ti-31.25Ta. In fact, the chemical disorder in Ti-Ta implies that in the austenitic phase the average atomic positions do not correspond exactly to the perfect bcc positions. \newline The input parameters and the values of the coefficients of the free energy expansion are compiled in Tab.~\ref{coeff_free_en}. \newline The analytical values of the order parameter $\eta$ as a function of temperature can be derived from Eq.~\eqref{falk_free_en} by imposing \begin{equation} \frac{\partial F(\eta,T)}{\partial\eta}=0. \label{ord_par_eq} \end{equation} \noindent $\eta$ can be used to obtain the values of SLS and $\Delta y$ as a function of temperature, as done in Fig.~3 in the main text. Since $\eta$ is normalized to 1, the multiplicative factors listed in the last two columns of Tab.~\ref{coeff_free_en} have been used for the comparison of the analytical predictions of the Landau-Falk expansion to the simulation data for SLS and $\Delta y$. 
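As a cross-check (ours, not part of the original analysis), the four conditions above can be solved in closed form for $a$, $b$, $c$ and $T_c$. A minimal Python sketch, with variable names of our choosing, reproduces the coefficients listed in Tab.~\ref{coeff_free_en} for Ti-25Ta:

```python
# Closed-form solution of the four conditions on
#   F(eta, T) = a*eta^6 - b*eta^4 + c*(T - Tc)*eta^2.
# Eliminating variables by hand gives:
#   a = dE / (2*(1 - eta0^2)),   b = 2*a*eta0^2,
#   c*T0 = a*(3 - 4*eta0^2 + eta0^4),   c*Tc = 3*a - 2*b.

def landau_falk_coefficients(dE, T0, eta0):
    """dE: 0 K energy difference between austenite and martensite (meV/at.);
    T0: transition temperature (K); eta0: order parameter at T0."""
    a = dE / (2.0 * (1.0 - eta0**2))        # minima at eta = +/-1 at 0 K
    b = 2.0 * a * eta0**2                   # degenerate minima at +/-eta0 at T0
    c = a * (3.0 - 4.0 * eta0**2 + eta0**4) / T0
    Tc = (3.0 * a - 2.0 * b) / c            # from dF/deta = 0 at eta = 1, T = 0
    return a, b, c, Tc

a, b, c, Tc = landau_falk_coefficients(dE=40.0, T0=627.0, eta0=0.606)
print(a, b, 1e3 * c, Tc)  # Ti-25Ta: ~31.6, ~23.2, ~84.0 (mueV/at./K), ~576 K
```

With the Ti-31.25Ta inputs ($\Delta E = 29$ meV/at., $T_0 = 496$ K, $\eta_0 = 0.536$) the same function returns $a \approx 20.3$, $b \approx 11.7$, $c \approx 79.3$ $\mu$eV/at./K and $T_c \approx 475$ K, matching the table.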
\newline To further test the validity of Eq.~\eqref{falk_free_en} for the free energy of our system, we have also extracted the values of the total energy as a function of the order parameter from our model as \begin{equation} E(\eta,T)=\frac{\partial(\beta F(\eta,T))}{\partial\beta} \label{energy_eq} \end{equation} \noindent where $\beta=\frac{1}{k_\text{B} T}$ and $k_\text{B}$ is the Boltzmann constant. Considering that we have neglected the temperature dependence of the absolute free energy of austenite by setting $F_0=0$, Eq.~\eqref{energy_eq} gives the ``static'' contribution to the energy, i.e.~the energy that a system with a given value of the order parameter would have at 0 K. We have hence performed additional calculations of the 0 K energy of our system for different values of the order parameter and compared the results to the analytical trends. Figure~\ref{energy} displays $E(\eta)$, where $\eta$ is either normalized SLS or $\Delta y$. The analytical predictions are in excellent agreement with the numerical results for both compositions, and the data for SLS and $\Delta y$ agree with each other very well. The discontinuous jump in the energy is the latent heat $T_0 \Delta S$ of the MT. A finite latent heat also confirms the first order nature of the phase transition. \newline The entropy difference between austenite and martensite can be obtained from Eq.~\eqref{falk_free_en} as \begin{equation} S(\eta,T)=-\frac{\partial F(\eta,T)}{\partial T}. \label{entropy_eq} \end{equation} If $F_0(T)=0$, the entropy does not depend on temperature: $S(\eta,T)=S(\eta)$. As can be seen in Fig.~\ref{entropy}, where $S$ is plotted as a function of $\eta$, the entropy of the austenitic phase is higher than that of the martensitic phase. This favors the austenitic phase over the martensitic phase at high temperatures. Furthermore, the actual value of the entropy depends very weakly on the composition. 
This peculiar characteristic of Ti-Ta-based alloys was already assumed in previous works~\cite{Chakraborty2015,Ferrari2018} on these materials, where the compositional dependence of the phase stability, which in general depends on both energy and entropy, has been correlated only to 0 K energy differences, supposing that the entropy difference is constant as a function of the chemical concentration. \newline The analytical model also provides the range of temperatures in which martensite and austenite are both stable (metastability region of the MT) \begin{equation} \Delta T=\frac{b^2}{3ac} \label{hysteresis} \end{equation} \noindent and the height of the barrier at the transition temperature $T_0$ \begin{equation} \Delta E_{\text{barr}}=-\frac{k\cdot\left[6ac\cdot(T_{0}-T_{c})+b\cdot k\right]}{27a^{2}} \label{barrier} \end{equation} \noindent where \begin{equation} k=-b+\sqrt{b^{2}+3ac\cdot(T_{c}-T_{0})} \label{barrier2} \end{equation} \noindent The error bars associated with the values of $T_0 \Delta S$, $\Delta T$ and $\Delta E_\text{barr}$ have been determined by a sensitivity analysis. The factor that most influences the latent heat $T_0 \Delta S$ is $\Delta E^{(\beta - \alpha'')}$. Deviations of 5 meV/at.~in this quantity change the latent heat by roughly 2-3 meV/at., hence a value of 3 meV/at.~has been taken as the absolute error in this case. The reported values for $\Delta T$ and $\Delta E_\text{barr}$, instead, have been found to be almost insensitive to variations of 5 meV/at.~and 50 K in the parameters $\Delta E^{(\beta - \alpha'')}$ and $T_0$. Changes of the order of 5\% in $\eta_0$, however, affected the final values of these quantities by roughly 30\%. This has therefore been assumed as the relative error on $\Delta T$ and $\Delta E_\text{barr}$. 
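The derived quantities follow directly from the tabulated coefficients. A short sketch of our own (Ti-31.25Ta values of Tab.~\ref{coeff_free_en}; energies in meV/at., $c$ in meV/at./K) evaluates the ``static'' energy, the entropy, the latent heat $T_0\Delta S$, the metastability region and the barrier:

```python
import math

# Ti-31.25Ta coefficients from the table (our own variable names)
a, b, c, Tc, T0, eta0 = 20.3, 11.7, 79.3e-3, 475.0, 496.0, 0.536

# With F0(T) = 0: E(eta) = a*eta^6 - b*eta^4 - c*Tc*eta^2, S(eta) = -c*eta^2,
# both temperature independent.
E = lambda eta: a * eta**6 - b * eta**4 - c * Tc * eta**2
S = lambda eta: -c * eta**2

latent = T0 * (S(0.0) - S(eta0))     # latent heat T0*Delta S at the MT
dT = b**2 / (3.0 * a * c)            # metastability region Delta T
k = -b + math.sqrt(b**2 + 3.0 * a * c * (Tc - T0))
barrier = -k * (6.0 * a * c * (T0 - Tc) + b * k) / (27.0 * a**2)

print(latent, dT, 1e3 * barrier)     # meV/at., K, mueV/at.
print(E(0.0) - E(eta0))              # energy jump: equals the latent heat,
                                     # since Delta F = 0 at T0 (up to rounding)
```

The identity between the energy discontinuity and $T_0\Delta S$ is a direct check of the thermodynamic consistency of the expansion.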
\section{A Martensitic Transformation during the MD run} \begin{figure} \begin{centering} \includegraphics[scale=0.16]{images/Volume.png} \par\end{centering} \caption{The atomic volume as a function of temperature for Ti-25Ta (red) and Ti-31.25Ta (blue). The asymmetric error bars take into account the Pulay stresses.} \label{volume} \end{figure} For Ti-31.25Ta we detected a MT during a 10 ps aiMD run at 500 K (Fig.~\ref{traj}): at $\simeq 5000$ fs the lattice parameters $b$ and $c$ become equal in magnitude and the system transforms from $\alpha''$ to $\beta$. This supports the calculated value of 100~$\mu$eV/at.\ for the free energy barrier of Ti-31.25Ta at $T_0$; indeed, a typical time scale for the detection of a MT can be estimated as \begin{equation} \tau \sim \frac{1}{\nu_0 e^{-\frac{\Delta E_\text{tot}}{k_\text{B}T_0}}} \end{equation} where $\Delta E_\text{tot}$ is the absolute barrier for the MT and $\nu_0$ is the attempt frequency. For the concerted transformation of the entire system $\Delta E_\text{tot}$ scales with the system size, hence for 256 atoms $\Delta E_\text{tot}\simeq25.6$~meV. Assuming a value of approximately $10^{12}$~Hz for $\nu_0$, we obtain $\tau \sim 1000$~fs, in agreement with the time scale at which the MT takes place in the numerical simulations. \bigskip \section{Temperature Dependence of the Volume} Simulations in the $NPT$ ensemble make it possible to compute the equilibrium volume of the system as a function of temperature. Figure~\ref{volume} presents the average atomic volume extracted from the calculations as a function of temperature. Despite the first order character of the MT, the atomic volume appears to be almost continuous before and after the MT. This demonstrates that the $\alpha''$ and $\beta$ phases have approximately the same volume at the transition temperature. 
The exceptionally small change of the volume at the transition temperature is one of the factors that favor high reversibility, in agreement with the small height of the barrier predicted by the free energy expansion.
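The Arrhenius-type estimate of $\tau$ given above can be reproduced in a few lines (a sketch with the values quoted in the text; the attempt frequency $\nu_0 = 10^{12}$~Hz is the assumed order of magnitude):

```python
import math

kB = 8.617e-5           # Boltzmann constant (eV/K)
nu0 = 1.0e12            # attempt frequency (Hz), assumed order of magnitude
dE_tot = 256 * 100e-6   # eV: barrier of ~100 mueV/at. scaled to 256 atoms
T0 = 496.0              # K, transition temperature of Ti-31.25Ta

tau = 1.0 / (nu0 * math.exp(-dE_tot / (kB * T0)))  # Arrhenius-type estimate
print(tau * 1e15)  # ~1.8e3 fs: the ~ps scale on which the MT is seen in aiMD
```

The result is of the order of $10^3$ fs, consistent with the transformation observed at $\simeq 5000$ fs in the trajectory.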
\section{Introduction} The generalised Chaplygin gas \cite{Kamenshchik:2001cp} (hereafter gCg) is a cosmological model which has raised some interest thanks to its ability to describe on its own the dynamics of the entire dark sector of the Universe, i.e.\ Dark Matter and Dark Energy. The gCg is a perfect fluid characterised by the following equation of state: \begin{equation}\label{gcgeos} p = - \frac{A}{\rho^{\alpha}}\;, \end{equation} where $p$ is the pressure, $\rho$ is the density and $A$ and $\alpha$ are positive parameters. In the setting of the Friedmann-Lema\^{i}tre-Robertson-Walker cosmological theory, one easily integrates the energy conservation equation and finds that the gCg density evolves as a function of the redshift $z$ as follows: \begin{equation}\label{gcgrhoevo} \rho = \left[A + B\left(1 + z\right)^{3(1 + \alpha)}\right]^{\frac{1}{1 + \alpha}}\;, \end{equation} where $B$ is a positive integration constant. Equation~\eqref{gcgrhoevo} interpolates between a past ($z \gg 1$) dust-like phase of evolution, i.e. $\rho \sim \left(1 + z\right)^{3}$, and a recent ($z \lesssim 1$) one in which $\rho$ tends asymptotically to a cosmological constant, i.e. $\rho \sim A^{\frac{1}{1 + \alpha}}$. As a cosmological model, the gCg was first investigated in \cite{Kamenshchik:2001cp}; it had already attracted some attention, e.g.\ in connection with string theory. In particular, the equation of state (\ref{gcgeos}) can be extracted from the Nambu-Goto action for $d$-branes moving in a $(d + 2)$-dimensional spacetime in the lightcone parametrisation, see \cite{Bordemann:1993ep}. Also, the gCg is, up to now, the only fluid which admits a supersymmetric generalisation, see \cite{Hoppe:1993gz, Jackiw:2000cc}. From the point of view of cosmology, a possible unification picture between Dark Matter and Dark Energy is particularly appealing, especially in connection with the so-called cosmic coincidence problem (see \cite{Zlatev:1998tr} for more detail about the latter). 
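The two limiting regimes of Eq.~\eqref{gcgrhoevo} are straightforward to verify numerically; a small sketch (illustrative parameter values $A = B = 1$, $\alpha = 1/2$, in arbitrary units):

```python
def rho(z, A=1.0, B=1.0, alpha=0.5):
    """gCg density, Eq. (gcgrhoevo); illustrative parameter values."""
    return (A + B * (1.0 + z)**(3.0 * (1.0 + alpha)))**(1.0 / (1.0 + alpha))

# dust-like past: rho ~ B^(1/(1+alpha)) * (1+z)^3 for z >> 1
print(rho(1.0e3) / (1.0 + 1.0e3)**3)  # -> 1.0 (= B^(1/(1+alpha)))

# cosmological-constant-like future: rho -> A^(1/(1+alpha)) for z -> -1
print(rho(-0.999))                    # -> 1.0 (= A^(1/(1+alpha)))
```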
This motivation prompted an intensive study of the gCg and of those models that have the property of unifying (dynamically) Dark Matter and Dark Energy. Such models are called Unified Dark Matter (UDM). It must be pointed out that in a conventional cosmological model (i.e.\ a model in which Dark Matter and Dark Energy are separate entities) Dark Matter has the primary role of providing for structure formation whereas Dark Energy has to account for the recently observed accelerated expansion of the Universe \cite{Riess:1998cb, Perlmutter:1998np}. The fundamental task for the gCg is then to play both these roles. To understand if this is the case, the gCg parameter space $(A,\alpha)$ has been intensively analysed in relation with observations of Large Scale Structures (LSS), Cosmic Microwave Background (CMB), type Ia Supernovae (SNIa), X-ray cluster gas fraction and Baryon Acoustic Oscillations (BAO), see for example \cite{Wu:2006pe, Davis:2007na, Lu:2008hp, Lu:2008zzb, Barreiro:2008pn, Sandvik:2002jz, Carturan:2002si,Bean:2003ae, Amendola:2003bz, Amendola:2005rk, Giannantonio:2006ij}. The most instructive constraints are about the parameter $\alpha$ and come from the LSS matter power spectrum and the CMB angular power spectrum analysis. In the former case, the resulting constraint is $\alpha \lesssim 10^{-5}$, \cite{Sandvik:2002jz}. One may regard this narrow constraint as a flaw of the gCg model, because in the limit $\alpha \to 0$ the equation of state \eqreff{gcgeos} reproduces exactly the $\Lambda$CDM model (if $A$ assumes the value of the cosmological constant energy density). The authors of \cite{Beca:2003an} have found a loophole in this degeneracy problem. Indeed, they have shown that if we consider a baryon component added to the gCg and we compute the matter power spectrum for the baryonic part alone, it turns out that the latter is only weakly affected (at the linear perturbative regime) by the presence of the gCg. 
The model constituted by gCg plus baryons is therefore in good agreement with LSS observation for all $\alpha \in (0,1)$. On the other hand, even if the ``baryon loophole'' allows one to circumvent the very narrow constraint on $\alpha$ coming from the LSS analysis, it cannot evade the tight one coming from the CMB analysis, due in particular to the Integrated Sachs-Wolfe (ISW) effect, \cite{Carturan:2002si,Bean:2003ae, Amendola:2003bz, Amendola:2005rk, Giannantonio:2006ij, Bertacca:2007cv}. In more detail, if we take into account an ordinary Cold Dark Matter (CDM) component and a baryonic one together with the gCg, we find that $\alpha < 0.2$, \cite{Amendola:2003bz}. Removing the CDM component lowers the bound by an order of magnitude: $\alpha < 10^{-2}$, \cite{Amendola:2005rk}. Finally, for the case of the pure gCg we find an even tighter constraint: $\alpha < 10^{-4}$, \cite{Bertacca:2007cv}. Taking into account all these results, we may conclude that the gCg model is viable only when it is very similar, if not degenerate, to the $\Lambda$CDM. However, it must be pointed out that in the major part of the literature on the subject, the preliminary constraint $\alpha < 1$ is assumed. The reason for this assumption resides in the form of the gCg adiabatic speed of sound, which is the following: \begin{equation}\label{gCgsos} c_{\rm s}^{2} \equiv \frac{{\rm d} p}{{\rm d}\rho} = \frac{A\alpha}{A + B\left(1 + z\right)^{3\left(\alpha + 1\right)}}\;. \end{equation} If $z \to -1$ then $c_{\rm s}^{2} \to \alpha$. Therefore, causality would require $\alpha < 1$. However, the causality problem for UDM models should rather be addressed in terms of a microscopic theory, see \cite{Babichev:2007dw}. In the particular case of the gCg, for $\alpha > 1$ the authors of \cite{Gorini:2007ta} examine in some detail the causality issues and develop a suitable microscopic theory in which the signal velocity never exceeds the speed of light. 
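As a quick check of Eq.~\eqref{gCgsos} (ours, with illustrative parameter values), the closed form agrees with a finite-difference derivative ${\rm d} p/{\rm d}\rho$ of the equation of state, and it tends to $\alpha$ as $z \to -1$:

```python
A, B, alpha = 1.0, 1.0, 2.0  # illustrative values (alpha > 1 on purpose)

rho = lambda z: (A + B * (1.0 + z)**(3.0 * (1.0 + alpha)))**(1.0 / (1.0 + alpha))
p = lambda r: -A / r**alpha                                   # equation of state
cs2 = lambda z: A * alpha / (A + B * (1.0 + z)**(3.0 * (alpha + 1.0)))

# d p / d rho by central finite differences at z = 0.5
r = rho(0.5)
h = 1.0e-6 * r
fd = (p(r + h) - p(r - h)) / (2.0 * h)

print(fd, cs2(0.5))        # the two agree
print(cs2(-1.0 + 1.0e-6))  # -> 2.0: the z -> -1 limit of cs^2 is alpha
```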
Given this, it is natural to wonder how the gCg model behaves in the ``forbidden'' range $\alpha > 1$. Few papers in the literature carry out such an analysis, for example \cite{Amendola:2005rk, Gorini:2007ta, Fabris:2008hy, Fabris:2008mi, Urakawa:2009jb}. An interesting result is for instance that, in the framework of the ``baryon loophole'' mentioned above, i.e.\ considering the model constituted by the gCg plus baryons, the agreement of the baryon power spectrum with observation increases for $\alpha \gtrsim 3$, see \cite{Gorini:2007ta}. In \cite{Fabris:2008hy} and \cite{Fabris:2008mi} the authors have recently confirmed this behaviour. On the other hand, CMB constraints do not change significantly from the case $\alpha < 1$, in the sense that the parameter is again constrained to be very small, see \cite{Amendola:2005rk} and \cite{Urakawa:2009jb}. Even in those papers which analyse the gCg for $\alpha > 1$, the parameter space is probed only up to a finite maximum value. For example, $\alpha < 6$ in \cite{Amendola:2005rk} or $\alpha < 10$ in \cite{Urakawa:2009jb}. Since the $\alpha > 1$ case has proved to be promising, at least in relation with the LSS analysis, in this paper we investigate the extreme limit of the gCg, i.e.\ the behaviour of the model for very large values of $\alpha$. Our analysis is principally based on the ISW effect because, as we have discussed above and as the authors of \cite{Bertacca:2007cv} have shown, it provides the strongest constraints for a UDM model. To this purpose, in section II we briefly outline the basic equations describing the ISW effect and, inspired by \cite{Bertacca:2007cv}, we present a simple method based on the M\'esz\'aros effect which we employ in Section III to find a qualitative constraint for large values of $\alpha$. Indeed, in section III we analyse the Jeans wavenumber of gCg perturbations and we find that if $\alpha \gtrsim 350$ the ISW effect can be potentially small. 
In Section IV we then confirm the results of Sec. III by directly calculating the ISW effect in the limit $\alpha \to \infty$ and showing that it is smaller than the one for $\alpha = 0$, i.e.\ for the $\Lambda$CDM (the calculation of the ISW effect for the $\Lambda$CDM was performed for the first time by Kofman and Starobinsky in 1985, see \cite{Kofman:1985fp}). In section V we address the behaviour of the background expansion for large values of $\alpha$ and find that the evolution is characterised by an early dust-like phase abruptly interrupted by a de~Sitter (dS) one. We then analyse the 157 nearby SNIa of the Constitution set and find that the transition between the two phases takes place at a redshift $z_{\rm tr} = 0.22$, which is much more recent than the transition at $z_{\rm tr} = 0.79$ to the accelerated phase of expansion in the $\Lambda$CDM model. The last section is devoted to discussion and conclusions. \section{The ISW effect and the constraining method} In this paper we discuss only adiabatic perturbations and we assume a flat spatial geometry. 
The ISW effect contribution to the CMB angular power spectrum is given by the following formula \cite{Sachs:1967er}: \begin{equation}\label{ClISW} \frac{2l + 1}{4\pi}C_{l}^{\rm ISW} = \frac{1}{2\pi^{2}}\int_{0}^{\infty}\frac{{\rm d} k}{k}k^{3}\frac{\left|\Theta_{l}\left(\eta_{0},k\right)\right|^{2}}{2l + 1}\;, \end{equation} where $l$ is the multipole moment, $k$ is the wavenumber, $\eta_{0}$ is the present conformal time and \begin{equation}\label{ThetalISW} \frac{\Theta^{\rm ISW}_{l}\left(\eta_{0},k\right)}{2l + 1} = 2\int_{\eta_{*}}^{\eta_{0}}\Phi'\left(\eta,k\right){\rm j}_{l}\left[k\left(\eta_{0} - \eta\right)\right]{\rm d}\eta\; \end{equation} is the fractional temperature perturbation generated by the ISW effect, where $\eta_{*}$ is the last scattering conformal time, $\Phi\left(\eta,k\right)$ is the Fourier transformed gravitational potential and ${\rm j}_{l}$ is the spherical Bessel function of order $l$. The prime denotes differentiation with respect to the conformal time $\eta$. For a more detailed description of the perturbation equations and of the integrals (\ref{ClISW}-\ref{ThetalISW}) we refer the reader to \cite{Sachs:1967er, Bardeen:1980kt, Mukhanov:1990me, Hu:1995em}. Following \cite{Mukhanov:1990me}, consider the Fourier transformed evolution equation for the gravitational potential: \begin{equation}\label{equ} u'' + k^{2}c_{\rm s}^{2}u - \frac{\theta''}{\theta}u = 0\;, \end{equation} where: \begin{equation} u \equiv \frac{2\Phi}{\sqrt{\rho + p}}\;, \qquad\mbox{ and }\qquad \theta \equiv \sqrt{\frac{\rho}{3(\rho + p)}}(1 + z)\;, \end{equation} and $c_{\rm s}$, $\rho$ and $p$ are, respectively, the adiabatic speed of sound, the energy density and the pressure defined for a generic cosmological model. We have chosen here units such that $8\pi G = c = 1$. 
For simplicity, here and in the following we do not write explicitly the $\left(\eta,k\right)$ dependence of $u$ and $\Phi$ and the $\eta$ dependence of $c_{\rm s}^{2}$, $\theta$, $\rho$, $p$ and $z$. These dependences will therefore always be implied, unless otherwise stated. In \eqreff{equ} define \begin{equation}\label{kJ2def} k^{2}_{\rm J} \equiv \frac{\theta''}{c_{\rm s}^{2}\theta} \end{equation} as the square Jeans wavenumber. In general, when wavelengths smaller than the Jeans scale enter the Hubble horizon, they start to oscillate, affecting the CMB power spectrum and the matter one in ways not compatible with observation. For UDM models in particular, the authors of \cite{Bertacca:2007cv} have found a contribution to the ISW effect proportional to the fourth power of the speed of sound which generates in the CMB angular power spectrum a growth proportional to $l^{3}$ until $l \approx 25$ (the value $l \approx 25$ is related to the equivalence wavenumber $k_{\rm eq}$ which we discuss later). See \cite{Bertacca:2007cv} and also \cite{Hu:1995em} for more detail. Consequently, if we take into account only those scales for which \begin{equation}\label{cond} k^{2} < k_{\rm J}^{2}\;, \end{equation} then the gravitational potential does not oscillate. Of course if $c_{\rm s}^{2} = 0$, as for the $\Lambda$CDM model, the Jeans scale vanishes and condition (\ref{cond}) is satisfied for every $k$ at any time (remember that $k_{\rm J}^{2}$ is time dependent). On the other hand, this is not the only possible scenario. In fact, the cosmological scales which are important for the CMB and structure formation are those which entered the Hubble horizon after the matter-radiation equivalence epoch. Those which entered the horizon earlier had been damped by the dominating presence of radiation. This effect is known as the M\'esz\'aros effect \cite{Hu:1995em, Meszaros:1974tb, Weinberg:2002kg, Coles:1995bd}. If we require that the relevant scales, i.e. 
$k < k_{\rm eq}$, must satisfy condition (\ref{cond}) we obtain: \begin{equation}\label{cond2} k^{2}_{\rm eq} < \frac{\theta''}{c_{\rm s}^{2}\theta}\;. \end{equation} This relation can be used to infer qualitative constraints upon a generic cosmological model. In the next section we will make use of it to find constraints on $\alpha$. As a remark, a scenario for which \eqreff{cond2} holds true without demanding a vanishing speed of sound is the so-called {\it fast transition}, introduced and investigated in great detail in \cite{Piattella:2009kt}. \section{Constraints on the generalised Chaplygin gas} The equivalence wavenumber $k_{\rm eq}$ has the following form: \begin{equation} k_{\rm eq}^{2} = \frac{H_{\rm eq}^{2}}{c^2\left(1 + z_{\rm eq}\right)^{2}}\;, \end{equation} where $H_{\rm eq}$ is the Hubble parameter evaluated at the equivalence redshift $z_{\rm eq}$. From the 5-year WMAP observations\footnote{\url{http://lambda.gsfc.nasa.gov/}}, the best fit values are $z_{\rm eq} = 3176^{+151}_{-150}$ and $k_{\rm eq} = 0.00968 \pm 0.00046$ $h$ Mpc$^{-1}$, where $h$ is the Hubble constant in 100 km s$^{-1}$ Mpc$^{-1}$ units. See also \cite{Komatsu:2008hk, Dunkley:2008ie}. The Hubble parameter is related to the energy content of the Universe by the Friedmann equation, which for the pure gCg model has the following form: \begin{equation}\label{gcgH} \frac{H^{2}}{H_{0}^{2}} = \left[\bar{A} + \left(1 - \bar{A}\right)\left(1 + z\right)^{3(\alpha + 1)}\right]^{\frac{1}{\alpha + 1}}\;, \end{equation} where $\bar{A} \equiv A/(A + B)$ and $H_0$ is the Hubble constant. Notice that $\bar{A} = -w_0$, where $w_0$ is the present-time equation-of-state parameter of the Universe. Let $z_{\rm tr}$ be the redshift at which the accelerated phase of expansion begins. 
From \eqreff{gcgH}, we calculate the following relation between $\bar{A}$ and $z_{\rm tr}$: \begin{equation}\label{Abaralpharel} \bar{A} = \frac{\left(1 + z_{\rm tr}\right)^{3\left(\alpha + 1\right)}}{2 + \left(1 + z_{\rm tr}\right)^{3\left(\alpha + 1\right)}}\;. \end{equation} From now on we make use of $\left(z_{\rm tr},\alpha\right)$ as independent parameters. Plugging Eqs. (\ref{gcgeos}), (\ref{gcgrhoevo}), (\ref{gCgsos}), (\ref{gcgH}) and (\ref{Abaralpharel}) into the definition (\ref{kJ2def}), we plot in Fig.~\ref{Fig1} the Jeans wavenumber as a function of $\alpha$ for fixed $z = 0$ and $z_{\rm tr} = 0.79$ and the equivalence wavenumber, computed from \eqreff{gcgH}, as a function of $\alpha$ for a fixed $z_{\rm eq} = 3176$. The value we have chosen for the transition redshift $z_{\rm tr}$ is the WMAP5 best fit, see \cite{Komatsu:2008hk, Dunkley:2008ie}. \begin{figure} \begin{center} \includegraphics[width=0.7\columnwidth]{Figures/fig.eps} \caption{$k_{\rm J}$ (black curve) and $k_{\rm eq}$ (red ``quasi-horizontal'' line) as functions of $\alpha$. $k_{\rm J}$ is evaluated at $z = 0$ and $z_{\rm tr} = 0.79$, while $k_{\rm eq}$ is evaluated at $z_{\rm eq} = 3176$. The wavenumbers are in units $h$ Mpc$^{-1}$.} \label{Fig1} \end{center} \end{figure} The most intriguing feature of Fig.~\ref{Fig1} is that the Jeans wavenumber has a minimum value for $\alpha \approx 1$. For sufficiently small or sufficiently large values of $\alpha$ it grows and equals $k_{\rm eq}$. From our numerical computation we have inferred the following constraints: $\alpha \lesssim 10^{-3}$ and $\alpha \gtrsim 250$. It is also possible to obtain these results analytically, by means of approximations. 
Indeed, when $\alpha \ll 1$ we expand in Taylor series $k_{\rm eq}^{2}$ and $\alpha k_{\rm J}^{2}$ and find: \begin{eqnarray} k^{2}_{\rm eq} &=& \frac{\left(1 + z_{\rm tr}\right)^{3} + 2\left(1 + z_{\rm eq}\right)^{3}}{\left(1 + z_{\rm eq}\right)^{2}\left[2 + \left(1 + z_{\rm tr}\right)^{3}\right]} + O(\alpha)\;,\\ \nonumber\\ \alpha k_{\rm J}^{2} &=& \frac{3\left[4 + \left(1 + z_{\rm tr}\right)^{3}\right]}{4\left(1 + z_{\rm tr}\right)^{3}} + O(\alpha)\;. \end{eqnarray} To leading order in $\alpha$, equating the above expressions gives the upper bound for small values of $\alpha$. For $z_{\rm eq} = 3176$ and $z_{\rm tr} = 0.79$: $\alpha \lesssim 10^{-3}$. Now expand $k_{\rm eq}^{2}$ in Taylor series for $\alpha \gg 1$: \begin{equation}\label{keqalphainf} k^{2}_{\rm eq} = \frac{1 + z_{\rm eq}}{\left(1 + z_{\rm tr}\right)^{3}} + O\left(\frac{1}{\alpha}\right)\;. \end{equation} The corresponding expansion for $k_{\rm J}^{2}$ is less immediate. Making use of the asymptotic forms of the speed of sound and of the Hubble parameter, which we will give in \eqreff{cs2alphainf} and in \eqreff{gcgH2alphainf}, we find: \begin{equation}\label{kJ2alphainf} k_{\rm J}^{2} = \left\{ \begin{array}{cl} x^{3\alpha}\left[\dfrac{6}{\alpha}\dfrac{x^4}{\left(1 + z_{\rm tr}\right)^2} + O\left(\dfrac{1}{\alpha^2}\right)\right] & \mbox{ for } x > 1\\ \\ \dfrac{9\alpha}{4}\dfrac{1}{\left(1 + z\right)^2}\left[1 + O\left(x^{3\alpha}\right)\right] & \mbox{ for } x < 1 \end{array} \right.\;, \end{equation} where we have defined \begin{equation} x \equiv \frac{1 + z}{1 + z_{\rm tr}}\;. \end{equation} From \eqreff{kJ2alphainf}, to leading order in $1/\alpha$ the Jeans wavenumber is an exponential function of $\alpha$ for $x > 1$; for $x < 1$, the expansion can be performed only with respect to $x^{3\alpha}$ and, to leading order, $k_{\rm J}^{2}$ grows linearly with $\alpha$. 
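The leading-order expressions above can be evaluated directly; a small numerical sketch (for $z_{\rm eq} = 3176$ and $z_{\rm tr} = 0.79$, neglecting radiation and baryons as in Fig.~\ref{Fig1}) recovers the two quoted bounds:

```python
ztr, zeq, z = 0.79, 3176.0, 0.0

# small alpha: compare k_eq^2 with alpha*k_J^2 (both O(1) at leading order)
keq2_small = ((1.0 + ztr)**3 + 2.0 * (1.0 + zeq)**3) \
    / ((1.0 + zeq)**2 * (2.0 + (1.0 + ztr)**3))
akJ2 = 3.0 * (4.0 + (1.0 + ztr)**3) / (4.0 * (1.0 + ztr)**3)
alpha_max = akJ2 / keq2_small  # k_J^2 > k_eq^2 requires alpha < alpha_max
print(alpha_max)               # -> ~1.5e-3, i.e. alpha <~ 10^-3

# large alpha: k_eq^2 -> (1+zeq)/(1+ztr)^3 and k_J^2 -> (9*alpha/4)/(1+z)^2
keq2_large = (1.0 + zeq) / (1.0 + ztr)**3
alpha_min = 4.0 * keq2_large * (1.0 + z)**2 / 9.0
print(alpha_min)               # -> ~246, i.e. alpha >~ 250
```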
Taking the latter case, we equate (\ref{kJ2alphainf}) to (\ref{keqalphainf}) (each expression considered to its leading order) and find $\alpha \gtrsim 250$ (for $z = 0$, $z_{\rm eq} = 3176$ and $z_{\rm tr} = 0.79$). Note that we have neglected the CDM, the baryon and the radiation components. Neglecting the CDM component is reasonable, because the gCg model aims at a unified description of Dark Matter and Dark Energy. Adding a CDM component would then spoil its purpose. Moreover, neglecting radiation is also reasonable, because the minimum value of the Jeans wavenumber is at late times ($z \approx 0$), where radiation is subdominant. For what concerns baryons, their presence would have the effect of lowering the average speed of sound, thus increasing the Jeans wavenumber and relaxing the constraints we have found. Nonetheless, at late times the baryon component is also subdominant with respect to the gCg one, so we can reasonably neglect it. On the other hand, in the calculation of $k_{\rm eq}^{2}$ we cannot neglect radiation and baryons, because at $z_{\rm eq} = 3176$ they are important. Therefore, since at early times the gCg and the $\Lambda$CDM model are indistinguishable, we use the WMAP5 result $k_{\rm eq}~=~0.00968$~$h$~Mpc$^{-1}$ and we find that $\alpha \lesssim 10^{-3}$ and $\alpha \gtrsim 350$. As we expected, taking into account the radiation contribution has the effect of increasing the lower bound for large values of $\alpha$. \section{Calculation of the ISW effect for the extreme limit of the generalised Chaplygin gas} In this section we calculate the integrals (\ref{ClISW}) and (\ref{ThetalISW}) in the limit $\alpha \to \infty$. 
To this purpose, in place of \eqreff{equ}, we employ the evolution equation for the gravitational potential $\Phi$, namely \begin{equation}\label{eqPhi} \Phi'' + 3\mathcal{H}\left(1 + c_{\rm s}^{2}\right)\Phi' + \left(2\mathcal{H}' + \mathcal{H}^{2} + 3\mathcal{H}^{2}c_{\rm s}^{2} + k^{2}c_{\rm s}^{2}\right)\Phi = 0\;, \end{equation} where $c_{\rm s}^2$ is the gCg adiabatic speed of sound defined in (\ref{gCgsos}) and $\mathcal{H} = a'/a$ is the Hubble parameter written in the conformal time (see \cite{Mukhanov:1990me} for more detail). In the limit of very large values of $\alpha$, from \eqreff{gCgsos} together with \eqreff{Abaralpharel}, the square speed of sound has the following asymptotic behaviour \begin{equation}\label{cs2alphainf} c_{\rm s}^{2} = \left\{ \begin{array}{cl} \dfrac{\alpha}{2}\left[x^{-3\alpha} + O\left(x^{-6\alpha}\right)\right] & \mbox{ for } x > 1\\ \\ \alpha\left[1 - 2x^{3\alpha} + O\left(x^{6\alpha}\right)\right] & \mbox{ for } x < 1 \end{array} \right.\;, \end{equation} with $c_{\rm s}^{2} = \alpha/3$ for $x = 1$. Friedmann equation (\ref{gcgH}) becomes: \begin{equation}\label{gcgH2alphainf} \frac{H^{2}}{H_0^2} = \left\{ \begin{array}{cl} x^{3}\left[1 + \dfrac{\ln2}{\alpha} + O\left(\dfrac{1}{\alpha^2}\right)\right] & \mbox{ for } x > 1\\ \\ 1 + \dfrac{2x^{3\alpha}}{\alpha} + O\left(\dfrac{x^{6\alpha}}{\alpha^2}\right) & \mbox{ for } x < 1 \end{array} \right.\;, \end{equation} with $\tfrac{H^{2}}{H_0^2} = 1 + \tfrac{\ln3}{\alpha} + O\left(\tfrac{1}{\alpha^2}\right)$ for $x = 1$. To leading order in $1/\alpha$, the solution of \eqreff{gcgH2alphainf} for $x > 1$ as a function of the conformal time has the following form: \begin{equation}\label{gcgH2alphainfsol1} a = \frac{1}{1 + z_{\rm tr}}\left(\frac{\eta}{\eta_{\rm tr}}\right)^2\;, \end{equation} where $\eta_{\rm tr}$ is the conformal time corresponding to the transition redshift $z_{\rm tr}$. 
For $x < 1$, let $\eta_0$ be the present epoch conformal time and normalise the scale factor as $a(\eta_0) = 1$; the corresponding solution of \eqreff{gcgH2alphainf} is: \begin{equation}\label{gcgH2alphainfsol2} a = \frac{1}{1 + \eta_0 - \eta}\;. \end{equation} Matching solutions (\ref{gcgH2alphainfsol1}) and (\ref{gcgH2alphainfsol2}) at $\eta = \eta_{\rm tr}$, we can link the transition and the present epoch conformal time to the transition redshift as follows: \begin{equation} \eta_0 - \eta_{\rm tr} = z_{\rm tr}\;. \end{equation} Note that the relevant contribution to the ISW effect comes only from solution (\ref{gcgH2alphainfsol2}). In fact: $i)$ the background solution (\ref{gcgH2alphainfsol1}) corresponds to a CDM dominated Universe and $ii)$ from (\ref{cs2alphainf}) for $x > 1$ the speed of sound is exponentially vanishing for $\alpha \to \infty$ so that we can reasonably assume that $c_{\rm s}^{2} \approx 0$. Therefore, if we substitute \eqreff{gcgH2alphainfsol1} and $c_{\rm s}^{2} = 0$ into \eqreff{eqPhi} we obtain the same evolution equation for the gravitational potential as the one in a CDM dominated Universe, see \cite{Mukhanov:1990me}. In this instance, neglecting the decaying mode, $\Phi' = 0$ and no ISW effect is produced. We then write \eqreff{eqPhi} combined with \eqreff{gcgH2alphainfsol2}, taking $c_{\rm s}^{2} = \alpha$ from the leading order in $x^{3\alpha}$ of (\ref{cs2alphainf}) for $x < 1$: \begin{equation}\label{eqPhialphainf} \Phi'' + \frac{3\left(1 + \alpha\right)}{1 + \eta_0 - \eta}\Phi' + \left[\frac{3(1 + \alpha)}{(1 + \eta_0 - \eta)^2} + k^{2}\alpha\right]\Phi = 0\;. \end{equation} Defining $y \equiv 1 + \eta_0 - \eta$, we recast \eqreff{eqPhialphainf} in the following form: \begin{equation}\label{eqPhialphainfrecast} \ddot{\Phi} - \frac{3\left(1 + \alpha\right)}{y}\dot{\Phi} + \left[\frac{3\left(1 + \alpha\right)}{y^2} + k^{2}\alpha\right]\Phi = 0\;, \end{equation} where the dot denotes derivation with respect to $y$. 
Equation~(\ref{eqPhialphainfrecast}) can be solved exactly in terms of Bessel functions: \begin{equation}\label{solbessel} \Phi = y^{\frac{3\alpha}{2} + 2}\left[C_1{\rm J}_{\frac{3\alpha}{2} + 1}\left(k\sqrt{\alpha}y\right) + C_2{\rm Y}_{\frac{3\alpha}{2} + 1}\left(k\sqrt{\alpha}y\right)\right]\;, \end{equation} where $C_1$ and $C_2$ are arbitrary integration constants. For large values of the order, the Bessel functions can be asymptotically expanded as follows \cite{AS}: \begin{equation} {\rm J}_{\frac{3\alpha}{2} + 1}\left(k\sqrt{\alpha}y\right) = \left(\frac{{\rm e}k\sqrt{\alpha}y}{3\alpha + 2}\right)^{3\alpha/2 + 1}\frac{1}{\sqrt{3\pi\alpha + 2\pi}}\left[1 - \frac{8}{\alpha} + O\left(\frac{1}{\alpha^2}\right)\right] \end{equation} and \begin{equation} {\rm Y}_{\frac{3\alpha}{2} + 1}\left(k\sqrt{\alpha}y\right) = -\left(\frac{{\rm e}k\sqrt{\alpha}y}{3\alpha + 2}\right)^{-3\alpha/2 - 1}\sqrt{\frac{4}{3\pi\alpha + 2\pi}}\left[1 + \frac{8}{\alpha} + O\left(\frac{1}{\alpha^2}\right)\right]\;. \end{equation} To leading order in $1/\alpha$, we plug the above asymptotic expansions into \eqreff{solbessel} and find: \begin{equation}\label{solbesselalphainf} \Phi = C_1\left(\frac{{\rm e}k}{3\sqrt{\alpha}}\right)^{3\alpha/2}\frac{y^{3\alpha}}{\sqrt{3\pi\alpha}} - 2C_2\left(\frac{{\rm e}k}{3\sqrt{\alpha}}\right)^{-3\alpha/2}\frac{y}{\sqrt{3\pi\alpha}}\;. \end{equation} We assume the following initial conditions on the potential $\Phi$ in $\eta = \eta_{\rm tr}$: $\Phi\left(\eta_{\rm tr},k\right) = \Phi_{\rm tr}(k)$ and $\Phi'\left(\eta_{\rm tr},k\right) = 0$. The reason for this choice is that up to $\eta = \eta_{\rm tr}$ the potential behaves like in a CDM dominated Universe, i.e. it is constant. Solution (\ref{solbesselalphainf}) then becomes: \begin{equation}\label{solbesselalphainf2} \frac{\Phi}{\Phi_{\rm tr}(k)} = \frac{1}{1 - 3\alpha}\left(\frac{y}{1 + z_{\rm tr}}\right)^{3\alpha} - \frac{3\alpha}{1 - 3\alpha}\frac{y}{1 + z_{\rm tr}}\;. 
\end{equation} For the calculation of the integrals (\ref{ClISW}) and (\ref{ThetalISW}) we consider only the contribution proportional to $y$, i.e. \begin{equation}\label{solbesselalphainf2approx} \frac{\Phi}{\Phi_{\rm tr}(k)} \approx \frac{y}{1 + z_{\rm tr}}\;, \end{equation} since it is the dominant one for $\alpha \to \infty$. Moreover, we assume that the primordial power spectrum $\Delta_{\rm R}^2$ is the Harrison-Zel'dovich scale-invariant one and that it is transferred unaltered up to $\eta_{\rm tr}$. For convenience, define \begin{equation} D \equiv \frac{k^3\left|\Phi_{\rm tr}(k)\right|^2}{2\pi^2} = \frac{9}{25}\Delta_{\rm R}^2\;; \end{equation} combining the integrals (\ref{ClISW}) and (\ref{ThetalISW}) with the approximated solution (\ref{solbesselalphainf2approx}) and changing the integration variable from the conformal time to the redshift we write: \begin{equation}\label{ClISWalphainf} \frac{l(l + 1)C_l^{\rm ISW}}{2\pi D} = \frac{8l(l + 1)}{\left(1 + z_{\rm tr}\right)^2}\int_{0}^{\infty}\frac{{\rm d} k}{k}\left[\int_{0}^{z_{\rm tr}}{\rm d} z\;{\rm j}_{l}(kz)\right]^2\;. \end{equation} Taking into account that ${\rm j}_{l}(kz) = \sqrt{\tfrac{\pi}{2kz}}\;{\rm J}_{l + 1/2}(kz)$, we write \eqreff{ClISWalphainf} as follows: \begin{equation}\label{ClISWalphainf2} \frac{l(l + 1)C_l^{\rm ISW}}{2\pi D} = \frac{4\pi l(l + 1)}{\left(1 + z_{\rm tr}\right)^2}\int_{0}^{\infty}\frac{{\rm d} k}{k^2}\int_{0}^{z_{\rm tr}}\frac{{\rm d} u}{\sqrt{u}}\int_{0}^{z_{\rm tr}}\frac{{\rm d} v}{\sqrt{v}}\;{\rm J}_{l + 1/2}(ku){\rm J}_{l + 1/2}(kv)\;. \end{equation} Consider now the following Weber-Schafheitlin-type integral \cite{AS}: \begin{equation}\label{WSformula} \int_{0}^{\infty}{\rm d} t\;\frac{{\rm J}_{l + 1/2}(at){\rm J}_{l + 1/2}(bt)}{t^2} = \frac{1}{4}\frac{b^{l + 1/2}}{a^{l - 1/2}}\frac{\Gamma(l)}{\Gamma(l + 3/2)\Gamma(3/2)}{\rm F}\left(l,-\frac{1}{2};l + \frac{3}{2};\frac{b^2}{a^2}\right)\;, \end{equation} where ${\rm F}$ is the Gauss hypergeometric function. 
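The closed form (\ref{WSformula}) can be checked against a direct numerical quadrature; a minimal Python sketch using SciPy, with illustrative values of $l$, $a$ and $b$ (chosen with $b < a$, as required for the formula to hold) and the upper limit truncated where the integrand, which decays as $t^{-3}$, is negligible:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, gamma, hyp2f1

l, a, b = 2, 1.0, 0.5            # illustrative values, with b < a

# left-hand side of Eq. (WSformula), truncated at t = 200 (integrand ~ t^-3)
lhs, _ = quad(lambda t: jv(l + 0.5, a * t) * jv(l + 0.5, b * t) / t ** 2,
              0.0, 200.0, limit=2000)

# right-hand side: closed form with the Gauss hypergeometric function
rhs = 0.25 * b ** (l + 0.5) / a ** (l - 0.5) \
    * gamma(l) / (gamma(l + 1.5) * gamma(1.5)) \
    * hyp2f1(l, -0.5, l + 1.5, (b / a) ** 2)

# the truncated tail limits the achievable accuracy to roughly 1e-3
assert abs(lhs - rhs) < 1e-2 * abs(rhs)
```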
Note that formula (\ref{WSformula}) holds true only if $b < a$. We perform the $k$ integration in \eqreff{ClISWalphainf2} according to \eqreff{WSformula} and find the following expression: \begin{equation}\label{ClISWalphainf3} \frac{l(l + 1)C_l^{\rm ISW}}{2\pi D} = \frac{4\sqrt{\pi}\;l(l + 1)}{\left(1 + z_{\rm tr}\right)^2}\int_{0}^{z_{\rm tr}}\frac{{\rm d} u}{u^l}\int_{0}^{u}{\rm d} v\;v^l\frac{\Gamma(l)}{\Gamma(l + 3/2)}{\rm F}\left(l,-\frac{1}{2};l + \frac{3}{2};\frac{v^2}{u^2}\right)\;, \end{equation} where we have modified the integration range of $v$ in order to satisfy the condition $v < u$ and thus be allowed to apply \eqreff{WSformula}. Since the integrand of \eqreff{ClISWalphainf2} is symmetric with respect to the line $v = u$, in \eqreff{ClISWalphainf3} we recover the correct value of the integral by multiplying by a factor of 2. Expanding ${\rm F}$ in a hypergeometric series and performing the $u$ and $v$ integrations, we find the following series expansion for the ISW contribution: \begin{equation}\label{ClISWalphainf4} \frac{l(l + 1)C_l^{\rm ISW}}{2\pi D} = -\frac{l(l + 1)z_{\rm tr}^2}{\left(1 + z_{\rm tr}\right)^2}\sum_{n=0}^{\infty}\frac{\Gamma(l + n)\Gamma(n - 1/2)}{(l + 2n + 1)\Gamma(n + 1)\Gamma(l + n + 3/2)}\;, \end{equation} which can be recast in the following more compact form: \begin{equation}\label{ClISWalphainf5} \frac{l(l + 1)C_l^{\rm ISW}}{2\pi D} = \frac{2\sqrt{\pi}\;z_{\rm tr}^2}{\left(1 + z_{\rm tr}\right)^2}\frac{\Gamma(l + 1)}{\Gamma(l + 3/2)}\;{}_{3}{\rm F}_{2}\left(l,-\frac{1}{2},\frac{l + 1}{2};l + \frac{3}{2}, \frac{l + 3}{2}; 1\right)\;, \end{equation} where ${}_{3}{\rm F}_{2}$ is a generalised hypergeometric function \cite{Erdelyi}. 
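The equivalence of the series (\ref{ClISWalphainf4}) and the compact form (\ref{ClISWalphainf5}) can be verified numerically by summing both, dropping the common factor $z_{\rm tr}^2/(1 + z_{\rm tr})^2$. A minimal Python sketch, where each series is accumulated through a term-ratio recurrence to avoid overflowing Gamma functions, and the truncation order is chosen so that the neglected tail (terms decaying as $n^{-4}$) is negligible:

```python
from math import gamma, sqrt, pi

def series_form(l, nmax=2000):
    """l-dependent part of Eq. (ClISWalphainf4), summed by a term-ratio recurrence."""
    t = gamma(l) * gamma(-0.5) / ((l + 1) * gamma(l + 1.5))   # n = 0 term
    s = t
    for n in range(nmax):
        t *= (l + n) * (n - 0.5) * (l + 2 * n + 1) / (
            (n + 1) * (l + n + 1.5) * (l + 2 * n + 3))
        s += t
    return -l * (l + 1) * s

def compact_form(l, nmax=2000):
    """l-dependent part of Eq. (ClISWalphainf5); 3F2 at unit argument by its series."""
    u = 1.0                                                   # n = 0 term of 3F2
    s = u
    for n in range(nmax):
        u *= (l + n) * (n - 0.5) * ((l + 1) / 2 + n) / (
            (l + 1.5 + n) * ((l + 3) / 2 + n) * (n + 1))
        s += u
    return 2.0 * sqrt(pi) * gamma(l + 1) / gamma(l + 1.5) * s

for l in (2, 3, 10):
    assert abs(series_form(l) - compact_form(l)) < 1e-9 * abs(compact_form(l))
```

The two expressions are in fact equal term by term, which the check confirms to floating-point accuracy.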
The asymptotic behaviour of \eqreff{ClISWalphainf5} for large values of $l$ has the following form: \begin{equation}\label{ClISWalphainfasympt} \frac{l(l + 1)C_l^{\rm ISW}}{2\pi D} \sim \frac{2\pi z_{\rm tr}^2}{\left(1 + z_{\rm tr}\right)^2}\frac{1}{l}\;, \end{equation} which is computed in Appendix~\ref{App}. It can also be obtained directly from the calculations of Kofman and Starobinsky (see Eq.~(12) of \cite{Kofman:1985fp}). In Fig. \ref{FigISW} we plot \eqreff{ClISWalphainf5} and the corresponding asymptotic expansion \eqreff{ClISWalphainfasympt} as functions of $l$ for $z_{\rm tr} = 0.22$ and $z_{\rm tr} = 0.79$. The former value is the best fit from the SNIa data analysis performed in the next section. \begin{figure} \begin{center} \includegraphics[width=0.7\columnwidth]{Figures/Iswztr022.eps}\\ \includegraphics[width=0.7\columnwidth]{Figures/ISWztr079.eps} \caption{Plot of the ISW contribution to the angular power spectrum $l(l + 1)C_l/2\pi$ normalised to $D$ (solid lines) and of its asymptotic form for large $l$'s (dashed lines). Upper panel: $z_{\rm tr} = 0.22$. Lower panel: $z_{\rm tr} = 0.79$.} \label{FigISW} \end{center} \end{figure} The black line in the lower panel of Fig. \ref{FigISW} can be compared with the curves drawn in Fig.~5 of \cite{Urakawa:2009jb}, where the ISW effect contribution is computed for the gCg up to $\alpha = 10$. Note that the $\alpha\to\infty$ contribution is smaller than the $\alpha = 0$ one, which corresponds to the $\Lambda$CDM case and was computed for the first time by Kofman and Starobinsky \cite{Kofman:1985fp}. We expected this result for the following reason. When $\alpha \to \infty$ the Jeans wavenumber diverges. Therefore, according to \cite{Bertacca:2007cv}, the contribution to \eqreff{ClISWalphainf} proportional to the fourth power of the speed of sound does not exist and the behaviour of $l(l + 1)C_l^{\rm ISW}/2\pi$ depends principally on the background evolution. The latter is very similar to the $\Lambda$CDM one. 
The relevant difference is that for the gCg $\alpha\to\infty$ model the transition from the CDM-like phase to the dS one is very sharp whereas for $\Lambda$CDM it is much smoother. Since in the former case the CDM-like phase lasts longer, the intensity of the ISW effect is weaker. Finally, note also how in Fig.~5 of \cite{Urakawa:2009jb} the trend of a decreasing ISW effect contribution can already be distinguished for $\alpha = 10$ and large $l$'s. \section{Background evolution for large $\alpha$ and SNIa data analysis} In this section we address more quantitatively the behaviour of the background evolution for large values of $\alpha$. Let us consider the Friedmann equation (\ref{gcgH2alphainf}) to leading order in $\alpha$: \begin{equation}\label{Hparam} \frac{H^{2}}{H_{0}^{2}} \sim \left\{ \begin{array}{cl} \left(\dfrac{1 + z}{1 + z_{\rm tr}}\right)^{3} & \mbox{ for } z > z_{\rm tr}\\ \\ 1 & \mbox{ for } z \leq z_{\rm tr} \end{array} \right.\;. \end{equation} As we pointed out in the previous section, \eqreff{Hparam} mimics the expansion of a pure CDM Universe for $z > z_{\rm tr}$ and the one of a dS Universe for $z \leq z_{\rm tr}$. Note that, within this scenario, the present effective equation-of-state parameter is $w_0 = -1$, in contrast with $w_0 \approx -0.7$ for the $\Lambda$CDM model. We now analyse the background evolution given by \eqreff{Hparam} on the basis of the 157 nearby SNIa of the Constitution set \cite{Hicken:2009dk}. 
The supernova data consist of an array of distance moduli $\mu$ defined as: \begin{equation}\label{mu} \mu = m - M = 5\log\left(\frac{D_{\rm L}}{{\rm Mpc}}\right) + 25\;, \end{equation} where $m$ and $M$ are, respectively, the apparent and the absolute magnitudes and $D_{\rm L}$ is the luminosity distance expressed in Mpc units: \begin{equation}\label{d} D_{\rm L}(z) = c(1 + z)\int_{0}^{z}\frac{{\rm d} z'}{H(z')} = \frac{c(1 + z)}{H_0}\int_{0}^{z}\frac{{\rm d} z'}{E(z')}\;, \end{equation} where $E(z)$ is the Hubble parameter normalised to $H_0$. The integral in \eqreff{d} can be exactly solved for the Hubble parameter given in \eqreff{Hparam}: \begin{equation}\label{ldist} \int_{0}^{z}\frac{{\rm d} z'}{E(z')} = \left\{ \begin{array}{cl} \left(3z_{\rm tr} + 2\right) - 2\left(1 + z\right)^{-1/2}\left(1 + z_{\rm tr}\right)^{3/2} & \mbox{ for } z > z_{\rm tr}\\ \\ z & \mbox{ for } z \leq z_{\rm tr} \end{array} \right.\;. \end{equation} Following \cite{Riess:1998cb}, we assume flat priors for $z_{\rm tr}$ and $h$, and that the distance moduli are normally distributed. The probability density function (PDF) of the parameters has then the following form \cite{Lupton1993}: \begin{equation}\label{pdf} p\left(z_{\rm tr},h|\mu_o\right) = Ce^{-\chi^2\left(h,z_{\rm tr}\right)/2}\;, \end{equation} where $\mu_o$ is the set of the observed distance moduli, \begin{equation}\label{chisquared} \chi^2\left(h,z_{\rm tr}\right) = \sum_{i=1}^n \left[\frac{\mu_{o,i} - 5\log\left(D_{\rm L}/{\rm Mpc}\right) - 25}{\sigma_{\mu_{o,i}}}\right]^2 \end{equation} and the normalisation constant $C$ has the following form: \begin{equation} \frac{1}{C} = \int\;{\rm d} z_{\rm tr}\int\;{\rm d} h\;e^{-\chi^2\left(h,z_{\rm tr}\right)/2}\;, \end{equation} where the integration ranges over the parameters are, in principle, $h \in (-\infty,\infty)$ and $z_{\rm tr} \in (-1,\infty)$. However, we choose the more reasonable ranges $z_{\rm tr} \in (0,2)$ and $h \in (0.5,0.9)$. 
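The closed form (\ref{ldist}) can be checked against a direct numerical integration of $1/E(z)$; a minimal Python sketch, with an illustrative value of $z_{\rm tr}$:

```python
from scipy.integrate import quad

z_tr = 0.22                       # illustrative transition redshift

def E(z):                         # normalised Hubble parameter, Eq. (Hparam)
    return ((1.0 + z) / (1.0 + z_tr)) ** 1.5 if z > z_tr else 1.0

def comoving_integral(z):         # closed form, Eq. (ldist)
    if z <= z_tr:
        return z
    return (3.0 * z_tr + 2.0) - 2.0 * (1.0 + z) ** -0.5 * (1.0 + z_tr) ** 1.5

for z in (0.1, 0.22, 0.5, 1.0, 1.5):
    # tell quad about the kink in E(z) at the transition
    num, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z,
                  points=[z_tr] if z > z_tr else None)
    assert abs(num - comoving_integral(z)) < 1e-6
```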
In \eqreff{chisquared} $\sigma_{\mu_{o,i}}$ are the estimated errors in the individual distance moduli, including uncertainties in galaxy redshifts and also taking into account the dispersion in supernova redshifts due to peculiar velocities. After marginalization over $h$, i.e.\ integrating \eqreff{pdf} over $h \in (0.5,0.9)$, we show in Fig. \ref{Fig2} the PDF for the parameter $z_{\rm tr}$. \begin{figure} \begin{center} \includegraphics[width=0.7\columnwidth]{Figures/pdfplot.eps} \caption{Plot of the PDF {\it vs} the parameter $z_{\rm tr}$ after marginalization over $h$.} \label{Fig2} \end{center} \end{figure} The most probable value is $z_{\rm tr} = 0.222$. At the 68\% confidence level $z_{\rm tr} \in \left(0.198,0.246\right)$, at the 95\% confidence level $z_{\rm tr} \in \left(0.174,0.270\right)$ and at the 99\% confidence level $z_{\rm tr} \in \left(0.154,0.290\right)$. \section{Summary and Conclusions} In the present paper we have investigated the production of the ISW effect within the generalised Chaplygin gas cosmological model. Thanks to an argument based on the M\'esz\'aros effect, it is possible to find the new constraint $\alpha \gtrsim 350$. For this range of values, the Jeans wavenumber is sufficiently large so that the resulting ISW effect is not strong. Indeed, through a direct calculation, we have confirmed the above qualitative constraint: in the limit $\alpha \to \infty$ the ISW effect contribution to the CMB angular power spectrum is very similar to the one computed for $\alpha = 0$, i.e. for the $\Lambda$CDM model. We have then addressed the background evolution of the Universe for $\alpha \to \infty$ and we have found that the model behaves like CDM at early times and then abruptly passes to a dS phase. Taking advantage of the SNIa Constitution set analysis, we have placed the transition at a redshift $z_{\rm tr} = 0.22$. 
In conclusion, it seems that the gCg model has some chance of being viable not only for very small values of $\alpha$ but also for very large ones (we have here limited our discussion to the ISW effect only). However, it must be pointed out that in both cases a degeneracy problem appears: $i)$ for $\alpha \to 0$, it is well known that the gCg model degenerates into $\Lambda$CDM; $ii)$ for $\alpha \to \infty$ it degenerates into a ``step-transition'' CDM-dS model. Note that in the second case, the degeneracy is not complete. In fact, the ``original'' CDM-dS model has a vanishing speed of sound. In the corresponding limit of the gCg, instead, the speed of sound diverges, so the scenario is completely different. A more complete analysis of this picture would therefore be interesting and could perhaps be performed in the framework of the {\it Cuscuton} model introduced in \cite{Afshordi:2006ad, Afshordi:2007yx}. \acknowledgments{I wish to thank V. Gorini, A. Yu. Kamenshchik, T. Kobayashi, U. Moschella, A.A. Starobinsky and Y. Urakawa for useful comments and suggestions and the Institute of Cosmology and Gravitation (ICG), Portsmouth (UK), for the kind hospitality during the final part of this project. I am indebted to D. Bertacca, S. L. Cacciatori and J. Fabris for invaluable discussions.} \newpage
\section{Introduction} Frustration has become considerably more relevant in modern condensed-matter physics because nontrivial and fascinating phenomena often occur owing to the quantum nature of frustrated systems. A typical frustrated magnet is the triangular-lattice antiferromagnet. This system includes regular triangles composed of bonds of antiferromagnetic interaction between nearest-neighbor spins. A few decades ago, the quantum Heisenberg antiferromagnet on the triangular lattice became one of the central issues as a candidate system for the spin-liquid state\cite{Anderson_tri}; extensive studies have long been carried out\cite{Huse_Elser, Jolicour_LGuillou,Singh_Huse,Bernu1992,Bernu1994,Leung_Runge, Richter_lecture2004,Sakai_HN_PRBR,DYamamoto2013}. Further, good experimental realizations were reported\cite{Shirata_S1tri_2011,Shirata_Shalf_PRL2012}. Distortion effects along one direction\cite{Weihong1999,Yunoki2006, Starykh2007,Heidarian2009,Reuther2011,Weichselbaum2011,Ghamari2011, Harada2011,s1tri_LRO} and randomness effects\cite{KWatanabe_JPSJ2015,Shimokawa_JPSJ2015} have also been investigated. It is now widely believed that the ground state of the triangular-lattice antiferromagnet reveals a spin-ordered state of the so-called 120-degree structure without a magnetic field. If a magnetic field is applied to this system, it is well known that a magnetization plateau appears at one-third of the saturation magnetization in the zero-temperature magnetization curve, although the corresponding classical case does not show this plateau. The spins at the plateau are considered to be collinear, and the state is called the up-up-down state. In the magnetization curve, this system shows a Y-shaped spin state between the 120-degree-structured state under no field and the up-up-down state in the magnetization plateau under a field. 
On the other hand, a similar up-up-down state is also realized in the spin model on the dice lattice as its ground state without a magnetic field\cite{Dice_Jagannathan}. The dice lattice is a bipartite one; frustration disappears and the Lieb-Mattis (LM) theorem holds\cite{Lieb_Mattis}. Therefore, the up-up-down state of this model is the ferrimagnetic one based on the LM theorem. It is worth emphasizing that this state originates only from the lattice structure even though a field is not applied. Note additionally that the dice lattice is obtained by the removal of parts of the interaction bonds in the triangular lattice. Under these circumstances, we are faced with a question: what is the spin state in the Heisenberg antiferromagnet when one continuously controls the interaction bonds between triangular and dice lattices? Note here that this control of the bonds corresponds to the $\sqrt{3}\times\sqrt{3}$ distortion in the triangular lattice. The purpose of this letter is to clarify the behavior of the change in the spin state in the $\sqrt{3}\times\sqrt{3}$-distorted triangular lattice. The present numerical-diagonalization results provide us with a new route to change a spin state from the 120-degree-structured state to the up-up-down spin state without applying a magnetic field. The Hamiltonian studied in this letter is given by \begin{eqnarray} {\cal H} &=& \sum_{i \in \mbox{B},j \in \mbox{B}^{\prime}} J_{1} \mbox{\boldmath $S$}_{i}\cdot\mbox{\boldmath $S$}_{j} \nonumber \\ & & +\sum_{i \in \mbox{A},j \in \mbox{B}} J_{2} \mbox{\boldmath $S$}_{i}\cdot\mbox{\boldmath $S$}_{j} +\sum_{i \in \mbox{A},j \in \mbox{B}^{\prime}} J_{2} \mbox{\boldmath $S$}_{i}\cdot\mbox{\boldmath $S$}_{j} . \label{Hamiltonian} \end{eqnarray} Here, $\mbox{\boldmath $S$}_{i}$ denotes the $S=1/2$ spin operator at site $i$. In this study, we consider the case of isotropic interaction in spin space. 
The sites $i$ are located at the vertices of the lattice illustrated in Fig.~\ref{fig1}. The number of spin sites is denoted by $N_{\rm s}$. The vertices are divided into three sublattices A, B, and B$^{\prime}$; each site $i$ in the A sublattice is linked by six interaction bonds $J_{2}$ denoted by thick lines; each site $i$ in the B or B$^{\prime}$ sublattice is linked by three interaction bonds $J_{2}$ and three interaction bonds $J_{1}$, denoted by thin lines. We denote the ratio $J_{2}/J_{1}$ by $r$. We consider that all interactions are antiferromagnetic, namely, $J_{1} > 0$ and $J_{2} > 0$. Energies are measured in units of $J_{1}$; hereafter, we set $J_{1}=1$ and examine the case of $J_{2} \ge J_{1}$. Note that for $J_{1}=J_{2}$, namely, $r=1$, the present lattice is identical to the triangular lattice, where the ground state is well known as a nonmagnetic state. For $J_{1}\rightarrow 0$, namely, $r\rightarrow\infty$, on the other hand, the network of the vertices becomes the dice lattice. The finite-size clusters that we treat in the present study are depicted in Fig.~\ref{fig1}(c)-(f). We examine the cases of $N_{\rm s}=9$, 12, 21, 27, and 36 under the periodic boundary condition and the case of $N_{\rm s}= 37$ under the open boundary condition. In the former cases, $N_{\rm s}/3$ is an integer; therefore, the number of spin sites is the same in each sublattice. The clusters in the former cases are rhombic and have an inner angle $\pi/3$; this shape allows us to capture two-dimensionality well. We calculate the lowest energy of ${\cal H}$ in the subspace belonging to $\sum _j S_j^z=M$ by numerical diagonalizations based on the Lanczos algorithm and/or the Householder algorithm. The numerical-diagonalization calculations are free from any approximation; one can therefore obtain reliable information on the system. 
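As an illustration of this sector-resolved procedure (a minimal sketch, not the parallelized production code), the following Python snippet builds the $S=1/2$ Heisenberg Hamiltonian in each fixed-$\sum_j S_j^z$ sector from a bond list and extracts the lowest sector energies; it is applied to a single antiferromagnetic triangle, for which $E = (J/2)[S_{\rm tot}(S_{\rm tot}+1) - 9/4]$ is known exactly:

```python
import numpy as np

def lowest_energy_per_sector(bonds, n_sites, J=1.0):
    """Lowest eigenvalue of an S=1/2 Heisenberg model in each sum(S^z_j) = M sector."""
    sectors = {}
    for state in range(2 ** n_sites):          # bit i = 1 means spin i up
        m = sum(((state >> i) & 1) - 0.5 for i in range(n_sites))
        sectors.setdefault(m, []).append(state)
    energies = {}
    for m, states in sectors.items():
        index = {s: a for a, s in enumerate(states)}
        H = np.zeros((len(states), len(states)))
        for a, s in enumerate(states):
            for i, j in bonds:
                si = ((s >> i) & 1) - 0.5
                sj = ((s >> j) & 1) - 0.5
                H[a, a] += J * si * sj                     # S^z_i S^z_j term
                if si != sj:                               # (S^+_i S^-_j + h.c.)/2 term
                    H[index[s ^ (1 << i) ^ (1 << j)], a] += 0.5 * J
        energies[m] = np.linalg.eigvalsh(H)[0]
    return energies

# single triangle: exact sector minima are E = -3J/4 for |M| = 1/2 (S_tot = 1/2)
# and E = +3J/4 for |M| = 3/2 (S_tot = 3/2)
e = lowest_energy_per_sector([(0, 1), (1, 2), (2, 0)], 3)
assert abs(e[0.5] + 0.75) < 1e-12 and abs(e[1.5] - 0.75) < 1e-12
```

The same bond-list construction, with sparse matrices and Lanczos in place of dense diagonalization, carries over to the large clusters treated here.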
The energy is denoted by $E(N_{\rm s},M)$, where $M$ takes an integer or a half-odd integer up to the saturation value $M_{\rm sat}$ ($=N_{\rm s}/2$). We define $M_{\rm spo}$ as the largest value of $M$ among the lowest-energy states, because we focus our attention on spontaneous magnetization. Note, first, that for cases of odd $N_{\rm s}$, the smallest $M_{\rm spo}$ cannot vanish; the result of $M_{\rm spo}=1/2$ in the ground state indicates that the system is nonmagnetic. We also use the normalized magnetization $m=M_{\rm spo}/M_{\rm sat}$. Some of the Lanczos diagonalizations were carried out using an MPI-parallelized code, which was originally developed in the study of Haldane gaps\cite{HN_Terai}. The usefulness of our program was confirmed in large-scale parallelized calculations\cite{kgm_gap,s1tri_LRO, HN_TSakai_kgm_1_3,HN_TSakai_kgm_S,HN_YHasegawa_TSakai_dist_shuriken}. \begin{figure}[b] \begin{center} \includegraphics[width=0.34\textwidth]{hnkn_fg1.eps} \end{center} \caption{ Distorted triangular lattice and finite-size clusters. Panel (a) illustrates sites A, B, and B$^{\prime}$ as well as the unit cell of the system. Panel (b) depicts the classical picture. The rhombuses in panel (c) illustrate finite-size clusters for $N_{\rm s}=9$ and 36 under the periodic boundary condition; rhombuses in panel (d) for $N_{\rm s}=12$ and 27; rhombus in panel (e) for $N_{\rm s}=21$. Panel (f) illustrates the $N_{\rm s}=37$ cluster under the open boundary condition. } \label{fig1} \end{figure} Before observing our diagonalization results for the quantum case, let us consider the classical case composed of classical vectors with length $S$. A probable spin state is depicted in Fig.~\ref{fig1}(b). For a given $r$, minimizing the energy determines the angle $\theta$ related to $M_{\rm spo}$. One obtains $M_{\rm spo}/M_{\rm sat}=1/3$ for $r \ge 2$ and $M_{\rm spo}/M_{\rm sat}=(r-1)/3$ for $1 < r \le 2$. The same spin state was discussed in Ref.~\cite{Nishiwaki_jpsj2011}. 
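This classical result can be reproduced numerically. In the sketch below (Python), the energy expression follows from our bond counting for the spin state in Fig.~\ref{fig1}(b), assuming the A spin points down and the B and B$^{\prime}$ spins are tilted by $\theta$ from the $+z$ axis, which gives $E(\theta) = 3\cos2\theta - 6r\cos\theta$ per unit cell in units of $J_1 S^2$; minimizing over $\theta$ recovers the quoted $M_{\rm spo}/M_{\rm sat}$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def classical_m(r):
    """Spontaneous magnetization m = M_spo/M_sat of the classical canted state.

    Assumed parametrization: A spin down, B and B' spins at angle theta from +z;
    energy per unit cell E(theta) = 3 cos(2 theta) - 6 r cos(theta) in units J1 S^2.
    """
    res = minimize_scalar(lambda th: 3.0 * np.cos(2.0 * th) - 6.0 * r * np.cos(th),
                          bounds=(0.0, np.pi), method='bounded')
    theta = res.x
    return (2.0 * np.cos(theta) - 1.0) / 3.0   # (2 S cos(theta) - S) / (3 S)

for r in (1.2, 1.5, 1.8):
    assert abs(classical_m(r) - (r - 1.0) / 3.0) < 1e-4   # m = (r-1)/3 for 1 < r < 2
for r in (2.0, 3.0, 10.0):
    assert abs(classical_m(r) - 1.0 / 3.0) < 1e-4         # up-up-down state, m = 1/3
```

The minimization gives $\cos\theta = r/2$ for $1 < r < 2$ and the collinear up-up-down state ($\theta = 0$) for $r \ge 2$.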
\begin{figure}[b] \begin{center} \includegraphics[width=0.34\textwidth]{hnkn_fg2.eps} \end{center} \caption{(Color) (a) $M$-dependence of the ground-state energy for $N_{\rm s}=36$. (b) $r$-dependence of the spontaneous magnetization for various system sizes. Violet diamonds, yellow reversed triangles, green triangles, blue squares, and red circles denote results for $N_{\rm s}=9$, 12, 21, 27, and 36 under the periodic boundary condition. Light blue closed circles denote results for $N_{\rm s}=37$ under the open boundary condition. } \label{fig2} \end{figure} \begin{figure}[b] \begin{center} \includegraphics[width=0.34\textwidth]{hnkn_fg3.eps} \end{center} \caption{ System-size dependence of the critical ratios for various system sizes. The squares and diamonds denote the results for $r_{\rm c1}$ and $r_{\rm c2}$, respectively. } \label{fig3} \end{figure} Now, we present our results for the quantum case. First, our data for the lowest energy in each subspace of $M$ are shown in Fig.~\ref{fig2}(a), which depicts the case for $N_{\rm s}=36$. This figure reveals whether a spontaneous magnetization occurs, and its magnitude if it occurs. For $r=1.2$, the energy for $M=0$ is lower than the energies for larger $M$, which indicates that the ground state is nonmagnetic. For $r=1.6$, on the other hand, the energies for $M=0$ to 3 are numerically identical, which means that the system shows a spontaneous magnetization, and this magnetization is $M_{\rm spo}=3$. For $r=2$, the energies for $M=0$ to 6 are numerically identical, and the spontaneous magnetization is $M_{\rm spo}=6$. Since the saturation is $M_{\rm sat}=18$ in the $N_{\rm s}=36$ system, $m=1/3$ suggests that the LM ferrimagnetic state is realized. Figure~\ref{fig2}(a) strongly suggests that, owing to the distortion $r > 1$, the present system shows a gradual growth of the spontaneous magnetization between the nonmagnetic state and the LM ferrimagnetic state. 
These intermediate states may be interpreted as a collapse of ferrimagnetism occurring in the dice-lattice antiferromagnet. Note that we have so far investigated a collapse of ferrimagnetism occurring in the Lieb-lattice antiferromagnet under various distortions. The distorted kagome-lattice antiferromagnet shows a similar intermediate state\cite{collapse_ferri2d,Shimokawa_JPSJ}, to which we will return later. For various other distortions, such intermediate states have not been detected so far\cite{shuriken_lett, HN_kgm_dist,HNakano_Cairo_lt,Isoda_Cairo_full,HN_TSakai_JJAP_RC}. Next, we examine the intermediate state in detail. Our results are depicted in Fig.~\ref{fig2}(b). For $N_{\rm s}=9$, the nonmagnetic state of $M_{\rm spo}=1/2$ and the LM ferrimagnetic state of $M_{\rm spo}=3/2=(1/3)M_{\rm sat}$ are adjacent to each other without an intermediate $M_{\rm spo}$; however, this behavior comes from the smallness of $N_{\rm s}$. For a larger $N_{\rm s}$, there appear intermediate-$M_{\rm spo}$ states between the nonmagnetic state of $M_{\rm spo}=0$ or 1/2 and the LM ferrimagnetic state of $M_{\rm spo}=(1/3)M_{\rm sat}$. Note here that the states of all possible $M_{\rm spo}$ are realized between the smallest $M_{\rm spo}$ and $(1/3)M_{\rm sat}$ for all $N_{\rm s} \ge 12$. Another marked behavior is that the range of the ratio $r$ hosting intermediate $M_{\rm spo}$ values gradually widens as $N_{\rm s}$ is increased. In order to clarify this behavior, we plot the $N_{\rm s}$-dependences of the critical ratios in Fig.~\ref{fig3}. Here, we define $r_{\rm c1}$ as the value of $r$ at which the ground state changes from $M_{\rm spo}=0$ or 1/2 to the next $M_{\rm spo}$, and $r_{\rm c2}$ as the value of $r$ at which the ground state changes to $M_{\rm spo}=(1/3)M_{\rm sat}$ from $M_{\rm spo}=(1/3)M_{\rm sat}-1$. Note that for $N_{\rm s}=9$, $r_{\rm c1}=r_{\rm c2}$ as mentioned above. 
One can easily observe that $r_{\rm c2}$ shows a very weak system-size dependence. It is expected that an extrapolated value is $r_{\rm c2} \sim 1.9$. On the other hand, $r_{\rm c1}$ gradually decreases as $N_{\rm s}$ is increased. Although the dependence is not smooth, our numerical data suggest that an extrapolated value of $r_{\rm c1}$ is very close to $r=1$ corresponding to the case of the undistorted triangular lattice. It is difficult to determine $\lim_{N_{\rm s}\rightarrow \infty} r_{\rm c1}$ precisely from the present samples of finite sizes. To reach a reliable conclusion concerning whether this limit is equal to 1 or different from 1, further investigations will be required. Note that even if this limit equals 1, such a conclusion is consistent with the modern understanding of the triangular-lattice antiferromagnet, revealing the 120-degree spin structure in the ground state. Let us compare the $r$-dependence of $M_{\rm spo}$ with the classical case, shown in Fig.~\ref{fig2}(b). Our numerical data under the periodic boundary condition agree well with the solid line obtained for the classical spin state illustrated in Fig.~\ref{fig1}(b). This agreement seems to suggest that the intermediate-$M_{\rm spo}$ spin states in the quantum case can be understood based on the classical picture. In the following, let us examine whether this classical picture is valid in the quantum case from the viewpoint of the local magnetization $\langle S_{i}^{z}\rangle$. Here, the symbol $\langle {\cal O} \rangle$ represents the expectation value of the operator ${\cal O}$ with respect to the lowest-energy state within the subspace characterized by a magnetization $M_{\rm spo}$. \begin{figure}[b] \begin{center} \includegraphics[width=0.34\textwidth]{hnkn_fg4.eps} \end{center} \caption{ Site-dependence of the local magnetizations $\langle S_{i}^{z}\rangle$ for the system of $N_{\rm s}=37$. 
Distance $d$ is the distance between site $i$ and the center of the system with the open boundary, measured in units of the distance between two nearest-neighbor sites in the triangular lattice for $r=1$. } \label{fig4} \end{figure} Figure \ref{fig4} depicts the site-dependence of the local magnetizations for the system of $N_{\rm s}=37$ under the open boundary condition. Owing to this boundary condition, spin sites in each sublattice are not equivalent. The sites are divided into groups of equivalent sites characterized by the distance from the center of the cluster. Thus, we present our results in Fig.~\ref{fig4} as a function of the distance. We study the results for the case under the open boundary condition to compare with a similar intermediate-$M_{\rm spo}$ state reported for the distorted kagome-lattice antiferromagnet\cite{collapse_ferri2d,Shimokawa_JPSJ}, in which the local magnetizations show a nontrivial incommensurate modulation suggesting non-Lieb-Mattis ferrimagnetism. Note that the realizations of such incommensurate-modulation states were originally reported in several one-dimensional systems \cite{Ivanov_Richter,Yoshikawa_Miya_JPSJ2005,Hida_JPSJ2007, Hida_JPCM2007,Hida_Takano_PRB2008,Montenegro_Coutinho_PRB2008, Hida_Takano_Suzuki_JPSJ2010,Shimokawa_Nakano_JPSJ2011, Furuya_Giamarchi_PRB2014,Hida_JPSJ2016}. Therefore, in this study, we focus on identifying the relationship between these incommensurate-modulation states and the intermediate states. We confirm that the intermediate-$M_{\rm spo}$ states appear in the case under the open boundary condition as depicted in Fig.~\ref{fig2}(b). Note that $m$ corresponding to the LM ferrimagnetism does not equal 1/3 when $N_{\rm s}/3$ is not an integer. 
For $r=2.2$ in the present system, no significant behavior corresponding to incommensurate modulation is observed in our numerical data away from the boundary of the cluster when one excludes results on the open boundary, although a small boundary effect penetrates into the interior of the cluster. This is consistent with the fact that in this case the LM ferrimagnetic state is realized. For $r=1.8$ in the present system, on the other hand, an intermediate-$M_{\rm spo}$ state appears. Even in such a state, our numerical data away from the boundary of the cluster do not show the behavior of incommensurate modulation. Therefore, our present results do not provide direct evidence that the intermediate-$M_{\rm spo}$ state in the present system shows non-Lieb-Mattis ferrimagnetism. However, future studies under open boundary conditions should be carried out to clarify the relationship between these incommensurate-modulation states and the intermediate states in the present study. \begin{figure}[b] \begin{center} \includegraphics[width=0.34\textwidth]{hnkn_fg5.eps} \end{center} \caption{(Color) $r$-dependence of local magnetizations for $r\gtrsim r_{\rm c1}$. Solid lines represent the local magnetization from the classical picture. Squares denote the numerical results for the local magnetizations of B or B$^{\prime}$ sites; triangles denote those for A sites. Green closed symbols and red open symbols are for $N_{\rm s}=27$ and 36, respectively. } \label{fig5} \end{figure} Next, we examine the $r$-dependence of the local magnetizations under the periodic boundary condition. Our numerical results for $N_{\rm s}=27$ and 36 in the region of $r\gtrsim r_{\rm c1}$ are depicted in Fig.~\ref{fig5}. Note first that under this boundary condition, all spin sites in each sublattice are equivalent. Therefore, the numerical results of $\langle S_{i}^{z} \rangle$ at equivalent sites agree with each other within numerical errors. 
This is why the presentation of the numerical results of $\langle S_{i}^{z} \rangle$ differs between Figs.~\ref{fig4} and \ref{fig5}. A significant feature in Fig.~\ref{fig5} is that the results from the two sizes agree well with each other although our numerical results show step-like behaviors, which originate from a finite-size effect, as in Fig.~\ref{fig2}(b). Owing to this serious finite-size effect, it is quite difficult to extrapolate finite-size $\langle S_{i}^{z} \rangle$ for a given fixed $r$ to the limit of $N_{\rm s}\rightarrow\infty$. Let us consider why we limit the region in which our data are presented in Fig.~\ref{fig5} to $r \gtrsim r_{\rm c1}$. In this region, spontaneous magnetization occurs; therefore, the $z$-axis has a specific role, which means that the examination of the local magnetizations in Fig.~\ref{fig5} contributes considerably to our understanding of the magnetic structure of the intermediate states. If the spontaneously magnetized intermediate states show the same structure as the Y-shaped classical one, A-sublattice spins are supposed to be antiparallel to the direction of the spontaneous magnetization and B/B$^{\prime}$-sublattice spins are supposed to have components that are opposite to the A-sublattice spins. In Fig.~\ref{fig5}, the lines from the Y-shaped classical picture are also illustrated. A marked behavior in the classical picture is that the down spin at an A-sublattice site keeps its local magnetization at $-1/2$. On the other hand, in our numerical data, the local magnetization of an A-sublattice spin for the quantum systems gradually increases as $r$ is decreased from $r=2$ to $r=r_{\rm c1}$. Certainly, we have to be careful when the comparison is carried out between the classical system and the present behaviors of the finite-size quantum systems. 
This causes a mixing of magnetized states of quantum-mechanical origin just at $r=1$, which makes it difficult to detect a difference in the local magnetizations between the classical and quantum cases. However, for $r \gtrsim r_{\rm c1}$, away from $r=1$, such a difficulty does not occur. An important difference of the quantum case from the classical one is that the local magnetization of an A-sublattice spin shows a significant dependence on $r$ in the quantum case. Generally speaking, there are two sources of changes in the local magnetization: one is a deviation of the spin amplitude, and the other is the spin angle measured from the axis of the spontaneous magnetization. In the Y-shaped spin state within the classical picture, an A-sublattice spin is antiparallel to the axis of the spontaneous magnetization; namely, the spin angle vanishes. Recall here the spin amplitude of the 120-degree structure of the undistorted-triangular-lattice antiferromagnet obtained from spin-wave theory\cite{Jolicour_LGuillou}, namely, $\langle S_{z} \rangle = 0.239$. In the present cases for $r\gtrsim r_{\rm c1}$, we are approaching the unfrustrated case of the dice lattice; therefore, it is quite unlikely that the spin amplitude becomes smaller than $\langle S_{z} \rangle = 0.239$. Under these circumstances, we focus our attention on the numerical results of $N_{\rm s}=36$ around $r=1.3$; we have $| \langle S_{i}^{z} \rangle | \sim 0.19$ for the A-sublattice spin. Such a small value of $| \langle S_{i}^{z} \rangle |$ cannot be explained only by a deviated spin amplitude without a change of the spin angle. Therefore, the spin state of the quantum case around $r=1.3$ is considered to be different from the Y-shaped spin state within the classical picture. The present analysis of the spin structure in the intermediate state is only a first step.
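This argument can be made quantitative. Writing $|\langle S_{i}^{z}\rangle| = S_{\rm eff}\cos\theta$, with the tilt angle $\theta$ measured from the axis antiparallel to the spontaneous magnetization, an illustrative estimate (assuming the amplitude reduction is no stronger than in the undistorted case, i.e.\ $S_{\rm eff} \geq 0.239$) gives

```latex
\[
  \cos\theta \;=\; \frac{|\langle S_{i}^{z}\rangle|}{S_{\rm eff}}
  \;\lesssim\; \frac{0.19}{0.239} \;\approx\; 0.795
  \qquad\Longrightarrow\qquad \theta \;\gtrsim\; 37^{\circ},
\]
```

so a nonvanishing tilt angle is required unless the spin amplitude is reduced below the spin-wave value.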
For a deeper understanding of the spin structure, future investigations including a two-point correlation function and the chirality of the intermediate state are necessary. Finally, we would like to comment on the experimental situation. Tanaka and Kakurai reported magnetic phase transitions of RbVBr$_{3}$, which has the structure of a distorted triangular lattice, although the ratio of the interactions is considered to be $r<1$\cite{Tanaka_Kakurai}. Nishiwaki {\it et al}. studied RbFeBr$_{3}$, which also shows $r<1$\cite{Nishiwaki_jpsj2011}. The discovery of a new material with $1< r < 2$ would provide useful experimental information concerning the intermediate state of ferrimagnetism. In summary, we investigated the ground-state properties of the spin-$1/2$ Heisenberg antiferromagnet on the triangular lattice with a distortion by the numerical-diagonalization method. Under the conditions that the undistorted case coincides with the triangular-lattice antiferromagnet in zero magnetic field, and that the same up-up-down spin state is realized both in the distorted case of the present model and in the $m=1/3$-plateau state of the triangular-lattice antiferromagnet under a magnetic field, we find that spontaneous magnetization grows along a new route to the $m=1/3$ up-up-down state due to the distortion of the lattice, which is different from the well-known route in the magnetization process of the triangular-lattice antiferromagnet. We are now examining this new state with intermediate spontaneous magnetization in more detail; the results will be published elsewhere. We wish to thank Professors H.~Sato, K.~Yoshimura, N.~Todoroki, and Miss A.~Shimada for fruitful discussions. We wish to thank Dr.~James Harries for his critical reading of our manuscript. This work was partly supported by JSPS KAKENHI Grant Numbers 16K05418, 16K05419, and 16H01080(JPhysics).
Nonhybrid thread-parallel calculations in numerical diagonalizations were based on TITPACK version 2 coded by H. Nishimori. Some of the computations were performed using facilities of the Department of Simulation Science, National Institute for Fusion Science; Institute for Solid State Physics, The University of Tokyo; and Supercomputing Division, Information Technology Center, The University of Tokyo. This work was partly supported by the Strategic Programs for Innovative Research; the Ministry of Education, Culture, Sports, Science and Technology of Japan; and the Computational Materials Science Initiative, Japan.
\section*{Appendix} \begin{figure} \[ \!\!\!\!\!\!\!\!\!\includegraphics[height=17cm]{treeTranslation} \] \caption{Steps involved in translating $T_3$ to an NFA.\label{fig:treeTranslation}} \end{figure} \begin{figure} \[ \includegraphics[height=8cm]{philodecomposition} \] \caption{Decomposition of three dining philosophers.\label{fig:philodecomp}} \end{figure} In order to prove compositionality we first need to prove a small, technical lemma. \begin{lemma}\label{lem:minimalDecomposition} Suppose that $N:k\to l$ and $M:l\to m$ are nets with boundaries and $(U,V)$ is a non-trivial synchronisation. Then there exists a mutually independent family $\{(U_i,V_i)\}_{i\in I}$ of minimal synchronisations with $U=\bigcup_{i\in I}U_i$ and $V=\bigcup_{i\in I} V_i$. \end{lemma} \begin{proof} We argue by induction on $|U+V|$. If $(U,V)$ is minimal then the singleton family $\{(U,V)\}$ satisfies the requirements. Otherwise there exists a minimal synchronisation $(U',V')\subseteq (U,V)$. Now since there is at most one transition connected to each point on the boundary, we have $\target{U'}\cap \target{(U\backslash U')}=\varnothing$ and, similarly, $\source{V'}\cap\source{(V\backslash V')}=\varnothing$. Since $\target{U}=\source{V}$, we must also have $\target{(U\backslash U')}=\source{(V\backslash V')}$ and thus $(U\backslash U',V\backslash V')$ is a synchronisation. By the inductive hypothesis, there exists a mutually independent family $\{(U_i, V_i)\}_{i\in I}$, and so $\{(U',V')\}\cup \{(U_i, V_i)\}_{i\in I}$ fulfils the requirements. \qed \end{proof} \paragraph{Proof of Theorem~\ref{thm:compositionality}.} \begin{proof} ($\Rightarrow$) If $\marking{N\mathrel{;} M}{X\uplus Y} \dtrans{\alpha}{\beta} \marking{N\mathrel{;} M}{X'\uplus Y'}$ then there exists a mutually independent set of minimal synchronisations $W\subseteq \minsynch{N}{M}$ with $\source{W}=\alpha$ and $\target{W}=\beta$.
Consider $U\Defeq \bigcup_{(X,Y) \in W} X \subseteq T_N$ and $V\Defeq \bigcup_{(X,Y)\in W} Y \subseteq T_M$. Since each $(X,Y)\in W$ is a synchronisation, we have $\target{X}=\source{Y}$ and so $\target{U}=\source{V}$. By definition, in each $(X,Y)\in W$, $X$ and $Y$ are mutually independent in, respectively, $N$ and $M$. Since $W$ is mutually independent, if $(X,Y)\neq (X',Y')\in W$ we have $\preandpost{(X,Y)}\cap \preandpost{(X',Y')}=\varnothing$, so $(\preandpost{X}+\preandpost{Y}) \cap (\preandpost{X'}+\preandpost{Y'}) = \varnothing$ and thus both $\preandpost{X}\cap \preandpost{X'} =\varnothing$ and $\preandpost{Y}\cap \preandpost{Y'}=\varnothing$. It follows that $U$ and $V$ are mutually independent in $N$ and $M$, respectively, and letting $\gamma\Defeq \target{U}(=\source{V})$ we have $\marking{N}{X}\dtrans{\alpha}{\gamma}\marking{N}{X'}$ and $\marking{M}{Y}\dtrans{\gamma}{\beta}\marking{M}{Y'}$ as required. ($\Leftarrow$) If $\marking{N}{X}\dtrans{\alpha}{\gamma} \marking{N}{X'}$ and $\marking{M}{Y}\dtrans{\gamma}{\beta}\marking{M}{Y'}$ for some $\alpha\in\{0,1\}^k$, $\beta\in\{0,1\}^m$, $\gamma\in\{0,1\}^l$ then there exists a mutually independent set $U\subseteq T_N$ with $\source{U}=\alpha$, $\target{U}=\gamma$, and a mutually independent set $V\subseteq T_M$ with $\source{V}=\gamma$, $\target{V}=\beta$. In particular, $(U,V)$ is a synchronisation and so, using the conclusion of Lemma~\ref{lem:minimalDecomposition}, there exists a mutually independent family $\{(U_i,V_i)\}_{i\in I}$ of minimal synchronisations with $\bigcup_i U_i=U$ and $\bigcup_i V_i =V$. This family witnesses the transition $\marking{N\mathrel{;} M}{X\uplus Y}\dtrans{\alpha}{\beta} \marking{N\mathrel{;} M}{X'\uplus Y'}$. \qed \end{proof} \paragraph{Proof of Corollary~\ref{cor:traces}.} \begin{proof} Simple induction on $p$, using the conclusion of Theorem~\ref{thm:compositionality}.
\qed \end{proof} \begin{lemma}\label{lem:decompWeakTrace} Suppose that $N:k\to l$ and $M:l\to m$ are nets with boundaries. If there is a trace \[ \marking{N\mathrel{;} M}{X\uplus Y} (\xrightarrow{\epsilon_{k,m}})^* \marking{N\mathrel{;} M}{X'\uplus Y'}\] then there exists $p\in \rring{N}$, $\gamma_i\in\{0,1\}^l$ for $1\leq i\leq p$ and traces \[ \marking{N}{X} \dtrans{0^k}{\gamma_1} \marking{N}{X_1} \dots \dtrans{0^k}{\gamma_p} \marking{N}{X'} \] \[ \marking{M}{Y} \dtrans{\gamma_1}{0^m}\marking{M}{Y_1} \dots \dtrans{\gamma_p}{0^m}\marking{M}{Y'}. \] \end{lemma} \begin{proof} Induction on the length of the trace, using the conclusion of Theorem~\ref{thm:compositionality}. \qed \end{proof} \paragraph{Proof of Theorem~\ref{thm:weakCompositionality}.} \begin{proof} (i) Suppose that $\marking{N\mathrel{;} M}{X\uplus Y} \dTrans{\alpha}{\beta} \marking{N\mathrel{;} M}{X'\uplus Y'}$ for some $\alpha\in\{0,1\}^k$ and $\beta\in\{0,1\}^m$. Then, by definition, there exist $X''\uplus Y''$, $X'''\uplus Y'''$ with \[ \marking{N\mathrel{;} M}{X\uplus Y} (\epstrans{k}{m})^* \marking{N\mathrel{;} M}{X''\uplus Y''} \dtrans{\alpha}{\beta} \marking{N\mathrel{;} M}{X'''\uplus Y'''} (\epstrans{k}{m})^* \marking{N\mathrel{;} M}{X'\uplus Y'}. \] Now we use the conclusions of Lemma~\ref{lem:decompWeakTrace} and Theorem~\ref{thm:compositionality} to obtain the required traces.
(ii) If $\marking{N}{X}\dTrans{\alpha}{\gamma} \marking{N}{X'}$ and $\marking{M}{Y}\dTrans{\gamma}{\beta}\marking{M}{Y'}$ for some $\alpha\in\{0,1\}^k$, $\beta\in\{0,1\}^m$, $\gamma\in\{0,1\}^l$, then there exist $p_N,q_N,p_M,q_M\in\rring{N}$, $X'',X'''\subseteq P_N$, $Y'',Y'''\subseteq P_M$ and traces \[ \marking{N}{X} (\xrightarrow{\epsilon_{k,l}})^{p_N} \marking{N}{X''} \dtrans{\alpha}{\gamma} \marking{N}{X'''} (\xrightarrow{\epsilon_{k,l}})^{q_N} \marking{N}{X'} \] \[ \marking{M}{Y} (\xrightarrow{\epsilon_{l,m}})^{p_M} \marking{M}{Y''} \dtrans{\gamma}{\beta} \marking{M}{Y'''} (\xrightarrow{\epsilon_{l,m}})^{q_M} \marking{M}{Y'} \] Now, using the fact that each net in any marking can make an $\epsilon$ transition and remain in the same marking (witnessing the firing of the empty set of transitions), we can use Theorem~\ref{thm:compositionality} to obtain: \begin{multline*} \marking{N\mathrel{;} M}{X\uplus Y} (\xrightarrow{\epsilon_{k,m}})^{p_N} \marking{N\mathrel{;} M}{X''\uplus Y} (\xrightarrow{\epsilon_{k,m}})^{p_M} \\ \marking{N\mathrel{;} M}{X''\uplus Y''} \dtrans{\alpha}{\beta} \marking{N\mathrel{;} M}{X'''\uplus Y'''} \\ (\xrightarrow{\epsilon_{k,m}})^{q_N} \marking{N\mathrel{;} M}{X'\uplus Y'''} (\xrightarrow{\epsilon_{k,m}})^{q_M} \marking{N\mathrel{;} M}{X'\uplus Y'} \end{multline*} and thus $\marking{N\mathrel{;} M}{X\uplus Y}\dTrans{\alpha}{\beta}\marking{N\mathrel{;} M}{X'\uplus Y'}$ as required. \qed \end{proof} \paragraph{Proof of Theorem~\ref{thm:correctness}.} \begin{proof} We prove this by structural induction on $t$. The base case, when $t$ is a variable, trivially holds. The interesting inductive case is $t\mathrel{;} t'$. We must show that $\epsmin{ \epsminpr{t}_{(\mathcal{V},\mathcal{F})} \mathrel{;} \epsminpr {t'}_{(\mathcal{V},\mathcal{F})} }$ ($\dagger$) is isomorphic to $\epsmin{ \NFA{ \semanticsOf{t\mathrel{;} t'}_\mathcal{V} } { \mrk{t\mathrel{;} t'}{\mathcal{I}} } { \mrk{t\mathrel{;} t'}{\mathcal{F}} } }$.
Using the definitions of $\semanticsOf{-}_\mathcal{V}$ and $\mrk{-}{}$: \begin{multline}\label{eq:simplification} \epsmin{ \NFA{\semanticsOf{t\mathrel{;} t'}_\mathcal{V}} {\mrk{t\mathrel{;} t'}{\mathcal{I}} } {\mrk{t\mathrel{;} t'}{\mathcal{F}} } }\\ =\epsmin{ \NFA{\semanticsOf{t}_\mathcal{V}\mathrel{;} \semanticsOf{t'}_\mathcal{V}} {\mrk{t}{\mathcal{I}} \uplus \mrk{t'}{\mathcal{I}}} { \mrk{t}{\mathcal{F}} \uplus \mrk{t'}{\mathcal{F}} } } \end{multline} The inductive hypothesis gives us that \begin{equation}\label{eq:ind1} \epsminpr{t}_{(\mathcal{V},\mathcal{F})} \cong \epsmin{ \NFA{\semanticsOf{t}_\mathcal{V}} {\mrk{t}{\mathcal{I}} } {\mrk{t}{\mathcal{F}} } } \end{equation} and \begin{equation}\label{eq:ind2} \epsminpr{t'}_{(\mathcal{V},\mathcal{F})} \cong \epsmin{ \NFA{\semanticsOf{t'}_\mathcal{V}} {\mrk{t'}{\mathcal{I}} } {\mrk{t'}{\mathcal{F}} } } \end{equation} Substituting~ \eqref{eq:ind1} and~\eqref{eq:ind2} in ($\dagger$), and using~\eqref{eq:simplification}, our task reduces to showing that: \begin{multline} \epsmin{ \epsmin{ \NFA{\semanticsOf{t}_\mathcal{V}} {\mrk{t}{\mathcal{I}} } {\mrk{t}{\mathcal{F}} } } \mathrel{;} \epsmin{ \NFA{\semanticsOf{t'}_\mathcal{V}} {\mrk{t'}{\mathcal{I}} } {\mrk{t'}{\mathcal{F}} } } } \\ \cong \epsmin{ \NFA{\semanticsOf{t}_\mathcal{V}\mathrel{;} \semanticsOf{t'}_\mathcal{V}} {\mrk{t}{\mathcal{I}} \uplus \mrk{t'}{\mathcal{I}}} {\mrk{t}{\mathcal{F}} \uplus \mrk{t'}{\mathcal{F}} } } \end{multline} To do this, it is sufficient to show that \begin{equation} \label{eq:withepsmin} \epsclose{ \epsmin{ \NFA{\semanticsOf{t}_\mathcal{V}} {\mrk{t}{\mathcal{I}} } {\mrk{t}{\mathcal{F}} } } \mathrel{;} \epsmin{ \NFA{\semanticsOf{t'}_\mathcal{V}} {\mrk{t'}{\mathcal{I}} } {\mrk{t'}{\mathcal{F}} } } } \end{equation} and \begin{equation}\label{eq:otherone} \epsclose{ \NFA{ \semanticsOf{t}_\mathcal{V}\mathrel{;} \semanticsOf{t'}_\mathcal{V}} {\mrk{t}{\mathcal{I}} \uplus \mrk{t'}{\mathcal{I}} } {\mrk{t}{\mathcal{F}} \uplus \mrk{t'}{\mathcal{F}} 
} } \end{equation} recognise the same language, where $\epsclose{-}$ means $\epsilon$-closure. But~\eqref{eq:withepsmin} recognises the same language as \begin{equation}\label{eq:withoutepsmin} \epsclose{ \NFA{\semanticsOf{t}_\mathcal{V}} {\mrk{t}{\mathcal{I}} } {\mrk{t}{\mathcal{F}} } \mathrel{;} \NFA{\semanticsOf{t'}_\mathcal{V}} {\mrk{t'}{\mathcal{I}} } {\mrk{t'}{\mathcal{F}} } } \end{equation} and now the translation between paths in~\eqref{eq:otherone} and~\eqref{eq:withoutepsmin} follows directly from the conclusion of Theorem~\ref{thm:weakCompositionality}. \qed \end{proof} \begin{figure} \[ \includegraphics[height=10cm]{phrow3min} \] \caption{Fixed point reached at minimal DFA for $PhRow_3$, error state not drawn.\label{fig:philofixedpoint}} \end{figure} \section{Conclusions and future work} \label{sec:discussion} We have introduced a new technique for reachability in bounded Petri nets, based on \textit{(i)} structural decomposition using a recently developed compositional algebra and \textit{(ii)} avoiding state explosion by focusing only on interactions between component nets, forgetting internal state. Our technique depends on finding efficient decompositions and works best when the computation reaches fixpoints w.r.t. interactions on boundaries in composed systems, as illustrated in the examples that we have highlighted. We have proved that the technique is correct, implemented it and performed a number of experiments. Finally, we have developed and implemented an algorithm for automatic decomposition of nets that performs adequately on a number of examples. In future work we plan to improve our decomposition algorithm and characterise the class of nets to which our approach is suited. Additionally, by using the full algebra~\cite{Soboci'nski2010,Bruni2012} of nets, in particular, the possibility of connecting several transitions to the same boundary port, we hope to alleviate some of the problems identified in \S\ref{sec:implementation}. 
We also plan to generalise our approach to other models: for example, by examining symbolic representations of the algebras of P/T nets in~\cite{Bruni2011,Bruni2012} we hope to extend our technique to coverability. \section{Implementation and experimental results} \label{sec:implementation} Our implementation has been written in Haskell, and is available for download\footnote{\url{http://users.ecs.soton.ac.uk/os1v07/ICALP13}}. The high level view of our algorithm is: \begin{enumerate} \item As input, take an ordinary marked net $N$ (considered as a net with boundaries $N:0\to 0$) and a target marking, given place-wise, to be checked for reachability. Concretely, each place is labelled with `Yes' (token must be present), `No' (token must be absent), or `Don't care'. \item Using an automatic decomposition procedure (described in \S\ref{sec:decomposition}), we decompose the net, obtaining a wiring decomposition (as introduced in \S\ref{sec:correctness}) enhanced with additional information to enable memoisation. \item Taking advantage of memoisation---to eliminate duplicate computations---traverse the wiring decomposition tree to compute $\epsminpr{-}$: \begin{enumerate} \item At leaves, we have (typically, small) nets with boundaries, and the local desired marking. We use the procedure described in \S\ref{sec:netstonfas} to generate the NFA that corresponds to the net and apply $\epsilon$-closure and minimisation, described in \S\ref{sec:minimisation}. \item At a composition node, we generate the NFAs corresponding to each sub-tree, and compose them using the variant of product-construction discussed in \S\ref{sec:intro}, finally $\epsilon$-closing and minimising the resulting NFA. \item At a tensor node, we generate the NFAs corresponding to each sub-tree, combine them using the standard product construction on NFAs, and perform minimisation.
\end{enumerate} \end{enumerate} The experimental results given in Figs.\ \ref{fig:bnandtnexperiments} and \ref{fig:philotimes} are given for pre-constructed decompositions, that is, only step 3 of the algorithm is performed. The results in \figref{fig:tndecomposetimes} were obtained using the implementation of the full algorithm. \subsection{Decomposer} \label{sec:decomposition} Our net decomposition algorithm attempts to find decompositions via two simple approaches: first we look for a net-transition that, when removed, results in two disconnected nets. If many such transitions exist then we take the one that results in the most balanced (in number of places) decomposition. An example is the balanced decomposition in \figref{fig:decompositions}. If such a transition cannot be found, we look for a place that, once removed, results in two disconnected nets. This results in a `$\mathrel{;}$' node (that results from removing the place) followed by a `$\otimes$' node (that composes the two disconnected nets). Again, if many such places exist, we look for one that results in the most balanced decomposition. An example of this decomposition strategy is the decomposition in \figref{fig:treenet}. Both searches are quadratic in the size of the net. If neither a suitable transition nor place is found, we remove a place that results in the smallest boundary after decomposition. The time taken to decompose the net $T_n$ is given in \figref{fig:tndecomposetimes}; in this example the time to decompose the net dominates. Note that, given a net, a decomposition must be computed (or given as input) only once, whence various initial markings and desired markings can be considered. \subsection{NFA $\epsilon$-closure and minimisation} \label{sec:minimisation} Our approach relies on ignoring internal computations to reduce the state space to be explored.
To produce minimal DFAs for an input NFA, we apply epsilon closure and minimisation, as detailed in \S\ref{sec:weakclosure}. We perform epsilon closure through a variant of the subset-construction on NFAs, which constructs the NFA of sets of states reachable through $\epsilon$- or standard transitions, starting from the $\epsilon$-closure of the initial states of the input NFA. To perform minimisation we employ the well-known algorithm of Brzozowski~\cite{Brzozowski1962}. A notable implementation detail is that we use a variant of Reduced Ordered Binary Decision Diagrams (ROBDD, commonly written as BDD) to encode the transition relation of the NFA---the labels of our transitions are binary strings and thus any state $x\in X$ gives rise to a function $\{0,1\}^{k+l}\to \mathcal{P}(X)$. Traditionally, BDDs are used to provide compact representations for functions $\{0,1\}^n\to \{0,1\}$, but we found it a straightforward exercise to generalise from the boolean algebra of the booleans to the boolean algebra of subsets. \subsection{Experimental results and discussion} In addition to the results in \figref{fig:bnandtnexperiments} we considered a standard net encoding of the dining philosophers problem. Given the nets in \figref{fig:philocomponents}, let $PhRow_1 \Defeq (ph \mathrel{;} fk)$, $PhRow_{k+1} \Defeq (ph \mathrel{;} (fk \mathrel{;} PhRow_k))$. Then a table of $n$ dining philosophers can be obtained as: \begin{equation}\label{eq:philoexpression} Ph_n\Defeq d_3\mathrel{;} ((i_3 \otimes PhRow_n) \mathrel{;} e_3) \qquad (\text{see \figref{fig:philodecomp}}). \end{equation} Running times, when checking for deadlock in $Ph_n$, are given in \figref{fig:philotimes}. The slow growth w.r.t.
$n$ illustrates the fact that our technique works well when a fixed point is quickly reached when traversing a wiring decomposition; for example, the right decomposition of $B_n$ in \figref{fig:decompositions} reaches a fixed point after one `$\mathrel{;}$' node in the wiring decomposition. The fixed point for \eqref{eq:philoexpression} is reached when calculating $PhRow_3$: the resulting minimal DFA has 10 states, as shown in \figref{fig:philofixedpoint}. Intuitively, this means that while one can distinguish between 1, 2 and $\geq$3 philosophers via interaction on the boundary, all $PhRow_k$ reduce to the same minimal DFA for $k\geq 3$. Our procedure takes advantage of this: memoisation of compositions means that we minimise only once. \begin{figure} \[ \includegraphics[height=2.5cm]{philosopherAndFork} \] \caption{Component nets of philosopher decomposition.} \label{fig:philocomponents} \end{figure} \begin{figure} \centering \begin{subfigure}[t]{0.45\linewidth} \input{tresSD} \caption{Time to deconstruct $T_n$ (as per \figref{fig:treenet}) and generate the minimal DFA.} \label{fig:tndecomposetimes} \end{subfigure}% \hspace{0.5cm} \begin{subfigure}[t]{0.45\linewidth} \centering \input{pres} \caption{Time to generate minimal DFA for $Ph_n$, defined in~\eqref{eq:philoexpression}.} \label{fig:philotimes} \end{subfigure} \caption{Example construction times for $T_n$ and $Ph_n$.} \end{figure} Many nets are not amenable to efficient decomposition and are unsuitable for our technique. For instance, our implementation performs poorly when input nets are cliques (nets where every place is connected to every other place by a transition) or, in general, ``densely connected'' nets.
One reason why our technique is infeasible for such nets is that two factors influence the size of the NFA generated from a net $N:k\to l$: \textit{(i)} the number of places---if $N$ has $n$ places, this can translate to potentially $2^n$ NFA-states, and \textit{(ii)} the size of the net boundaries, since it implies an alphabet of size up to $2^{(k + l)}$. In fact, even with hand-constructed decompositions, our implementation fails to terminate even for very small cliques due to large boundaries in any decomposition. \section*{Introduction} \label{sec:intro} We introduce a novel technique for checking reachability in 1-bounded Petri nets. Our approach relies on a structural decomposition of nets, using the algebra of \emph{nets with boundaries} developed in~\cite{Soboci'nski2010,Bruni2011,Bruni2012} and the algebra of labelled transition systems (LTS) originally developed in~\cite{Katis1997a}. After explaining the intuitions and some motivating examples, we prove the technique correct, discuss our implementation and report on experimental results. Many asynchronous systems are regular in their structure, in the sense that they can be considered as a suitable composition of several identical, communicating components. In many such systems, the communication between individual components can be characterised using relatively small (w.r.t.\ the size of the global state space) amounts of information, and as a consequence, the reachability of a particular global state can be checked locally. The algebra of nets with boundaries allows us to capture precisely how separate ``component nets'' communicate with each other.
\begin{figure} \[ \includegraphics[height=2.5cm]{bookshelf.pdf} \] \caption{The net $B_4$ and a ``cut'' along its transition $t_2$.\label{fig:bookshelf}} \end{figure} To illustrate the ideas that underlie our approach we introduce the simple, well-known\footnote{For example, see~\cite[Fig.\ 6]{Esparza2002}.} bounded buffer net, $B_n$, illustrated in the left part of \figref{fig:bookshelf}. We wish to check whether the ``opposite'' marking is reachable, that is, the places in the lower row are to be marked and the places in the upper row are to be unmarked. Taking a global view, a simple calculation confirms that the length of the firing sequence necessary to reach the desired marking is quadratic in $n$ (see~\figref{fig:decompositionTimes}). We will, instead, check for reachability locally, component-wise, so imagine that the net is ``split'' into two nets $N_0$ and $N_1$, sharing the transition $t_2$, as in the right part of \figref{fig:bookshelf}. \begin{remark}\label{rmk:policies} Observe (1) that $N_0$ and $N_1$ can proceed independently to reach the desired local marking, only ``synchronising'' on $t_2$ and (2) that the ``synchronisation policy'' is quite simple to describe. Indeed, $N_1$ can fire its local copy of $t_2$ an arbitrary number (including 0) of times during a successful computation; $N_0$ can reach its desired marking after firing its copy of $t_2$ at least twice, after which $t_2$ can be fired an arbitrary additional number of times. These two ``policies'' are clearly compatible, meaning that the entire net can reach its global desired marking. \end{remark} To make the above intuitions precise, we recall the algebra of nets introduced in~\cite{Soboci'nski2010}. We will use a non-standard graphical representation of nets, more suited for illustrating the operations of the algebra: $B_4$ is rendered with the alternative graphical notation in the left-most diagram of \figref{fig:bookshelfalternative}.
Transitions are represented using undirected links and each link can be connected to an arbitrary number of ports. Each place has two ports: one for incoming transitions, illustrated with a triangle pointing into the place, and one for outgoing transitions, illustrated with a triangle pointing out of the place. Thus the pre-set of a transition is the set of places to which it is connected via their outgoing port, and its post-set is the set of places to which it is connected via their incoming port. Transitions can also be connected to \emph{boundary ports}, which serve as an interface between nets with boundaries. The net $B_4$ can be expressed as the composition $\top \mathrel{;} b_1\mathrel{;} b_1\mathrel{;} b_1\mathrel{;} b_1 \mathrel{;} \bot$; the individual components $\top$, $b_1$ and $\bot$ are illustrated in \figref{fig:bookshelfalternative}. The operation `$\mathrel{;}$' that composes two nets along a compatible, common boundary is defined formally in \S\ref{sec:composition}. \begin{figure} \[ \includegraphics[height=2cm]{bookshelfalternative.pdf} \] \caption{Obtaining $B_4$ as a composition of nets $\top$, $b_1$ and $\bot$. \label{fig:bookshelfalternative}} \end{figure} Each component net with boundaries, together with its initial marking and desired local marking, can be translated to a non-deterministic finite automaton (NFA), with states being the reachable markings, and transitions the boundary interactions observed when net transitions fire. The initial state is the initial marking and the final state is the desired marking. We illustrate\footnote{All illustrations of automata were generated with GraphViz~(\url{http://www.graphviz.org}). For space-efficiency, transitions are annotated with sets: $\{x,\,y\}$, representing the existence of two transitions, labelled respectively $x$ and $y$. We use $*$ in the labels as shorthand for any choice of $0$ and $1$.} this translation in \figref{fig:piecestonfas}.
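The translation just described can be sketched in a few lines. The encoding below is illustrative (names such as \texttt{Transition} and \texttt{to\_nfa} are ours, not the paper's Haskell implementation) and, for brevity, fires single net-transitions plus the empty step, whereas the full step semantics fires arbitrary sets of mutually independent transitions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    pre: frozenset    # places consumed
    post: frozenset   # places produced
    left: frozenset   # connected left-boundary ports
    right: frozenset  # connected right-boundary ports

def label(ports, width):
    # Binary string with a 1 at each boundary port the step touches.
    return "".join("1" if i in ports else "0" for i in range(width))

def to_nfa(transitions, k, l, initial):
    """States are the reachable 1-bounded markings; each NFA edge
    (m, alpha, beta, m') records the left/right boundary interaction."""
    states, edges, todo = {initial}, set(), [initial]
    while todo:
        m = todo.pop()
        edges.add((m, "0" * k, "0" * l, m))  # empty step: no interaction
        for t in transitions:
            # enabled: pre-set marked, post-set free (1-safety)
            if t.pre <= m and not (t.post & (m - t.pre)):
                m2 = (m - t.pre) | t.post
                edges.add((m, label(t.left, k), label(t.right, l), m2))
                if m2 not in states:
                    states.add(m2)
                    todo.append(m2)
    return states, edges

# A b_1-like component: one place, a left transition filling it and a
# right transition emptying it (an assumption consistent with the text).
p = frozenset({"p"})
t_left = Transition(frozenset(), p, frozenset({0}), frozenset())
t_right = Transition(p, frozenset(), frozenset(), frozenset({0}))
states, edges = to_nfa([t_left, t_right], 1, 1, initial=p)
```

On this toy component the construction yields two states and four edges: the two empty-step self-loops labelled $0/0$, an edge labelled $0/1$ (the right transition fires) and one labelled $1/0$.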
For example, in the translation of $b_1$, state $0$ corresponds to the initial marking and state $1$ to the desired complementary marking. The labels of transitions are, in general, pairs of binary strings $\alpha$ and $\beta$, written $\alpha / \beta$, representing interaction on the left ($\alpha$) and the right ($\beta$) boundaries. The concept of ``interaction on a boundary'' is important and we will explain it further below. To guarantee compositionality, we must use an underlying \emph{step} firing semantics of nets, i.e.\ a transition in the NFA witnesses the firing of a (possibly empty) set of independent transitions within the component net. Returning to the translation of $b_1$: the $0/0$ labelled NFA-transitions in states $0$ and $1$ witness the possibility of no behaviour (i.e.\ the empty set of net-transitions firing), with the $0/0$ label signifying that no net-transitions connected to either boundary were fired. The NFA-transition $0\dtrans{0}{1}1$ witnesses that the right hand side net-transition has fired and produced the desired marking. The fact that the fired transition is connected to the port on the right boundary is recorded by the $1$ in the transition label. The remaining NFA-transition is symmetric. \begin{figure} \[ \includegraphics[height=2.5cm]{piecestonfas.pdf} \] \caption{Translation to NFAs.\label{fig:piecestonfas}} \end{figure} The principle of compositionality, proved in Theorem~\ref{thm:compositionality}, is illustrated in \figref{fig:compositionality}: given two $b_1$ nets, we can obtain the NFA representing their (composite) behaviour in two ways: 1) compose the two $b_1$ nets to form the net $b_1\mathrel{;} b_1$, and then generate its NFA, or, equivalently, 2) generate the two (identical) NFAs for each $b_1$ and compose them, using a variant\footnote{$(a,b)\dtrans{\alpha}{\beta}(a',b')$ iff $\exists \gamma.\; a\dtrans{\alpha}{\gamma}a' \mathrel{\wedge} b\dtrans{\gamma}{\beta}b'$.} of the product construction.
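The footnoted rule can be sketched directly. Below, each NFA is a set of edges $(a,\alpha,\beta,a')$; the transition table for $b_1$ follows the description above, taking the symmetric transition to be $1\dtrans{1}{0}0$ (an assumption consistent with the text).

```python
def compose(edges_n, edges_m):
    """Synchronised product: (a,b) --alpha/beta--> (a',b') iff there is a
    common boundary word gamma with a --alpha/gamma--> a' in N and
    b --gamma/beta--> b' in M."""
    return {((a, b), alpha, beta, (a2, b2))
            for (a, alpha, gamma, a2) in edges_n
            for (b, gamma2, beta, b2) in edges_m
            if gamma == gamma2}

# NFA of b_1 as described in the text: 0/0 self-loops (empty step),
# 0 --0/1--> 1 (the right net-transition fires) and 1 --1/0--> 0.
b1 = {(0, "0", "0", 0), (1, "0", "0", 1), (0, "0", "1", 1), (1, "1", "0", 0)}
b1b1 = compose(b1, b1)

# The composite desired state (1,1) is reached from (0,0) via
# (0,0) -0/1-> (0,1) -0/0-> (1,0) -0/1-> (1,1); the middle 0/0 step is an
# internal synchronisation on the shared boundary, invisible outside.
for step in [((0, 0), "0", "1", (0, 1)),
             ((0, 1), "0", "0", (1, 0)),
             ((1, 0), "0", "1", (1, 1))]:
    assert step in b1b1
```

Note how the synchronisation of the left copy's right-boundary firing with the right copy's left-boundary firing surfaces as a $0/0$ edge of the composite: exactly the internal moves that the closure described next eliminates.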
Compositionality ensures that the diagram commutes; in other words, the global behaviour of the composition of the two nets is completely determined by the behaviour of the individual nets, when synchronised along their common boundary. \begin{figure} \vspace{-1em} \[ \includegraphics[height=5cm]{compositionality} \] \caption{\vspace{-1em}Compositionality at work.\label{fig:compositionality}} \end{figure} The NFA generated for $b_n=b_1\mathrel{;} \dots\mathrel{;} b_1$ ($b_1$ composed $n$ times) has $2^n$ states, thus directly computing the automaton for $b_n$ is feasible only for small $n$. Fortunately, to generate a correct NFA of the composite net, it is sufficient to capture how each component net must interact on its boundaries in order to reach its local desired marking---its ``synchronisation policy''. To do this, we close the NFA with respect to internal ($\epsilon$-) moves---those transitions labelled solely with $0$s, signifying no interaction at the boundaries---to obtain an automaton with the same states, but with transitions being paths $a(\dtrans{0}{0})^*\dtrans{\alpha}{\beta}(\dtrans{0}{0})^* b$. We then minimise the new NFA, obtaining a deterministic automaton (DFA), with an ``error'' state that is reached whenever an illegal (i.e.\ not in the behaviour of the underlying net) interaction is observed on the boundaries. This DFA minimally represents the entire behaviour (assuming that an observer may only observe traces) of the net, w.r.t.\ interactions on its boundaries. Note that the states of the NFA obtained from a net are 1-1 with the reachable markings of the underlying net; in general, this is not the case after $\epsilon$-closure and minimisation: the states of the minimal DFA merely capture the ``protocol'' the net must follow when interacting with its environment, in order to arrive at the desired marking. Indeed, for $b_n$, the resulting minimal DFA has $n+2$ states.
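The closure with respect to internal moves can be sketched as a small fixpoint computation; identifiers are illustrative, and determinisation and minimisation would follow as separate steps.

```python
def weak_closure(edges, states):
    """Keep an edge s --alpha/beta--> t whenever there is a path
    s (0/0)* a --alpha/beta--> b (0/0)* t, where 0...0/0...0 steps are
    the internal moves: no interaction on either boundary."""
    def internal(alpha, beta):
        return set(alpha) <= {"0"} and set(beta) <= {"0"}
    # reach[s]: states reachable from s by internal moves only
    reach = {s: {s} for s in states}
    changed = True
    while changed:
        changed = False
        for (a, alpha, beta, b) in edges:
            if internal(alpha, beta):
                for s in states:
                    if a in reach[s] and b not in reach[s]:
                        reach[s].add(b)
                        changed = True
    # saturate every edge with internal prefixes and suffixes
    return {(s, alpha, beta, t)
            for (a, alpha, beta, b) in edges
            for s in states if a in reach[s]
            for t in reach[b]}

# A tiny example: an internal 0/0 move followed by an observable 0/1
# move yields the saturated edge a --0/1--> c.
edges = {("a", "0", "0", "b"), ("b", "0", "1", "c")}
closed = weak_closure(edges, {"a", "b", "c"})
assert ("a", "0", "1", "c") in closed
```

The saturated automaton has the same states as the input, as in the construction described above; the state-space reduction comes only afterwards, from minimisation.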
Of course, computing the minimisation of an NFA can be very expensive---in the worst case, triple exponential in the number of places of the original net---our strategy is thus roughly to decompose nets as far as possible (thereby only minimising small NFAs) and take advantage of any regular, repetitive structure in the net, via memoisation. As discussed, compositionality guarantees correctness---the fact that the square in \figref{fig:minimisation}, illustrating the process for $B_4$, commutes is a consequence of Theorems~\ref{thm:weakCompositionality} and~\ref{thm:correctness}. \begin{figure} \[ \includegraphics[height=4.5cm]{minimisation} \] \caption{Minimising $B_4$, compositionally.\label{fig:minimisation}} \end{figure} The applicability of our approach depends on finding ``good'' decompositions of nets. For $B_n\Defeq \top\mathrel{;} b_n\mathrel{;} \bot$, there are many potential decompositions: the optimal\footnote{All experiments were run on an Intel i7-2600 3.40GHz CPU, 16GB of RAM, running 64-bit Ubuntu Linux.} is the 1st decomposition in \figref{fig:decompositions}, which corresponds to the algebra term $\top \mathrel{;} (b_1\mathrel{;} (\dots \mathrel{;} (b_1\mathrel{;} \bot)\dots))$. Indeed, the composition of $b_1$ and $\bot$ minimises to the trivial accepting automaton; \figref{fig:translation} contains illustrative translation steps of the different decompositions of $B_4$. In \textit{(i)} the automaton for $b_1$ is composed with the automaton for $\bot$: after minimisation we again obtain the automaton for $\bot$. Thus the procedure reaches a fixed point after the first step, as illustrated in \textit{(ii)}. This fact formally captures the intuition about $N_1$ given in Remark~\ref{rmk:policies}. For this decomposition, memoisation guarantees that the composition and minimisation is performed only once. In particular, this means that checking reachability for $B_n$, given this decomposition, is \emph{linear} in $n$.
However, other decompositions do not lead to such good performance. In particular, consider the 2nd decomposition of \figref{fig:decompositions}; here, memoisation does not help (we obtain a different NFA composition after each step) and we must perform minimisation after each composition, as illustrated in steps \textit{(iii)} and \textit{(iv)} of \figref{fig:translation}. \begin{figure} \[ \includegraphics[height=3cm]{decompositions} \] \caption{Three decompositions of $B_n$, to which we refer as, respectively, right, left and balanced.\label{fig:decompositions}} \end{figure} Our automated approach to decomposing $B_n$ (discussed in \S\ref{sec:decomposition}) produces the 3rd (balanced) decomposition of \figref{fig:decompositions}. In this particular case we decompose by identifying a transition that connects two components of similar size. This decomposition, while not optimal, allows frequent use of memoisation, reducing the amount of computation. A table of running times for the construction of a minimal DFA for $B_n$, following the three decompositions of \figref{fig:decompositions}, is given in \figref{fig:decompositionTimes}. \begin{figure} \[ \includegraphics[height=9cm]{translation} \] \caption{Translation of the decompositions in \figref{fig:decompositions}. (i),(ii) initial steps using the right decomposition; (iii), (iv) initial steps using the left decomposition; (v) final step using the balanced decomposition of $B_4$. \label{fig:translation}} \end{figure} \begin{figure} \centering \begin{subfigure}[t]{0.45\linewidth} \input bres \caption{Time to construct minimal DFA for $B_n$ with the three decompositions illustrated in \figref{fig:decompositions}.\label{fig:decompositionTimes}} \end{subfigure}% \hspace{0.5cm} \begin{subfigure}[t]{0.45\linewidth} \input tres \caption{Time to construct minimal DFA for $T_n$, using the decomposition described in \figref{fig:treenet}.
\label{fig:treeDecompositionTimes}\\} \end{subfigure} \caption{NFA construction times for $B_n$ and $T_n$.\label{fig:bnandtnexperiments}} \end{figure} We have illustrated how the operation `$\mathrel{;}$' allows decomposition of the net $B_n$ in order to exhibit its regular structure. We will briefly consider a second example that illustrates the use of the second operation of the algebra, `$\otimes$'. Consider the net in \figref{fig:treenet}, where we want to check whether all the places can be marked; N.B.\ this net is not 1-safe, but 1-boundedness means that a transition is blocked if there is a token present in its post-set. Our automated procedure constructs the decomposition illustrated in the right part of \figref{fig:treenet}. In \figref{fig:treeTranslation} we illustrate the steps involved in calculating the minimal DFA for $T_3$, and give a table of experimental results in \figref{fig:treeDecompositionTimes}. \begin{figure} \[ \includegraphics[height=3cm]{treeNets} \] \caption{The net $T_3$, in traditional and alternative graphical notation, and its decomposition. \label{fig:treenet}} \end{figure} \paragraph{Structure of the paper.} In \S\ref{sec:theory} we study the foundations of our technique and prove it correct. In \S\ref{sec:implementation} we discuss our implementation and give additional experimental results. Connections with related work are in \S\ref{sec:related}, and we conclude with directions for future research in \S\ref{sec:discussion}. Due to space constraints, proofs and non-essential figures have been moved to the appendix. \section{Nets with boundaries} \label{sec:theory} In this section we give the theoretical underpinnings of our technique, harnessing the compositionality of the algebra of nets with boundaries in order to prove its correctness. \paragraph{Notational conventions.} For $n\in\rring{N}$ let $\underline{n}=\{0,1,\dots,n-1\}$. We write $2^X$ for the powerset of $X$.
We write $X+Y$ for the set $\{(x,0)\;|\;x\in X\}\cup\{(y,1)\;|\;y\in Y\}$. Given $\mathcal{U}\subseteq 2^X$ and $\mathcal{V}\subseteq 2^Y$ we write $\mathcal{U}\uplus\mathcal{V}=\{\,U + V\;|\;U\in\mathcal{U},\,V\in\mathcal{V}\,\}\subseteq 2^{X+Y}$. We identify binary strings $\alpha=\alpha_0\alpha_1\dots\alpha_{k-1}$ of length $k$ with subsets of $\underline{k}$ in the obvious way: $\alpha_i=1$ iff $i\in \alpha$. \begin{definition} A \emph{net with boundaries} $N:k\to l$ is $(P, T, k, l, \pre{-}, \post{-},\source{-},\target{-})$ where: \begin{itemize} \item[-] $P$ is the set of places, $T$ is the set of transitions \item[-] $k,l\in\rring{N}$ are, respectively, the left and the right boundaries \item[-] $\pre{-},\post{-}: T \to 2^P$ give, respectively, the pre- and post-sets of each transition \item[-] $\source{-}:T\to 2^{\underline{k}}$ and $\target{-}:T\to 2^{\underline{l}}$ connect each transition to, resp., the left and the right boundary. \end{itemize} Additionally, we assume\footnote{That is, at most one transition can be connected to any port on the boundary. This assumption allows us to simplify the definition of composition of nets; for the more general case see~\cite{Bruni2012}.} that for any $t\neq t'\in T$, $\source{t}\cap\source{t'}=\varnothing$ and $\target{t}\cap\target{t'}=\varnothing$. Ordinary Petri nets can be considered as nets $N:0\to 0$ with no boundaries.\end{definition} We must use step semantics of nets instead of the more common interleaving semantics to guarantee compositionality; we will illustrate this in Remark~\ref{rmk:step}. Let $\preandpost{t}\Defeq \pre{t}\cup \post{t}$. Transitions $t\neq t'\in T$ are said to be \emph{independent} when $\preandpost{t}\cap\preandpost{t'}=\varnothing$. A set $U\subseteq T$ is said to be \emph{mutually independent} (MI) when for all $u\neq u'\in U$, $u$ and $u'$ are independent.
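Independence is a simple disjointness check, so the mutually independent sets of a small net can be enumerated directly. A minimal Python sketch (the two-transition example net is hypothetical, chosen so that the transitions share a place):

```python
from itertools import combinations

def independent(t1, t2, pre, post):
    """t1 and t2 are independent when (pre[t1] U post[t1]) and
    (pre[t2] U post[t2]) are disjoint."""
    return (pre[t1] | post[t1]).isdisjoint(pre[t2] | post[t2])

def mi_sets(transitions, pre, post):
    """All mutually independent subsets of the given transitions
    (pairwise independence checked on every subset)."""
    return [set(U) for r in range(len(transitions) + 1)
            for U in combinations(transitions, r)
            if all(independent(u, v, pre, post) for u, v in combinations(U, 2))]

# hypothetical net: 'tl' produces into place 'p', 'tr' consumes from 'p'
pre  = {'tl': set(),  'tr': {'p'}}
post = {'tl': {'p'},  'tr': set()}
print(mi_sets(['tl', 'tr'], pre, post))  # -> [set(), {'tl'}, {'tr'}]
```

Since `tl` and `tr` both touch `p`, they can never fire in the same step; only the empty set and the singletons are mutually independent.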
For sets of transitions $U\subseteq T$ we will abuse notation and write $\pre{U}=\bigcup_{u\in U}\pre{u}$, and similarly for $\post{U}$, $\source{U}$ and $\target{U}$. Each net with boundaries $N:k\to l$ determines an LTS whose transitions witness the step semantics of the underlying net, originally described by Katis et al~\cite{Katis1997b}. For the 1-bounded case, the labels are pairs of binary strings of length $k$ and $l$, respectively. The states are markings of $N$, denoted by $\marking{N}{X}$, where $X\subseteq P$. The transition relation is defined: \begin{equation*} \marking{N}{X} \dtrans{\alpha}{\beta} \marking{N}{X'} \Leftrightarrow\exists\text{ MI }U\subseteq T, \pre{U}\subseteq X,\,\post{U}\cap X=\varnothing,\, X'= (X\backslash \pre{U})\cup \post{U},\, \source{U}=\alpha,\, \target{U}=\beta \end{equation*} \subsection{Composition of nets with boundaries} \label{sec:composition} Suppose that $N:k\to l$ and $M:l\to m$ are nets with boundaries. A \emph{synchronisation} is a pair $(U,V)$ where $U\subseteq T_N$ and $V\subseteq T_M$ are MI sets of transitions, with $\target{U}=\source{V}$. Given synchronisations $(U,V)$ and $(U',V')$, we say $(U,V)\subseteq (U',V')$ exactly when $U\subseteq U'$ and $V\subseteq V'$. The \emph{trivial synchronisation} is $(\varnothing,\varnothing)$. A synchronisation $(U,V)$ is said to be \emph{minimal} when it is non-trivial and, for all synchronisations $(U',V')$, if $(U',V')\subsetneq (U,V)$ then $(U',V')$ is trivial. The set of minimal synchronisations of $N$ and $M$ is denoted $\minsynch{N}{M}$. The composed net $N\mathrel{;} M: k\to m$ has: \begin{itemize} \item[-] $P_N + P_M$ as its set of places, \item[-] $\minsynch{N}{M}$ as its set of transitions. Given $(U,V)\in \minsynch{N}{M}$ we let $\pre{(U,V)}\Defeq\pre{U}\uplus\pre{V}$, $\post{(U,V)}\Defeq\post{U}\uplus\post{V}$, $\source{(U,V)}\Defeq\source{U}$ and $\target{(U,V)}\Defeq\target{V}$.
\end{itemize} Examples of compositions of the net $B_n:0\to 0$ are given in Figs.~\ref{fig:bookshelfalternative} and~\ref{fig:decompositions}. Another example is given in \figref{fig:stepComposition}, with the resulting transition arising from the minimal synchronisation $(\{t_1,t_2\},\{t_3\})$. \begin{figure} \[ \includegraphics[height=1.8cm]{stepComposition} \] \caption{Illustration of composition of two nets.\label{fig:stepComposition}} \end{figure} \begin{remark}\label{rmk:step} The example in \figref{fig:stepComposition} illustrates the necessity for step semantics in order for compositionality to hold. Indeed, in the composition $N_0;N_1$ we have the transition $\marking{N_0;N_1}{\{0\}}\dtrans{}{}\marking{N_0;N_1}{\{1\}}$ that witnesses the firing of its transition. This transition decomposes into $\marking{N_0}{\{0\}}\dtrans{}{11}\marking{N_0}{\{1\}}$ and $\marking{N_1}{\varnothing}\dtrans{11}{}\marking{N_1}{\varnothing}$. The first of these requires the \emph{simultaneous} firing of $t_1$ and $t_2$ in $N_0$; thus if we had considered interleaving semantics then compositionality would fail in this example. \end{remark} The next result is a special case of \cite[Theorem~3.6]{Bruni2012}, where a more general algebra of nets is considered. We will rely on this to prove the correctness of our technique in Theorems~\ref{thm:weakCompositionality} and~\ref{thm:correctness}. \begin{theorem}[Compositionality\label{thm:compositionality}] Suppose that $N:k\to l$ and $M:l\to m$ are nets with boundaries. 
The following holds for all $X,X'\subseteq P_N$, $Y,Y'\subseteq P_M$, $\alpha\in\{0,1\}^k$ and $\beta\in\{0,1\}^m$: \[ \marking{N\mathrel{;} M}{X\uplus Y} \dtrans{\alpha}{\beta} \marking{N\mathrel{;} M}{X'\uplus Y'} \quad\Leftrightarrow\quad \exists\gamma\in\{0,1\}^l.\; \marking{N}{X}\dtrans{\alpha}{\gamma}\marking{N}{X'} \ \wedge\ \marking{M}{Y}\dtrans{\gamma}{\beta}\marking{M}{Y'} \] \qed \end{theorem} The conclusion of Theorem~\ref{thm:compositionality} implies that, for instance, bisimilarity is a congruence w.r.t.\ `$\mathrel{;}$'. For the purposes of reachability checking, traces are sufficient. \begin{corollary}\label{cor:traces} There exists a trace $\marking{N\mathrel{;} M}{X\uplus Y}\dtrans{\alpha_1}{\beta_1}\dots\dtrans{\alpha_p}{\beta_p} \marking{N\mathrel{;} M}{X'\uplus Y'}$ iff there exist traces $\marking{N}{X}\dtrans{\alpha_1}{\gamma_1}\dots\dtrans{\alpha_p}{\gamma_p} \marking{N}{X'}$ and $\marking{M}{Y}\dtrans{\gamma_1}{\beta_1}\dots\dtrans{\gamma_p}{\beta_p} \marking{M}{Y'}$. \qed \end{corollary} In particular, to check for reachability in a composed net, it suffices to find computations in the components that agree on their shared boundary. \smallskip The other operation on nets with boundaries is $\otimes$, which can be understood as a parallel composition of nets. Given $N:k\to l$ and $M:m\to n$, $N\otimes M: k+m \to l+n$ has: \begin{itemize} \item $P_N+P_M$ as its set of places, \item $T_N+T_M$ as its set of transitions. $\pre{(t,0)}\Defeq \{\,(p,0) \;|\; p\in \pre{t}\,\}$, $\pre{(t,1)}\Defeq \{\,(p,1) \;|\; p\in\pre{t}\,\}$, and similarly for $\post{(t,0)}$ and $\post{(t,1)}$. Instead $\source{(t,0)}=\source{t}$ while $\source{(t,1)}=\{\,k+i\;|\;i\in \source{t}\,\}$; similarly $\target{(t,0)}=\target{t}$ and $\target{(t,1)}=\{\,l+i\;|\;i\in \target{t}\,\}$. \end{itemize} Compositionality also holds w.r.t.
$\otimes$: $\marking{N\otimes M}{X+Y}\dtrans{\alpha\gamma}{\beta\delta} \marking{N\otimes M}{X'+Y'}$ iff $\marking{N}{X}\dtrans{\alpha}{\beta}\marking{N}{X'}$ and $\marking{M}{Y}\dtrans{\gamma}{\delta}\marking{M}{Y'}$. Due to space constraints we omit the details here; they are straightforward as there is no interaction between the two nets. \subsection{From nets with boundaries to NFAs} \label{sec:netstonfas} By an NFA with boundaries $A:k\to l$ we mean an NFA $A$ with set of labels $\{0,1\}^k\times\{0,1\}^l$, written $\alpha/\beta$, where $\alpha\in\{0,1\}^k$ and $\beta\in\{0,1\}^l$. Given NFA with boundaries $A:k\to l$ and $B:l\to m$, the NFA with boundaries $A\mathrel{;} B:k\to m$ is obtained by a variant of the product construction where $(x,y)\dtrans{\alpha}{\beta}(x',y')$ iff there exists $\gamma\in\{0,1\}^l$ such that $x\dtrans{\alpha}{\gamma} x'$ and $y\dtrans{\gamma}{\beta}y'$. Given NFA with boundaries $A:k\to l$ and $B:m\to n$, the NFA with boundaries $A\otimes B:k+m\to l+n$ is obtained via another variant of the product construction: here $(x,y)\dtrans{\alpha\gamma}{\beta\delta}(x',y')$ iff $x \dtrans{\alpha}{\beta} x'$ and $y\dtrans{\gamma}{\delta} y'$. The algebra of automata with boundaries described above is an instance of Span(Graph)~\cite{Katis1997a}. Given a net with boundaries $N:k\to l$, and non-empty sets $\mathcal{X},\mathcal{Y}\subseteq 2^{P_N}$ of, respectively, \emph{initial} and \emph{final} markings, we can consider its labelled transition system as an NFA, written $\NFA{N}{\mathcal{X}}{\mathcal{Y}}$, that has initial states $\mathcal{X}$ and final states $\mathcal{Y}$. If $N:k\to l$ does not have any places then $\NFA{N}{\{\varnothing\}}{\{\varnothing\}}$ has exactly one state, which is an accept state (see NFA for $\top$, $\bot$ in \figref{fig:piecestonfas}). The following is immediate.
\begin{proposition} Given $N:k\to l$, initial and final markings $\mathcal{X},\,\mathcal{Y}$, a marking in $\mathcal{Y}$ is reachable from a marking in $\mathcal{X}$ iff $L(\NFA{N}{\mathcal{X}}{\mathcal{Y}})\neq\varnothing$. \qed \end{proposition} We also have the following as an immediate consequence of Theorem~\ref{thm:compositionality}: \[ \NFA{N\mathrel{;} M:k\to m}{\mathcal{X}\uplus \mathcal{X'}}{\mathcal{Y}\uplus\mathcal{Y'}} \cong (\NFA{N:k\to l}{\mathcal{X}}{\mathcal{Y}}) \mathrel{;} (\NFA{M:l\to m}{\mathcal{X'}}{\mathcal{Y'}}) \] and in particular the two automata accept the same language. \subsection{Weak closure and minimisation} \label{sec:weakclosure} Hiding internal computations in individual component nets is crucial for the performance of our technique. The procedure is akin to the $\tau$-reflexive-transitive closure of an LTS $L$, which yields an LTS $L'$ on which bisimilarity agrees with weak-bisimilarity on $L$, in the sense of Milner~\cite{Milner1989}. Let $\epsilon_{k,l}=0^k/0^l$. Sometimes we will write simply $\epsilon$ when $k$ and $l$ are clear from the context. Notice that given any net $N:k\to l$, for each marking $X$ there is a transition $\marking{N}{X}\epstrans{k}{l}\marking{N}{X}$ that arises from firing the empty set of net-transitions. In general, transitions $\marking{N}{X}\epstrans{k}{l}\marking{N}{X'}$ witness the firing of ``internal'' net-transitions in $N$, i.e.\ those that are not connected to any boundary port.
The \emph{weak} transition system induced by $N:k\to l$ has transitions: \begin{equation}\label{eq:weakSemantics} \marking{N}{X} \;\dTrans{\alpha}{\beta}\; \marking{N}{X'} \ \Leftrightarrow\ \exists X'', X'''.\; \marking{N}{X} (\epstrans{k}{l})^* \marking{N}{X''},\, \marking{N}{X''} \dtrans{\alpha}{\beta} \marking{N}{X'''},\, \marking{N}{X'''} (\epstrans{k}{l})^* \marking{N}{X'} \end{equation} Note that the above notion of weak transition differs from that considered in~\cite{Bruni2012} but is close to the weak transitions of~\cite{Soboci'nski2009a}. \begin{theorem}[Compositionality w.r.t. weak semantics\label{thm:weakCompositionality}] Suppose that $N:k\to l$ and $M:l\to m$ are nets with boundaries. Then for all $X,X'\subseteq P_N$, $Y,Y'\subseteq P_M$, $\alpha\in\{0,1\}^k$, $\beta\in\{0,1\}^m$: \begin{enumerate}[(i)] \item if $\marking{N\mathrel{;} M}{X + Y} \dTrans{\alpha}{\beta} \marking{N\mathrel{;} M}{X' + Y'}$ then $\exists\,p,q\in\rring{N}$, $\gamma,\gamma_i,\gamma_j'\in \{0,1\}^l$ for $1\leq i \leq p$ and $1\leq j\leq q$ \[ \!\!\!\!\!\!\!\!\!\!\!\!\! \marking{N}{X}\dtrans{\!0^k\!}{\!\gamma_1\!}\dots\dtrans{\!0^k\!}{\!\gamma_p\!} \dtrans{\!\alpha\!}{\!\gamma\!} \dtrans{\!0^k\!}{\!\gamma'_1\!}\dots \dtrans{\!0^k\!}{\!\gamma'_q\!} \marking{N}{X'} \text{ and } \marking{M}{Y}\dtrans{\!\gamma_1\!}{\!0^m\!}\dots\dtrans{\!\gamma_p\!}{\!0^m\!} \dtrans{\!\gamma\!}{\!\beta\!} \dtrans{\!\gamma'_1\!}{\!0^m\!} \dots \dtrans{\!\gamma'_q\!}{\!0^m\!} \marking{M}{Y'}. \] \item if $\marking{N}{X}\dTrans{\alpha}{\gamma} \marking{N}{X'}$ and $\marking{M}{Y}\dTrans{\gamma}{\beta}\marking{M}{Y'}$ for some $\gamma\in\{0,1\}^l$ then $\marking{N\mathrel{;} M}{X + Y}\dTrans{\alpha}{\beta} \marking{N\mathrel{;} M}{X'+ Y'}$. \qed \end{enumerate} \end{theorem} Given an NFA with boundaries $A:k\to l$, let $\epsmin{A}:k\to l$ denote the DFA obtained by $\epsilon_{k,l}$-closure and minimisation. 
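Concretely, after closing under $\epsilon$-moves, determinisation via the subset construction introduces the ``error'' state discussed earlier as the empty set of NFA states; a Python sketch (the $b_1$ transition set is our reading of the earlier description, and DFA minimisation proper, e.g.\ Hopcroft's algorithm, is omitted):

```python
def determinise(init, trans, labels):
    """Subset construction for an NFA with boundaries.  The frozenset()
    state plays the role of the 'error' state: it is reached on any
    interaction not in the behaviour of the underlying net, and loops
    to itself on every label."""
    start = frozenset(init)
    dfa, todo = {}, [start]
    while todo:
        S = todo.pop()
        if S in dfa:
            continue
        dfa[S] = {}
        for l in labels:
            T = frozenset(y for (x, lab, y) in trans if x in S and lab == l)
            dfa[S][l] = T
            todo.append(T)
    return dfa

# b1's NFA as read from the text; 1/1 is an illegal interaction from state 0
b1 = {(0, ((0,), (0,)), 0), (1, ((0,), (0,)), 1),
      (0, ((0,), (1,)), 1), (1, ((1,), (0,)), 0)}
labels = [((a,), (b,)) for a in (0, 1) for b in (0, 1)]
d = determinise({0}, b1, labels)
```

From the initial subset `{0}`, the label `1/1` leads to the empty subset, i.e. the error state, matching the intuition that no step of $b_1$ interacts on both boundaries simultaneously.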
\begin{remark} Recall that any ordinary net $N$ can be considered as a net with boundaries $N:0\to 0$. Now $\epsmin{\NFA{N}{\mathcal{X}}{\mathcal{Y}}}:0\to 0$ is one of two DFAs: the DFA with one accept state (if a marking in $\mathcal{Y}$ is reachable from some marking in $\mathcal{X}$) and the DFA with one non-accept state (if no markings in $\mathcal{Y}$ are reachable from any marking in $\mathcal{X}$). \end{remark} Given an ordinary Petri net $N$, initial markings $\mathcal{X}$ and final markings $\mathcal{Y}$, a simple but extremely inefficient way of checking the reachability of a marking is thus to directly compute $\epsmin{\NFA{N}{\mathcal{X}}{\mathcal{Y}}}$ and check whether the single state in the resulting DFA is an accept state. Our technique for checking reachability is based on computing this DFA using a structural decomposition of $N$, which, when combined with memoisation, can result in fast execution times. \subsection{Correctness} \label{sec:correctness} Here we give a formal account of our technique and prove it correct, using the previous results in this section. A \emph{wiring expression} is a syntactic term formed from the following grammar \[ T \ ::=\ x \ |\ T \mathrel{;} T \ |\ T \otimes T \] where the leaves $x$ are variables. A \emph{variable assignment} $\mathcal{V}$ is a map that takes variables to nets with boundaries. Given a pair $(t,\,\mathcal{V})$ of a wiring expression $t$ and variable assignment $\mathcal{V}$, its semantics $\semanticsOf{t}_{\mathcal{V}}$ is a net with boundaries, defined recursively in the obvious way: $\semanticsOf{x}_\mathcal{V} \Defeq \mathcal{V}(x)$, $\semanticsOf{t_1 \mathrel{;} t_2}_\mathcal{V} \Defeq \semanticsOf{t_1}_\mathcal{V} \mathrel{;} \semanticsOf{t_2}_\mathcal{V}$ and $\semanticsOf{t_1\otimes t_2}_\mathcal{V} \Defeq \semanticsOf{t_1}_\mathcal{V}\otimes\semanticsOf{t_2}_\mathcal{V}$. 
We implicitly assume that variable assignments are compatible with $t$, in the sense that only nets with a common boundary are composed; we leave out the formal details, which are straightforward. Given a net $N:k\to l$, we say that $(t,\,\mathcal{V})$ is a \emph{wiring decomposition} of $N$ if $\semanticsOf{t}_\mathcal{V}\cong N$. Given a wiring decomposition $(t,\,\mathcal{V})$ of $N:k\to l$, together with maps $\mathcal{I}$, $\mathcal{F}$ called, respectively, \emph{initial markings} and \emph{final markings}, that take each variable $x$ to a set of markings of the net $\mathcal{V}(x)$, define $\epsminpr{t}_{(\mathcal{V},\mathcal{I},\mathcal{F})}$ recursively: \begin{align*} \epsminpr{x}_{(\mathcal{V},\mathcal{I},\mathcal{F})} &\Defeq \epsmin{\NFA{\mathcal{V}(x)}{\mathcal{I}(x)}{\mathcal{F}(x)}},\,\\ \epsminpr{t\mathrel{;} t'}_{(\mathcal{V},\mathcal{I},\mathcal{F})} &\Defeq \epsmin{ \epsminpr{t}_{(\mathcal{V},\mathcal{I},\mathcal{F})} \mathrel{;} \epsminpr{t'}_{(\mathcal{V},\mathcal{I},\mathcal{F})}},\,\\ \epsminpr{t \otimes t'}_{(\mathcal{V},\mathcal{I},\mathcal{F})} &\Defeq \epsmin{ \epsminpr{t}_{(\mathcal{V},\mathcal{I},\mathcal{F})} \otimes \epsminpr{t'}_{(\mathcal{V},\mathcal{I},\mathcal{F})} }. \end{align*} The function $\epsminpr{-}$ is the formalisation of our approach, taking a wiring decomposition, together with initial and final markings to a minimal DFA. Sets of markings of the leaf nets given by $\mathcal{I}$ and $\mathcal{F}$ can be combined to form a set of markings $\mrk{t}{\mathcal{I}}$ of $\semanticsOf{t}_{\mathcal{V}}$ in an obvious way: $\mrk{x}{\mathcal{I}} \Defeq \mathcal{I}(x)$, $\mrk{t\mathrel{;} t'}{\mathcal{I}} \Defeq \mrk{t}{\mathcal{I}} \uplus \mrk{t'}{\mathcal{I}}$, $\mrk{t\otimes t'}{\mathcal{I}} \Defeq \mrk{t}{\mathcal{I}} \uplus \mrk{t'}{\mathcal{I}}$ (and similarly for $\mathcal{F}$).
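Operationally, $\epsminpr{-}$ is a straightforward recursion over the wiring expression, and keying the memo table on the already-reduced operands is what makes repetitive decompositions cheap: once the chain of compositions hits a fixed point, every further step is a cache hit. A schematic Python sketch, in which `reduce_nfa`, `compose` and `tensor` are stand-ins for the real $\epsilon$-closure-and-minimisation, `;` and `$\otimes$` operations (the toy instantiation below, mimicking the $b_1\mathrel{;}\bot$ fixed point, is an assumption for illustration only):

```python
# A wiring expression t is a variable name (leaf) or ('seq', t1, t2) /
# ('ten', t1, t2).  The memo table is keyed on the reduced operands.
memo = {}

def step(op, a, b, compose, tensor, reduce_nfa):
    key = (op, a, b)
    if key not in memo:
        memo[key] = reduce_nfa(compose(a, b) if op == 'seq' else tensor(a, b))
    return memo[key]

def eval_expr(t, V, compose, tensor, reduce_nfa):
    if isinstance(t, str):                 # leaf: look up the net, reduce its NFA
        return reduce_nfa(V[t])
    op, t1, t2 = t
    return step(op, eval_expr(t1, V, compose, tensor, reduce_nfa),
                    eval_expr(t2, V, compose, tensor, reduce_nfa),
                compose, tensor, reduce_nfa)

# Toy instantiation: composing onto the already-reduced tail returns the tail,
# so only the innermost composition is actually computed.
calls = []
def compose(a, b): calls.append((a, b)); return b
def tensor(a, b): return (a, b)
def reduce_nfa(a): return a                # identity stub for eps-min

t = ('seq', 'x', ('seq', 'x', ('seq', 'x', 'y')))   # x ; x ; x ; y, right-nested
result = eval_expr(t, {'x': 'A', 'y': 'B'}, compose, tensor, reduce_nfa)
print(result, len(calls))  # prints: B 1 -- the outer two steps were cache hits
```

This mirrors the earlier observation about the right decomposition of $B_n$: after the fixed point is reached, the remaining $n-1$ compositions cost only dictionary lookups, so the whole evaluation is linear in $n$.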
\begin{theorem}[Correctness]\label{thm:correctness} Suppose $(t,\mathcal{V})$ is a wiring decomposition of $N:k\to l$, $\mathcal{I}$ initial markings and $\mathcal{F}$ final markings. Then $\epsminpr{t}_{(\mathcal{V},\mathcal{I},\mathcal{F})} \cong \epsmin{ \NFA{\semanticsOf{t}_\mathcal{V}}{\mrk{t}{\mathcal{I}}}{\mrk{t}{\mathcal{F}} } }$.\qed \end{theorem} An example application of Theorem~\ref{thm:correctness} is the commutativity of the diagram in \figref{fig:minimisation}. Note that we have not discussed how to obtain a wiring decomposition, starting from a net $N:k\to l$. As demonstrated in \figref{fig:decompositionTimes}, different decompositions result in markedly different performance. Our automated procedure for obtaining a decomposition is described in \S\ref{sec:decomposition}. \section{Related work} \label{sec:related} \paragraph{Algebras of nets and automata.} The algebra of automata with boundaries used in this paper is an instance of the algebra of Span(Graph)~\cite{Katis1997a}, developed by {R.F.C.} Walters and collaborators: in fact, a translation from nets to this algebra was already present in~\cite{Katis1997b}. The goal of the more recent work~\cite{Soboci'nski2010, Bruni2011, Bruni2012} was to lift this algebra to the level of nets in a compositional way, study the resulting behavioural equivalences and explore connections with process algebra. A theme of our work is to ignore state and focus on external interactions: here we were inspired by the ideas of Milner~\cite{Milner1989}. Conceptually related approaches in semantics of programming languages include~\cite{Reddy1996,Ghica2003}. \paragraph{Reachability} in bounded, finite state Petri nets is a widely-studied problem and there are several well-known approaches to mitigating the impact of state-explosion (it follows from~\cite{Cheng1993} that the problem is PSPACE-complete.) 
Due to space constraints we are able to offer only cursory overviews and comparisons of techniques that are most related to our approach. A well-known technique is partial order reduction: in a seminal paper, McMillan~\cite{McMillan1995} used the unfolding construction~\cite{Nielsen1980} in order to analyse reachability in Petri nets by generating finite complete prefixes, that is, initial parts of unfoldings that suffice for reachability. The algorithm to compute the finite complete prefix was later improved~\cite{Esparza2002,Khomenko2003}. Unfoldings (and finite complete prefixes) carry more information about the computations of nets than merely reachability, for instance, allowing LTL model checking~\cite{Esparza2001}. For an overview of the extensive field see~\cite{Esparza2008}. A finite complete prefix must be constructed prior to a reachability analysis, analogously to our construction of a wiring decomposition prior to translation. Because of the different nature of the two approaches, it is difficult to offer a thorough analysis of their relative performance: on some of the examples we have considered the performance of our implementation is competitive (compare \figref{fig:decompositionTimes} with~\cite[Table 1]{Esparza2002}.) Another technique, known as symmetry reduction~\cite{Starke1991,Schmidt2000}, exploits symmetries in the state space: the goal is, roughly, to build a reduced reachability graph in order to visit only one representative from each orbit. Our use of memoisation is similar in spirit to symmetry reduction, since we only need to translate any particular wiring decomposition once. In experiments ($B_n$, $T_n$, $Ph_n$ and others) our implementation often performs well in identifying unreachable configurations; this is because in many systems the reasons for a configuration being unreachable are ``local''.
Here our approach contrasts with techniques such as unfolding or symmetry reduction where (efficient representations of) explicit reachability graphs are constructed.
\section*{Introduction} Radiative nonleptonic kaon decays represent a source of information on the structure of the weak interactions at low energies, and provide crucial tests of the Chiral Perturbation Theory (ChPT). The current paper presents new results related to study of the $K^\pm\to\pi^\pm e^+e^-$, $K^\pm\to\pi^\pm\gamma\gamma$, and $K^\pm\to\pi^\pm\gamma e^+e^-$ decays by the NA48/2 experiment at the CERN SPS. The flavour-changing neutral current process $K^\pm\to\pi^\pm e^+e^-$, induced at one-loop level in the Standard Model and highly suppressed by the GIM mechanism, has been described by the ChPT\cite{ek87}; several models predicting the form factor characterizing the dilepton invariant mass spectrum and the decay rate have been proposed\cite{da98,du06}. The decay is fairly well explored experimentally: it was first studied at CERN\cite{bl75}, followed by BNL E777\cite{al92} and E865\cite{ap99} measurements. The $K^\pm\to\pi^\pm\gamma\gamma$ and $K^\pm\to\pi^\pm\gamma e^+e^-$ decays similarly arise at one-loop level in the ChPT. The decay rates and spectra have been computed at leading and next-to-leading orders\cite{da96,ga99}, and strongly depend on a single theoretically unknown parameter $\hat c$. The experimental knowledge of these processes is rather poor: before the NA48/2 experiment, only a single observation of 31 $K^\pm\to\pi^\pm\gamma\gamma$ candidates was made\cite{ki97}, while the $K^\pm\to\pi^\pm\gamma e^+e^-$ decay was not observed at all. The paper is organized as follows. In Section 1, a description of the NA48/2 experiment is given. Section 2 is devoted to a rather detailed description of the $K^\pm\to\pi^\pm e^+e^-$ analysis and its preliminary results, which is the main topic of the paper. Section 3 briefly presents the preliminary results of the $K^\pm\to\pi^\pm\gamma\gamma$ analysis; a more detailed discussion is reserved for the Moriond QCD 2008 conference. 
Section 4 briefly presents the final results of the $K^\pm\to\pi^\pm\gamma e^+e^-$ analysis, which have recently been published\cite{ba08}. Finally the conclusions follow. \section{The NA48/2 experiment} The NA48/2 experiment, designed to excel in charge asymmetry measurements\cite{ba07}, is based on simultaneous $K^+$ and $K^-$ beams produced by 400 GeV/$c$ primary SPS protons impinging at zero incidence angle on a beryllium target of 40 cm length and 2 mm diameter. Charged particles with momentum $(60\pm3)$ GeV/$c$ are selected by an achromatic system of four dipole magnets with zero total deflection (`achromat'), which splits the two beams in the vertical plane and then recombines them on a common axis. Then the beams pass through a defining collimator and a series of four quadrupoles designed to produce focusing of the beams towards the detector. Finally the two beams are again split in the vertical plane and recombined in a second achromat. The layout of the beams and detectors is shown schematically in Fig.~\ref{fig:beams}. \begin{figure}[t] \special{psfile=na48_2_beam.eps voffset=-143 hoffset=0 hscale=41 vscale=41 angle=0} \vspace{50mm} \caption{\it Schematic lateral view of the NA48/2 beam line (TAX17,18: motorized beam dump/collimators used to select the momentum of the $K^+$ and $K^-$ beams; FDFD/DFDF: focusing set of quadrupoles, KABES1--3: kaon beam spectrometer stations), decay volume and detector (DCH1--4: drift chambers, HOD: hodoscope, LKr: EM calorimeter, HAC: hadron calorimeter, MUV: muon veto). The vertical scales are different in the two parts of the figure. \label{fig:beams} } \end{figure} The beams then enter the decay volume housed in a 114 m long cylindrical vacuum tank with a diameter of 1.92 m for the first 65 m, and 2.4 m for the rest. Both beams follow the same path in the decay volume: their axes coincide within 1~mm, while the transverse size of the beams is about 1~cm. 
With $7\times 10^{11}$ protons incident on the target per SPS spill of $4.8$~s duration, the positive (negative) beam flux at the entrance of the decay volume is $3.8\times 10^7$ ($2.6\times 10^7$) particles per pulse, of which $5.7\%$ ($4.9\%$) are $K^+$ ($K^-$). The $K^+/K^-$ flux ratio is about $1.8$. The fraction of beam kaons decaying in the decay volume at nominal momentum is $22\%$. The decay volume is followed by a magnetic spectrometer housed in a tank filled with helium at nearly atmospheric pressure, separated from the vacuum tank by a thin ($0.31\%X_0$) Kevlar composite window. A thin-walled aluminium beam pipe of 16~cm outer diameter traversing the centre of the spectrometer (and all the following detectors) allows the undecayed beam particles and the muon halo from decays of beam pions to continue their path in vacuum. The spectrometer consists of four drift chambers (DCH): DCH1, DCH2 located upstream, and DCH3, DCH4 downstream of a dipole magnet. The magnet provides a horizontal transverse momentum kick $\Delta p=120~{\rm MeV}/c$ for charged particles. The DCHs have the shape of a regular octagon with a transverse size of about 2.8 m and a fiducial area of about 4.5 m$^2$. Each chamber is composed of eight planes of sense wires arranged in four pairs of staggered planes oriented horizontally, vertically, and along each of the two orthogonal $45^\circ$ directions. The spatial resolution of each DCH is $\sigma_x=\sigma_y=90~\mu$m. The nominal spectrometer momentum resolution is $\sigma_p/p = (1.02 \oplus 0.044\cdot p)\%$ ($p$ in GeV/$c$). The magnetic spectrometer is followed by a plastic scintillator hodoscope (HOD) used to produce fast trigger signals and to provide precise time measurements of charged particles. The hodoscope has a regular octagonal shape with a transverse size of about 2.4~m. It consists of a plane of horizontal and a plane of vertical strip-shaped counters. Each plane consists of 64 counters arranged in four quadrants. 
Counter widths (lengths) vary from 6.5 cm (121 cm) for central counters to 9.9 cm (60 cm) for peripheral ones. The HOD is followed by a liquid krypton electromagnetic calorimeter (LKr)\cite{ba96} used for photon detection and particle identification. It is an almost homogeneous ionization chamber with an active volume of 7 m$^3$ of liquid krypton, segmented transversally into 13248 projective cells, 2$\times$2 cm$^2$ each, by a system of Cu$-$Be ribbon electrodes, and with no longitudinal segmentation. The calorimeter is $27X_0$ deep and has an energy resolution $\sigma(E)/E=0.032/\sqrt{E}\oplus0.09/E\oplus0.0042$ ($E$ in GeV). Spatial resolution for a single electromagnetic shower is $\sigma_x=\sigma_y=0.42/\sqrt{E}\oplus0.06$ cm for the transverse coordinates $x$ and $y$. The LKr is followed by a hadronic calorimeter (HAC) and a muon detector (MUV), both not used in the present analysis. A detailed description of the components of the NA48 detector can be found elsewhere\cite{fa07}. The NA48/2 experiment took data during two runs in 2003 and 2004, with about 60 days of effective running each. About $18\times10^9$ events were recorded in total. In order to simulate the detector response, a detailed GEANT-based\cite{geant} Monte Carlo (MC) simulation is employed, which includes full detector geometry and material description, stray magnetic fields, DCH local inefficiencies and misalignment, detailed simulation of the kaon beam line, and time variations of the above throughout the running period. Radiative corrections are applied to kaon decays using the PHOTOS package\cite{photos}. \boldmath \section{$K^\pm\to\pi^\pm e^+e^-$ analysis} \unboldmath The $K^\pm\to\pi^\pm e^+e^-$ rate is measured relative to the abundant $K^\pm\to\pi^\pm\pi^0_D$ normalization channel (with $\pi^0_D\to e^+e^-\gamma$). The final states of the signal and normalization channels contain identical sets of charged particles.
Thus electron and pion identification efficiencies, potentially representing a significant source of systematic uncertainties, cancel to first order. \subsection{Event selection} Three-track vertices (compatible with the topology of $K^\pm\to\pi^\pm e^+e^-$ and $K^\pm\to\pi^\pm\pi^0_D$ decays) are reconstructed using the Kalman filter algorithm\cite{fr87} by extrapolation of track segments from the upstream part of the spectrometer back into the decay volume, taking into account the measured Earth's magnetic field, the stray field due to magnetization of the vacuum tank, and multiple scattering in the Kevlar window. A large part of the selection is common to the signal and normalization modes. It requires the presence of a vertex satisfying the following criteria. \begin{itemize} \item Total charge of the three tracks: $Q=\pm1$. \item Vertex longitudinal position is inside the fiducial decay volume: $Z_{\rm vertex}>Z_{\rm final~collimator}$. \item Particle identification is performed using the ratio $E/p$ of track energy deposition in the LKr to its momentum measured by the spectrometer. The vertex is required to be composed of one pion candidate ($E/p<0.85$), and two opposite-charge $e^\pm$ candidates ($E/p>0.95$). No discrimination of pions against muons is performed. \item The vertex tracks are required to be consistent in time (within a 10~ns time window) and consistent with the trigger time, to be in DCH, LKr and HOD geometric acceptance, and to have momenta in the range $5~{\rm GeV}/c<p<50~{\rm GeV}/c$. Track separations are required to exceed 2~cm in the DCH1 plane to suppress photon conversions, and to exceed 15~cm in the LKr plane to minimize particle misidentification due to shower overlaps. \end{itemize} If multiple vertices satisfying the above conditions are found, the one with the best fit quality is considered. The following criteria are then applied to the reconstructed kinematic variables to select the $K^\pm\to\pi^\pm e^+e^-$ candidates.
\begin{itemize} \item $\pi^\pm e^+e^-$ momentum within the nominal beam range: $54~{\rm GeV}/c<|\vec p_{\pi ee}|<66~{\rm GeV}/c$. \item $\pi^\pm e^+e^-$ transverse momentum with respect to the measured beam trajectory: $p_T^2<0.5\times 10^{-3}~({\rm GeV}/c)^2$. \item $\pi^\pm e^+e^-$ invariant mass: $475~{\rm MeV}/c^2<M_{\pi ee}<505~{\rm MeV}/c^2$. \item Suppression of the $K^\pm\to\pi^\pm\pi^0_D$ background defining the visible kinematic region: $z=(M_{ee}/M_K)^2>0.08$, which approximately corresponds to $M_{ee}>140$~MeV/$c^2$. \end{itemize} Independently, the presence of an LKr energy deposition cluster (photon candidate) satisfying the following principal criteria is required to select the $K^\pm\to\pi^\pm\pi^0_D$ candidates. \begin{itemize} \item Cluster energy $E>3$~GeV, cluster time consistent with the vertex time, sufficient transverse separations from track impact points at the LKr plane ($R_{\pi\gamma}>30$~cm, $R_{e\gamma}>10$~cm). \item $e^+e^-\gamma$ invariant mass compatible with a $\pi^0$ decay: $|M_{ee\gamma}-M_{\pi^0}|<10$~MeV/$c^2$. \item The same conditions on reconstructed $\pi^\pm e^+e^-\gamma$ total and transverse momenta as used for the $\pi^\pm e^+e^-$ momentum in the $K^\pm\to\pi^\pm e^+e^-$ selection. \item $\pi^\pm e^+e^-\gamma$ invariant mass: $475~{\rm MeV}/c^2<M_{\pi ee\gamma}<510~{\rm MeV}/c^2$. \end{itemize} \subsection{Signal and normalization samples} The reconstructed $\pi^\pm e^+e^-$ invariant mass spectrum is presented in Fig.~\ref{fig:mk} (left plot). The $\pi^\pm e^+e^-$ mass resolution is $\sigma_{\pi ee}=4.2$~MeV/$c^2$, in agreement with MC simulation. The $e^+e^-$ mass resolution computed by MC simulation is $\sigma_{ee}=2.3$~MeV/$c^2$.
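The mass windows above are applied to invariant masses built from the reconstructed track momenta under mass hypotheses ($\pi^\pm$ or $e^\pm$). As a minimal illustration of this computation (not the NA48/2 reconstruction code; the track four-momenta below are made-up numbers):

```python
import math

def invariant_mass(particles):
    """Invariant mass (GeV/c^2) of a set of (mass, px, py, pz) tracks in GeV.

    The energy of each track is derived from the mass hypothesis assigned
    to it, as done when testing e.g. the pi e e decay hypothesis.
    """
    E = sum(math.sqrt(m * m + px * px + py * py + pz * pz)
            for m, px, py, pz in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    # guard against tiny negative values from floating-point rounding
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

M_PI, M_E = 0.13957, 0.000511  # charged pion and electron masses, GeV/c^2
# illustrative pi+ e+ e- system (momenta in GeV/c, values invented)
tracks = [(M_PI, 0.0, 0.0, 30.0), (M_E, 0.05, 0.0, 15.0), (M_E, -0.05, 0.0, 15.0)]
m_piee = invariant_mass(tracks)
```

A candidate is kept when the mass computed this way falls inside the quoted window, e.g.\ $475$--$505$~MeV/$c^2$ for $M_{\pi ee}$.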
\begin{figure}[t] \vspace{57mm} \special{psfile=mpiee-bw.eps voffset=0 hoffset=0 hscale=30 vscale=30 angle=0}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \special{psfile=fit-bw.eps voffset=0 hoffset=0 hscale=30 vscale=30 angle=0} \caption{\it Left: reconstructed spectrum of $\pi^\pm e^+e^-$ invariant mass; data (dots) and MC simulation (filled area). Right: the computed $d\Gamma_{\pi ee}/dz$ (background subtracted, trigger efficiencies corrected for) and the results of fits according to the considered models. \label{fig:mk}} \end{figure} In total 7,146 $K^\pm\to\pi^\pm e^+e^-$ candidates are found in the signal region. After the kinematic suppression of the $\pi^0_D$ decays, the residual background contamination mostly results from particle misidentification (i.e. $e^\pm$ identified as $\pi^\pm$ and vice versa). The following relevant background sources were identified with MC simulations: (1) $K^\pm\to\pi^\pm\pi^0_D$ with misidentified $e^\pm$ and $\pi^\pm$; (2) $K^\pm\to\pi^0_De^\pm\nu$ with a misidentified $e^\pm$ from the $\pi^0_D$ decay. Background estimation by selecting the strongly suppressed\cite{ap00} lepton number violating $K^\pm\to\pi^\mp e^\pm e^\pm$ (``same-sign'') candidates was considered the most reliable method. For the above two background sources, the expected mean numbers and kinematic distributions of the selected same-sign candidates are identical to those of background events (up to a negligible acceptance correction). In total 44 events pass the same-sign selection, which leads to a background estimate of $(0.6\pm0.1)\%$. This result was independently confirmed with MC simulation of the two background modes. In total $12.228\times 10^6$ $K^\pm\to\pi^\pm\pi^0_D$ candidates are found in the signal region. The only significant background source is the semileptonic $K^\pm\to\pi^0_D\mu^\pm\nu$ decay. Its contribution is not suppressed by particle identification cuts, since no $\pi$/$\mu$ separation is performed.
The background contamination is estimated to be 0.15\% by MC simulation. \subsection{Trigger chain and its efficiency} Both $K^\pm\to\pi^\pm e^+e^-$ and $K^\pm\to\pi^\pm\pi^0_D$ samples (as well as $K^\pm\to3\pi^\pm$) are recorded via the same two-level trigger chain. At the first level (L1), a coincidence of hits in the two planes of the HOD in at least two of the 16 non-overlapping segments is required. The second level (L2) is based on a hardware system computing coordinates of hits from DCH drift times, and a farm of asynchronous processors performing fast track reconstruction and running a selection algorithm, which basically requires at least two tracks to originate in the decay volume with a closest distance of approach of less than 5 cm. L1 triggers not satisfying this condition are examined further and accepted nevertheless if there is a reconstructed track not kinematically compatible with a $\pi^\pm\pi^0$ decay of a $K^\pm$ having a momentum of 60 GeV/$c$ directed along the beam axis. The NA48/2 analysis strategy for non-rare decay modes involves direct measurement of the trigger efficiencies using control data samples of downscaled low-bias triggers collected simultaneously with the main triggers. However, direct measurements are not possible for the $K^\pm\to\pi^\pm e^+e^-$ events due to the very limited sizes of the corresponding control samples. Dedicated simulations of L1 and L2 performance (involving, in particular, the measured time dependencies of local DCH and HOD inefficiencies) were used instead. The simulated efficiencies and their kinematic dependencies were compared against measurements for the abundant $K^\pm\to\pi^\pm\pi^0_D$ and $K^\pm\to\pi^\pm\pi^+\pi^-$ decays in order to validate the simulations. The simulated values of L1 and L2 inefficiencies for the selected $K^\pm\to\pi^\pm\pi^0_D$ sample are $\varepsilon_{L1}=0.37\%$, $\varepsilon_{L2}=0.80\%$.
The values of the integral trigger inefficiencies for the $K^\pm\to\pi^\pm e^+e^-$ sample depend on the a priori unknown form factor; the corrections are applied differentially in bins of dilepton invariant mass. Indicative values of inefficiencies computed assuming a realistic linear form factor with a slope $\delta=2.3$ are $\varepsilon_{L1}=0.06\%$, $\varepsilon_{L2}=0.42\%$. The $K^\pm\to\pi^\pm\pi^0_D$ sample is affected by larger inefficiencies due to a smaller invariant mass of the $e^+e^-$ system, which means that the leptons are geometrically closer. \subsection{Theoretical input} The decay is assumed to proceed through one-photon exchange, resulting in a spectrum of the $z=(M_{ee}/M_K)^2$ kinematic variable sensitive to the form factor $W(z)$\cite{da98}: \begin{equation} \frac{d\Gamma}{dz}=\frac{\alpha^2M_K}{12\pi(4\pi)^4} \lambda^{3/2}(1,z,r_\pi^2)\sqrt{1-4\frac{r_e^2}{z}} \left(1+2\frac{r_e^2}{z}\right)|W(z)|^2, \label{theory} \end{equation} where $r_e=m_e/M_K$, $r_\pi=m_\pi/M_K$, and $\lambda(a,b,c)=a^2+b^2+c^2-2ab-2ac-2bc$. On the other hand, the spectrum of the angle $\theta_{\pi e}$ between $\pi$ and $e^+$ in the $e^+e^-$ rest frame is proportional to $\sin^2\theta_{\pi e}$, and is not sensitive to $W(z)$. The following parameterizations of the form factor $W(z)$ are considered in the present analysis. \begin{enumerate} \item Linear: $W(z)=G_FM_K^2f_0(1+\delta z)$ with free normalization and slope $(f_0,\delta)$. \item Next-to-leading order ChPT\cite{da98}: $W(z)=G_FM_K^2(a_++b_+z)+W^{\pi\pi}(z)$ with free parameters $(a_+,b_+)$, and an explicitly calculated pion loop term $W^{\pi\pi}(z)$. \item The Dubna version of the ChPT parameterization involving meson form factors: $W(z)\equiv W(M_a,M_\rho,z)$\cite{du06}, with resonance masses ($M_a$, $M_\rho$) treated as free parameters.
\end{enumerate} The goal of the analysis is the extraction of the form factor parameters in the framework of each of the above models, and the computation of the corresponding branching fractions ${\rm BR}_{1,2,3}$. \subsection{Fitting procedure} The values of $d\Gamma_{\pi ee}/dz$ in the centre of each bin $i$ of $z$, which can be directly confronted with the theoretical predictions (\ref{theory}), are then computed as \begin{equation} (d\Gamma_{\pi ee}/dz)_i = \frac{N_i-N^B_i}{N_{2\pi}}\cdot \frac{A_{2\pi}(1-\varepsilon_{2\pi})}{A_i(1-\varepsilon_i)} \cdot {\rm BR}(K^\pm\to\pi^\pm\pi^0)\cdot{\rm BR}(\pi^0_D)\cdot \frac{\Gamma_K}{\Delta z}. \end{equation} Here $N_i$ and $N^B_i$ are the numbers of observed $K^\pm\to\pi^\pm e^+e^-$ candidates and background events in the $i$-th bin, $N_{2\pi}$ is the number of $K^\pm\to\pi^\pm\pi^0_D$ events (background subtracted), $A_i$ and $\varepsilon_i$ are the geometrical acceptance and trigger inefficiency in the $i$-th bin for the signal sample (computed by MC simulation), $A_{2\pi}=2.94\%$ and $\varepsilon_{2\pi}=1.17\%$ are those for $K^\pm\to\pi^\pm\pi^0_D$ events, $\Gamma_K$ is the nominal kaon width\cite{pdg}, $\Delta z$ is the chosen width of the $z$ bin, ${\rm BR}(K^\pm\to\pi^\pm\pi^0)=(20.64\pm0.08)\%$ (FlaviaNet average\cite{an08}), ${\rm BR}(\pi^0_D)=(1.198\pm0.032)\%$ (PDG average\cite{pdg}). The computed values of $d\Gamma_{\pi ee}/dz$ vs $z$ are presented in Fig.~\ref{fig:mk} (right plot) along with the results of the fits to the three considered models. The values of ${\rm BR}(K^\pm\to\pi^\pm e^+e^-)$ in the full kinematic range corresponding to each model are then computed using the measured parameters, their statistical uncertainties, and correlation matrices. In addition, a model-independent branching fraction ${\rm BR_{mi}}$ in the visible kinematic region $z>0.08$ is computed by integration of $d\Gamma_{\pi ee}/dz$.
${\rm BR_{mi}}$ is to a good approximation equal to each of the model-dependent BRs computed in the restricted kinematic range $z>0.08$. \subsection{Systematic uncertainties} The following sources of systematic uncertainties were studied. 1. Particle identification. Imperfect MC description of the electron and pion identification inefficiencies $f_e$ and $f_\pi$ can bias the result only through the momentum dependence of the inefficiencies, since the signal and normalization final states have identical charged-particle composition but differing momentum distributions. In data, the inefficiencies were measured to vary with particle momentum in the ranges $1.6\%<f_\pi<1.7\%$ and $1.1\%<f_e<1.7\%$ within the analysis track momentum range. Systematic uncertainties due to these momentum dependencies not perfectly described by MC were conservatively estimated assuming that MC predicts momentum-independent $f_e$ and $f_\pi$. 2. Beam line description. Despite the careful simulation of the beamline including time variations of its parameters, the residual discrepancies between data and MC beam geometries and spectra bias the results. To evaluate the related systematic uncertainties, variations of the results with respect to variations of cuts on track momenta, LKr cluster energies, total and transverse momenta of the final states $\pi^\pm e^+e^-(\gamma)$, and track distances from the beam axis in the DCH planes were studied. 3. Background subtraction. As discussed above, the same-sign event spectrum is used for background estimation in the $\pi^\pm e^+e^-$ sample. The method has a limited statistical precision (with an average of 2 same-sign events per bin of $z$). Furthermore, the presence of the component with two $e^+e^-$ pairs (due to both $\pi^0_D$ decays and external conversions) with a non-unity expected ratio of same-sign to background events biases the method.
The uncertainties of the measured parameters due to background subtraction were conservatively taken to be equal to the corrections themselves. 4. Trigger efficiency. As discussed earlier, the corrections for trigger inefficiencies were evaluated by simulations. In terms of decay rates, the L1 and L2 corrections have similar integral magnitudes of a few $10^{-3}$. No uncertainty was ascribed to the L1 correction, due to the relative simplicity of the trigger condition. On the other hand, the uncertainty of the L2 efficiency correction was conservatively taken to be equal to the correction itself. 5. Radiative corrections. Uncertainties due to the radiative corrections were evaluated by variation of the lower $\pi^\pm e^+e^-$ invariant mass cut. 6. Fitting method. Uncertainties due to the fitting procedure were evaluated by variation of the $z$ bin width. 7. External input. Substantial uncertainties arise from the external input, as ${\rm BR}(\pi^\pm\pi^0_D)$ is experimentally known only to 2.7\% relative precision\cite{pdg}. The only parameter not affected by an external uncertainty is the linear form factor slope $\delta$, which describes only the shape of the spectrum. \begin{table}[tb] \begin{center} \begin{tabular}{@{}r@{~~}r@{~~}r@{~~}r@{~$\pm$~}l@{~}r@{~$\pm$~}l@{~~}r@{~~}r} \hline Parameter&$e,\pi$&Beam &\multicolumn{2}{c}{Background} &\multicolumn{2}{c}{Trigger} &Rad.
&Fitting\\ &ID &spectra &\multicolumn{2}{c}{subtraction}&\multicolumn{2}{c}{efficiency}&corr.&method\\ \hline $\delta$ & 0.01& 0.04&$-0.04$ &0.04 &$-0.03$ &0.03 & 0.05&0.03\\ $f_0$ &0.001&0.006&$0.002$ &0.002&$0.000$ &0.001 &0.006&0.003\\ $a_+$ &0.001&0.005&$-0.001$&0.001&$-0.001$&0.002 &0.005&0.004\\ $b_+$ &0.009&0.015&$0.017$ &0.017&$0.016$ &0.015 &0.015&0.010\\ $M_a$/GeV&0.004&0.009&$0.008$ &0.008&$0.006$ &0.006 &0.009&0.006\\ $M_\rho$/GeV&0.002&0.003&$0.003$ &0.003&$0.003$ &0.003 &0.004&0.002\\ \hline ${\rm BR}_{1,2,3}\!\!\times\!\!10^7$&0.02&0.02&$-0.01$&0.01&$-0.02$&0.01&0.01&0.02\\ ${\rm BR_{mi}}\!\!\times\!\!10^7$ &0.02&0.01&$-0.01$&0.01&$-0.02$&0.01&0.01&n/a\\ \hline \end{tabular} \end{center} \vspace{-5mm} \caption{Summary of corrections and systematic uncertainties (excluding the external ones).} \label{tab:syst} \end{table} The applied corrections and the systematic uncertainties (excluding the external ones presented later) are summarized in Table~\ref{tab:syst}. \subsection{Results and discussion} The measured parameters of the considered models and the corresponding BRs in the full $z$ range, as well as the model-independent ${\rm BR_{mi}}(z>0.08)$, with their statistical, systematic, and external uncertainties are presented in Table~\ref{tab:results}. The correlation coefficients between the pairs of model parameters, not listed in the table, are $\rho(\delta,f_0)=-0.963$, $\rho(a_+,b_+)=-0.913$, and $\rho(M_a,M_\rho)=0.998$.
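As a numerical cross-check of the fit results, Eq.~(\ref{theory}) with the linear form factor $W(z)=G_FM_K^2f_0(1+\delta z)$ and the fitted values $f_0=0.532$, $\delta=2.35$ can be integrated directly over the full kinematic range; the sketch below (an illustration using PDG-style constants, not the analysis code) recovers a branching ratio close to ${\rm BR}_1\approx3\times10^{-7}$:

```python
import math

# PDG-style constants; this is an illustrative sketch, not analysis code
ALPHA = 7.2973525e-3               # fine-structure constant
G_F   = 1.1663787e-5               # Fermi constant [GeV^-2]
M_K   = 0.493677                   # charged kaon mass [GeV]
M_PI  = 0.139570                   # charged pion mass [GeV]
M_E   = 0.000511                   # electron mass [GeV]
GAMMA_K = 6.582e-25 / 1.2380e-8    # kaon width = hbar / tau(K+-) [GeV]

r_e, r_pi = M_E / M_K, M_PI / M_K

def lam(a, b, c):
    """Kaellen triangle function lambda(a, b, c)."""
    return a * a + b * b + c * c - 2.0 * (a * b + a * c + b * c)

def dGamma_dz(z, f0=0.532, delta=2.35):
    """Eq. (1) with the linear form factor W(z) = G_F M_K^2 f0 (1 + delta z)."""
    W = G_F * M_K**2 * f0 * (1.0 + delta * z)
    pre = ALPHA**2 * M_K / (12.0 * math.pi * (4.0 * math.pi)**4)
    # max() guards against a tiny negative lambda at the endpoint from rounding
    return (pre * max(lam(1.0, z, r_pi**2), 0.0)**1.5
            * math.sqrt(1.0 - 4.0 * r_e**2 / z)
            * (1.0 + 2.0 * r_e**2 / z) * W**2)

def branching_ratio(z_min=4.0 * r_e**2, n=20000):
    """Midpoint-rule integral of dGamma/dz over [z_min, (1-r_pi)^2] / Gamma_K."""
    z_max = (1.0 - r_pi)**2
    h = (z_max - z_min) / n
    rate = sum(dGamma_dz(z_min + (i + 0.5) * h) for i in range(n)) * h
    return rate / GAMMA_K

print(f"BR(K -> pi e e) ~ {branching_ratio():.2e}")  # ~3e-7, cf. BR_1 in Table 2
```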
\begin{table}[tb] \begin{center} \begin{tabular}{@{}r@{~$=$~}r@{~$\pm$~}r@{~$\pm$~}r@{~$\pm$~}r@{~$=$~}r@{~$\pm$~}r@{}} \hline $\delta$ &$2.35$ &$0.15_{\rm stat.}$ &$0.09_{\rm syst.}$ &$0.00_{\rm ext.}$ &$2.35$ &0.18\\ $f_0$ &$0.532$ &$0.012_{\rm stat.}$&$0.008_{\rm syst.}$&$0.007_{\rm ext.}$&$0.532$&0.016\\ ${\rm BR}_1\times10^7$ &$3.02$ &$0.04_{\rm stat.}$ &$0.04_{\rm syst.}$ &$0.08_{\rm ext.}$ &$3.02$ &0.10\\ \hline $a_+$ &$-0.579$&$0.012_{\rm stat.}$&$0.008_{\rm syst.}$&$0.007_{\rm ext.}$&$-0.579$&0.016\\ $b_+$ &$-0.798$&$0.053_{\rm stat.}$&$0.037_{\rm syst.}$&$0.017_{\rm ext.}$&$-0.798$&0.067\\ ${\rm BR}_2\times10^7$ &$3.11$ &$0.04_{\rm stat.}$ &$0.04_{\rm syst.}$ &$0.08_{\rm ext.}$ &$3.11$ &0.10\\ \hline $M_a/{\rm GeV}$ &$0.965$ &$0.028_{\rm stat.}$&$0.018_{\rm syst.}$&$0.002_{\rm ext.}$&$0.965$&0.033\\ $M_\rho/{\rm GeV}$ &$0.711$ &$0.010_{\rm stat.}$&$0.007_{\rm syst.}$&$0.002_{\rm ext.}$&$0.711$&0.013\\ ${\rm BR}_3\times10^7$ &$3.15$ &$0.04_{\rm stat.}$ &$0.04_{\rm syst.}$ &$0.08_{\rm ext.}$ &$3.15$ &0.10\\ \hline ${\rm BR_{mi}}\times10^7$&$2.26$ &$0.03_{\rm stat.}$ &$0.03_{\rm syst.}$ &$0.06_{\rm ext.}$ &$2.26$ &0.08\\ \hline \end{tabular} \end{center} \vspace{-5mm} \caption{Results of fits to the three considered models, and the model-independent ${\rm BR_{mi}}(z>0.08)$.} \label{tab:results} \end{table} Fits to all three models are of reasonable quality; however, the linear form-factor model leads to the smallest $\chi^2$. The data sample is insufficient to distinguish between the models considered. The obtained form factor slope $\delta$ is in agreement with the previous measurements based on $K^+\to\pi^+e^+e^-$\cite{al92,ap99} and $K^\pm\to\pi^\pm\mu^+\mu^-$\cite{ma00} samples, and further confirms that the data contradict meson dominance models\cite{li99}. The obtained $f_0$, $a_+$ and $b_+$ are in agreement with the only previous measurement\cite{ap99}.
The measured parameters $M_a$ and $M_\rho$ are a few \% away from the nominal masses of the resonances\cite{pdg}. The branching ratio in the full kinematic range, which is computed as the average between the two extremes corresponding to the models (1) and (3), and includes an uncertainty due to extrapolation into the inaccessible region $z<0.08$, is \begin{displaymath} {\rm BR}\!=\!(3.08\pm0.04_{\rm stat.}\pm0.04_{\rm syst.}\pm0.08_{\rm ext.}\pm0.07_{\rm model})\times10^{-7}\!=\! (3.08\pm0.12)\times10^{-7}. \end{displaymath} It should be stressed that a large fraction of the uncertainty of this result is correlated with the earlier measurements. A comparison to the precise BNL E865 measurement\cite{ap99}, excluding correlated uncertainties due to external BRs and model dependence, and using the same external input, shows a $1.4\sigma$ difference. In conclusion, the obtained BR is in agreement with the previous measurements. Finally, a first measurement of the direct CP-violating asymmetry of $K^+$ and $K^-$ decay rates in the full kinematic range was obtained by performing BR measurements separately for $K^+$ and $K^-$ and neglecting the correlated uncertainties: $\Delta(K_{\pi ee}^\pm)=({\rm BR}^+-{\rm BR}^-)/({\rm BR}^++{\rm BR}^-)=(-2.1\pm1.5_{\rm stat.}\pm 0.3_{\rm syst.})\%$. The result is compatible with no CP violation. However, its precision is far from the theoretical expectation\cite{da98} of $|\Delta(K_{\pi ee}^\pm)|\sim 10^{-5}$. \boldmath \section{$K^\pm\to\pi^\pm\gamma\gamma$ analysis} \unboldmath The $K^\pm\to\pi^\pm\gamma\gamma$ rate is measured relative to the $K^\pm\to\pi^\pm\pi^0$ normalization channel. The signal and normalization channels have identical particle composition of the final states, and the only cut differing for the two channels is the one on the $\gamma\gamma$ invariant mass.
The trigger chain used involves the so-called ``neutral trigger'', based on a requirement of a minimal number of energy deposition clusters in the LKr calorimeter. About 40\% of the total NA48/2 data sample has been analyzed, and 1,164 $K^\pm\to\pi^\pm\gamma\gamma$ decay candidates (with background contamination estimated by MC to be 3.3\%) are found, which is to be compared with the only previous measurement\cite{ki97} involving 31 decay candidates. The reconstructed spectrum of $\gamma\gamma$ invariant mass in the accessible kinematic region $M_{\gamma\gamma}>0.2$~GeV/c$^2$ is presented in Fig.~\ref{fig:pigg}, along with an MC expectation assuming a ChPT ${\cal O}(p^6)$ distribution\cite{da96} with a realistic parameter $\hat c=2$. ChPT predicts an enhancement of the decay rate (cusp-like behaviour) at the $\pi\pi$ mass threshold $m_{\gamma\gamma}\approx280$~MeV/c$^2$, independent of the value of the $\hat c$ parameter. The observed spectrum provides the first clean experimental evidence for this phenomenon. \begin{figure}[t] \vspace{60mm} \special{psfile=mgg_lin.eps voffset=0 hoffset=50 hscale=44 vscale=44 angle=0} \caption{\it The reconstructed spectrum of $\gamma\gamma$ invariant mass for the $K^\pm\to\pi^\pm\gamma\gamma$ decay (dots), and its comparison to the MC expectation assuming a ChPT ${\cal O}(p^6)$ distribution with $\hat c=2$ (filled area). \label{fig:pigg}} \end{figure} As the first step of the analysis, the partial width of the decay was measured assuming the ChPT ${\cal O}(p^6)$ shape with a fixed parameter $\hat c=2$. The following preliminary result, which is in agreement with the ChPT computation for $\hat c=2$, was obtained: \begin{displaymath} {\rm BR}=(1.07\pm0.04_{\rm stat.}\pm0.08_{\rm syst.})\times 10^{-6}. \end{displaymath} A combined fit of the $m_{\gamma\gamma}$ spectrum shape and the decay rate is foreseen to measure the $\hat c$ parameter.
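All three measurements in this paper share the same normalization strategy: the signal count is background-subtracted, corrected for acceptance and trigger inefficiency, and scaled by the branching ratio of the normalization channel. A schematic sketch of this relative measurement (the numerical values in the example are hypothetical placeholders, not numbers from the analyses):

```python
def relative_br(n_sig, bkg_frac, n_norm, acc_sig, acc_norm, br_norm,
                ineff_sig=0.0, ineff_norm=0.0):
    """Schematic relative branching-ratio measurement:
    background-subtract the signal count, correct both samples for
    acceptance and trigger inefficiency, then scale by the branching
    ratio of the normalization channel."""
    corrected_sig = n_sig * (1.0 - bkg_frac) / (acc_sig * (1.0 - ineff_sig))
    corrected_norm = n_norm / (acc_norm * (1.0 - ineff_norm))
    return corrected_sig / corrected_norm * br_norm

# hypothetical inputs, for illustration only
br = relative_br(n_sig=100, bkg_frac=0.05, n_norm=1_000_000,
                 acc_sig=0.02, acc_norm=0.02, br_norm=0.2064)
```

With equal acceptances and inefficiencies, the acceptance factors cancel and the result reduces to the background-subtracted event ratio times the normalization branching ratio.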
\boldmath \section{$K^\pm\to\pi^\pm\gamma e^+e^-$ analysis} \unboldmath The $K^\pm\to\pi^\pm\gamma e^+e^-$ rate is measured relative to the $K^\pm\to\pi^\pm\pi^0_D$ normalization channel. The signal and normalization channels have identical particle composition of the final states. The same trigger chain as for the collection of $K^\pm\to\pi^\pm e^+e^-$ is used. With the full NA48/2 data sample analyzed, 120 $K^\pm\to\pi^\pm\gamma e^+e^-$ decay candidates (with the background estimated by MC to be 6.1\%) are found in the accessible kinematic region $M_{\gamma ee}>0.26$~GeV/c$^2$. This is the first observation of this decay mode. The reconstructed spectrum of $\gamma e^+e^-$ invariant mass is presented in Fig.~\ref{fig:pigee}, along with MC expectations for background contributions. The spectrum provides further evidence for the rate enhancement at the $\pi\pi$ mass threshold. \begin{figure}[t] \vspace{55mm} \special{psfile=Signal_eeg.eps voffset=0 hoffset=50 hscale=44 vscale=44 angle=0} \caption{\it The reconstructed spectrum of $\gamma e^+e^-$ invariant mass for the $K^\pm\to\pi^\pm\gamma e^+e^-$ decay (dots), and MC background expectations (filled areas). \label{fig:pigee}} \end{figure} The final results of the analysis have recently been published\cite{ba08}. The model-independent partial width in the accessible kinematic region is measured to be \begin{displaymath} {\rm BR}(M_{\gamma ee}>0.26~{\rm GeV}/c^2)=(1.19\pm0.12_{\rm stat.}\pm0.04_{\rm syst.})\times 10^{-8}. \end{displaymath} The ChPT parameter $\hat c$ assuming an ${\cal O}(p^4)$ distribution\cite{ga99} was measured to be $\hat c=0.90\pm0.45$. \section*{Conclusions} A precise study of the $K^\pm\to\pi^\pm e^+e^-$ decay has been performed. The data sample and precision are comparable to the world's best; the preliminary results are in agreement with the previous measurements, and a first limit on the CP-violating charge asymmetry has been obtained.
A precise study of the $K^\pm\to\pi^\pm\gamma\gamma$ decay has been performed. The first clear evidence for a rate enhancement at the $\pi\pi$ mass threshold has been obtained. The preliminary measurement of the BR agrees with the ChPT prediction. A detailed spectrum shape study is foreseen. The first observation of the $K^\pm\to\pi^\pm\gamma e^+e^-$ decay, and a measurement of its parameters, including the BR, have been performed. The $M_{\gamma ee}$ spectrum provides independent evidence for the cusp at the $\pi\pi$ mass threshold.
\section{Introduction} Modern deep neural networks often have a huge number of trainable parameters, especially in complex image classification and recognition tasks. For instance, AlexNet[8] has 62 million trainable parameters, ResNet-50[9] has over 23 million trainable parameters, and VGGNet[7] has 138 million trainable parameters. Thus, a fast, effective optimization algorithm for learning those parameters will not only improve model performance but also greatly reduce the time and monetary cost of training. Adam[4] is a stochastic optimization algorithm widely applied to train deep neural networks; it combines the advantages of RMSProp[10] and Momentum, and incorporates an adaptive learning rate for learning different parameters. Recently, AdaBelief[1] and Padam[5] have been introduced in the community. These two algorithms were proposed to improve the performance of the original Adam optimizer in certain situations. In this study, we analyze these two newly proposed algorithms and compare them with the traditional methods Adam and SGD with momentum. We evaluate these 4 optimization algorithms by applying each of them to AlexNet and simplified versions of VGGNet and ResNet, and examine their performance in classifying images of EMNIST[6] on these different CNN architectures using multi-class cross-entropy loss and the micro f1 score. \section{Related Work} \subsection{SGD + momentum} SGD with momentum is a way to address the problem SGD has with local optima. By adding a momentum term determined by the previous gradient, SGD is accelerated in the relevant direction, making it easier to overcome local optima in search of the global optimum, and to converge faster than plain SGD.[3] \subsection{Conventional Adam} Adam is a replacement optimization algorithm for stochastic gradient descent for training deep learning models.
Adam combines the best properties of the Momentum and RMSProp[10] algorithms to provide an optimization algorithm with an adaptive learning rate that can handle sparse gradients on noisy problems.[4] \subsection{AdaBelief} AdaBelief is a modified version of Adam; it utilizes "belief" to automatically select the proper step size at each gradient update. The algorithm is claimed to have two properties: fast convergence as in adaptive gradient methods, and good generalization as in the SGD family.[1] \subsection{Padam} Padam (partially adaptive momentum estimation) [5] is a modified version of Adam. It tries to close the generalization gap of adaptive gradient methods by introducing a partially adaptive parameter, which also resolves the “small learning rate dilemma” (the initial learning rate of adaptive methods is often small) and allows for faster convergence[5]. Padam has been shown empirically to achieve the fastest convergence speed while generalizing as well as SGD with momentum[5]. \section{Methods and Algorithm} We evaluate the performances of these different optimization algorithms empirically on the EMNIST dataset[6]. AlexNet[8] and simplified versions of VGG[7] and ResNet[9] are trained by Algorithm \ref{TrainModel} on the training set using all the optimization methods described above with the same initialization. We set the learning rate for all optimization algorithms to 0.0005 and leave the rest of the hyper-parameters at their defaults, except that for SGD we use momentum=0.9. All of the models are trained for 30 epochs with a batch size of 64. Given the imbalanced labels, performance is evaluated empirically using the micro f1 score. For all models, we modify the input channel to be 1 (all samples have only one channel) and the output dimension to be 47. All experiments are conducted using PyTorch[2]. Due to the time constraint and computational cost, all experiments are conducted for only one random seed.
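The micro f1 score used for evaluation pools true positives, false positives, and false negatives over all 47 classes; for single-label multi-class predictions, micro-averaged precision and recall coincide, so micro f1 reduces to overall accuracy. A minimal sketch of the metric (illustrative, not our evaluation code):

```python
def micro_f1(y_true, y_pred):
    """Micro-averaged F1 score: pool TP/FP/FN counts over all classes.

    For single-label multi-class predictions, every wrong prediction is
    one false positive (for the predicted class) and one false negative
    (for the true class), so micro-FP == micro-FN and micro F1 equals
    overall accuracy.
    """
    tp = sum(t == p for t, p in zip(y_true, y_pred))  # correct predictions
    fp = fn = len(y_true) - tp                        # wrong predictions
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

This equivalence is why the micro f1 values reported below can also be read as accuracies on the 47-class EMNIST task.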
\begin{algorithm} \caption{Training A Model With An Optimizer} \label{TrainModel} \begin{algorithmic}[1] \REQUIRE $d$ is the data of EMNIST; $epochSize$ is the epoch size; $batchSize$ is the batch size; $opt$ is the optimization algorithm; $model$ is the training model; \STATE $(trainSet, validationSet) \gets $ split($d$) \FOR {each epoch $e=1,2,...,epochSize$} \STATE $(Source_1, Target_1), (Source_{2}, Target_{2}), ... \gets $(split $trainSet$ into equal parts of size $batchSize$) \FOR {each batch $(Source_b, Target_b)$} \STATE $prediction \gets TrainModel(model, Source_b, Target_b)$ \STATE $loss \gets CalculateCrossEntropyLoss(prediction, Target_b)$ \STATE $PerformBackwardPropagation(loss)$ \STATE $UpdateModelParameters(opt)$ \STATE $ClearUpGradient(opt)$ \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} We apply the following 3 kinds of models, modified and trained from scratch: \begin{itemize} \item \textbf{AlexNet}: We use the full AlexNet[8] (as shown in Figure \ref{alexnet}). \item \textbf{VGG}: We shrink the standard VGG11[7] model to avoid overfitting. In each of the 5 blocks, we keep only one convolutional layer and one max-pool layer. We also reduce the number of kernels in each layer to $\frac{1}{4}$ (as shown in Figure \ref{vgg_structure}). \item \textbf{ResNet}: We only use the first layer of ResNet18[9] (as shown in Figure \ref{resnet}), as the original ResNet18 structure is too heavy for this task (it overfits the training set by epoch 2) and too computationally expensive.
\end{itemize} \section{Experiments and Results} \begin{table}[h] \centering \begin{tabular}{lllllll} \toprule Optimizers & \multicolumn{2}{c}{AlexNet} & \multicolumn{2}{c}{VGG} & \multicolumn{2}{c}{ResNet}\\ \cmidrule{2-3} \cmidrule{4-5} \cmidrule{6-7}\\ {} & Best Test f1 Score & Epoch \# & Best Test f1 Score & Epoch \# & Best Test f1 Score & Epoch \#\\ \midrule SGD & 0.90106 & 30 & 0.90202 & 26 & 0.90450 & 22\\ Adam & 0.90397 & 11 & 0.90267 & 19 & 0.89720 & 5\\ AdaBelief & 0.90388 & 12 & 0.90278 & 23 & 0.89873 & 16 \\ Padam & 0.90591 & 14 & 0.90196 & 30 & 0.90282 & 30 \\ \bottomrule \end{tabular} \caption{Best validation f1 scores and the corresponding epoch numbers for each combination of model and optimizer on the EMNIST dataset} \label{models_results} \end{table} \subsection{AlexNet} From Table \ref{models_results}, we find that Padam[5] reaches the best test f1 score at epoch 14. As shown in Figure \ref{alexnet_images}, we can observe that the Adam[4] and AdaBelief[1] optimizers begin to show signs of overfitting at epoch 3: their train losses flatten and their test losses start to fluctuate. On the other hand, the train losses for both SGD and Padam decrease smoothly for all 30 epochs; however, starting from epoch 14, Padam begins to overfit quickly and its test loss increases, while the test loss for SGD only starts to flatten, as shown in the f1 score curves (Figure \ref{alexnet_images}). Therefore, in this task, Padam achieved the best performance at epoch 14. However, it shows signs of overfitting in the test loss and test f1 score curves, whereas SGD with Momentum keeps slowly improving. SGD with Momentum may perform best after it converges. \begin{figure}[htp] \flushleft \includegraphics[scale=0.3]{alexnet_images.png} \caption{AlexNet Evaluation} \label{alexnet_images} \end{figure} \subsection{VGG} As shown in Figure \ref{vgg_images}, the test performance of the model is not stable.
It is observed that the trends for Adam and AdaBelief, and for Padam and SGD with Momentum, show pairwise similarity. In the beginning, Adam and AdaBelief perform better than SGD with Momentum and Padam. They are then surpassed as they start to overfit. By epoch 30, the test loss of SGD with Momentum and Padam is lower than the smallest loss that Adam and AdaBelief reach before they begin to show signs of overfitting. On the other hand, although the best test f1 score over the 30 epochs is achieved by AdaBelief (as shown in Table \ref{models_results}), SGD with Momentum or Padam may surpass it in the following epochs. Therefore, Padam or SGD with Momentum may perform best after they converge. \begin{figure}[htp] \flushleft \includegraphics[scale=0.3]{vgg_images.png} \caption{VGG Evaluation} \label{vgg_images} \end{figure} \subsection{ResNet} From Table \ref{models_results} we can see that the best test f1 score in 30 epochs is achieved using Adam. In the training and test loss plot (Figure \ref{res_images}), we observe that in general the trends for Adam and AdaBelief, and for Padam and SGD with Momentum, show pairwise similarity, and Padam shows a slight advantage in test loss over SGD with Momentum. Even though Adam has the best test f1 score at epoch 11, both Adam and AdaBelief start to show signs of overfitting on the test loss plot. On the other hand, the test losses for both Padam and SGD + Momentum decrease smoothly. The results above are consistent across the f1 scores (Figure \ref{res_images}). Hence, we can conclude that in this task, the best score within 30 epochs is achieved using Adam, but Adam and AdaBelief show signs of overfitting whereas Padam and SGD with Momentum learn smoothly; it is possible that Padam and SGD with Momentum outperform Adam and AdaBelief after they converge.
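The pairwise similarity between Padam and SGD with Momentum is consistent with Padam's design: its update raises the second-moment estimate to a partial power $p\in(0,1/2]$, interpolating between an Adam-like step ($p=1/2$) and an SGD-with-momentum-like step ($p\to0$). A sketch of one scalar update is shown below (illustrative only, not the reference implementation; $p=1/8$ is a typical choice from the Padam paper):

```python
def padam_step(param, grad, state, lr=5e-4, betas=(0.9, 0.999),
               p=0.125, eps=1e-8):
    """One Padam update on a scalar parameter (sketch of the partially
    adaptive rule). p = 1/2 recovers an Adam-like step; p -> 0 approaches
    SGD with momentum, up to a scale factor."""
    m, v, t = state
    t += 1
    m = betas[0] * m + (1 - betas[0]) * grad          # first moment
    v = betas[1] * v + (1 - betas[1]) * grad * grad   # second moment
    m_hat = m / (1 - betas[0] ** t)                   # bias corrections
    v_hat = v / (1 - betas[1] ** t)
    param -= lr * m_hat / (v_hat ** p + eps)          # partial exponent p
    return param, (m, v, t)
```

For example, repeatedly applying this step with the gradient of $f(x)=x^2$ slowly drives $x$ toward 0, with a smaller effective adaptivity than Adam's $p=1/2$.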
\begin{figure}[htp] \flushleft \includegraphics[scale=0.3]{res_images.png} \caption{ResNet Evaluation} \label{res_images} \end{figure} \section{Discussion and Conclusion} From the experiments, we can conclude that Adam and AdaBelief have similar performance on this task, and that Padam and SGD with Momentum likewise have similar performance. In general, the learning curves for SGD and Padam are much smoother than those for Adam and AdaBelief. In most cases, Adam and AdaBelief reach a local optimum within 30 epochs but start to overfit right after, whereas Padam and SGD with Momentum learn smoothly; it is therefore possible that Padam and SGD with Momentum outperform Adam and AdaBelief after they converge. Beyond these results, the limitations of this project are clear. Firstly, the results are not averaged over multiple random seeds because of computational cost and time constraints. This can be addressed by running multiple trials with different random seeds and averaging the results over them. Secondly, the models we use in this project are modified versions of the original architectures, so the experimental results may not be reproducible with the complete models. Lastly, we ran this experiment for a limited number of epochs with a single set of hyperparameters, so our conclusions may be local and may not carry over to a different set of hyperparameters. \section{Contributions} \begin{itemize} \item \textbf{Zhaoyang Zhu}: benchmark algorithm prototyping, training ResNet from scratch \item \textbf{Haozhe Sun}: benchmark algorithm prototyping, training AlexNet from scratch \item \textbf{Chi Zhang}: benchmark algorithm prototyping, training VGG from scratch \end{itemize} \section{References} \small [1] Juntang Zhuang, Tommy Tang, Yifan Ding, Sekhar Tatikonda, Nicha Dvornek, Xenophon Papademetris and James S. Duncan (2020) AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. [2] Adam Paszke et al. (2019) PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32, pp. 8024--8035. [3] Sebastian Ruder. An overview of gradient descent optimization algorithms. https://ruder.io/optimizing-gradient-descent/index.html [4] Diederik P. Kingma and Jimmy Ba (2015) Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980v9. [5] Jinghui Chen and Quanquan Gu (2018) Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks. arXiv preprint arXiv:1806.06763. [6] Gregory Cohen, Saeed Afshar, Jonathan Tapson and Andr\'e van Schaik (2017) EMNIST: An Extension of MNIST to Handwritten Letters. arXiv preprint arXiv:1702.05373. [7] Karen Simonyan and Andrew Zisserman (2015) Very Deep Convolutional Networks for Large-Scale Image Recognition. In International Conference on Learning Representations (ICLR). [8] Alex Krizhevsky, Ilya Sutskever and Geoffrey E. Hinton (2012) ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of NIPS, pp. 1106--1114. [9] Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun (2016) Deep Residual Learning for Image Recognition. In Proceedings of CVPR, pp. 770--778. arXiv preprint arXiv:1512.03385.
[10] Tijmen Tieleman and Geoffrey Hinton (2012) Lecture 6.5 -- RMSProp. COURSERA: Neural Networks for Machine Learning. \newpage \section*{Appendix} \begin{figure}[htp] \centering \includegraphics[width=13.5cm]{AlexNet.png} \caption{AlexNet Structure} \label{alexnet} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=15cm, height=20cm]{vgg_model.png} \caption{VGG Structure} \label{vgg_structure} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=15cm]{resnet.png} \caption{ResNet Structure} \label{resnet} \end{figure} \end{document}
\section{Introduction} Coalescence of a pair of droplets in a concentrated emulsion flowing through a 2D microchannel can trigger a cascade of similar events in their neighbourhood, giving rise to an avalanche that propagates through the emulsion \cite{Bremond2011, Gunes2010a} (see figure \ref{fig:schematic-model} A for snapshots of a propagating coalescence avalanche). Since the coalescence process depends sensitively on various parameters, such as the thickness of the liquid film between the droplets and their instantaneous velocities, which vary dynamically across the emulsion, the avalanche is observed to propagate stochastically. Bremond et al.~\cite{Bremond2011} even measured the probability associated with this local propagation as a function of the relative orientation of the droplets. Using this measure in a stochastic agent based model, we simulate the propagation of coalescence avalanches in concentrated 2D emulsions. We find that avalanches either propagate autocatalytically to destabilise the entire emulsion, or prematurely stop cascading, leaving it relatively stable~\cite{DannyRaj2017}. The avalanche dynamics depends on the size of the system, the aspect ratio of the packing, how the droplets are oriented with respect to each other locally~\cite{Bremond2011}, the number of avalanches triggered~\cite{DannyRaj2018}, and the fluid properties, which in turn affect the overall propensity for coalescence propagation~\cite{DannyRaj2016, Baret2009}. In our previous investigations of the phenomenon, we studied coalescence propagation only on closely packed assemblies of monodisperse droplets (droplets of the same size). For these configurations, the total number of neighbours of each droplet was constant, except for droplets at the edge of the assembly.
However, in real microfluidic applications, droplets self-organise to form different arrangements, which could be randomly close-packed, with differences in the neighbour configuration between droplets even in the bulk of the assembly. Depending on the application, droplets may not always be monodisperse and could be of different sizes. Polydisperse emulsions could significantly alter the neighbourhood configuration of droplets. An important question then arises: how sensitive is the avalanche dynamics to the underlying droplet configuration, when the material composition and all other properties are kept the same? If the propagation is indeed sensitive to how the droplets are packed, then in dense flowing conditions, as in droplet-based incubators \cite{Frenz2009, Dai2016b}, where droplets reorganise dynamically as they move through the channel (or are engineered to do so), the stability of the emulsion, \textit{i.e.} the propensity to form large avalanches, becomes a function of time, making it hard to operate these devices stably. Also, Bremond et al.~\cite{Bremond2011} hypothesised that polydisperse emulsions have a higher propensity to result in phase inversion inside a microchannel---where cascades of coalescence events could locally invert the droplet and continuous phases in the emulsion. They showed that coalescence events leading to local phase inversion were limited in monodisperse emulsions because the avalanche propagates in a largely anisotropic fashion; for the phases to invert, propagation should happen along small closed paths during which the continuous phase can be engulfed within the coalescing phase. Therefore, to understand these systems, in order to operate them stably or to control the overall propagation of coalescence avalanches, we need to investigate the role of the droplet configuration in emulsion stability.
To this end, we generate a variety of droplet configurations and study their stability using a stochastic agent based model for coalescence propagation. We then bring in graph theory to formally characterise the underlying droplet configuration---as a graph with \textit{nodes} representing droplets and \textit{edges} connecting droplets to their immediate neighbours---and investigate how the structural properties of the graph influence propagation. Finally, we build a data-driven model that relates the topology of the droplet packing to emulsion stability. \begin{figure*} \centering \includegraphics[width=\linewidth]{Schematic_and_StochasticModel.pdf} \caption{A - Snapshots from the experiments conducted by \cite{Bremond2011} (courtesy: Nicolas Bremond, ESPCI France). The large snapshot corresponds to the state before the onset of the avalanche. The smaller images show time-lapse snapshots of the propagating avalanche. The inset (in the box) illustrates the local propagation rule: when droplets 1 and 2 coalesce, a nearby droplet 3 coalesces with a probability based on its orientation $\theta$, as shown in the plot. B - The stochastic agent based model takes different droplet configurations as input, along with the local propagation rule in A, and predicts, via a Monte Carlo study, the probability $P(A)$ of an avalanche as a function of its size $A$ for each configuration.} \label{fig:schematic-model} \end{figure*} \section{Modelling the stochastic coalescence avalanches.} \subparagraph{Challenges with a first-principles approach.} Coalescence of droplets is a multi-scale phenomenon~\cite{Chan2011, Janssen2011}. The continuous-phase film between two droplets has to drain and become thin enough to be unstable to perturbations, leading to its collapse, which allows the droplet interfaces to make contact. The drainage process is rather complicated: a range of different interface configurations form as the thin film drains~\cite{Vakarelski2010}.
However, once these droplets touch, they form high-curvature regions that lead to large surface tension forces that pull the droplets together. In the system we are interested in, droplets coalesce via a counter-intuitive mechanism: upon \textit{decompression}. When droplets that are sufficiently close to each other are pulled apart, a low-pressure region is formed that pulls the interfaces together, initiating contact between them~\cite{Bremond2008}. Since these observations are reported in systems with large amounts of surfactant, there is reason to believe that coalescence is facilitated by surfactant concentration gradients on the droplet interfaces. To completely resolve these structures and capture the different stages as droplets coalesce, one requires very fine temporal and spatial resolution in the simulations. Furthermore, to simulate a coalescence avalanche, one has to capture \begin{inlineenum} \item self-organisation: the motion of droplets as they move through the microchannel, \item nucleation events: coalescence between a pair of droplets that initiates a cascade of coalescence events, and \item dynamic processes in coalescence: interactions between dynamically growing coalesced clusters formed by multiple coalescence events. \end{inlineenum} These factors make any kind of first-principles approach to modelling coalescence avalanches computationally expensive, prohibiting study at the system level. \subparagraph{Need for a simple model.} Our goal in this study is to understand how the propagation of avalanches depends on the way droplets are packed together. Hence, what we need is a model that takes the droplet configuration as input and simulates the stochastic propagation of an avalanche. The model should incorporate a measure of how coalescence events lead to newer events through the nearby droplets.
For example, the probability associated with local propagation measured by Bremond and co-workers~\cite{Bremond2011} can be incorporated into such a model. Also, since the avalanche propagates stochastically, it is important that we have a computationally simple model that allows us to generate independent realisations of the avalanche propagation (a Monte Carlo study) to estimate the expected properties of the propagation phenomenon. \subparagraph{Stochastic agent based model.} We model coalescence propagation as a stochastic branching process on a group of droplets packed together in a tight configuration~\cite{DannyRaj2016}; here, a branch emerges when two droplets coalesce, and the branch grows (or propagates) stochastically via neighbours that are in close proximity to the recently coalesced droplets. The process continues until all the newly formed branches either stop propagating or there are no more droplets to coalesce. The droplets are assumed to be stationary during the entire propagation, since the speeds associated with the propagation of the coalescence cascade are generally an order of magnitude higher than the droplet movement speeds. Propagation is carried out on a randomly packed droplet configuration assembled using the algorithm in~\cite{Xu2005, Desmond2009}. We used the code provided by the authors of~\cite{Desmond2009}, which can be found in~\cite{DesmondCodeRcp}, to produce dense mono- and bidisperse randomly packed droplet configurations (shown in figure~\ref{fig:schematic-model} B). Note that the stochastic agent based framework can be extended to account for the dynamic motion of the droplets and the coalesced clusters. However, in this article, we retain the simplifying assumption that the droplets are static, which suits our current interest: understanding how the avalanche dynamics depends on the configuration in which the droplets are packed. For a detailed algorithm and implementation of the branching process, the reader is referred to~\cite{DannyRaj2016}.
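To make the branching process concrete, here is a minimal Python sketch. It is not the implementation of~\cite{DannyRaj2016}: the distance-based neighbourhood criterion (with an assumed slack factor) and the constant propagation probability $p$, used in place of the orientation-dependent rule, are our simplifications.

```python
import math
import random

def neighbour_graph(centres, radii, slack=1.05):
    """Adjacency list of a droplet packing: droplets i and j are neighbours
    when their centre distance is at most slack * (r_i + r_j). The slack
    factor is a modelling choice standing in for 'close enough to propagate'."""
    n = len(centres)
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(centres[i], centres[j]) <= slack * (radii[i] + radii[j]):
                nbrs[i].append(j)
                nbrs[j].append(i)
    return nbrs

def one_cascade(nbrs, p, seed_pair, rng):
    """One stochastic realisation of the branching process: droplets are
    static, and every newly coalesced droplet tries once to recruit each
    uncoalesced neighbour with a constant probability p (a stand-in for
    the orientation-dependent alpha * G(theta) rule)."""
    coalesced = set(seed_pair)
    frontier = list(seed_pair)
    while frontier:
        i = frontier.pop()
        for j in nbrs[i]:
            if j not in coalesced and rng.random() < p:
                coalesced.add(j)
                frontier.append(j)
    return coalesced
```

The cascade terminates exactly as described above: when every branch has either failed its propagation trials or run out of uncoalesced neighbours.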
\subparagraph{Local propagation rule.} A pair of droplets is chosen randomly and allowed to coalesce. This initiates similar coalescence events in its neighbourhood with probability $\alpha \times G(\theta)$; here, $\theta$ is a measure of the local orientation of the droplets participating in the propagation, and $\alpha$ is a parameter that varies with fluid properties such as viscosity, surface tension, etc. The form of $G(\theta)$ and the definition of $\theta$ are illustrated in figure~\ref{fig:schematic-model}~A. This measure was experimentally computed in ref~\cite{Bremond2011} by analysing over 2000 coalescence events in different parts of the 2D emulsion. Its form resembles a cosine function, which corresponds to the component of the pulling force experienced by a nearby droplet due to the coalescence of a pair of droplets~\cite{DannyRaj2016}. $G(\theta)$ favours propagation along the orientation of the coalescing pair ($\theta = 0$), which gives rise to avalanches that propagate as fingers through the 2D emulsion. \subparagraph{Fluid properties and critical transitions.} The role of the fluid characteristics in the propagation is implicitly captured by the parameter $\alpha$. When $\alpha \simeq 1$, every new coalescence event has the means to initiate more such events through its neighbours, resulting in a cascade of coalescence events (conditions matching the experiments of ref~\cite{Bremond2011}). When $\alpha$ is very small, coalescence events do not propagate and the emulsion is stable. Therefore, a critical $\alpha_c$ exists that marks the transition from system-size-spanning avalanches to a stable regime. Similar qualitative transitions based on surfactant concentration were reported by Baret and co-workers~\cite{Baret2009}. We find that the structure of the observed $G(\theta)$, which favours finger-like propagation events, leads to a system-size dependence of the critical transition, $\alpha_c = f(N)$ \cite{DannyRaj2017}.
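The local rule above can be sketched in a few lines, approximating $G(\theta)$ by a cosine truncated at zero; the measured curve of~\cite{Bremond2011} only resembles a cosine, so this functional form is our stand-in, not the experimental fit.

```python
import math

def G(theta):
    """Angular factor of the local propagation rule, approximated as
    cos(theta) truncated at zero (a stand-in for the measured curve)."""
    return max(math.cos(theta), 0.0)

def propagation_probability(alpha, pair_axis, candidate, pair_centre):
    """Probability alpha * G(theta) that the coalescence of a pair
    propagates to a nearby candidate droplet.

    theta is the angle between the (unit) axis of the coalescing pair and
    the direction from the pair's centre to the candidate droplet.
    """
    dx = candidate[0] - pair_centre[0]
    dy = candidate[1] - pair_centre[1]
    norm = math.hypot(dx, dy)
    dot = (dx * pair_axis[0] + dy * pair_axis[1]) / norm
    theta = math.acos(max(-1.0, min(1.0, dot)))  # clamp for rounding safety
    return alpha * G(theta)
```

As in the text, the probability is maximal for a candidate along the pair axis ($\theta = 0$) and vanishes perpendicular to it, which is what produces the finger-like avalanches.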
\subparagraph{Monte Carlo study.} For every droplet configuration we perform a Monte Carlo study ($\sim 10^5$ simulations) of the stochastic agent based model; every run generates an independent realisation of the stochastically propagating coalescence avalanche. From these independent runs, we compute the probability of occurrence of an avalanche, $P(A)$, as a function of its size $A$. \section{Results and Discussion} \subparagraph{Stability of emulsions.} The structure of $P(A)$, the probability of an avalanche of size $A$, sheds light on the nature of the propagation and the resultant stability of the emulsion. In our previous investigations~\cite{DannyRaj2016, DannyRaj2018}, where the propagation was studied on a hexagonally close-packed arrangement of droplets, we observed $P(A)$ to have a non-monotonic shape, with a maximum at very small values of $A$ and a second peak at a large value of $A$ (red curves in figure~\ref{fig:AvalancheProbability_agentbasedmodel}). The second peak indicates that a significant fraction of the avalanches propagate through the entire emulsion, destabilising it. One could call an emulsion `stable' if the second peak can be avoided. One way to do this would be to make $\alpha<\alpha_c$, which reduces the propensity for local propagation, giving rise to coalescence events that do not propagate. This requires changing the fluids used, the surfactant concentration, etc. Another way to reduce the second peak in $P(A)$, without having to change the fluid system, is to change the aspect ratio of the droplet configuration. Arranging the droplets in a slender configuration increases the chance that the propagating front encounters the boundary, which reduces the chance of system-level propagation.
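The Monte Carlo estimate of $P(A)$ described above can be sketched as follows. For brevity, the sketch uses a toy isotropic rule (a constant probability $p$ on a square grid, seeded from a single site) instead of the anisotropic $\alpha \, G(\theta)$ rule; only the estimation procedure, not the propagation model, is the point here.

```python
import random
from collections import Counter

def one_avalanche(n, p, rng):
    """One realisation of a toy coalescence cascade on an n x n grid.

    Each coalesced site tries once to propagate to each of its four
    neighbours with probability p (an isotropic simplification of the
    alpha * G(theta) rule used in the text)."""
    start = (rng.randrange(n), rng.randrange(n))
    coalesced = {start}
    frontier = [start]
    while frontier:
        x, y = frontier.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < n and 0 <= ny < n and (nx, ny) not in coalesced:
                if rng.random() < p:
                    coalesced.add((nx, ny))
                    frontier.append((nx, ny))
    return len(coalesced)

def avalanche_distribution(n, p, runs, seed=0):
    """Monte Carlo estimate of P(A): the fraction of independent runs
    whose avalanche reached size A."""
    rng = random.Random(seed)
    counts = Counter(one_avalanche(n, p, rng) for _ in range(runs))
    return {a: c / runs for a, c in counts.items()}
```

With $p$ small, almost all mass sits at small $A$; as $p$ approaches 1, a system-spanning peak appears, mirroring the two regimes discussed in the text.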
\begin{figure} \centering \includegraphics[width=\linewidth]{AvalancheProbability.pdf} \caption{Probability of an avalanche $P(A)$ as a function of its size $A$, plotted for a randomly packed monodisperse emulsion (main panel) and bidisperse emulsions (side panels). The top side panel corresponds to a small level of bidispersity ($30:70$ composition of droplets with size ratio $1.1:1$) and the bottom one to a large level of bidispersity ($50:50$ composition of droplets with size ratio $2:1$). These are the same as the droplet configurations in figure \ref{fig:schematic-model} B. The thick red line is the $P(A)$ corresponding to the hexagonally close-packed configuration (results from ref~\cite{DannyRaj2016}); the thin black lines correspond to the highest and lowest mean avalanches in a given category; the thin grey lines show the $P(A)$ for the rest. All the simulations here correspond to $\alpha = 1$.} \label{fig:AvalancheProbability_agentbasedmodel} \end{figure} \subparagraph{Propagation depends on the packing.} When droplets of the same size are randomly packed, we observe significant variation in $P(A)$ across different droplet configurations, even when the parameter $\alpha$ and the aspect ratio of the droplet assembly are held constant (see the thin lines in figure~\ref{fig:AvalancheProbability_agentbasedmodel}). The thick line (in red) corresponds to the monodisperse hexagonally close-packed (hcp) configuration of droplets (the same as the results in ref~\cite{DannyRaj2016}). When droplets are monodisperse and randomly packed, they exhibit the most variation about the hcp configuration, with a few configurations even exhibiting higher propagation than hcp. However, when the droplets are bidisperse, we find both the variation in $P(A)$ between different configurations and their mean propensity for the propagation of large avalanches (the height of the second peak) to decrease with increasing bidispersity (see the side panels in figure~\ref{fig:AvalancheProbability_agentbasedmodel}).
A configuration's level of bidispersity can be tuned by changing both the ratio of the radii ($sr$) and the proportion of the two types of droplets ($nr$). \subparagraph{Emulsions as graphs.} When a pair of droplets coalesces, it leads to more such events in its neighbourhood. Hence, the number of neighbours available in the immediate vicinity and the angles they make with the recently coalesced pair determine the probabilities associated with propagation. To understand the sustained propagation of an avalanche, one has to investigate not just the nearest neighbours, but also their neighbours, and so on. In other words, a system-level characterisation of the droplet configuration is essential to understand why a certain droplet packing either favours or hinders the propagation of an avalanche. Tools from \textit{graph theory} offer a convenient formalism to analyse such system-level aspects of droplet configurations. A concentrated emulsion can be thought of as a \textit{graph} whose \textit{nodes} correspond to the droplets, with an \textit{edge} connecting every pair of nearby droplets through which coalescence can propagate (see the inset of figure \ref{fig:Emulsion_as_graphs}). In this context, the question of interest takes the form: can one understand the avalanche propagation dynamics from the topology of the underlying graph? Coalescence avalanches can be interpreted as a cascade on the graph. If $x_i[t]$ is the probability that a node $i$ is `coalesced' at time $t$, then one can write the coalescence propagation equation as follows: \begin{equation} x_i[t] = 1 - \prod_{j\in \mathcal{N}_i} \Big(1 - w_{ji} x_{j}[t-1]\Big) \label{eqn:markovchain} \end{equation} The RHS of Eq~\ref{eqn:markovchain} quantifies the probability associated with propagation from any of the possible neighbours of $i$. Here, $w_{ji}$ is the probability that coalescence will propagate from $j$ to $i$, and $\mathcal{N}_i$ refers to the neighbourhood of $i$.
$w_{ji}$ is non-zero only when there is an edge connecting $j$ and $i$. This equation is similar in spirit to the time-evolution equation of a discrete-state, discrete-time Markov chain~\cite{ross2014probability}, where $x_i[t]$ is the probability of the Markov chain being in state $i$. If $\mathbf{x} = [x_i]_{i=1}^N$ is the vector of all $x_i$, and $W = [w_{ij}]_{i, j = 1}^N$, then the propagation equation of the Markov chain can be written as $\mathbf x [t+1] = W \mathbf x [t]$. However, in the case of coalescence propagation, crucial properties such as the \emph{row-stochastic} nature of $W$, and the fact that $\sum_i x_i = 1$, do not hold---hence, we have the slightly more involved propagation equation given above. A propagating coalescence avalanche could reach $i$ from $j$ through any of $j$'s neighbours $k$. Hence, we define $w_{ji}$ as \begin{equation} w_{ji}= \begin{cases} \sum_{k\in\mathcal{N}_j} p(k,j,i) & A_{ji} = 1\\ 0 & \text{otherwise} \end{cases} \label{eqn:wij_defn} \end{equation} where $p(k,j,i)$ is the probability that a coalescence event between droplets $k$ and $j$ results in the coalescence of droplet $i$. This is identical to the local probability rule, derived from the observations of Bremond et al.~\cite{Bremond2011}, which is used in the stochastic agent based model (see figure \ref{fig:schematic-model} A). Since the probability of propagation from $j$ to $i$ depends on the neighbours of $j$, we expect $w_{ji} \neq w_{ij}$. \begin{figure} \centering \includegraphics[width=\linewidth]{figure_emulsiongraph.pdf} \caption{Average degree of the graph corresponding to a droplet configuration, scaled by the system parameter $\alpha$, as a function of the average avalanche size ($A$) it exhibits, plotted for different packing conditions: monodisperse, bidisperse with different size ratios ($sr$) and compositions ($nr$), different total numbers of droplets ($N$) and different propensities for propagation ($\alpha$).
The dotted line is a fit of a linear function with a correction, $\Tilde{d}_g = c_1 A + c_2 (1-e^{-c_3 A})$, where $c_1 = 5.4$, $c_2 = 3.6$ and $c_3 = 22.9$. INSET: an example droplet configuration with an overlay of the corresponding graph. Parameters explored: $N \in \{144, 196, 225\}$, $\alpha \in \{0.9, 1, 1.1\}$, $sr \in (0, 3)$ and $nr \in (0, 0.5)$.} \label{fig:Emulsion_as_graphs} \end{figure} One can immediately see from the model in Eq~\ref{eqn:markovchain} that the propagation depends on how the droplets are packed. This is summarised in $W$, which can be thought of as a \emph{weighted adjacency matrix} of the underlying graph. The steady-state distribution of an avalanche $\mathbf{x}^s$, \textit{i.e.}\ when $\mathbf{x}[t] = \mathbf{x}[t-1]=\mathbf{x}^s$, is an implicit function of the adjacency matrix $W$ alone. Hence, understanding the properties of $W$ would indeed aid in understanding the role of the droplet configuration in the avalanche propagation. \subparagraph{Mean degree explains observed avalanches.} A simple way to characterise a droplet configuration is via the \emph{degree distribution} of the underlying graph \cite{NetworksNeuman2010}. The degree of a node is the number of nearby neighbours of the corresponding droplet, weighted by the probability that coalescence will propagate from that node. When computed for the weighted graph $W$, this corresponds to the net number of neighbours through which coalescence can effectively propagate. Randomly packing a small number of droplets can lead to large variations in the degree distribution between individual configurations, which in turn affect the propagation of avalanches; this is the reason for the variation in the $P(A)$ curves observed in figure \ref{fig:AvalancheProbability_agentbasedmodel} when droplets are randomly packed.
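Both the propagation equation (Eq.~\ref{eqn:markovchain}) and the weighted degree just described operate directly on the weight matrix $W$; a combined sketch follows. The small matrices in the test are hypothetical, and keeping nucleated droplets coalesced via the `max` is our addition to make the fixed-point iteration well posed (the raw equation would let a seed droplet relax back towards zero).

```python
import math

def propagate(W, x0, steps=1000, tol=1e-10):
    """Iterate Eq. (1): x_i[t] = 1 - prod_j (1 - w_ji * x_j[t-1]).

    W[j][i] is the probability that coalescence propagates from droplet j
    to droplet i (zero when they are not neighbours); x0 holds the initial
    coalescence probabilities (1.0 for the nucleated droplets).
    """
    n = len(x0)
    x = list(x0)
    for _ in range(steps):
        new = [1.0 - math.prod(1.0 - W[j][i] * x[j] for j in range(n))
               for i in range(n)]
        # our addition: droplets that nucleated the avalanche stay coalesced
        new = [max(a, b) for a, b in zip(new, x0)]
        if max(abs(a - b) for a, b in zip(new, x)) < tol:
            return new
        x = new
    return x

def mean_degree(W, alpha=1.0):
    """Scaled mean degree d_tilde = alpha * d_g, where a droplet's weighted
    degree is its total outgoing propagation weight, sum_i w_ji."""
    degrees = [sum(row) for row in W]
    return alpha * sum(degrees) / len(degrees)
```

The fixed point returned by `propagate` corresponds to the steady-state distribution $\mathbf{x}^s$ discussed above, and it is an implicit function of $W$ alone.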
It is intuitive to expect configurations with lower avalanche sizes to have a smaller mean degree $d_g$ (averaged over all the droplets in a given configuration). We observe this to be generally true across configurations, even when they are bidisperse. We observe the mean degree, when scaled by the system parameter $\alpha$ ($\Tilde{d}_g = \alpha \times d_g$), to increase monotonically with the mean avalanche size $A$. We find that all the data from our simulations---different configurations for both the mono- and bidisperse cases, different system sizes, and different local propagation propensities $\alpha$---collapse onto a master curve, suggesting a universal relationship between the structure of the graph (defined via the average degree of a configuration) and the expected avalanche size (averaged over many independent realisations) (see figure \ref{fig:Emulsion_as_graphs}). We build a relationship between $\Tilde{d}_g$ and $A$ from the data: \begin{equation} \Tilde{d}_g = c_1 A + c_2 \big(1 - e^{-c_3 A} \big). \label{eqn:data-driven-model} \end{equation} We regress a linear model with a correction term (for small $A$) to account for the phase transition (from autocatalytic to non-autocatalytic propagation) that occurs at small avalanche sizes. We find this relationship to hold for different values of the system size $N$, different propensities for propagation $\alpha$, and varying levels of bidispersity defined by the parameters $sr$ and $nr$---yielding a universal relationship.
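With the fitted coefficients reported in figure~\ref{fig:Emulsion_as_graphs} ($c_1 = 5.4$, $c_2 = 3.6$, $c_3 = 22.9$), Eq.~\ref{eqn:data-driven-model} is strictly increasing in $A$, so it can be inverted numerically to predict the expected avalanche size from a measured $\Tilde{d}_g$; a sketch, where the bracketing interval for the bisection is our assumption:

```python
import math

C1, C2, C3 = 5.4, 3.6, 22.9  # fitted coefficients reported in the text

def d_tilde(A):
    """Data-driven model (Eq. 3): scaled mean degree vs mean avalanche size."""
    return C1 * A + C2 * (1.0 - math.exp(-C3 * A))

def predict_A(d_measured, lo=0.0, hi=10.0, iters=60):
    """Invert Eq. (3) by bisection; valid because d_tilde is strictly
    increasing in A. The interval [lo, hi] is an assumed bracket."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if d_tilde(mid) < d_measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The exponential correction dominates at small $A$ (the near-critical regime), after which the relationship is essentially linear with slope $c_1$.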
\subparagraph{Exceptions to the rule.} While the data-driven model accurately explains the dependence of the mean avalanche size on the packing characteristic $\Tilde{d}_g$ in most cases, it does not explain why some of the monodisperse, randomly packed droplet configurations facilitate avalanches larger than those of the hexagonally close-packed configuration (hcp) (figure \ref{fig:AvalancheProbability_agentbasedmodel} main panel and figure \ref{fig:conductance_monodisperse} top panel). This is puzzling, since an hcp configuration has the highest possible $d_g$ for a system of a given size. Although $d_g$ for these large-$A$ configurations is smaller than that of hcp, we find that they have a different degree distribution: they have fewer nodes with $6$ neighbours than hcp, but more nodes with $5$ and $4$ neighbours. It is reasonable to expect that this difference in distribution is the cause of the better `flow' that promotes a growing avalanche. To understand how a droplet configuration as a whole affects the propagation of the avalanche, we look at the \textit{conductance} of the graph, a measure that quantifies how well-connected a graph is \cite{spielman2007spectral}. For instance, if a graph has bottlenecks, with only a small number of edges across each bottleneck, the conductance will be low. In the context of coalescence avalanches, the presence of bottlenecks would decrease the propensity of the system to exhibit large avalanches. Unfortunately, graph conductance as formally defined is algorithmically difficult to estimate~\cite{conductance}. Hence, we use the \emph{algebraic connectivity} $\mathcal{C}$, i.e.\ the second-smallest eigenvalue of the Laplacian matrix of the graph, as a proxy for the conductance \cite{spielman2007spectral, spielman2012spectral}. Conductance also has interpretations in terms of the diffusion time of diffusion processes on graphs, \textit{i.e.}\ random walks.
Since coalescence propagation is a stochastic process, we believe that the conductance of a graph (measured as algebraic connectivity) yields a single measure for the entire graph that can explain the large mean avalanche sizes observed. \begin{figure} \centering \includegraphics[width=\linewidth]{figure_monodisperse_conductance.pdf} \caption{Average degree $d_g$ (top panel) and algebraic connectivity $\mathcal{C}$ (bottom panel) plotted against the observed mean avalanche sizes $A$ for monodisperse, randomly packed droplet configurations, along with the hcp configuration (marked as \textbf{X}), for $\alpha = 1$. Top panel -- all the points lie in the shaded region, which corresponds to configurations with a lower value of $d_g$ than hcp; bottom panel -- two regions are shaded: configurations with higher algebraic connectivity than hcp lie in the top region and exhibit larger mean avalanches, and \textit{vice versa}.} \label{fig:conductance_monodisperse} \end{figure} \subparagraph{Algebraic connectivity explains the exceptions.} In figure~\ref{fig:conductance_monodisperse}, we plot both the mean degree $d_g$ (top panel) and the algebraic connectivity $\mathcal{C}$ (bottom panel) for monodisperse, randomly packed configurations as a function of the mean avalanche size $A$. We find that algebraic connectivity successfully relates the topology of the graph to the propensity for overall propagation (or flow) of the avalanches: $\mathcal{C}$ is found to increase linearly with $A$, as $d_g$ does. Further, algebraic connectivity also explains why some randomly packed configurations facilitate larger avalanches than hcp. In figure~\ref{fig:conductance_monodisperse}, the hcp configuration (marked as `X') has a higher $d_g$ than all the randomly packed configurations (top panel). However, the configurations that exhibit larger avalanche sizes than hcp are also found to have greater algebraic connectivity (bottom panel).
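Algebraic connectivity is straightforward to compute from the graph Laplacian; a sketch using NumPy, with hypothetical toy adjacency matrices in the test:

```python
import numpy as np

def algebraic_connectivity(A):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A.

    A is a symmetric (possibly weighted) adjacency matrix. The smallest
    Laplacian eigenvalue is always 0, and the second-smallest is positive
    if and only if the graph is connected.
    """
    A = np.asarray(A, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    eig = np.linalg.eigvalsh(L)  # ascending order for symmetric matrices
    return eig[1]
```

For the same number of nodes, a complete graph has a higher $\mathcal{C}$ than a path graph, consistent with the intuition that bottlenecks lower the conductance, and a disconnected graph has $\mathcal{C} = 0$.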
To understand how algebraic connectivity, as a measure, succeeds in estimating the slightly lower flow in hcp, we examine the graph connectivity in a little more detail. In the bulk of an hcp configuration, all droplets have the maximum possible number of neighbours (\textit{i.e.}\ 6). Hence, there are no `bottlenecks' in the bulk that could reduce the overall flow in hcp. However, the droplets near the boundary of the assembly have relatively low degrees (usually 4 or lower), while those of the randomly packed configurations have a wider degree distribution, with a considerable number of droplets of degree 5. Now, consider those avalanches that are triggered in the bulk and propagate towards the boundary. These can become system-spanning only if the boundary is well connected to the bulk. Therefore, such avalanches have a higher chance of propagating further in randomly packed configurations, which have a higher mean degree near the boundary than hcp---a potential reason for the higher avalanche probabilities observed relative to hcp. Algebraic connectivity, being able to characterise the overall flow in the graph, captures this phenomenon well. Algebraic connectivity $\mathcal{C}$ could have been used as a regressor in the data-driven model (Eq \ref{eqn:data-driven-model}) instead of the scaled mean degree $\Tilde{d}_g$. However, in our investigations, we find that using $\Tilde{d}_g$ achieves a more accurate fit to the observed average avalanche size $A$ than using $\mathcal{C}$; compare the spread associated with the two metrics for a fixed $A$ in figure \ref{fig:conductance_monodisperse}. While $\mathcal{C}$ explains the exceptions well, it is always affected by the existence of regions of the graph that contribute to smaller `flow', since $\mathcal{C}$ is a global measure.
However, $\Tilde{d}_g$, being an average local measure of the graph topology, is not affected by these regions and serves as a better regressor in Eq~\ref{eqn:data-driven-model}. Including both metrics as regressors does not appreciably change the accuracy in explaining the data. \section{Conclusion} In this article, we show how the propensity for coalescence avalanches to propagate in a concentrated emulsion is directly tied to how the droplets are packed. By expressing the droplet configurations as graphs, we are able to employ the graph theory formalism to study how the topology of droplet packing can be used to predict the size of coalescence avalanches that these configurations can facilitate. We show the existence of a universal relationship between the scaled mean degree of the graph and the mean avalanche size, which allows us to build a data-driven model that can be used for predicting the avalanche size for a given droplet configuration. Since the input to the data-driven model is simply how the droplets are packed, this model has the potential to be deployed online, in a real droplet microfluidic setup, to study the stability of packing as droplets self-organise in a 2D microfluidic channel. We find that there are exceptions to the data-driven model, which is based on the mean degree of a graph---only an average local property of the graph. These are explained by algebraic connectivity, a proxy for graph conductance, which takes into account the system-level characteristics of a graph that are linked to a successful propagation of avalanches. Also, since we relate the topology of the droplet configuration, via metrics that characterise its structure, to the propagation dynamics, we expect our approach to be agnostic to the specific choice of the coalescence propagation model used in the study.
Though specific details of the hydrodynamics are not considered explicitly, \begin{inlineenum} \item our approach takes into account the local propagation propensity (estimated from experiments) and can be extended to systems with different fluid properties, and \item the graph essentially represents the droplet-interaction network, in which an edge connects each pair of droplets that are within their sphere of hydrodynamic influence. \end{inlineenum} Hence, even the propagation dynamics predicted by a more detailed model---one that considers the motion of droplets, the thin-film thickness distribution and its effect on triggering a coalescence event, etc.---would still depend on how droplets are packed together and how this configuration gives rise to the propagation of coalescence avalanches; this is captured by a graphical representation of the droplet ensembles. \section*{Acknowledgments} This project was funded by the DST INSPIRE faculty award, grant number: DST/INSPIRE/04/2017/002985. DRM acknowledges the technical assistance from Bhavya Balu and the discussions with Raghunathan Rengaswamy at IIT Madras, in the early stages of the project.
\section{\label{sec:one}Introduction} Quasi-particles are excitations of matter. According to modern many-body theory, elementary particles and quasi-particles are the basic constituents of matter. The latter are crucial for understanding many phenomena in condensed matter physics. Actually, quasi-particles can be regarded as collective excitations of many elementary particles, as well as mixtures of different elementary excitations, whose behavior is similar to that of matter particles \cite{book1}. In atomic physics and quantum optics, some exotic phenomena can be explained with the concept of quasi-particles. For example, the slow-light phenomenon \cite{slow,slow397} in electromagnetically induced transparency (EIT) \cite{Harris,Harris82} can be explained in terms of quasi-particles - polaritons \cite{Lukin84,Lukin65,Lukin77,Lukin75}. EIT occurs when a weak signal light field and a stronger control field are coupled to an ensemble of atoms with a $\Lambda $ energy level configuration. Under two-photon resonance, due to the destructive interference between the two interaction paths, the initially opaque resonant medium becomes transparent with respect to the probe field, and the group velocity of light is slowed down. Light can then be stopped in the EIT medium because only the dark-state polariton is excited. The dark-state polariton is a bosonic-like collective excitation, which is a mixture of a signal light field and an atomic spin wave \cite{sprl91}. Most recently, light deflection was observed in an EIT atomic medium subject to an external field with a spatially inhomogeneous distribution \cite{Karpa,Scul07}. In the experiment of Ref. \cite{Karpa}, it was found that the light ray bends when a magnetic field with a small gradient perpendicular to the propagation direction is applied to a cell containing a $\Lambda $-type rubidium gas.
This experiment was interpreted as a Stern-Gerlach experiment for the dark polariton; thus the effective magnetic moment of the dark-state polariton was observed for the first time. It demonstrates that the dark-state polariton indeed behaves as a matter particle with mass, momentum, magnetic moment, etc., which can be reflected, refracted, and even deflected by a gradient force. Therefore, quasi-particles show their particle nature with definite momentum and effective mass. Different from that in Ref. \cite{Karpa}, the experiment of Ref. \cite{Scul07} shows that light can also be deflected by an optically driven Rb atomic vapor when the profile of the driving field is inhomogeneous. In this situation the angle of deviation is an order of magnitude larger than that in Ref. \cite{Karpa}. The observed phenomenon of light deflection in such EIT media has been explained correctly according to the semi-classical theory \cite{ZDL} without using the concept of the dark-state polariton, which requires the quantization of the light fields. Like matter particles, quasi-particles possess wave-particle duality, that is, quasi-particles sometimes appear to behave as particles and sometimes as waves. Here, we are interested in the particle aspect of the dark polariton, which is an atomic collective excitation dressed by the quantized probe light. The main purpose of this paper is to systematically develop a quantum theory describing the spatial motion of polaritons in inhomogeneous magnetic and optical fields. We begin our investigation with the propagation of quasi-particles in the limit of atomic linear response, where the atomic equations are treated perturbatively. With an effective potential induced by the steady atomic response in the external spatially dependent field, the dynamics of the spatial motion of the quasi-particles is governed by an effective Schr\"{o}dinger equation.
The spatial motion of the quasi-particle exhibits anisotropic dispersion -- the longitudinal motion resembles the ultra-relativistic motion of ``slow light'', while the transverse motion is non-relativistic with a certain effective mass. This paper is organized as follows: In Sec. \ref{sec:two}, we present the theoretical model for a $\Lambda $-type atomic ensemble in the presence of inhomogeneous external fields, and derive the system of equations governing the spatial motion of the signal field in the atomic linear response with respect to the probe field. In Sec. \ref{sec:three}, perturbation theory is applied to obtain the atomic equation of motion related to the linear response to the signal field. In Sec. \ref{sec:four} and Sec. \ref{sec:five}, the crucial concept of EIT - the dark-state polariton - is introduced as a dressed field to describe the spatial motion of the collective excitation. Afterward, the dynamics of the quasi-particle - the dark polariton - is discussed in the presence of an inhomogeneous magnetic field with a spatial distribution along the transverse direction. In Sec. \ref{sec:six}, the spatial motion of the signal light in an inhomogeneous coupling field is investigated. Then we draw our conclusions in Sec. \ref{sec:sum}. \section{\label{sec:two}theoretical model for $\Lambda$-type atomic ensemble in external fields} We consider an ensemble of $N$ identical and noninteracting atoms, which is confined in a cell ABCD as shown in Fig.~\ref{fig:1}b. Each atom is modeled by a $\Lambda $-shaped energy level configuration with internal states $\left\vert g\right\rangle $, $\left\vert s\right\rangle $ and $\left\vert e\right\rangle $. The transitions from the two lower states $|g\rangle $ and $|s\rangle $ to the excited state $|e\rangle $ are coupled by two optical fields, a weaker probe field and a stronger control field, as shown in the top panel of Fig. \ref{fig:1}a.
The atomic transition from $\left\vert g\right\rangle $ to $\left\vert s\right\rangle $ is forbidden by the electric dipole coupling. The probe field carries frequency $\nu $ and wave number $k$. It is a quantized electromagnetic field with $\sigma ^{+}$ polarization. Under the rotating wave approximation, the negative frequency part of its electric field, $\tilde{E}^{+}\left( \mathbf{r},t\right) $, couples the ground state $|g\rangle $ to the excited state $|e\rangle $ at resonance, in the absence of the magnetic field $B\left( \mathbf{r} \right) $. The control field has carrier frequency $\nu _{c}=\omega _{e}-\omega _{s}$ and wave number $k_{c}$. It is a classical field with $\sigma ^{-}$ polarization, and couples the upper state $|e\rangle $ and the metastable state $|s\rangle $ with Rabi frequency $\Omega \left( \mathbf{r}\right) $. After the magnetic field is applied along the $z$-direction, the internal energies of the corresponding states are shifted from their original values by magnitudes $\mu _{i}B$ with \begin{equation} \mu _{i}=m_{F}^{i}g_{F}^{i}\mu _{B},i\in \left\{ g,s,e\right\} . \end{equation} Here, $\mu _{B}$ is the Bohr magneton, $g_{F}^{i}$ is the Land\'{e} g-factor of the internal state $i$, and $m_{F}^{i}$ is the magnetic quantum number. As shown in Fig. \ref{fig:1}b, the probe field and the control field propagate in parallel along the $z$-direction with wave numbers $k$ and $k_{c}$, respectively. The Hamiltonian of this typical EIT system is given by $H=H^{(A)}+H^{(F)}+H^{(I)}$. Let us use $\tilde{\sigma}_{\mu \nu }^{j}\left( t\right) =\left\vert \mu \right\rangle _{j}\left\langle \nu \right\vert $ to denote the internal state operator of the $j$-th atom between states $\left\vert \mu \right\rangle $ and $\left\vert \nu \right\rangle $.
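The level shifts $\mu _{i}B=m_{F}^{i}g_{F}^{i}\mu _{B}B$ are straightforward to evaluate. A small sketch using the quantum numbers quoted later for the numerical example ($m_{F}^{g}=-2$, $g_{F}^{g}=1/2$, $m_{F}^{s}=0$); the field strength here is an illustrative placeholder:

```python
MU_B = 9.274e-24  # Bohr magneton in J/T

def zeeman_shift(m_F, g_F, B):
    """Energy shift mu_i * B of a Zeeman sublevel, with mu_i = m_F * g_F * mu_B."""
    return m_F * g_F * MU_B * B

B = 1e-4  # 1 gauss, expressed in tesla (illustrative value)
shift_g = zeeman_shift(-2, 0.5, B)  # ground state |g>
shift_s = zeeman_shift(0, 0.5, B)   # metastable state |s>: unshifted
print(shift_g, shift_s)
```

Because $m_{F}^{s}=0$, the state $|s\rangle$ is unshifted, so the difference $\mu _{s}-\mu _{g}$ entering the effective potential derived later comes from the ground-state shift alone.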
We introduce the collective atomic operator \cite{Lukin65} \begin{equation} \tilde{\sigma}_{\mu \nu }\left( r,t\right) =\frac{1}{N_{r}}\sum_{r_{j}\in N_{r}}\tilde{\sigma}_{\mu \nu }^{j}\left( t\right) \text{,} \label{2-01a} \end{equation} which is averaged over a small but macroscopic volume containing many atoms $N_{r}=\left( N/V\right) dV\gg 1$ around position $r$. Here $N$ is the number of atoms in the interaction volume $V$. Then the Hamiltonian of the atomic part reads \begin{equation} H^{(A)}=\frac{N}{V}\sum_{j}\int d^{3}r\left( \omega _{j}-\mu _{j}B\right) \tilde{\sigma}_{jj}, \label{2-01} \end{equation} where we have neglected the kinetic term of the atoms, and $\omega _{j}$ are the corresponding energy level spacings of the internal atomic levels. $H^{(F)}$ is the free Hamiltonian of the radiation field. Using the electric-dipole approximation and the rotating-wave approximation, the interaction with the electromagnetic field reads \cite{Lukin84,Lukin65,Lukin77,Lukin75} \begin{equation} H^{(I)}=-\frac{N}{V}\int d^{3}r\left[ d_{eg}\tilde{\sigma}_{eg}\tilde{E}^{+}+\Omega \tilde{\sigma}_{es}e^{i\left( k_{c}z-\nu _{c}t\right) }+h.c.\right] \label{2-02} \end{equation} Here, $\tilde{E}^{+}$ is the negative-frequency part of the probe field; $\Omega \left( \mathbf{r}\right) $ is the Rabi frequency of the control field, which usually depends on the spatial coordinate through the spatial profile of the driving field; $d_{eg}$ is the dipole matrix element between the states $|g\rangle $ and $|e\rangle $. \begin{figure}[ptb] \includegraphics[width=6 cm]{sgfig1.eps} \caption{\textit{(Color on line)} (a) Level scheme of atoms interacting with the $\protect\sigma^{+}$-polarized probe field and the $\protect\sigma^{-}$-polarized strong control field. $\Omega$ denotes the Rabi frequency of the undepleted classical control field.
(b) Configuration of the optical beams and the magnetic field inside the atomic medium.} \label{fig:1} \end{figure} For convenience, we write the electric field as \begin{equation} \tilde{E}^{+}\left( r,t\right) =\sqrt{\frac{\nu }{2\varepsilon _{0}V}}E\left( r,t\right) e^{i\left( kz-\nu t\right) } \label{2-03} \end{equation} in the following discussion. Here $\exp \left[ i\left( kz-\nu t\right) \right]$ is the carrier wave with frequency $\nu $ and wave number $k$ propagating in the $z$ direction, and $E\left( r,t\right) $ is the slowly varying envelope, meaning that its spatiotemporal variation occurs on scales much longer than the carrier wavelength and period. Further, we introduce the slowly varying variables for the atomic transition operators \begin{subequations} \label{2-04} \begin{align} \tilde{\sigma}_{eg}\left( r,t\right) & =\sigma _{eg}\left( r,t\right) e^{-ikz}\text{,} \\ \tilde{\sigma}_{es}\left( r,t\right) & =\sigma _{es}\left( r,t\right) e^{-ik_{c}z}\text{.} \end{align} In the rotating reference frame, the dynamics of this system is described by the interaction Hamiltonian \end{subequations} \begin{align} H_{I}& =-\frac{N}{V}\int d^{3}r[\sum_{j}\mu _{j}B\sigma _{jj} \label{2-05} \\ & +\left( g\sigma _{eg}E+\Omega \sigma _{es}+h.c.\right) ]\text{,} \notag \end{align} where the atom-field coupling constant $g$ is defined as \begin{equation} g=d_{eg}\sqrt{\frac{\nu }{2\varepsilon _{0}V}}. \label{2-06} \end{equation} Before we study the EIT features of this system in detail, let us first take the point of view of the light to investigate the propagation effects of pulses in an atomic medium. It is well known that, when atoms are subjected to an electric field, the applied field displaces the positive and negative charges in the atoms from their usual positions. This small displacement of positive charges in one direction and negative charges in the other results in collective induced electric-dipole moments.
All the dipole moments in the dielectric material collectively generate the polarization, which is defined as the dipole moment per unit volume \begin{equation} P=\frac{N}{V}d_{ge}\sigma _{ge}e^{i\left( kz-\omega _{eg}t\right) }+h.c.\text{.} \label{2-07} \end{equation} The collective dipole moment in Eq.~(\ref{2-07}) is caused by the atomic response to an optical electric field in a dielectric material. In turn, every dipole moment with a nonvanishing second derivative in time radiates an electromagnetic wave; that is, the dielectric response $P$ of the medium acts as an effective source producing the electromagnetic field. The Heisenberg equation for the slowly varying field operator $E\left( r,t\right) $ results in a paraxial wave equation as in classical optics \cite{Lukin84,Lukin65} \begin{equation} \left( i\frac{\partial }{\partial t}+ic\frac{\partial }{\partial z}+\frac{c}{2k}\nabla _{T}^{2}\right) E=-g^{\ast }N\sigma _{ge}\text{.} \label{2-10} \end{equation} Here, $c=1/\sqrt{\varepsilon _{0}\mu _{0}}$ is the velocity of light in vacuum and the transverse Laplacian is defined as \begin{equation} \nabla _{T}^{2}=\partial ^{2}/\partial x^{2}+\partial ^{2}/\partial y^{2} \end{equation} in rectangular coordinates. When we neglect the $x$- and $y$-dependence of $\tilde{E}$, that is, confine the problem to one dimension, Eq.~(\ref{2-10}) immediately reduces to the usual propagation equation \begin{equation} \left( i\frac{\partial }{\partial t}+ic\frac{\partial }{\partial z}\right) E=-g^{\ast }N\sigma _{ge} \end{equation} given in Refs.~\cite{Lukin84,Lukin65,Lukin77,Lukin75,Sculb}, which only describes light propagation in the $z$ direction. To consider the problem in three spatial dimensions, one can use the paraxial wave equation (\ref{2-10}) to investigate the dynamics of the input pulse in a resonant atomic medium.
In this paper we will focus on the case where the linear optical response theory works well, which sufficiently reflects the main physical features of the spatial motion of the input pulse with slow group velocity. The lowest order contribution to the polarization is the linear response of the atoms, defined as \begin{equation} P^{(1)}=\frac{N}{V}d_{ge}\sigma _{ge}^{(1)}e^{i\left( kz-\omega _{eg}t\right) }\text{.} \label{2-11} \end{equation} Then, the paraxial wave equation becomes \begin{equation} i\frac{\partial }{\partial t}E+ic\frac{\partial }{\partial z}E+\frac{c}{2k}\nabla _{T}^{2}E=-g^{\ast }N\sigma _{ge}^{(1)}\text{,} \label{2-12} \end{equation} where the definition of $\sigma _{ge}^{(1)}$ will be given in the next section. \section{\label{sec:three} perturbation approach} We now study the evolution of the atomic ensemble under the influence of the applied optical fields. The dynamics of this atomic ensemble is described by the Heisenberg equations \begin{subequations} \label{3-01} \begin{align} \dot{\sigma}_{eg} & =\left[ i\left( \mu_{g}-\mu_{e}\right) B\right] \sigma_{eg}-i\Omega^{\ast}\sigma_{sg} \\ & +ig^{\ast}\left( \sigma_{ee}-\sigma_{gg}\right) E^{+}\text{,} \notag \\ \dot{\sigma}_{es} & =\left[ i\left( \mu_{s}-\mu_{e}\right) B\right] \sigma_{es}-ig^{\ast}\sigma_{gs}E^{+} \\ & +i\Omega^{\ast}\left( \sigma_{ee}-\sigma_{ss}\right) \text{,} \notag \\ \dot{\sigma}_{sg} & =\left[ i\left( \mu_{g}-\mu_{s}\right) B\right] \sigma_{sg}-i\Omega\sigma_{eg}+ig^{\ast}\sigma_{se}E^{+}\text{.} \end{align} Since EIT is primarily concerned with the nonlinear modification of the optical properties of the probe field, the low density approximation is valid. In this approximation, the intensity of the quantum field is much weaker than that of the coupling field $\Omega$, and the number of photons contained in the signal pulse is much smaller than the number of atoms in the sample.
In the low density approximation, the perturbation approach can be applied to the atomic part, in terms of the perturbation expansion~\cite{Lukin84,Lukin65,Lukin77,Lukin75} \end{subequations} \begin{equation} \sigma _{ij}=\sigma _{ij}^{(0)}+\lambda \sigma _{ij}^{(1)}+\lambda ^{2}\sigma _{ij}^{(2)}+\cdots \text{,} \label{3-02} \end{equation} where $i,j\in \left\{ e,s,g\right\} $ and $\lambda $ is a continuously varying parameter ranging from zero to unity. Here $\sigma _{ij}^{(0)}$ is of zeroth order in $gE$, $\sigma _{ij}^{(1)}$ is of first order in $gE$, and so on. We now substitute Eq. (\ref{3-02}) into Eq. (\ref{3-01}) and retain only terms up to first order in the signal field amplitude. We thereby obtain the system of equations at zeroth order \begin{subequations} \label{3-03} \begin{align} \dot{\sigma}_{eg}^{(0)}& =d_{1}^{\ast }\sigma _{eg}^{(0)}-i\Omega ^{\ast }\sigma _{sg}^{(0)}\text{,} \\ \dot{\sigma}_{es}^{(0)}& =d_{3}^{\ast }\sigma _{es}^{(0)}+i\Omega ^{\ast }\left( \sigma _{ee}^{(0)}-\sigma _{ss}^{(0)}\right) \text{,} \\ \dot{\sigma}_{sg}^{(0)}& =d_{2}^{\ast }\sigma _{sg}^{(0)}-i\Omega \sigma _{eg}^{(0)}\text{,} \end{align} where the parameters \end{subequations} \begin{subequations} \label{3-11} \begin{align} d_{1}& =i\left( \mu _{e}-\mu _{g}\right) B-\gamma _{1}\text{,} \\ d_{2}& =i\left( \mu _{s}-\mu _{g}\right) B-\gamma _{2}\text{,} \\ d_{3}& =i\left( \mu _{e}-\mu _{s}\right) B-\gamma _{3}\text{,} \end{align} and we have phenomenologically introduced the energy-level decay rates $\gamma _{i}$ ($i\in \left\{ 1,2,3\right\} $). We assume that the whole atomic population is initially prepared in the ground state $\left\vert g\right\rangle $ in the absence of the electromagnetic fields, and that the depletion of the ground state is not significant for any time $t>0$ due to the quantum interference effect; therefore \end{subequations} \begin{equation} \sigma _{gg}^{(0)}=1 \label{3-08} \end{equation} while all the others vanish~\cite{Lukin65}.
Then the first-order atomic transition operators $\sigma _{ij}^{(1)}$, which are related to the atomic linear response to the probe field, satisfy the following equations~\cite{Lukin65} \begin{subequations} \label{3-09} \begin{align} \dot{\sigma}_{eg}^{(1)}& =d_{1}^{\ast }\sigma _{eg}^{(1)}-i\Omega ^{\ast }\sigma _{sg}^{(1)}-ig^{\ast }E^{\dag }\text{,} \\ \dot{\sigma}_{es}^{(1)}& =d_{3}^{\ast }\sigma _{es}^{(1)}+i\Omega ^{\ast }\left( \sigma _{ee}^{(1)}-\sigma _{ss}^{(1)}\right) \text{,} \\ \dot{\sigma}_{sg}^{(1)}& =d_{2}^{\ast }\sigma _{sg}^{(1)}-i\Omega \sigma _{eg}^{(1)}\text{.} \end{align} In order to obtain the equations of motion for the polaritons, we rewrite Eq.~(\ref{3-09}) as~\cite{Lukin65} \end{subequations} \begin{subequations} \label{3-10} \begin{align} gE& =-\left[ \left( \partial _{t}-d_{1}\right) \frac{1}{\Omega }\left( \partial _{t}-d_{2}\right) +\Omega \right] \sigma _{gs}^{(1)}\text{,} \\ \sigma _{ge}^{(1)}& =-\frac{i}{\Omega }\left( \partial _{t}-d_{2}\right) \sigma _{gs}^{(1)}\text{.} \end{align} Equations (\ref{2-12}) and (\ref{3-10}) constitute a self-consistent system of equations, which indicates that the polarization field $\sigma_{ge}^{(1)}$ can serve as a source to generate the electric field, whereas the propagating light in turn drives the atomic medium via the dipole interaction. They are the starting point of our investigation in the following sections, where we study the phenomena of light deflection that occur as a consequence of the interaction between the $\Lambda$-type atomic ensemble and an external field with a spatial distribution. \section{\label{sec:four}Spatial motion of quasi-particle in a harmonic magnetic field} It is well known that the EIT system has two remarkable properties: 1) the opaque absorbing medium becomes transparent with respect to the probe light at certain frequencies.
This happens because the absorption on both transitions is suppressed by the destructive interference between the excitation pathways to the upper level. Thus a transparency window is opened over a narrow spectral range within the absorption line. 2) The group velocity of the incoming pulse is largely reduced within the transparency window. Physically, the slow light in an EIT system is interpreted by the formation of the so-called dark-state polariton (DSP). A ``dark-state polariton'' is a bosonic-like collective excitation of a signal light field and an atomic spin wave \cite{Lukin84,Lukin65,Lukin77,Lukin75}, whose relative amplitude is determined by the control laser field. In this section, we study the dynamics of the DSP in the presence of a harmonic magnetic field with a spatially inhomogeneous distribution in the transverse direction, where the control field is assumed to be independent of position and time. When the light pulse enters a medium, photons interact with the atoms of the medium. They then combine to form a type of excitation known as polaritons, which are one kind of quasi-particle.
In an EIT system, two types of polaritons are introduced - the dark polariton and the bright polariton, which are described respectively by the dark polariton field operator $\Psi$ and the bright polariton field operator $\Phi$ \end{subequations} \begin{subequations} \label{4-01} \begin{align} \Psi\left( r,t\right) & = E\cos\theta-\sqrt{N}\sigma_{gs}^{(1)}\sin \theta\text{,} \\ \Phi\left( r,t\right) & = E\sin\theta+\sqrt{N}\sigma_{gs}^{(1)}\cos \theta\text{.} \end{align} They are atomic collective excitations (quasi-spin waves) dressed by the quantized probe light, with the inverse relations \end{subequations} \begin{subequations} \label{4-02} \begin{align} E & = \Psi\cos\theta+\Phi\sin\theta \\ \sigma_{gs}^{(1)} & = \frac{1}{\sqrt{N}}\left( \Phi\cos\theta-\Psi\sin \theta\right) \text{.} \end{align} The dark polariton field operator $\Psi$ and the bright polariton field operator $\Phi$ obey bosonic commutation relations in the limit of few photons and many atoms. The action of $\Psi^{\dag}$ on the vacuum creates the dark states, which contain no component of the excited state $\left\vert e\right\rangle $. Assuming that the Rabi frequency is real, the mixing angle between the signal field and the collective atomic polarization is given by \end{subequations} \begin{equation} \tan\theta=\frac{g\sqrt{N}}{\Omega}\text{,} \label{4-03} \end{equation} where the Rabi frequency $\Omega$ is related to the control laser power $P$ through $\Omega^{2}=2\left\vert d_{es}\right\vert ^{2}P/\left( c\epsilon_{0}S\right) $. Under the nearly two-photon resonant condition, the only excitations are dark polaritons, which correspond to an eigenstate of the interaction Hamiltonian with vanishing eigenvalue. It follows from Eqs.~(\ref{4-01}) and (\ref{4-03}) that, by reducing the amplitude of the control field, the relative contributions of light and atoms to the DSP can be changed, so that the DSP varies from photon-like to atom-like.
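Equations (\ref{4-01}) and (\ref{4-02}) form a rotation by the mixing angle $\theta$, so mapping the fields to the polariton basis and back must return the original $E$ and $\sigma_{gs}^{(1)}$. A numerical sketch of this consistency check (the field amplitudes below are arbitrary illustrative numbers):

```python
import numpy as np

def mixing_angle(g, N, Omega):
    """Mixing angle with tan(theta) = g*sqrt(N)/Omega, Eq. (4-03)."""
    return np.arctan2(g * np.sqrt(N), Omega)

def to_polaritons(E, sigma_gs, theta, N):
    """Dark and bright polariton fields, Eq. (4-01)."""
    Psi = E * np.cos(theta) - np.sqrt(N) * sigma_gs * np.sin(theta)
    Phi = E * np.sin(theta) + np.sqrt(N) * sigma_gs * np.cos(theta)
    return Psi, Phi

def from_polaritons(Psi, Phi, theta, N):
    """Inverse relations, Eq. (4-02)."""
    E = Psi * np.cos(theta) + Phi * np.sin(theta)
    sigma_gs = (Phi * np.cos(theta) - Psi * np.sin(theta)) / np.sqrt(N)
    return E, sigma_gs

N = 1.0e6
theta = mixing_angle(g=1.0, N=N, Omega=500.0)
Psi, Phi = to_polaritons(E=0.3, sigma_gs=2.0e-4, theta=theta, N=N)
E_back, s_back = from_polaritons(Psi, Phi, theta, N)
print(E_back, s_back)  # recovers 0.3 and 2e-4 up to rounding
```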
Thus it is roughly seen that the mixing angle $\theta$ determines whether or not the group velocity of the signal pulse propagating in the atomic medium is decreased. In order to find how the mixing angle affects the group velocity of the input pulse, we derive the equations of spatial motion for the dark polariton fields. In the above sections, we have obtained the dynamical equations of motion for the atoms and the light. In terms of the field operators for the dark and bright polaritons, Eqs.~(\ref{3-10}) and (\ref{2-12}) can be rewritten as \begin{align} & g\sqrt{N}\left( \Psi \cos \theta +\Phi \sin \theta \right) = \label{4-04} \\ & -[\left( \partial _{t}-d_{1}\right) \frac{1}{\Omega }\left( \partial _{t}-d_{2}\right) +\Omega ] \notag \\ & \times \left( \Phi \cos \theta -\Psi \sin \theta \right) \notag \end{align} and \begin{align} & \left( i\frac{\partial }{\partial t}+ic\frac{\partial }{\partial z}+\frac{c}{2k}\nabla _{T}^{2}\right) \left( \Psi \cos \theta +\Phi \sin \theta \right) = \notag \\ & -\frac{g\sqrt{N}}{i\Omega }\left( \partial _{t}-d_{2}\right) \left( \Phi \cos \theta -\Psi \sin \theta \right) . \label{4-05} \end{align} For very small magnetic fields, we have $\left\vert \left( \mu _{g}-\mu _{e}\right) B\right\vert \ll \gamma _{1}$. Furthermore, we assume a sufficiently strong driving field such that $\Omega ^{2}\gg \gamma _{1}\gamma _{2}$. In the adiabatic approximation, the excitation of the bright polariton field $\Phi$ vanishes approximately. Then the dynamics of the dark polariton field $\Psi$ is governed by the Schr\"{o}dinger-like equation \begin{equation} i\frac{\partial }{\partial t}\Psi =[\check{T}+V\left( r\right) ]\Psi \label{4-06} \end{equation} with an effective potential \begin{equation} V(r)=-\left( \mu _{s}-\mu _{g}\right) B\left( r\right) \sin ^{2}\theta \end{equation} induced by the steady atomic response in the external spatially dependent field.
Here, we have set $\gamma _{2}=0$, while the effective kinetic operator \begin{equation} \check{T}=v_{g}p_{z}-\frac{1}{2k}\cos ^{2}\theta \nabla _{T}^{2} \label{4-06-b} \end{equation} represents an anisotropic dispersion, where the momentum along the $z$ direction is defined as $p_{z}\equiv -i\partial _{z}$. The longitudinal term $v_{g}p_{z}$ in Eq.~(\ref{4-06-b}) describes an ultra-relativistic motion with a slow light velocity \begin{equation} v_{g}=c\cos ^{2}\theta \text{,} \label{4-07} \end{equation} while the transverse part $P_{T}^{2}/(2m)$ in the effective kinetic term describes a non-relativistic motion with an effective transverse mass \begin{equation} m=\frac{k}{v_{g}}=\frac{k}{c}\sec^{2}\theta. \end{equation} The above effective Schr\"{o}dinger equation governs the dynamics of the spatial motion of the quasi-particles. Obviously, when no magnetic field is applied, since the transverse Laplacian operator $\nabla _{T}^{2}$ commutes with $\partial _{z}$, we can separate the $z$-component from the $x$-$y$ components. Neglecting the $x$- and $y$-dependence of $\Psi $, Eq.~(\ref{4-06}) describes a stable propagation along the $z$-axis with group velocity $v_{g}$ \cite{Harris,Harris82,Lukin84,Lukin65,Lukin77,Lukin75}. Hence, the amplitude of the control field determines the group velocity of the input pulse in the atomic medium. By adiabatically rotating the angle from $0$ to $\pi /2$, the polariton can be decelerated to a full stop. On the other hand, increasing the strength of the coupling field, that is, reversing the rotation of $\theta $ adiabatically, leads to a re-acceleration of the dark-state polariton associated with a change of character from collective spin-like waves to electromagnetic photons. Now we consider the three-dimensional problem.
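The two limits can be sketched numerically. Note that $m v_{g}=k$ holds for any mixing angle, so a slower polariton is necessarily a heavier one. The carrier wavelength and the ratios $g\sqrt{N}/\Omega$ below are illustrative choices, not the paper's parameters:

```python
import numpy as np

C = 2.998e8  # speed of light in vacuum, m/s

def group_velocity(theta):
    """v_g = c cos^2(theta), Eq. (4-07)."""
    return C * np.cos(theta) ** 2

def transverse_mass(k, theta):
    """m = k/v_g = (k/c) sec^2(theta): effective mass of the transverse motion."""
    return (k / C) / np.cos(theta) ** 2

k = 2 * np.pi / 795e-9   # illustrative optical carrier, 795 nm
weak = np.arctan(50.0)   # weak control field: theta close to pi/2
strong = np.arctan(0.1)  # strong control field: theta close to 0
print(group_velocity(weak), group_velocity(strong))     # slow vs near-c
print(transverse_mass(k, weak) * group_velocity(weak))  # equals k
```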
By defining $P_{j}\equiv -i\partial _{j}$( $j\in \left\{ x,y,z\right\} )$, Eq.(\ref{4-06}) can be rewritten as an effective Schr\"{o}dinger equation% \begin{equation} i\frac{\partial }{\partial t}\Psi =H_{eff}\Psi \label{4-08} \end{equation}% with the effective Hamiltonian \begin{equation} H_{eff}=v_{g}P_{z}+\frac{1}{2m}\left( P_{x}^{2}+P_{y}^{2}\right) -\mu ^{\prime }B\left( r\right) \text{,} \label{4-09} \end{equation}% where $\mu ^{\prime }=\left( \mu _{s}-\mu _{g}\right) \sin ^{2}\theta $. The magnitude of the effective transverse mass is totally determined by the mixing angle $\theta $ of the signal field and the collective atomic polarization. When the amplitude of the control field is small, spin waves have large contributions to the DSP, therefore the effective transverse mass is large; when the Rabi frequency is large, photons give large contributions to the DSP, therefore the effective transverse mass is small. The effective Schr\"{o}dinger equation (\ref{4-08}) is the starting point for investigating the spatial motion of the dark polariton. It shows that, due to the inhomogeneity of the magnetic field, the motion of the dark polariton will be scattered by an effective potential with value $\mu ^{\prime }B\left( r\right) $. Now we assume that the magnetic field in $z$ direction has a spatial distribution in the transverse direction with the expression \begin{equation} B\left( r\right) =B_{0}+B_{x}x^{2}+B_{y}y^{2}\text{,} \label{4-10} \end{equation} where $B_{x},B_{y}<0$. Then the effective Hamiltonian operator becomes $ H_{1}=H_{ma}+v_{g}P_{z}-\mu^{\prime}B_{0}$ with \begin{equation} H_{ma}=\frac{P_{x}^{2}}{2m}+\frac{m\omega_{x}^{2}}{2}x^{2}+\frac{P_{y}^{2}}{% 2m}+\frac{m\omega_{y}^{2}}{2}y^{2}. 
\label{4-11} \end{equation} In classical physics, $H_{ma}$ corresponds to a two-dimensional harmonic oscillator with mass $m$ and angular frequencies $\omega_{x}=\sqrt{-2\mu^{\prime}B_{x}/m}$ in the $x$-direction and $\omega_{y}=\sqrt{-2\mu^{\prime}B_{y}/m}$ in the $y$-direction. For a given initial state $\Psi\left( 0\right) $, the evolution state $\Psi\left( t\right)$ of the system is a unitary transformation of the initial state $\Psi\left( 0\right) $ with the time-evolution unitary operator $U\left( t\right) =U_{z}\left( t\right) U_{y}\left( t\right) U_{x}\left( t\right) $: \begin{subequations} \label{4-12} \begin{align} U_{z} & =\exp\left( -iv_{g}P_{z}t\right) \text{,} \\ U_{y} & =\exp\left[ -i\left( \frac{P_{y}^{2}}{2m}+\frac{m\omega_{y}^{2}}{2}y^{2}\right) t\right] \text{,} \\ U_{x} & =\exp\left[ -i\left( \frac{P_{x}^{2}}{2m}+\frac{m\omega_{x}^{2}}{2}x^{2}\right) t\right] \text{.} \end{align} Next we consider the evolution dynamics of a spatially well-localized wave packet, which is centered at $(x_{0},y_{0},z_{0})=(0,0,0)$ and has vanishing mean velocity in all directions. The spatially well-localized wave packet is assumed to be initially of the Gaussian form \end{subequations} \begin{equation} \Psi \left( 0\right) =\prod\limits_{\xi =x,y,z}\left( \frac{\alpha _{\xi }}{\pi }\right) ^{1/4}e^{-\frac{1}{2}\alpha _{\xi }\xi ^{2}} \label{4-13} \end{equation} with width $1/\sqrt{\alpha _{\xi }}$, $\xi \in \left\{ x,y,z\right\} $, in the $x,y,z$-directions, respectively, and $\alpha _{j}\neq \lambda _{j}$ where $\lambda _{j}=m\omega _{j}$ ($j=x,y$).
Discarding an irrelevant global phase factor, the wave function at time $t>0$ reads \begin{align} \Psi \left( t\right) & =\left[ \frac{\alpha _{z}}{\pi }\right] ^{1/4}e^{-\frac{1}{2}\alpha _{z}\left( z-v_{g}t\right) ^{2}} \label{4-14} \\ & \sum_{n_{1}n_{2}}C_{2n_{1}}^{(x)}C_{2n_{2}}^{(y)}e^{-i2\left( n_{1}\omega _{x}+n_{2}\omega _{y}\right) t}\phi _{2n_{1}}^{(x)}\phi _{2n_{2}}^{(y)}\text{.} \notag \end{align} The initial Gaussian wave packet evolves into a superposition of the product states $\phi _{n_{1}}^{(x)}\phi _{n_{2}}^{(y)}$ with quantum numbers $n_{1}$ and $n_{2}$ taking on the values $0,1,2,\cdots $. The coefficients in Eq.~(\ref{4-14}) read \begin{equation} C_{2n}^{(j)}=\frac{\sqrt{\left( 2n\right) !}}{2^{n}n!}\left[ \frac{4\lambda _{j}\alpha _{j}}{\left( \lambda _{j}+\alpha _{j}\right) ^{2}}\right] ^{1/4}\left( \frac{\lambda _{j}-\alpha _{j}}{\lambda _{j}+\alpha _{j}}\right) ^{n}\text{.} \label{4-15} \end{equation} Here, $\phi _{n}^{(x)}$ is the eigenfunction of the Hamiltonian $P_{x}^{2}/\left( 2m\right) +m\omega _{x}^{2}x^{2}/2$ with the corresponding eigenvalue $E_{n}=\left( n+1/2\right) \omega _{x}$, \begin{equation} \phi _{n}^{(x)}=\left[ \frac{1}{2^{n}n!}\right] ^{\frac{1}{2}}\left( \frac{\lambda _{x}}{\pi }\right) ^{\frac{1}{4}}H_{n}\left( \sqrt{\lambda _{x}}x\right) e^{-\frac{1}{2}\lambda _{x}x^{2}}\text{,} \label{4-16} \end{equation} where $H_{n}\left( x\right) $ are the Hermite polynomials. The wave function $\phi _{n}^{(y)}$ has a similar expression to Eq.~(\ref{4-16}) with $x$ replaced by $y$. Since the wave function at time $t$ is an even function of the variables $x$ and $y$, as can be seen from Eqs.~(\ref{4-14})--(\ref{4-16}), the trajectory of the center of the wave packet in the $x$-$y$ plane does not change with time.
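As a numerical consistency check on Eq.~(\ref{4-15}), the squared coefficients sum to unity, and for $\alpha_{j}=\lambda_{j}$ the initial Gaussian is already the oscillator ground state, so only $C_{0}$ survives. A short sketch with illustrative parameter values:

```python
from math import factorial, sqrt

def c2n(n, lam, alpha):
    """Expansion coefficient C_{2n} of Eq. (4-15), with lam = m*omega."""
    prefac = (4.0 * lam * alpha / (lam + alpha) ** 2) ** 0.25
    ratio = (lam - alpha) / (lam + alpha)
    return sqrt(factorial(2 * n)) / (2 ** n * factorial(n)) * prefac * ratio ** n

lam, alpha = 3.0, 1.0  # illustrative trap and wave-packet parameters
norm = sum(c2n(n, lam, alpha) ** 2 for n in range(40))
print(norm)              # -> 1.0 up to truncation and rounding
print(c2n(0, 2.0, 2.0))  # matched widths: pure ground state, C_0 = 1
```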
But in the $z$-direction, the dark polariton propagates with mean velocity $v_{g}$, and the center of the wave packet moves away from its original place linearly in time, $\left\langle z\right\rangle =v_{g}t$. Although the wave packet keeps its shape along the $z$-direction with an unchanged variance \begin{equation} \left( \Delta z\right) ^{2}=\frac{1}{4\alpha_{z}}\text{,} \label{4-18} \end{equation} the variances in the $x$- and $y$-directions oscillate with time, namely, \begin{align} \left( \Delta x\right) ^{2} & = A_{-x}\cos\left( 2\omega_{x}t\right) +A_{+x}% \text{,} \label{4-17} \\ \left( \Delta y\right) ^{2} & = A_{-y}\cos\left( 2\omega_{y}t\right) +A_{+y}% \text{,} \notag \end{align} where \begin{equation} A_{\pm s}=\frac{2\pi m^{2}\omega_{s}^{2}\pm\alpha_{s}^{2}}{8\alpha_{s}\pi m^{2}\omega_{s}^{2}},s=x,y. \end{equation} Fig. \ref{fig:2} (a) schematically illustrates the time evolution of the initial Gaussian packet. The wave packet distributed along the $z$-direction keeps its original shape. However, in the $x$-direction, the shape of the Gaussian packet oscillates as time evolves; the change of its width is shown in Fig. \ref{fig:2} (b). \begin{figure}[ptb] \includegraphics[width=6 cm]{sgfig2.eps} \caption{\textit{(Color online)} (a) Schematic illustration of the time evolution of the center of the initial Gaussian state of the dark polariton along the $z$-direction. (b) The time evolution of the variance in the $x$-direction. The variance is in units of $0.1$. } \label{fig:2} \end{figure} In plotting Fig. \ref{fig:2} (b), we have taken reasonable parameters accessible in current experiments\cite{Karpa}: the width of the initial Gaussian $1/\alpha_{x}=1/4$cm, the atomic density $N/V=10^{13}$ per cubic centimeter, the quantum number of the ground state $m_{F}^{g}=-2$ with $% g_{F}^{g}=1/2$, the quantum number of the metastable state $m_{F}^{s}=0$, and $% \gamma_{1}=2\pi\times2.87$MHz. 
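The breathing law in Eq.~(\ref{4-17}) can likewise be checked numerically: with $A_{\pm s}$ taken exactly as printed above, the variance is periodic with period $\pi/\omega_{x}$ and its initial value is $A_{-x}+A_{+x}=1/(2\alpha_{x})$, which follows algebraically from the given $A_{\pm s}$. A minimal sketch with illustrative parameter values:

```python
import math

def var_x(t, m, omega, alpha):
    """Transverse variance of Eq. (4-17), with A_{+-} exactly as given."""
    denom = 8.0 * alpha * math.pi * m * m * omega * omega
    a_minus = (2.0 * math.pi * m * m * omega * omega - alpha * alpha) / denom
    a_plus = (2.0 * math.pi * m * m * omega * omega + alpha * alpha) / denom
    return a_minus * math.cos(2.0 * omega * t) + a_plus

m, omega, alpha = 1.0, 2.0, 0.7  # illustrative values
# periodic breathing with period pi/omega ...
print(abs(var_x(0.3 + math.pi / omega, m, omega, alpha) - var_x(0.3, m, omega, alpha)))
# ... and initial value A_- + A_+ = 1/(2*alpha)
print(abs(var_x(0.0, m, omega, alpha) - 1.0 / (2.0 * alpha)))
```

Both printed differences vanish to machine precision.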
\section{\label{sec:five}The deflection of dark polaritons in a linear magnetic field} In a recent experiment \cite{Karpa}, a magnetic field with a small transverse gradient is applied to a $\Lambda$-type atomic medium. It was observed that the light beam is deflected after the signal light passes through the EIT gas cell. This observation is the first demonstration that the quasi-particles, dark-state polaritons, possess a non-zero magnetic moment. This experimental observation can be interpreted straightforwardly according to the above quantum theory of the spatial motion of polaritons in inhomogeneous fields. For simplicity, we assume the magnetic field \begin{equation} B\left( r\right) =B_{0}+B_{1}x \label{5-01} \end{equation} has a linear gradient along the $x$-direction. We further allow the input pulse to vary only in one transverse dimension, say the $x$-direction, which means that we neglect the $y$-dependence of the input pulse $E$. Then the two-dimensional effective Hamiltonian for the dark polaritons reads \begin{equation} H_{2}=v_{g}P_{z}+\frac{1}{2m}P_{x}^{2}-\mu b_{0}-\mu\zeta x\text{,} \label{5-02} \end{equation} where the parameters \begin{subequations} \label{5-03} \begin{align} \zeta & = 2B_{1}\sin^{2}\theta\text{,} \\ b_{0} & = 2B_{0}\sin^{2}\theta\text{,} \\ \mu & = \mu_{s}-\mu_{g} \end{align} can be controlled by the mixing angle $\theta$. For an initial dark polariton field with a Gaussian spatial distribution, $\alpha_{x}=\alpha_{z}=b^{-2}$ as given in Eq.~(\ref{4-13}), the time evolution of the polariton field is generated by $U_{li}\left( t\right) =\exp\left( -iH_{2}t\right) $. 
By the Wei-Norman algebraic method (see Appendix A) \cite{swna}, the unitary operator $U_{li}\left( t\right) $ can be factorized as $U_{li}\left( t\right) =U_{2}\left( t\right) U_{1}\left( t\right) $ with \end{subequations} \begin{subequations} \label{5-04} \begin{align} U_{2}\left( t\right) & = e^{-iv_{g}tP_{z}}e^{-i\frac{t}{2m}% P_{x}^{2}}e^{it^{2}\frac{\mu\zeta}{2m}P_{x}}\text{,} \\ U_{1}\left( t\right) & = e^{it\mu\zeta x}e^{i\mu b_{0}t-i\frac{t^{3}}{3}% \frac{\mu^{2}\zeta^{2}}{2m}}\text{.} \end{align} A straightforward calculation shows that the initial Gaussian packet evolves into \end{subequations} \begin{align} \Psi \left( t\right) & =\left( \frac{1/\pi }{b^{2}+i\frac{t}{m}}\right) ^{% \frac{1}{2}}e^{i\mu t\left( b_{0}-\frac{t^{2}}{3}\frac{\mu \zeta ^{2}}{2m}% \right) }e^{-\frac{\left( z-v_{g}t\right) ^{2}}{2b^{2}}} \label{5-05} \\ & e^{i\mu \zeta tx}\exp \left[ -\frac{\left( x-t^{2}\frac{\mu \zeta }{2m}% \right) ^{2}\left( b^{2}-i\frac{t}{m}\right) }{2b^{4}+2t^{2}/m^{2}}\right] \text{.} \notag \end{align}% At time $t$, the center of the input pulse has moved to $v_{g}t$ along the $z$% -direction and $t^{2}\mu \zeta /\left( 2m\right) $ along the $x$-direction. When a dark polariton is excited by the interaction between light and atoms, it acquires a velocity along the $x$-direction \begin{equation} v_{x}=\frac{\mu \zeta L}{mv_{g}} \label{5-06} \end{equation}% after it passes through the gas cell of length $L$. Therefore the deflection angle reads \begin{equation} \alpha =\frac{v_{x}}{v_{g}}=\frac{L}{v_{g}}\frac{\mu }{k}B_{1}\sin ^{2}\theta \text{.} \label{5-07} \end{equation}% In a real experiment, the dephasing rate is nonzero due to collisions between atoms, which leads to absorption of the probe-beam energy by the atomic medium. 
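The exit velocity in Eq.~(\ref{5-06}) is simply the time derivative of the packet center $\left\langle x\right\rangle =t^{2}\mu\zeta/(2m)$ of Eq.~(\ref{5-05}), evaluated at the transit time $T=L/v_{g}$. A short numerical sketch (all parameter values purely illustrative) confirms this:

```python
def x_center(t, mu_zeta, m):
    """Transverse packet center <x> = t^2 * (mu*zeta)/(2m), from Eq. (5-05)."""
    return t * t * mu_zeta / (2.0 * m)

mu_zeta, m, v_g, L = 0.4, 1.3, 2.0, 5.0   # illustrative values
T = L / v_g                               # transit time through the cell
dt = 1e-6
# central difference of <x>(t) at t = T
v_x_numeric = (x_center(T + dt, mu_zeta, m) - x_center(T - dt, mu_zeta, m)) / (2.0 * dt)
v_x_eq = mu_zeta * L / (m * v_g)          # Eq. (5-06)
print(abs(v_x_numeric - v_x_eq) < 1e-6)   # -> True
```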
The above results mean that the deflection angle of the output pulse depends on the mixing angle $\theta$ between the signal field and the collective atomic polarization, the wave number $k$ of the input pulse, and the gradient $B_{1} $ of the inhomogeneous magnetic field. One finds that the magnetic moment of the dark polariton has an effective value \begin{equation} \mu_{pol}=\mu\sin^{2}\theta\text{.} \label{5-08} \end{equation} By taking $m_{g}=-2$ and $m_{s}=0$, we find the effective magnetic moment \begin{equation} \mu_{pol}=2g_{F}^{(g)}\mu_{B}\sin^{2}\theta\text{,} \label{5-08-0} \end{equation} which is exactly the theoretical result given in Ref. \cite{Karpa}. Next we consider the spatial resolution, which in optics reflects the ability of an optical system to form separate and distinct images of two objects. \begin{figure}[tbp] \includegraphics[width=6 cm]{sgfig3.eps} \caption{\textit{(Color online)} Schematic illustration of the spatial resolution.} \label{fig:3ad} \end{figure} The spatial resolution is defined here as the mean signal divided by its standard deviation \begin{equation} R=\frac{\left\langle x\right\rangle }{\Delta x}=t^{2}\mu \zeta \sqrt{\frac{% b^{2}}{2m^{2}b^{4}+2t^{2}}}\text{,} \label{5-08-1} \end{equation}% where the mean position $\left\langle x\right\rangle $ in the transverse direction and the standard deviation $\Delta x$ are given by \begin{subequations} \label{5-08-2} \begin{align} \left\langle x\right\rangle & =\int_{-\infty }^{+\infty }\Psi ^{\ast }\left( t\right) x\Psi \left( t\right) dxdz=t^{2}\frac{\mu \zeta }{2m}\text{,} \\ \Delta x& =\sqrt{\left\langle x^{2}\right\rangle -\left\langle x\right\rangle ^{2}}=\sqrt{\frac{m^{2}b^{4}+t^{2}}{2b^{2}m^{2}}}\text{.} \end{align}% It can be seen that the spatial resolution increases as the interaction time between light and atoms increases. 
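As a consistency check, the closed form of Eq.~(\ref{5-08-1}) agrees with the ratio $\left\langle x\right\rangle /\Delta x$ assembled directly from the moments of Eq.~(\ref{5-08-2}). A minimal sketch with illustrative parameter values:

```python
import math

def resolution(t, mu_zeta, m, b):
    """Closed form of the spatial resolution, Eq. (5-08-1)."""
    return t * t * mu_zeta * math.sqrt(b * b / (2.0 * m * m * b ** 4 + 2.0 * t * t))

def resolution_from_moments(t, mu_zeta, m, b):
    """<x>/Delta x assembled from the moments of Eq. (5-08-2)."""
    mean_x = t * t * mu_zeta / (2.0 * m)
    std_x = math.sqrt((m * m * b ** 4 + t * t) / (2.0 * b * b * m * m))
    return mean_x / std_x

t, mu_zeta, m, b = 2.5, 0.4, 1.3, 0.8  # illustrative values
print(abs(resolution(t, mu_zeta, m, b)
          - resolution_from_moments(t, mu_zeta, m, b)) < 1e-12)  # -> True
```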
Actually, the phenomenon of light deflection in such an inhomogeneous magnetic field can also be described without using the concept of quasiparticles (dark polaritons). Here, we show how to calculate the deflection angle $\alpha $ in Eq. (\ref{5-07}) within the semiclassical theory. We begin the semiclassical approach by considering the evolution of the system from Eqs. (\ref{2-12}) and (\ref{3-09}). First, the atomic linear response to the signal field is explicitly reflected in Eq. (\ref{3-09}). Under the adiabatic approximation that the evolution of the atomic system is much faster than the temporal change of the radiation field, we can obtain the steady-state solution for the atomic transition $\sigma _{ge}^{(1)}$ by setting all time derivatives to zero in Eq. (\ref{3-09}), namely \end{subequations} \begin{align} \sigma _{ge}^{(1)}& =i\frac{\left[ i\left( \mu _{g}-\mu _{s}\right) B+\gamma _{2}\right] g}{d_{1}d_{2}+\left\vert \Omega \right\vert ^{2}}E \label{5-09} \\ & \approx \frac{g}{\left\vert \Omega \right\vert ^{2}}\mu BE. \notag \end{align}% Here, the undepleted control-field approximation is used and $\gamma _{2}\approx 0$ is assumed. This approach, based on the atomic linear response, yields an effective potential for the motion of the slowly varying signal amplitude, arising from the spatial distribution of the magnetic field. The spatial motion of the slowly varying amplitude is described by the following equation: \begin{align} & i\frac{\partial }{\partial t}E+ic\frac{\partial }{\partial z}E+\frac{c}{2k}% \nabla _{T}^{2}E=-\frac{|g|^{2}N}{\left\vert \Omega \right\vert ^{2}}\mu BE \label{5-10} \\ & =-\mu \left( B_{0}+B_{1}x\right) E\tan ^{2}\theta \notag \end{align} which describes a shape-preserving propagation in the $z$-direction with velocity $c$. 
For an initial Gaussian wave packet of $E$ in the $x$-$z$ plane, after passing through the gas cell, the wave center shifts from $\left( x,z\right) =\left( 0,0\right) $ to the well-defined position \begin{equation} \left( x,z\right) =\left( \frac{\mu B_{1}}{2kc}L^{2}\tan^{2}\theta ,L\right) \text{.} \label{5-11} \end{equation} One readily finds that this reproduces the deflection angle of Eq. (\ref{5-07}). \section{\label{sec:six}Deflection of light in inhomogeneous coupling field} In this section, we turn our discussion to the deflection of slow light in an atomic medium driven by an optical field with an inhomogeneous profile, while the magnetic field is uniform. This phenomenon was experimentally observed in Ref. \cite{Scul07}, where the cell filled with an EIT-based atomic gas is referred to as an ultra-dispersive optical prism with an angular dispersion. We note that the probe light is relatively strong in comparison with the control light in this experiment; thus the susceptibility obtained from linear response theory cannot fully explain the experimental phenomenon. In this paper, we do not attempt to explain the experimental data of Ref. \cite{Scul07} in the strong coupling limit. Our main purpose is to predict a new quantum coherent phenomenon for the light deflection by the atomic media when the experiment is carried out with a weak probe field. We assume the strong driving field has a Gaussian profile \begin{equation} \Omega=\Omega_{0}\exp\left[ -\frac{x^{2}}{2\sigma^{2}}\right] \label{6-01} \end{equation} in the transverse direction. Here, we confine the problem to a two-dimensional space, the $x$-$z$ plane. Then the transverse Laplacian operator reduces to the one-dimensional operator $\nabla_{T}^{2}=\partial^{2}/\partial x^{2}$. By invoking the steady-state conditions, it is found that the polarization field $\sigma_{ge}^{(1)},$ which serves as a source for the electric fields in Eq. 
(\ref{2-12}), is proportional to the slowly varying amplitude $E$ given in Eq. (\ref{5-09}). Under a strong, undepleted driving field approximation, the coupling between atoms and light induces a spatially dependent potential in the propagation equation. The spatial shape of this potential induced by $\sigma_{ge}^{(1)}$ is completely determined by the profile of the Rabi frequency $\Omega$, as can be seen in the first line of Eq. (\ref{5-10}). Thus, when the signal pulse, parallel to the control beam, travels across the atomic cell, it is scattered by the effective potential. However, since the width of the probe beam is less than that of the control beam, the trajectory of the probe light may bend when it is adjusted to the left or right side of the control beam profile; hence the probe and control beams are no longer parallel after they pass through the gas cell. In order to investigate this phenomenon, we assume the probe beam is in a Gaussian state \begin{equation} E\left( 0,x,z\right) =\frac{1}{\sqrt{\pi b^{2}}}\exp\left[ -\frac{\left( x-a\right) ^{2}}{2b^{2}}-\frac{z^{2}}{2b^{2}}\right] \label{6-02} \end{equation} before it enters the gas cell, where $b$($<\sigma$) is the width of the probe field and $a$ is the initial location of the wave packet center of the probe field along the $x$-direction. The sign of $a$ indicates whether the incident position lies to the left or to the right of the control beam's center $x_{0}=0$, and the magnitude $\left\vert a\right\vert $ is the distance from the control beam's center. In order to investigate the evolution of this initial state, we expand $\left\vert \Omega\right\vert ^{-2}$ at the position $a$ and retain the linear term proportional to $x-a$. With the above considerations, the paraxial equation in Eq. 
(\ref{5-10}) becomes \begin{equation} i\dot{E}+ic\partial _{z}E+\frac{c}{2k}\partial _{x}^{2}E=\left( \eta _{0}+\eta _{1}x\right) E \label{6-03} \end{equation}% where \begin{subequations} \label{6-04} \begin{align} \eta _{0}& =-\Omega _{0}^{-2}\left( 1-\frac{2a^{2}}{\sigma ^{2}}\right) |g|^{2}N\Delta \exp \left( \frac{a^{2}}{\sigma ^{2}}\right) \text{,} \\ \eta _{1}& =-2a\Delta \frac{|g|^{2}N}{\sigma ^{2}}\Omega _{0}^{-2}\exp \left( \frac{a^{2}}{\sigma ^{2}}\right) \text{.} \end{align}% and $\Delta =\left( \mu _{s}-\mu _{g}\right) B$. By making use of the Wei-Norman algebraic method \cite{swna}, it is shown that, after passing through the Rb gas cell, the center position $\left( x,z\right) =\left( a,0\right) $ of the probe field at time $t=0$ is shifted to \end{subequations} \begin{subequations} \label{6-05} \begin{align} x& =a+L^{2}\Omega _{0}^{-2}\Delta ae^{\frac{a^{2}}{\sigma ^{2}}}\frac{% |g|^{2}N}{\sigma ^{2}kc}\text{,} \\ z& =L\text{.} \end{align} If we track the motion of the center of the probe beam, a mirage effect occurs. The sign of $\Delta $ and the incident position $a$ of the signal light determine whether and how the trajectory of the probe beam bends. When the magnetic field is absent ($\Delta =0$) or the center of the probe field is collinear with that of the control field ($a=x_{0}=0$), the trajectory of the signal light is a straight line. We assign a positive sign to $a$ when the probe beam is shifted to the right with respect to the center of the control light, and $a<0$ when the signal beam is shifted to the left. When the probe beam is shifted to the right, for $\Delta <0$ the signal light feels a ``repulsive potential'' since the coefficient $\eta _{1}>0$, and thus the trajectory bends to the left; for $\Delta >0$ the signal light undergoes an ``attractive potential'' in the atomic medium since $\eta _{1}<0$, and thus the trajectory bends to the right. When $a<0$, it can be found from Eq. 
(\ref{6-04}) that when the coefficient of the linear potential is larger than zero, i.e., for $\Delta >0$, the probe beam experiences a ``repulsive potential'' within the EIT medium, and its center is shifted to the left. When $\eta _{1}$ is smaller than zero, i.e., for $\Delta <0$, the probe beam experiences an ``attractive potential'' while passing through the EIT medium, and hence its center is shifted to the right. The corresponding schematic diagram is given in Fig.\ref{fig:3}, where the yellow solid line is the spatial distribution of the \begin{figure}[tbp] \includegraphics[width=6 cm]{sgfig4.eps} \caption{\textit{(Color online)} Schematic illustration of the ray deflection of the probe light in the presence of inhomogeneous coupling light. The yellow line is the spatial distribution of the control light.} \label{fig:3} \end{figure} control light, the red dashed lines give the deflection for $\Delta <0$, the blue dotted lines describe the light trajectory for $\Delta >0$, and the black solid lines depict the light ray for $\Delta =0$. The same results on the deflection of the light ray were obtained by us using the semiclassical theory \cite{ZDL}. From the particle point of view, the force acting on the particle is completely determined by the value and sign of $\eta _{1}$. Thus, for a particle passing through the point $a\neq 0$, when $\Delta =0$ the particle does not feel any force, so it travels straight through, as does a particle at the point $a=0$. For a particle traversing the position $% a\left( >0\right) $, when $\Delta <0$ this particle is subject to a negative force, which moves the particle to the left with respect to its original place; however, when $\Delta >0$, it experiences a positive force, which makes the particle move to the right. 
Similarly, for a particle passing through the position $a(<0)$, when $\Delta <0$ the particle moves to the right because of the action of a positive force; when $\Delta >0 $, it moves to the left due to the action of a negative force. We point out that the magnetic field is not necessary for the occurrence of the above-described phenomenon. For the model containing a $% \Lambda $-type atomic ensemble interacting with one control beam and one probe beam, a similar phenomenon can also be found as long as the two-photon detuning \end{subequations} \begin{equation} \Delta =\Delta _{p}-\Delta _{c} \label{6-14} \end{equation}% varies, where $\Delta _{p}=\omega _{eg}-\nu $ is the detuning between the atomic transition from $\left\vert e\right\rangle $ to $\left\vert g\right\rangle $\ and the probe beam, and $\Delta _{c}=\omega _{es}-\nu _{c}$ is the detuning between the atomic transition from $\left\vert e\right\rangle $ to $\left\vert s\right\rangle $\ and the control beam. In order to clarify the dependence of the light deflection on the two-photon detuning $\Delta $, we begin our description with the Hamiltonian in the interaction picture. In the rotating frame with respect to \begin{equation*} H_{0}=\omega _{e}\sigma _{ee}+\left( \omega _{s}+\Delta _{c}\right) \sigma _{ss}+\left( \omega _{g}+\Delta _{p}\right) \sigma _{gg}\text{,} \end{equation*}% the interaction Hamiltonian reads \begin{equation*} H_{I}^{\prime }=-\frac{N}{V}\int d^{3}r[\Delta _{p}\sigma _{gg}+\Delta _{c}\sigma _{ss}+\left( g\sigma _{eg}E+\Omega \sigma _{es}+h.c.\right) ]% \text{,} \end{equation*}% in the absence of a magnetic field. The first-order atomic transition operators have a form similar to Eq.~(\ref{3-09}), with $\mu _{s}B$, $\mu _{g}B$ and $\mu _{e}B$ replaced by $\Delta _{c}$, $\Delta _{p}$ and zero, respectively. The atomic transition operator then becomes $\sigma _{ge}^{(1)}=g\Delta E/\left\vert \Omega \right\vert ^{2}$, which induces a potential dependent on the two-photon detuning $\Delta $. 
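The four cases discussed above can be read off directly from Eq.~(\ref{6-05}a). A minimal numerical sketch (all parameter values purely illustrative; the argument name \texttt{g2N} stands for $|g|^{2}N$) confirms the sign behavior:

```python
import math

def exit_center(a, delta, L, omega0, g2N, sigma, k, c):
    """Transverse exit position of the beam center, Eq. (6-05a)."""
    return a + L * L * delta * a * math.exp(a * a / sigma ** 2) * g2N / (
        omega0 ** 2 * sigma ** 2 * k * c)

p = dict(L=5.0, omega0=3.0, g2N=2.0, sigma=1.5, k=4.0, c=1.0)  # illustrative
a = 0.6
# delta > 0: "attractive potential", beam bends further to the right (x > a);
# delta < 0: "repulsive potential", beam bends back to the left (x < a);
# delta = 0 or a = 0: straight-line propagation.
print(exit_center(a, +0.5, **p) > a, exit_center(a, -0.5, **p) < a)
print(exit_center(a, 0.0, **p) == a, exit_center(0.0, 0.5, **p) == 0.0)
```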
In a real experiment, the dephasing rate of the forbidden $\left\vert e\right\rangle $-$\left\vert s\right\rangle $ transition is nonzero due to atomic collisions, etc. Therefore an additional anti-Hermitian decay term is introduced phenomenologically into the effective Hamiltonian% \begin{equation} H_{eff}=cp_{z}+\frac{c}{2k}p_{x}^{2}+\eta _{0}^{\prime }+\eta _{1}^{\prime }\left( x-a\right) \text{,} \label{6-15} \end{equation}% where $\eta _{j}^{\prime }=-a_{j}-ib_{j}$, $j=0,1$ are complex. It can then be found that, after the light passes through the Rb gas cell, the dephasing rate introduces two additional terms into Eq.~(\ref{6-05}a), \begin{equation} x=a+\frac{a_{1}c}{2k}T^{2}-b^{2}b_{1}T-\frac{b_{1}c^{2}}{b^{2}k^{2}}T^{3} \label{6-16} \end{equation} where $T=L/c$ is the time for the light to travel through the medium, and \begin{eqnarray*} a_{1} &=&\frac{2a}{\sigma ^{2}}\frac{N\left\vert g\right\vert ^{2}}{\Omega _{0}^{2}}\exp \left( \frac{a^{2}}{\sigma ^{2}}\right) \Delta \text{,} \\ b_{1} &=&\frac{2a}{\sigma ^{2}}\frac{N\left\vert g\right\vert ^{2}}{\Omega _{0}^{2}}\exp \left( \frac{a^{2}}{\sigma ^{2}}\right) \gamma _{2}\text{.} \end{eqnarray*}% For an atomic medium with length $L=7.5$ cm and density $N/V=10^{12}$ cm$^{-3}$, the dephasing rate is $\gamma _{2}\approx 10^{-4}\gamma _{1}$. When a control beam with width $\sigma =L/4$ and frequency $\nu _{c}=5\times 10^{14}$ Hz is coupled to the atomic ensemble with $\Omega _{0}=5\gamma _{1}$, then for a probe beam with width $0.07$ mm incident at position $a=\sigma /2$, the two additional terms $b^{2}b_{1}T\approx 10^{-3}$ and $% b_{1}c^{2}T^{3}/(b^{2}k^{2})\approx 10^{-5}$ caused by dephasing are much smaller than the term $a_{1}cT^{2}/(2k)\approx 5\times 10^{-2}$ induced by the frequency detuning $\Delta $, which means that Eq. (\ref{6-05}a) dominates the displacement in the transverse direction. 
Next we investigate how the trajectory of the probe beam behaves when the effective potential includes terms quadratic in $x$ and $y$ in the transverse direction. This induced potential is obtained when we expand \begin{equation} \left\vert \Omega \right\vert ^{-2}=\Omega _{0}^{-2}\exp \left[ \frac{x^{2}}{% \sigma _{x}^{2}}+\frac{y^{2}}{\sigma _{y}^{2}}\right] \label{6-06} \end{equation}% around the center $a_{x}$ and $a_{y}$ of the incident beam with the profile shape \begin{equation} E\left( 0\right) =\frac{1}{\sqrt{\pi b^{2}}}\exp \left[ \frac{z^{2}+\left( x-a_{x}\right) ^{2}+\left( y-a_{y}\right) ^{2}}{-2b^{2}}\right] \text{.} \label{6-07} \end{equation}% Retaining terms up to quadratic order in $x-a_{x}$ and $y-a_{y}$, the paraxial equation of motion becomes% \begin{equation} i\partial _{t}E+ic\partial _{z}E+\frac{c}{2k}\left( \partial _{x}^{2}+\partial _{y}^{2}\right) E=\left[ V\left( x\right) +V\left( y\right) \right] E \label{6-08} \end{equation}% where \begin{equation} V\left( \chi \right) =\zeta _{\chi 0}+\zeta _{\chi 1}\left( \chi -a_{\chi }\right) +\zeta _{\chi 2}\left( \chi -a_{\chi }\right) ^{2}% \text{.} \notag \end{equation}% The coefficients for $\chi =\left\{ x,y\right\} $ are \begin{subequations} \label{6-09} \begin{align} \zeta _{\chi 0}& =-\Omega _{0}^{-2}\exp \left[ \frac{a_{\chi }^{2}}{\sigma _{\chi }^{2}}\right] \text{,} \\ \zeta _{\chi 1}& =-\frac{2|g|^{2}N}{\sigma _{\chi }^{2}}a_{\chi }\Delta \Omega _{0}^{-2}\exp \left[ \frac{a_{\chi }^{2}}{\sigma _{\chi }^{2}}\right] \text{,} \\ \zeta _{\chi 2}& =-\frac{\sigma _{\chi }^{2}+2a_{\chi }^{2}}{2\sigma _{\chi }^{4}}\Omega _{0}^{-2}|g|^{2}N\Delta \exp \left[ \frac{a_{\chi }^{2}}{\sigma _{\chi }^{2}}\right] \text{.} \end{align}% After a period of time, the Gaussian state will have evolved into \end{subequations} \begin{equation} E\left( t\right) =U\left( t\right) E\left( 0\right) \end{equation}% where the evolution operator is \begin{equation} U\left( t\right) =\exp \left[ -i\left( 
cP_{z}+\frac{P_{x}^{2}+P_{y}^{2}}{% 2m^{\prime }}+V\left( x\right) +V\left( y\right) \right) t\right] \end{equation} Here, we assume that the detuning $\Delta$ is always negative. The Schr\"{o}dinger-type equation (\ref{6-08}) governs the evolution of the wave function of the signal light in the atomic medium. The trajectory of the light ray is described by the mean value of the coordinate operator \begin{equation} \chi_{c}=\left\langle \chi\right\rangle =\int E^{\ast}(t)\chi E(t)d\chi \text{,} \label{6-09a} \end{equation} with $\chi\in\left\{ x,y,z\right\} $. An explicit calculation gives (see Appendix B) \begin{subequations} \label{6-10} \begin{align} x_{c} & = a_{x}-\zeta_{x1}\frac{1-\cos\left( \omega_{ox}t\right) }{% m^{\prime}\omega_{ox}^{2}}\text{,} \\ y_{c} & = a_{y}-\zeta_{y1}\frac{1-\cos\left( \omega_{oy}t\right) }{% m^{\prime}\omega_{oy}^{2}}\text{,} \\ z_{c} & = ct \end{align} with the angular frequencies \end{subequations} \begin{equation} \omega_{ox}=\sqrt{2\zeta_{x2}/m^{\prime}}\text{, }\omega_{oy}=\sqrt {% 2\zeta_{y2}/m^{\prime}}\text{.} \label{6-11} \end{equation} As the light travels across the atomic medium, the light ray, i.e., the center of the wave packet, oscillates around its initial center in the transverse direction, \begin{subequations} \begin{align} x_{c}& =a_{x}+\frac{\zeta _{x1}}{m^{\prime }\omega _{ox}^{2}}\left( \cos \frac{\omega _{ox}z_{c}}{c}-1\right) \text{,} \\ y_{c}& =a_{y}+\frac{\zeta _{y1}}{m^{\prime }\omega _{oy}^{2}}\left( \cos \frac{\omega _{oy}z_{c}}{c}-1\right) \text{.} \end{align}% The anisotropic motion and potential in Eq.~(\ref{6-08}) imply that light travels in a straight line in the $z$-direction, since it acts as an \begin{figure}[tbp] \includegraphics[width=6 cm]{sgfig5.eps} \caption{\textit{(Color online)} Schematic illustration of the ray trajectory of the probe light in three-dimensional space.} \label{fig:4} \end{figure} ultra-relativistic particle with velocity $c$, while it oscillates in the $x$-$y$ 
plane because it behaves as a non-relativistic particle with effective transverse mass $m^{\prime }=k/c$. If $\zeta _{x2}=\zeta _{y2}$, the transverse trajectory of the light ray is a line segment of finite length. In Fig. \ref{fig:4}, we schematically illustrate the wave packet center of the probe light in three-dimensional space when $\zeta _{x2}\neq \zeta _{y2}$. \section{\label{sec:sum}Summary} In conclusion, we have developed a quantum approach for the spatial behavior of propagating light when it passes through an EIT system with spatially dependent external fields. By studying the dynamics of the atomic ensemble and the light pulse, an effective Schr\"{o}dinger equation is derived to describe the space-time evolution of the quasi-particles, where the effective potential is induced through the steady-state atomic response to the external spatially dependent fields. For a magnetic field with a spatial distribution in the transverse direction, by considering the evolution of the Gaussian state, we showed that: 1) in a harmonic magnetic field, the light trajectory is a straight line; 2) in a linear magnetic field, the light ray bends toward the direction in which the magnetic gradient increases. The deflection angle depends on four external parameters: the mixing angle between the signal field and the collective atomic polarization, the wave number of the signal pulse, the length of the EIT gas cell, and the small magnetic field gradient. For an inhomogeneous optical control field, we predict novel, experimentally accessible effects on the light-ray behavior. In the linear response limit, it is found that the deflection of the light ray can be controlled by two external parameters: the center position of the probe beam with respect to the control light, and the two-photon detuning. In the quadratic expansion of the coupling amplitude, the light trajectory generally oscillates in the atomic medium. Finally we note that our study is based on a quantum theoretical approach. 
In our previous paper \cite{ZDL}, the Fermat principle was applied to study the light trajectory in this atomic medium, and similar results were obtained; but that approach is a semi-classical theory, which can be understood in terms of the eikonal equation arising from the optical WKB approximation of our approach. Though this semi-classical approach can explain the most recent experiment \cite{Karpa} on the light deflection by an EIT-based rubidium gas, it cannot be further developed for the investigation of photon-state quantum storage, since the signal light was treated as a classical field. The quantum approach assumes the probe light is quantized; thus it can be used to investigate the possibility of realizing a protocol for quantum state storage with spatially-distinguishable channels based on the EIT-enhanced light deflection. This work is supported by the NSFC with Grants No. 90203018, No. 10474104, No. 60433050 and No. 10704023, NFRPC with Grant No. 2001CB309310 and 2005CB724508. One (LZ) of the authors also acknowledges the support of K. C. Wong Education Foundation, Hong Kong. We acknowledge the useful discussions with P. Zhang, T. Shi, and H. Ian.
\section{Conclusion} \label{sec:conclusion} Human cognition has a strong affective component that has been relatively undeveloped in AI systems. Language that explains emotions generated at the sight of a visual stimulus gives us a way to analyze how image content is related to affect, enabling learning that can lead to agents emulating human emotional responses through data-driven approaches. In this paper, we take the first step in this direction through: (1) the release of the {ArtEmis} dataset that focuses on linguistic explanations for affective responses triggered by visual artworks with abundant emotion-provoking content; and (2) a demonstration of neural speakers that can express emotions and provide associated explanations. The ability to deal computationally with images' emotional attributes opens an exciting new direction in human-computer communication and interaction. \section{ArtEmis~ dataset} \label{sec:dataset} The \textit{ArtEmis}~ dataset is built on top of the publicly available WikiArt\footnote{\url{https://www.wikiart.org/}} dataset which contains 81,446{} carefully curated artworks from 1,119 artists (as downloaded in 2015), covering artwork created as far back as the 15\textsuperscript{th} century, to modern fine art paintings created in the 21\textsuperscript{st} century. The artworks cover 27 art-styles (abstract, baroque, cubism, impressionism, etc.) and 45 genres (cityscape, landscape, portrait, still life, etc.), constituting a very diverse set of visual stimuli~\cite{clf_of_wikiart}. In ArtEmis~we annotated \textit{all} artworks of WikiArt by asking at least 5 annotators per artwork to express their dominant emotional reaction along with an utterance explaining the reason behind their response. Specifically, after observing an artwork, an annotator was asked first to indicate their \textit{dominant} reaction by selecting among the eight emotions mentioned in Section \ref{sec:related_work}, or a ninth option, listed as `something-else'. 
This latter option was put in place to allow annotators to express emotions not explicitly listed, or to explain why they might not have had any strong emotional reaction e.g., why they felt indifferent to the shown artwork. In all cases, after the first step, the annotator was asked to provide a detailed explanation for their choice in free text that would include specific references to visual elements in the artwork. See Figures~\ref{fig:bird_annotations},\ref{fig:subjectivity-of-artemis} for examples of collected annotations and Figure~\ref{fig:data_collection_interface} for a quick overview of the used interface. In total, we collected \textbf{439,121{}} explanatory utterances and emotional responses. The resulting corpus contains 36,347{} distinct words and it includes the explanations of 6,377{} annotators who worked in aggregate 10,220{} hours to build it. The annotators were recruited via Amazon's Mechanical Turk (AMT) services. In what follows we analyze the key characteristics of ArtEmis{}, while pointing the interested reader to the Supplemental Material~\cite{artemis_supp} for further details. \subsection{Linguistic analysis} \label{para:linguistic_analysis} \paragraph{Richness \& diversity.} The average length of the captions of {ArtEmis} is \num{15.8}{} words which is significantly longer than the average length of captions of many existing captioning datasets as shown in Table~\ref{table:pos_per_captions}. In the same table, we also show results of analyzing {ArtEmis} in terms of the average number of nouns, pronouns, adjectives, verbs, and adpositions. {ArtEmis} has a higher occurrence per caption for each of these categories compared to many existing datasets, indicating that our annotations provide rich use of natural language in connection to the artwork and the emotion they explain. 
This fact becomes even more pronounced when we look at the \textit{unique} words, e.g.,~adjectives, used to explain the reactions to the same artwork by different annotators (Table~\ref{table:pos_per_image}). In other words, besides being linguistically rich, the collected explanations are also highly \textit{diverse}. \input{tables/datasets_POS_comparative_analysis_per_utterance} \input{tables/datasets_POS_comparative_analysis_per_image} \vspace{-5pt} \mypara{Sentiment analysis.} \label{para:dataset-sent-analysis} In addition to being rich and diverse, {ArtEmis} also contains language that is sentimental. We use a rule-based sentiment analyzer (VADER~\cite{VADER}) to demonstrate this point. The analyzer assigns only $16.5\%$ of {ArtEmis} to the neutral sentiment, while for COCO-captions it assigns $77.4\%$. Figure~\ref{fig:analysis_teaser}~(c) shows the histogram of VADER's estimated valences of sentimentality for the two datasets. Absolute values closer to 0 indicate neutral sentiment. More details on this metric are in the Supp.~Mat. \begin{figure}[ht] \includegraphics[width=\linewidth]{figures/analysis_of_artemis/emotion_histogram.png} \vspace{-8pt} \caption{{\bf Histogram of emotions captured in ArtEmis~}. Positive emotions occur significantly more often than negative emotions (the four left-most bars contain $61.9\%$ of all responses, vs.~$26.3\%$ for the 5th-8th bars). The annotators use a non-listed emotion (`something-else' category) $11.7\%$ of the time.} \label{fig:histogram_emotions_clicks} \end{figure} \subsection{Emotion-centric analysis.} \label{para:emotion-analysis} In Figure~\ref{fig:histogram_emotions_clicks} we present the histogram over the nine options that the users selected, across all collected annotations. We remark that positive emotions are chosen significantly more often than negative ones, while the ``something-else'' option was selected $11.7\%$ of the time. 
Interestingly, 61\% of artworks have been annotated with at least one positive and one negative emotion simultaneously (this percentage rises to 79\% if we treat something-else as a third emotion category). While this result highlights the high degree of subjectivity w.r.t.~the emotional reactions an artwork might trigger, we also note that there is significant agreement among the annotators w.r.t.~the elicited emotions. Namely, 45.6\% (37,145) of the paintings have a strong majority among their annotators who indicated the same fine-grained emotion. \mypara{Idiosyncrasies of language use.} Here, we explore the degree to which {ArtEmis} contains language that is abstract vs.~concrete, subjective vs.~objective, and estimate the extent to which annotators use similes and metaphors in their explanations. To perform this analysis, we tag the collected utterances and compare them with externally curated lexicons that carry relevant meta-data. For measuring abstractness or concreteness, we use the lexicon of Brysbaert et al.~\cite{40k_absrtact_words}, which provides, for 40,000 word lemmas, a rating from 1 to 5 reflecting their concreteness. For instance, \textit{banana} and \textit{bagel} are maximally concrete/tangible objects, getting a score of 5, but \textit{love} and \textit{psyche} are quite abstract (with scores 2.07 and 1.34, resp.). A random word of {ArtEmis} has an average concreteness of 2.80, while a random word of COCO has 3.55 (p-val significant, see Figure~\ref{fig:analysis_teaser}~(a)). In other words, {ArtEmis} contains on average references to more abstract concepts. This also holds when comparing {ArtEmis} to other widely adopted captioning datasets (see~Supp.~Mat.). Next, to measure the extent to which {ArtEmis} makes use of subjective language, we apply the rule-based algorithm provided by TextBlob~\cite{textblob}, which estimates how subjective a sentence is by providing a scalar value in $[0,1]$.
E.g., \textit{`The painting is red'} is considered a maximally objective utterance (scores 0), while \textit{`The painting is nice'} is maximally subjective (scores 1). We show the resulting distribution of these estimates in Figure~\ref{fig:analysis_teaser}~(b). Last, we curated a list of lemmas that suggest the use of similes with high probability (e.g., `is like', `looks like', `reminds me of'). Such expressions appear in $20.5\%$ of our corpus and, as shown later, are also successfully adopted by our neural-speakers. \subsection{Maturity, reasonableness \& specificity.} \label{para:dataset-Maturity-reasonableness-specificity} We also investigated the unique aspects of {ArtEmis} by conducting three separate user studies. Specifically, we aim to understand: a) what is the \textit{emotional and cognitive maturity} required by someone to express a random {ArtEmis} explanation?, b) how \textit{reasonable} does a human listener find a random {ArtEmis} explanation, even when they would not use it to describe their own reaction?, and last, c) to what extent can the collected explanations be used to \textit{distinguish} one artwork from another? We pose the first question to Turkers in a binary (yes/no) form, by showing them a randomly chosen artwork and its accompanying explanation and asking them if this explanation requires emotional maturity higher than that of a typical 4-year-old. The answer for 1K utterances was `yes' \textbf{76.6\%} of the time. In contrast, repeating the same experiment with the COCO dataset, the answer was positive significantly less often (\textbf{34.5\%}). For the second question, we conducted an experiment driven by the question ``Do you think this is a realistic and reasonable emotional response that could have been given by someone for this image?''. Given a randomly sampled utterance, users had four options to choose from, indicating the degree of response appropriateness for that artwork.
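The lexicon-based idiosyncrasy measures described earlier in this section (word concreteness and simile detection) can be sketched as follows. The tiny lexicon reuses the example ratings quoted above; a full analysis would load all 40,000 rated lemmas from the Brysbaert et al.\ norms.

```python
# Toy sketch of two lexicon-based measures: average word concreteness
# (Brysbaert et al. ratings, 1 = abstract ... 5 = concrete) and simile
# detection via trigger phrases.

CONCRETENESS = {"banana": 5.0, "bagel": 5.0, "love": 2.07, "psyche": 1.34}
SIMILE_TRIGGERS = ("is like", "looks like", "reminds me of")

def avg_concreteness(utterance):
    """Mean concreteness over words found in the lexicon (None if no hits)."""
    hits = [CONCRETENESS[w] for w in utterance.lower().split() if w in CONCRETENESS]
    return sum(hits) / len(hits) if hits else None

def contains_simile(utterance):
    u = utterance.lower()
    return any(t in u for t in SIMILE_TRIGGERS)

print(avg_concreteness("love of a bagel"))                 # (2.07 + 5.0) / 2
print(contains_simile("it reminds me of my grandmother"))  # True
```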
We elaborate on the results in the Supp.~Mat.; in summary, 97.5\% of the utterances were considered appropriate. To answer the final question, we presented Turkers with one piece of art coupled with one of its accompanying explanations, and placed it next to two random artworks, side by side and in random order. We asked Turkers to guess the `referred' piece of art in the given explanation. The Turkers succeeded in predicting the `target' painting 94.7\% of the time in a total of 1K trials. These findings indicate that, despite the inherent subjective nature of ArtEmis, there is significant common ground in identifying a reasonable affective utterance, and motivate the goal of building models that replicate such high-quality captions. \section{Evaluation} \label{section:evaluation} In this section we describe the evaluation protocol we follow to quantitatively compare our trained neural networks. First, for the auxiliary classification problems we report the average attained accuracy per method. Second, for the evaluation of the neural speakers we use three categories of metrics that assess different aspects of their quality. To measure the extent to which our generations are linguistically similar to held-out ground-truth human captions, we use various popular machine-based metrics, e.g., BLEU 1-4~\cite{BLEU}, ROUGE-L~\cite{lin2004rouge}, and METEOR~\cite{denkowski:lavie:meteor-wmt:2014}. For these metrics, a higher number reflects a better agreement between the model-generated caption and at least one of the ground-truth annotator-written captions. We highlight that CIDEr-D~\cite{cider}, which requires a generation to be semantically close to \textit{all} human annotations of an artwork, is not a well-suited metric for ArtEmis, due to the large diversity and inherent subjectivity of our dataset (see more on this in the Supp.~Mat.).
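To make the first family of metrics concrete, here is a minimal sketch of BLEU-1 (modified unigram precision with a brevity penalty) against multiple references. Real evaluations should use an established implementation, e.g., `nltk.translate.bleu_score` or the COCO caption toolkit; this toy version exists only to illustrate the mechanics.

```python
# Simplified BLEU-1: clipped unigram precision times a brevity penalty.
import math
from collections import Counter

def bleu1(candidate, references):
    cand = candidate.split()
    refs = [r.split() for r in references]
    cand_counts = Counter(cand)
    # Clip each candidate unigram by its maximum count in any single reference.
    max_ref = Counter()
    for r in refs:
        for w, c in Counter(r).items():
            max_ref[w] = max(max_ref[w], c)
    clipped = sum(min(c, max_ref[w]) for w, c in cand_counts.items())
    precision = clipped / len(cand)
    # Brevity penalty uses the reference length closest to the candidate's.
    ref_len = min((abs(len(r) - len(cand)), len(r)) for r in refs)[1]
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / len(cand))
    return bp * precision

print(bleu1("the sky feels calm", ["the sky looks calm and serene"]))
```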
The second dimension that we use to evaluate our speakers concerns how \textit{novel} their captions are; here we report the average length of the longest common subsequence between a generation and (a subsampled version of) all training utterances. The smaller this metric is, the farther away one can assume the generations are from the training data~\cite{fan2019strategies}. The third axis of evaluation concerns two unique properties of ArtEmis and affective explanations in particular. First, we report the percentage of a speaker's productions that contain similes, i.e., generations that have lemmas like `thinking of', `looks like', etc. This percentage is a proxy for how often a neural speaker chooses to utter metaphorical-like content. Second, by tapping into $C_{emotion|text}$, we can compute which emotion is most likely explained by the generated utterance; this estimate allows us to measure the extent to which the deduced emotion is `aligned' with some ground-truth. Specifically, for test artworks where the emotion annotations form a strong majority, we define the \textit{emotional-alignment} as the percentage of grounded generations for which $\argmax(C_{emotion|generation})$ agrees with the majority emotion. The above metrics are algorithmic, i.e., they do not involve direct \textit{human judgement}, which is regarded as the gold standard for quality assessment~\cite{cui2018learning,kilickaya2016re} of synthetic captions. The discrepancy between machine and human-based evaluations can be exacerbated in a dataset with subjective and affective components like ArtEmis. To address this, we evaluate our two strongest (per machine metrics) speaker variants via user studies that imitate a Turing test; i.e., they assess the extent to which the synthetic captions can be `confused' as being made by humans.
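The novelty metric above can be sketched as a token-level longest-common-subsequence computation; the reported score for a generation is its maximum LCS length over the (subsampled) training utterances, with lower maxima suggesting generations farther from the training data. The example captions are hypothetical.

```python
# LCS-based novelty: longest common subsequence (in tokens) between a
# generated caption and each training utterance, taking the maximum.

def lcs_len(a, b):
    """Classic O(len(a)*len(b)) dynamic program for LCS length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def novelty_score(generation, training_utterances):
    gen = generation.split()
    return max(lcs_len(gen, t.split()) for t in training_utterances)

train = ["the colors make me feel happy", "a sad and lonely figure"]
print(novelty_score("the bright colors make me happy", train))  # 5
```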
\begin{figure*}[htbp] \centering \includegraphics[width=\textwidth]{figures/neural_speakers/emo_grounded_generations_gallery.png} \caption{\textbf{Examples of neural speaker productions on \textit{unseen} artworks.} The produced explanations reflect a variety of dominant emotional-responses (shown above each utterance in bold font). The top row shows examples where the deduced grounding emotion was positive; the bottom row shows three examples where the deduced emotion was negative and an example from the something-else category. Remarkably, the neural speaker can produce pragmatic explanations that include \textbf{visual analogies}: \textit{looks like blood, like a dead animal}, and \textbf{nuanced} explanations of affect: \textit{sad and lonely, expressive eyes}.} \label{fig:neural_productions} \vspace{-5pt} \end{figure*} \section{Introduction} \label{sec:introduction} Emotions are among the most pervasive aspects of human experience. While emotions are not themselves linguistic constructs, the most robust and permanent access we have to them is through language~\cite{OrtonyBook}. In this work, we focus on collecting and analyzing at scale language that \textit{explains} emotions generated by observing visual artworks. Specifically, we seek to better understand the link between the visual properties of an artwork, the possibly subjective affective experience that it produces, and the way such emotions are explained via language. Building on this data and recent machine learning approaches, we also design and test neural-based speakers that aim to emulate human emotional responses to visual art and provide associated explanations. \mypara{Why visual art?} We focus on visual artworks for two reasons. First and foremost because art is often created with the intent of provoking emotional reactions from its viewers. 
In the words of Leo Tolstoy, \textit{``art is a human activity consisting in that one human consciously hands on to others feelings they have lived through, and that other people are infected by these feelings, and also experience them''}~\cite{tolstoy-art}. Second, artworks, and abstract forms of art in particular, often defy simple explanations and might not have a single, easily-identifiable subject or label. Therefore, an affective response may require a more detailed analysis integrating the image content as well as its effect on the viewer. This is unlike most natural images, which are commonly labeled through purely objective, content-based mechanisms driven by the objects or actions they include \cite{coco_chen2015,caba2015activitynet}. Instead, by focusing on art, we aim to initiate a more nuanced perceptual image understanding which, downstream, can also be applied to richer understanding of ordinary images. We begin this effort by introducing a large-scale dataset termed \textit{ArtEmis} [\underline{Art} \underline{Em}ot\underline{i}on\underline{s}] that associates human emotions with artworks and contains \textit{explanations} in natural language of the rationale behind each triggered emotion. \mypara{Novelty of ArtEmis.} Our dataset is novel as it concerns an underexplored problem in computer vision: the formation of linguistic affective explanations grounded on visual stimuli. Specifically, ArtEmis~exposes moods, feelings, personal attitudes, but also abstract concepts like freedom or love, grounded over a wide variety of complex visual stimuli (see Section~\ref{para:emotion-analysis}).
The annotators typically explain and link visual attributes to psychological interpretations, e.g., \textit{`her youthful face accentuates her innocence'}; highlight peculiarities of displayed subjects, e.g., \textit{`her neck is too long, this seems unnatural'}; and include imaginative or metaphorical descriptions of objects that do not directly appear in the image but may relate to the subject's experience; \textit{`it reminds me of my grandmother' or `it looks like blood'} (over $20\%$ of our corpus contains such similes). \mypara{Subjectivity of responses.} Unlike existing captioning datasets, ArtEmis~welcomes the subjective and personal angle that an emotional explanation (in the form of a caption) might have. Even a single person can have a range of emotional reactions to a given stimulus \cite{Mikels_2005,compare_dim_models,vector_model_of_affect,circumplex_model} and, as shown in Fig.~\ref{fig:subjectivity-of-artemis}, this is amplified across different annotators. The subjectivity and rich semantic content distinguish {ArtEmis} from, e.g., the widely used COCO dataset~\cite{coco_chen2015}. Fig.~\ref{fig:bird_annotations} shows different images from both the {ArtEmis} and COCO datasets with captions including the word \textit{bird}, where the imaginative and metaphorical nature of {ArtEmis} is apparent (e.g., `bird gives hope' and `life as a caged bird'). Interestingly, despite this phenomenon, as we show later (Section~\ref{para:emotion-analysis}), (1) there is often substantial agreement among annotators regarding their \textit{dominant} emotional reactions, and (2) our collected explanations are often \textit{pragmatic} -- i.e., they also contain references to visual elements present in the image (see Section \ref{para:dataset-Maturity-reasonableness-specificity}).
\mypara{Difficulty of emotional explanations.} There is debate within the neuroscience community on whether human emotions are innate, generated by patterns of neural activity, or learned \cite{shackman2019emotional,adolphs2017should,barrett2019historical}. There may be intrinsic difficulties with producing emotion explanations in language -- thus the task can be challenging for annotators in ways that traditional image captioning is not. Our approach is supported by significant research that argues for the central role of language in capturing and even helping to form emotions \cite{lindquist2015role,barrett2017emotions}, including the \emph{Theory of Constructed Emotions}~\cite{barrett2017emotions,barrett2017theory,barrett2006solving,barrett2007mice} by Lisa Feldman Barrett. Nevertheless, this debate suggests that caution is needed when comparing, under various standard metrics, {ArtEmis} with other captioning datasets. \mypara{Affective neural speakers.} To further demonstrate the potential of {ArtEmis}, we experimented with building a number of neural speakers, using deep learning language generation techniques trained on our dataset. The best of our speakers often produce well-grounded affective explanations, respond to abstract visual stimuli, and fare reasonably well in emotional Turing tests, even when competing with humans. \noindent In summary, we make the following key contributions: \begin{itemize} \item We introduce \textit{ArtEmis}, a large-scale dataset of emotional reactions to visual artwork coupled with explanations of these emotions in language (Section~\ref{sec:dataset}). \item We show how the collected corpus contains utterances that are significantly more affective, abstract, and rich with metaphors and similes, compared to existing datasets (Sections \ref{para:linguistic_analysis}-\ref{para:emotion-analysis}).
\item Using \textit{ArtEmis}, we develop machine learning models for dominant emotion prediction from images or text, and neural speakers that can produce plausible grounded emotion explanations (Sections \ref{sec:method} and~\ref{sec:experimental_results}). \end{itemize} \section{Neural methods} \label{sec:method} \subsection{Auxiliary classification tasks} \label{subsection:aux_classifiers} Before we present the neural speakers, we introduce two auxiliary \textit{classification} problems and corresponding neural-based solutions. First, we pose the problem of predicting the emotion explained by a given textual explanation of ArtEmis. This is a classical 9-way text classification problem admitting standard solutions. In our implementations we use cross-entropy-based optimization applied to an LSTM text classifier trained from scratch, and also consider fine-tuning a pretrained BERT model~\cite{devlin2018bert} to this task. Second, we pose the problem of predicting the expected distribution of emotional reactions that \textit{users} would typically have to a given artwork. To address this problem we fine-tune an ImageNet-based~\cite{deng2009imagenet} pretrained ResNet-32{} encoder~\cite{he2016deep} by minimizing the KL-divergence between its output and the empirical user distributions of ArtEmis. Having access to these two classifiers, which we denote as $C_{emotion|text}$ and $C_{emotion|image}$ respectively, is useful for our neural speakers, as we can use them to evaluate, and also steer, the emotional content of their output (Sections~\ref{section:evaluation} and~\ref{subsection:emotion_grounded_speaker}). Of course, these two problems also have intrinsic value, and we explore them in detail in Section~\ref{sec:experimental_results}.
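The image-side objective above can be sketched as follows: the encoder is trained so that its predicted distribution over the nine emotion classes matches the empirical distribution of annotator votes, under a KL-divergence loss. The real model is a fine-tuned ResNet; here we only show the loss itself on hypothetical probability vectors.

```python
# Sketch of the KL-divergence objective between the empirical user
# distribution and the classifier's predicted distribution.
import math

EMOTIONS = ["amusement", "awe", "contentment", "excitement",
            "anger", "disgust", "fear", "sadness", "something-else"]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions over the emotion classes."""
    assert len(p) == len(q) == len(EMOTIONS)
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0)

empirical = [0.4, 0.2, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0, 0.1]   # annotator votes
predicted = [0.3, 0.2, 0.2, 0.1, 0.05, 0.05, 0.0, 0.0, 0.1]  # model output
print(kl_divergence(empirical, predicted))  # > 0; equals 0 only when p == q
```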
\subsection{Affective neural speakers} \paragraph{Baseline with ANPs.} In order to illustrate the importance of having an emotion-explanation-oriented dataset like ArtEmis for building affective neural speakers, we borrow ideas from previous works~\cite{sentiment_inject,SentiCap} and create a baseline speaker that does not make any (substantial) use of ArtEmis. Instead, and similar to what was done for the baseline presented in~\cite{SentiCap}, we first train a neural speaker with the COCO-caption dataset and then we inject \textit{sentiment} into its generated captions by adding to them appropriately chosen adjectives. Specifically, we use the intersection of Adjective Noun Pairs (ANPs) between ArtEmis and the ANPs of \cite{SentiCap} (resulting in 1,177 ANPs, with known positive and negative sentiment) and capitalize on $C_{emotion|image}$ to decide what sentiment we want to emulate. If $C_{emotion|image}$ is maximized by one of the four positive emotion-classes of ArtEmis, we inject the adjective corresponding to the \textit{most frequent} (per ArtEmis) positive ANP into a randomly selected noun of the caption. If the maximizer is negative, we use the corresponding ANP with negative sentiment; last, we resolve the something-else maximizers (\textless $10\%$) by fair coin-flipping between the two sentiments. We note that since we apply this speaker to {ArtEmis} images and there is a significant visual domain gap between COCO and WikiArt, we fine-tune the neural-speaker on a small-scale and separately collected (by us) dataset with \textit{objective} captions for 5,000 WikiArt paintings. We stress that this new dataset was collected following the AMT protocol used to build COCO-captions, i.e., asking only for objective (not affective) descriptions of the main objects, colors, etc.\ present in an artwork.
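The injection logic of this baseline can be sketched as follows: given the classifier's maximizing emotion, pick the matching sentiment and insert a sentiment-bearing adjective from an adjective-noun-pair table before a noun of the objective caption. The tiny ANP table and the noun test below are hypothetical simplifications of the 1,177-pair table described above.

```python
# Toy sketch of ANP-based sentiment injection into an objective caption.
import random

POSITIVE_EMOTIONS = {"amusement", "awe", "contentment", "excitement"}
NEGATIVE_EMOTIONS = {"anger", "disgust", "fear", "sadness"}
# Most-frequent ANP adjective per (noun, sentiment); hypothetical entries:
ANP = {("sky", "positive"): "beautiful", ("sky", "negative"): "gloomy",
       ("dog", "positive"): "happy", ("dog", "negative"): "scary"}

def inject_sentiment(caption, max_emotion, rng=random):
    if max_emotion in POSITIVE_EMOTIONS:
        sentiment = "positive"
    elif max_emotion in NEGATIVE_EMOTIONS:
        sentiment = "negative"
    else:  # 'something-else': resolved by a fair coin flip
        sentiment = rng.choice(["positive", "negative"])
    words = caption.split()
    # Candidate nouns are those with a matching ANP entry.
    nouns = [i for i, w in enumerate(words) if (w, sentiment) in ANP]
    if not nouns:
        return caption
    i = rng.choice(nouns)
    return " ".join(words[:i] + [ANP[(words[i], sentiment)]] + words[i:])

print(inject_sentiment("a dog under the sky", "awe"))
```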
Examples of these annotations are in the Supp.~Mat.{} \mypara{Basic {ArtEmis} speakers.} We experiment with two popular backbone architectures when designing neural speakers trained on {ArtEmis}: the Show-Attend-Tell (SAT) approach~\cite{xu2015show}, which combines an image encoder with a word/image attentive LSTM; and the recent line of work on top-down, bottom-up meshed-memory transformers ($M^{2}$)~\cite{cornia2020meshed}, which replaces the recurrent units with transformer units and capitalizes on separately computed object-bounding-box detections (computed using Faster R-CNN~\cite{girshick2015fast}). We also include a simpler baseline that uses {ArtEmis} but without training on it: for a test image we find its nearest visual neighbor in the training set (using ImageNet pre-trained ResNet-32{} features) and output a random caption associated with this neighbor. \mypara{Emotion grounded speaker.} \label{subsection:emotion_grounded_speaker} We additionally tested neural speakers that make use of the emotion classifier, i.e., $C_{emotion|image}$. At training time, in addition to grounding the (SAT) neural-speaker with the visual stimulus and applying teacher forcing with the captions of {ArtEmis}, we further provide at each time step a feature (extracted via a fully-connected layer) of the emotion-label chosen by the annotator for that specific explanation. This extra signal promotes the \textit{decoupling} of the emotion conveyed by the linguistic generation from the underlying image. In other words, this speaker allows us to independently set the emotion we wish to explain for a given image. At inference time (to keep things fair), we first deploy $C_{emotion|image}$ on the test artwork and use its maximizing emotion to ground and then sample the generation of this variant. \mypara{Details.} To ensure a meaningful comparison between neural-speakers, we use the same image-encoders, learning-rate schedules, LSTM hidden-dimensions, etc.
across all of them. When training with {ArtEmis} we use an $[85\%, 5\%, 10\%]$ train-validation-test data split and perform model selection (optimal epoch) by keeping the model that minimizes the negative log-likelihood on the validation split. For the ANP baseline, we use the Karpathy splits~\cite{karpathy2015deep} to train the same (SAT) backbone network we used elsewhere. When \textit{sampling} a neural speaker, we keep the test generation with the highest log-likelihood resulting from a greedy beam-search with a beam size of 5 and a soft-max temperature of 0.3. The only exception to the above (uniform) experimental protocol was made for the basic ArtEmis speaker trained with Meshed Transformers. In this case we used the authors' publicly available implementation without customization~\cite{m2_implementation}. \section{Background and related work} \label{sec:related_work} \paragraph{Emotion classification.} \label{para:related-work:emotions} Following previous studies \cite{img_clf_art,Yanulevskaya-emotions,Zhao2014art,emotion_clf}, we adopt throughout this work the same discrete set of eight \textit{categorical} emotion states. Concretely, we consider \textit{anger}, \textit{disgust}, \textit{fear}, and \textit{sadness} as negative emotions, and \textit{amusement}, \textit{awe}, \textit{contentment}, and \textit{excitement} as positive emotions. The four negative emotions are considered universal and basic (as proposed by Ekman in \cite{ekman_emotions}) and have been shown to capture well the discrete emotions of the International Affective Picture System~\cite{IAPS}. The four positive emotions are finer-grained versions of \textit{happiness}~\cite{happiness_1}. We note that while \textit{awe} can be associated with a negative state, following previous works (\cite{Mikels_2005, emotion_clf}), we treat \textit{awe} as a positive emotion in our analyses.
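The eight-way taxonomy above, grouped by sentiment as used in our binary (positive vs.\ negative) analyses, can be written down directly; the `something-else' option falls outside both groups. The helper name is ours.

```python
# The categorical emotion taxonomy and its collapse to binary sentiment.

SENTIMENT = {"amusement": "positive", "awe": "positive",
             "contentment": "positive", "excitement": "positive",
             "anger": "negative", "disgust": "negative",
             "fear": "negative", "sadness": "negative"}

def binary_sentiment(emotion):
    """Collapse a fine-grained emotion to its sentiment (None for something-else)."""
    return SENTIMENT.get(emotion)

print(binary_sentiment("awe"))             # treated as positive, per prior work
print(binary_sentiment("something-else"))  # None
```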
\vspace{-4pt} \mypara{Deep learning, emotions, and art.} Most existing works in Computer Vision treat emotions as an image classification problem and build systems that try to deduce the main/dominant emotion a given image will elicit~\cite{img_clf_art,Yanulevskaya-emotions,Zhao2014art,emotion_clf_valence_arousal}. An interesting work linking paintings to textual descriptions of their historical and social intricacies is given in \cite{sem_art}. Also, the work of~\cite{prose_for_painting} attempts to caption paintings in the prose of Shakespeare using language style transfer. Last, the work of~\cite{Wilber2017BAMTB} introduces a large-scale dataset of artistic imagery with multiple attribute annotations. Unlike these works, we focus on developing machine learning tools for analyzing and generating \emph{explanations} of emotions as evoked by artworks. \mypara{Captioning models and data.} There is a large body of work and corresponding captioning datasets~\cite{young14tacl,Kazemzadeh,conceptual-captions,VG_Krishna_2017,mao16,pont2020connecting} that focus on different aspects of human cognition. For instance, COCO-captions~\cite{coco_chen2015} concern descriptions of common objects in natural images, the data of Monroe et al.~\cite{monroe_colors} include discriminative references for 2D monochromatic colors, Achlioptas et al.~\cite{achlioptas2020referit3d,achlioptas2019shapeglot} collect discriminative utterances for 3D objects, etc. There is correspondingly also a large volume of deep-net-based captioning approaches \cite{baby_talk,mao16,unsup_context_aware,licheng_16,nagaraja16,mattnet}. The seminal works of~\cite{show-tell,karpathy2015deep} opened this path by capitalizing on advancements in deep recurrent networks (LSTMs~\cite{lstm}), along with other classic ideas like training with Teacher Forcing~\cite{teacher_forcing}.
Our neural speakers build on these `standard' techniques, and {ArtEmis} adds a new dimension to image-based captioning reflecting emotions. \mypara{Sentiment-driven captions.} There exists significantly less captioning work concerning sentiments (positive vs.~negative emotions). Radford and colleagues \cite{unsup_sentiment_cell} discovered that a single unit in recurrent language models trained without sentiment labels automatically learns concepts of sentiment, and enables sentiment-oriented manipulation by fixing the sign of that unit. Other early work like SentiCap~\cite{SentiCap} and follow-ups like~\cite{sentiment_inject} provided explicit sentiment-based supervision to enable sentiment-flavored language generation grounded on real-world images. These studies focus on the visual cues that are responsible for only two emotional reactions (positive and negative) and, most importantly, they do not produce emotion-\emph{explaining} language. \section{Experimental results} \label{sec:experimental_results} \paragraph{Estimating emotion from text or images alone.} We found experimentally that predicting the fine-grained emotion explained in ArtEmis data is a difficult task (see examples where both humans and machines fail in Table~\ref{table:hard_examples_to_guess_emotion}). An initial AMT study concluded that users were able to infer the exact emotion from text alone with 53.0\% accuracy (in 1K trials). Due to this low score, we decided to conduct a study with experts (the authors of this paper). We attained slightly better accuracy (60.3\% on a sample of 250 utterances). Interestingly, the neural networks of Section~\ref{subsection:aux_classifiers} attained $63.1\%$ and $65.7\%$ (LSTM and BERT, respectively) on the entire test split used by the neural-speakers (40,137{} utterances).
Crucially, both humans and neural-nets failed gracefully in their predictions, and most confusion happened among subclasses of the same positive or negative category (we include confusion matrices in the Supp.~Mat.{}). For instance, w.r.t.~binary labels of positive vs.~negative emotion sentiment (ignoring the something-else annotations), the experts, the LSTM, and the BERT model guess correctly $85.9\%$, $87.4\%$, and $91.0\%$ of the time, respectively. This is despite being trained, or asked in the human studies, to solve the fine-grained 9-way problem. \vspace{5pt} \input{tables/hard_examples_to_guess_emotion} Since we train our image classifiers to predict a distribution of emotions, we select the maximizer of their output and compare it with the `dominant' emotion of the (8,160) test images for which the emotion distribution is unimodal with a mode covering more than $50\%$ of the mass ($38.5\%$ of the split). The attained accuracy for this sub-population is $60.0\%$. We note that the training (and test) data are highly unbalanced, following the emotion-label distribution indicated by the histogram of Figure~\ref{fig:histogram_emotions_clicks}. As such, losses addressing long-tail, imbalanced classification problems (e.g.,~\cite{lin2017focal}) could be useful in this setup. \mypara{Neural speakers.} In Table~\ref{table:speaker_metrics} we report the machine-induced metrics described in Section~\ref{section:evaluation}. First, we observe that on metrics that measure the linguistic similarity to the held-out utterances (BLEU, METEOR, etc.) the speakers fare noticeably worse compared to how the same architectures fare (modulo second-order details) when trained and tested with objective datasets like COCO-captions; e.g., BLEU-1 with the SOTA~\cite{cornia2020meshed} is 82.0. This is expected given the analysis of Section~\ref{sec:dataset}, which shows that ArtEmis is a more diverse and subjective dataset.
Second, there is a noticeable difference in all metrics in favor of the three models trained with ArtEmis (denoted as Basic or Grounded) against the simpler baselines that do not. This implies that we cannot simply reproduce ArtEmis with ANP injection on objective data. It further demonstrates how, even among similar images, the annotations can be widely different, limiting the Nearest-Neighbor (NN) performance. Third, on the emotion-alignment metric (denoted as Emo-Align) the emotion-grounded variant fares significantly better than its non-grounded version. This variant also produces a more appropriate percentage of similes by staying closest to the ground-truth percentage of ${\sim}20.5\%$. Qualitative results of the emotion-grounded speaker are shown in Figure~\ref{fig:neural_productions}. More examples, including typical failure cases and generations from other variants, are provided in the project's website\footnote{\url{https://artemisdataset.org}} and the Supp.~Mat.{} As seen in Figure~\ref{fig:neural_productions}, a well-trained speaker creates sophisticated explanations that can incorporate nuanced emotional understanding and analogy making. \mypara{Turing test.} For our last experiment, we performed a user study taking the form of a Turing Test deployed on AMT. First, we use a neural-speaker to make one explanation for a test artwork and couple it with a randomly chosen ground-truth for the same stimulus. Next, we show a user the two utterances in text, along with the artwork, and ask them to make a multiple choice among 4 options. These options indicated either that one utterance was more likely than the other to have been made by a human explaining their emotional reaction, or that both (or neither) were likely made by a human. We deploy this experiment with 500 artworks, and repeat it separately for the basic and the emotion-grounded (SAT) speakers.
Encouragingly, \textbf{50.3\%} of the time the users signaled that the utterances of the emotion-grounded speaker were on par with the human ground-truth (20.6\% were selected as the more human-like of the pair, and 29.7\% scored a tie). Furthermore, the emotion-grounded variant achieved significantly better results than the basic speaker, which surpassed or tied with the human annotations 40\% of the time (16.3\% with a win and 23.7\% with a tie). To explain this differential, we hypothesize that grounding with the \textit{most likely} emotion of $C_{emotion|image}$ helped the better-performing variant to create more common, and thus on average more fitting, explanations which were easier to pass as being made by a human. \vspace{-13pt} \mypara{Limitations.} While these results are encouraging, we also remark that the quality of even the best neural speakers is very far from the human ground truth, in terms of diversity, accuracy, and creativity of the synthesized utterances. Thus, significant research is necessary to bridge the gap between human and synthetic emotional neural speakers. We hope that {ArtEmis} will enable such future work and pave the way towards a deeper and more nuanced emotional image understanding. \input{tables/speaker_metrics} \subsubsection{Emotion Classification}
\section{Introduction} A knot is \emph{algebraic} if it arises as a link of an isolated singularity of a complex curve. Algebraic knots are special cases of iterated torus knots. In 1976, Rudolph \cite{Rudolph} asked for a description of the subgroup of the knot concordance group generated by algebraic knots. For ease of reference, we refer to this question as a conjecture. \begin{conj}[Rudolph's Conjecture {\cite{Rudolph}}]\label{conj:rudolph-conj} The set of algebraic knots is linearly independent in the smooth knot concordance group~\(\mathcal{C}\). \end{conj} \noindent This question has been of particular interest due to its relevance to the slice-ribbon conjecture: a result of Miyazaki shows that non-trivial linear combinations of iterated torus knots are not ribbon~\cite[Corollary~8.4]{Miyazaki}. In particular, if the slice-ribbon conjecture holds, then Rudolph's conjecture holds. Baker~\cite{Baker} and Abe-Tagami~\cite{AbeTagami} recently noticed that the slice-ribbon conjecture implies a statement stronger than Rudolph's conjecture: \begin{conj}[Abe-Tagami~\cite{AbeTagami} and Baker~\cite{Baker}]\label{conj:main-conjecture} The set of prime fibered strongly quasi-positive knots is linearly independent in the smooth knot concordance group \(\mathcal{C}\). \end{conj} This paper exhibits new large families of knots for which Conjectures~\ref{conj:rudolph-conj} and~\ref{conj:main-conjecture} hold. \subsection{Statement of the results} Evidence of Rudolph's conjecture was first provided in 1979 by Litherland, who proved that positive torus knots are linearly independent in~$\mathcal{C}$~\cite{Litherland-signature}. In~2010, Hedden, Kirk and Livingston showed that for an appropriate choice of positive integers~$\{q_n\}_{n=1}^\infty$, the set~$\{T(2,q_n),T(2,3;2,q_n) \}_{n= 1}^\infty$ is linearly independent in~$\mathcal{C}$, where~$T(p,q)$ and~$T(p,q;r,s)$ denote the~$(p,q)$-torus knot and the~$(r,s)$-cable of~$T(p,q)$, respectively, and~$p$ is coprime to~$qrs$. 
It is known that an \emph{iterated torus knot}~$T(p_1,q_1;\ldots;p_k,q_k)$ is algebraic if and only if~$p_i,q_i>0$ and~$q_{i+1}>q_ip_{i+1}p_i$ for each~$i$. Our main result, which relies on metabelian twisted Blanchfield pairings~\cite{MillerPowell, BorodzikConwayPolitarczyk}, reads as follows. \begin{theorem} \label{thm:Main} Fix a prime power~$p$. Let \(\mathcal{S}_{p}\) be the set of iterated torus knots \(T(p,q_{1};p,q_{2};\ldots;p,q_{\ell})\), where~$(q_{1},q_{2},\ldots,q_{\ell})$ is a sequence of positive integers coprime to~$p$ satisfying the following conditions: \begin{enumerate} \item $q_\ell$ is a prime; \item for \(i=1,\ldots,\ell-1\), the integer \(q_{i}\) is coprime to \(q_{\ell}\) when~$\ell >1$. \end{enumerate} Then the set \(\mathcal{S}_{p}\) is linearly independent in the topological knot concordance group~$\mathcal{C}^{\text{top}}$. \end{theorem} As an immediate corollary of Theorem~\ref{thm:Main}, we obtain the following. \begin{corollary} For every prime power~$p$, the subset \(\mathcal{S}_{p}^{alg} \subset \mathcal{S}_{p}\) of algebraic knots in~$\mathcal{S}_{p}$ is linearly independent in~$\mathcal{C}^{\text{top}}$ and therefore satisfies Conjecture~\ref{conj:rudolph-conj}. \end{corollary} Since iterated torus knots with positive cabling parameters are strongly quasi-positive (via~\cite[Theorem 1.2]{HeddenSomeRemarks} and~\cite[Proposition 2.1]{HeddenNotions}), Theorem~\ref{thm:Main} also gives infinite families of knots satisfying Conjecture~\ref{conj:main-conjecture}. \begin{corollary} \label{cor:sqp} For every prime power \(p\), the set \(\mathcal{S}_{p}\) satisfies Conjecture~\ref{conj:main-conjecture}, and \(\mathcal{S}_{p} \setminus \mathcal{S}_{p}^{alg}\) is an infinite family of non-algebraic knots satisfying Conjecture~\ref{conj:main-conjecture}. \end{corollary} Abe and Tagami also conjecture that the set of L-space knots is linearly independent in~$\mathcal{C}$~\cite[Conjecture 3.4]{AbeTagami}.
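For concreteness, the numerical conditions above, namely the algebraicity criterion for iterated torus knots and the defining conditions of \(\mathcal{S}_p\), are easy to test by machine. The following sketch is our own illustration (the function names are hypothetical and not from the paper):

```python
from math import gcd

def is_algebraic_cabling(params):
    """params = [(p1, q1), ..., (pk, qk)]: cabling parameters of an
    iterated torus knot T(p1,q1; ...; pk,qk).  Returns True when the
    classical algebraicity criterion holds: all parameters are positive
    and q_{i+1} > q_i * p_{i+1} * p_i for each i."""
    if any(p <= 0 or q <= 0 for p, q in params):
        return False
    return all(params[i + 1][1] > params[i][1] * params[i + 1][0] * params[i][0]
               for i in range(len(params) - 1))

def is_prime(n):
    # naive trial division, fine for the small parameters used here
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def in_S_p(p, qs):
    """Check the hypotheses of the main theorem for T(p,q1; ...; p,ql):
    each q_i positive and coprime to p, the last q_i prime, and the
    earlier q_i coprime to the last one."""
    if any(q <= 0 or gcd(q, p) != 1 for q in qs):
        return False
    if not is_prime(qs[-1]):
        return False
    return all(gcd(q, qs[-1]) == 1 for q in qs[:-1])
```

For instance, $T(2,3;2,13)$ passes the algebraicity test (since $13>3\cdot 2\cdot 2=12$) while $T(2,3;2,11)$ is an iterated torus knot that fails it, and the sequence $(3,13)$ satisfies the membership conditions for $\mathcal{S}_2$.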
For a knot~$K$ with Seifert genus~$g$, the~$(p,q)$-cable~$K_{p,q}$ is an L-space knot if and only if~$K$ is an L-space knot and~$(2g - 1)p \leq q$~\cite{HeddenOnKnotFloer,HomANoteOnCabling}. Since torus knots are L-space knots, we also obtain the following result. \begin{corollary} For every prime power \(p\), the subset \(\mathcal{S}_{p}^{L} \subset \mathcal{S}_{p}\) of L-space knots in~$\mathcal{S}_{p}$ is linearly independent in~$\mathcal{C}^{\text{top}}$, and this statement also holds for the infinite family~$\mathcal{S}_{p}^{L} \setminus \mathcal{S}_{p}^{\text{alg}}$ of non-algebraic L-space knots. \end{corollary} Note however that not all our examples are L-space knots: since the cable of an iterated torus knot need not be an L-space knot, Corollary~\ref{cor:sqp} shows that the infinite set~$\mathcal{S}_p \setminus \mathcal{S}_p^L$ contains no L-space knots but is nevertheless linearly independent in~$\mathcal{C}^{\text{top}}$. \subsection{Context and comparison with smooth techniques} Litherland used the Levine-Tristram signature to show that torus knots are linearly independent in~$\mathcal{C}$~\cite{Litherland-signature}. This approach is insufficient to answer Rudolph's conjecture since Livingston and Melvin showed in~\cite{LivingstonMelvinAlgebraicKnots} that the following linear combinations of iterated torus knots are algebraically slice: \begin{equation} \label{eq:HKLIntro}J(p,q,q_1,q_2):= T(p,q;p,q_1) \# -T(p,q_1)\# -T(p,q;p,q_2)\# T(p,q_2). \end{equation} Classical knot invariants thus cannot obstruct~$J(p,q,q_1,q_2)$ from being slice. Hedden, Kirk, and Livingston managed to leverage the Casson-Gordon invariants to provide further evidence of Rudolph's conjecture~\cite{HeddenKirkLivingston}. Indeed, they showed that for an appropriate choice of~$\{q_n\}_{n=1}^\infty$, the knots~$\{J(2,3,q_{2n-1},q_{2n})\}_{n= 1}^\infty$ generate an infinite-rank subgroup in~$\mathcal{C}$.
This result is particularly notable since they observe that the~$s$-invariant from Khovanov homology and the~$\tau$-invariant from Heegaard Floer homology both vanish on~$J(2,3,q_{2n-1},q_{2n})$~\cite[Proposition~8.2]{HeddenKirkLivingston}. In fact, their argument (combined with Proposition~\ref{prop:algebraic-sliceness}) generalises to show that if \(K\) is an algebraically slice linear combination of knots belonging to \(\mathcal{S}_{p}\), then~$\tau(K) = 0$ and $s(K)=0$. Next, we observe that the Upsilon invariant~$\Upsilon_K \colon [0,2] \to \mathds{R}$ from knot Floer homology~\cite{OSS} is also insufficient to prove Theorem~\ref{thm:Main}. First note that if~$q_1,q_2>p(p-1)(q-1)$, then~$T(p,q;p,q_i)$ is an L-space knot~\cite{HeddenOnKnotFloer}, and thus a result of Tange shows that~$\Upsilon_{T(p,q;p,q_i)}(t)=\Upsilon_{T(p,q)}(pt)+\Upsilon_{T(p,q_i)}(t)$ for all~$t\in[0,2]$~\cite[Theorem 3]{Tange}. The additivity of~$\Upsilon$ then establishes that~$\Upsilon_{J(p,q,q_1,q_2)}(t)=0$ for all~$t \in [0,2]$ whenever~$q_1,q_2>p(p-1)(q-1)$. \subsection{Strategy and ingredients of the proof} The proof of Theorem~\ref{thm:Main} relies on Casson-Gordon theory~\cite{CassonGordon1,CassonGordon2,KirkLivingston}, and more specifically on the metabelian Blanchfield pairings introduced by Miller-Powell~\cite{MillerPowell} and further developed by the first author, the third author, and Maciej Borodzik~\cite{BorodzikConwayPolitarczyk}. Since these invariants are somewhat technical, the next paragraphs describe some background and ideas that go into the proof of Theorem~\ref{thm:Main}. For notational simplicity, however, we restrict ourselves to a very particular case: we apply our strategy to the knot~$J(p,q,q_1,q_2)$ described in~\eqref{eq:HKLIntro}.
\subsubsection*{The sliceness obstruction} Let~$p$ be a prime power, let~$\Sigma_p(J)$ be the~$p$-fold branched cover of the knot~$J:=J(p,q,q_1,q_2)$, let~$\chi$ be a character on~$H_1(\Sigma_p(J))$, and let~$M_J$ be the~$0$-framed surgery of~$J$. Associated to this data, there is a non-singular sesquilinear and Hermitian \emph{metabelian Blanchfield pairing} $$\operatorname{Bl}_{\alpha(p,\chi)}(J) \colon H_1(M_J;\mathbb{C}[t^{\pm 1}]^p)\times H_1(M_J;\mathbb{C}[t^{\pm 1}]^p) \to \mathds{C}(t)/\mathbb{C}[t^{\pm 1}].$$ Here~$H_1(M_J;\mathbb{C}[t^{\pm 1}]^p)$ denotes the homology of~$M_J$ twisted by a metabelian representation \linebreak~$\alpha(p,\chi) \colon \pi_1(M_J) \to GL_p(\mathbb{C}[t^{\pm 1}])$ whose definition will be recalled in Section~\ref{sec:TwistedPolynomial}. The precise definition of~$\operatorname{Bl}_{\alpha(p,\chi)}(J)$ is irrelevant in this paper: only its properties are required. Informally, however, the pairing~$\operatorname{Bl}_{\alpha(p,\chi)}(J)~$ contains both the information from twisted polynomial invariants and twisted signature invariants. We now describe how~$\operatorname{Bl}_{\alpha(p,\chi)}(J)~$ provides a sliceness obstruction. Let~$\lambda_p(J)$ denote the~$\mathds{Q}/\mathds{Z}$-valued linking form on~$H_1(\Sigma_p(J))$. Miller and Powell show that if for every~$\mathds{Z}_p$-invariant metaboliser~$G$ of~$\lambda_p(J)$, there exists a prime power order character~$\chi$ that vanishes on~$G$ and such that~$\operatorname{Bl}_{\alpha(p,\chi)}(J)$ is not metabolic, then~$J$ is not slice~\cite[Theorem~6.10]{MillerPowell}. In order to make this obstruction more concrete, we now recall some terminology on linking forms and their metabolizers. \subsubsection*{The Witt group of linking forms} We focus on linking forms over~$\mathbb{C}[t^{\pm 1}]$, referring to Section~\ref{sec:Metabolisers} for a discussion over more general rings. 
A \emph{linking form} over~$\mathbb{C}[t^{\pm 1}]$ is a sesquilinear Hermitian pairing~$V \times V \to \mathds{C}(t)/\mathbb{C}[t^{\pm 1}]$, where~$V$ is a torsion~$\mathbb{C}[t^{\pm 1}]$-module. A linking form~$(V,\lambda)$ is \emph{metabolic} if there is a submodule~$L \subset V$ such that~$L=L^\perp$; such an~$L$ is called a \emph{metaboliser}. The \emph{Witt group of linking forms}, denoted~$W(\mathds{C}(t),\mathbb{C}[t^{\pm 1}])$, consists of the monoid of non-singular linking forms modulo the submonoid of metabolic linking forms. We write~$\lambda_1 \sim \lambda_2$ if two linking forms agree in~$W(\mathds{C}(t),\mathbb{C}[t^{\pm 1}])$. The Miller-Powell obstruction to sliceness therefore consists of deciding whether a certain twisted Blanchfield pairing~$\operatorname{Bl}_{\alpha(p,\chi)}(J)$ is zero in the group~$W(\mathds{C}(t),\mathbb{C}[t^{\pm 1}])$. As we will now describe, one of our main ideas is to transfer a problem of linear independence in~$\mathcal{C}^{\text{top}}$ (namely Rudolph's conjecture) into a problem of linear independence in~$W(\mathds{C}(t),\mathbb{C}[t^{\pm 1}])$. \subsubsection*{From linear independence in~$\mathcal{C}^{\text{top}}$ to linear independence in~$W(\mathds{C}(t),\mathbb{C}[t^{\pm 1}])$} Since the \linebreak knot~$J= T(p,q;p,q_1) \# -T(p,q_1)\# -T(p,q;p,q_2)\# T(p,q_2)$ is a connected sum of four knots, both~$H_1(\Sigma_p(J))$ and~$\lambda_p(J)$ can be decomposed into four direct summands: $$\lambda_p(J)=\lambda_p(T(p,q_1))\oplus -\lambda_p(T(p,q_1))\oplus \lambda_p(T(p,q_2))\oplus -\lambda_p(T(p,q_2)).$$ In particular, any character on~$H_1(\Sigma_p(J))$ can be written as~$\chi=\chi_1\oplus\chi_2\oplus\chi_3\oplus\chi_4$. For each given~$\mathds{Z}_p$-invariant metaboliser~$M$ of~$\lambda_p(J)$, the ``sliceness-obstructing character'' that we will produce will be of the form~$\chi=\chi_1 \oplus \chi_2 \oplus \theta \oplus \theta$ where~$\theta$ denotes the trivial character.
Using the definition of~$J$, together with the direct sum decomposition of \cite[Corollary~8.21]{BorodzikConwayPolitarczyk}, the Witt class of the metabelian Blanchfield pairing of~$J$ is given by \begin{align}\label{equation:Bl(J)} \operatorname{Bl}_{\alpha(p,\chi)}(J) \sim \operatorname{Bl}_{\alpha(p,\chi_1)}(T(p,q;p,q_1)) &\oplus -\operatorname{Bl}_{\alpha(p,\chi_2)}(T(p,q_1)) \\ &\oplus - \operatorname{Bl}_{\alpha(p,\theta)}(T(p,q;p,q_2))\oplus \operatorname{Bl}_{\alpha(p,\theta)}(T(p,q_2)). \nonumber \end{align} This expression can be further decomposed by applying the satellite formula for the metabelian Blanchfield forms given in \cite[Theorem~8.19]{BorodzikConwayPolitarczyk}. Regardless of the final expression, the problem has been converted into a question of linear independence in~$W(\mathds{C}(t),\mathbb{C}[t^{\pm 1}])$. In Proposition~\ref{prop:Splitting}, we describe a criterion for linear independence in terms of the roots of the orders of the underlying modules (recall that the order of a module is a Laurent polynomial in~$\mathbb{C}[t^{\pm 1}]$; it is defined up to multiplication by units of~$\mathbb{C}[t^{\pm 1}]$). Here is a simplified version of this statement. \begin{proposition} \label{prop:SplittingIntro} If~$(V_1,\lambda_1)$ and~$(V_2,\lambda_2)$ are two non-metabolic linking forms over~$\mathbb{C}[t^{\pm 1}]$ such that~$\operatorname{Ord}(V_1)$ and~$\operatorname{Ord}(V_2)$ have no common roots, then the Witt classes~$[V_1,\lambda_1]$ and~$[V_2,\lambda_2]$ are linearly independent in \(W(\mathds{C}(t),\mathds{C}[t^{\pm 1}])\). \end{proposition} \subsubsection*{Computation of twisted Alexander polynomials} In order to apply Proposition~\ref{prop:SplittingIntro}, we must therefore understand the roots of the metabelian twisted Alexander polynomials of~$T(p,q)$ associated to characters on~$H_1(\Sigma_p(T(p,q)))$.
This is carried out in Section~\ref{sec:TwistedPolynomial} and relies on our explicit understanding of the~$p$-fold cover~$E_p(T(p,q)) \to E(T(p,q))$ from Section~\ref{sec:branch-covers-torus}. Since this computation of twisted polynomials might be of independent interest, we summarize it as follows. \begin{proposition}[Lemma~\ref{lem:Characaters} and Corollary~\ref{cor:TwistedAlexanderPolynomial}] \label{prop:TwistedPolyIntro} Let~$p,q>0$ be two coprime integers, and set~$\xi_q=e^{2\pi i/q}$. The abelian group of characters on \(H_1(\Sigma_{p}(T(p,q))) \cong \mathds{Z}_q^{p-1}\) is isomorphic~to \[\{ \mathbf{a}:=(a_{1},\ldots,a_{p}) \in \mathds{Z}_{q}^{p} \ | \ a_1+\cdots+a_p=0\}.\] We write~$\chi_{\mathbf{a}}$ for the character associated to~$\mathbf{a}$. The metabelian twisted Alexander polynomial of the~$0$-framed surgery~$M_{T(p,q)}$ associated to the character~$\chi_{\mathbf{a}}\colon H_1(\Sigma_{p}(T(p,q))) \to \mathds{Z}_q$ is given by $$\Delta_{1}^{\alpha(p,\chi_{\mathbf{a}})}(M_{T(p,q)})= \frac{(-1)^{p-1}(1-t^{q})^{p-1}}{(t\xi_q^{a_{1}}-1)(t\xi_q^{a_{2}}-1) \cdots (t\xi_q^{a_{p}}-1)(t-1)}.$$ \end{proposition} \subsubsection*{Main steps of the proof} We now return to the knot $J=J(p,q,q_1,q_2)$ from~\eqref{eq:HKLIntro}. Obstructing $J$ from being slice involves three main steps. In fact, the proof of Theorem~\ref{thm:Main} in Section~\ref{sec:MainTheorem} follows more complicated versions of these same steps. \begin{enumerate} \item Firstly, we use the previously described ingredients to study the implications of $\operatorname{Bl}_{\alpha(p,\chi)}(J)$ being metabolic on the characters $\chi_1$ and $\chi_2$; here $\chi=\chi_1 \oplus \chi_2 \oplus \theta \oplus \theta$ with $\theta$ the trivial character. This is the content of Subsection~\ref{subsub:Blanchfield}.
\item Secondly, we show that for every metaboliser $L$ of~$\lambda_p(T(p,q_1)) \oplus -\lambda_p(T(p,q_1))$, it is possible to build characters $\chi_1$ and $\chi_2$ that violate these conditions and such that $\chi_1 \oplus \chi_2$ vanishes on $L$. This is the content of Subsection~\ref{subsub:BuildingCharacters}. \item Finally, we combine these two steps to obstruct the sliceness of $J$: for every metaboliser~$G$ of~$\lambda_p(J)$, we are able to build a character $\chi=\chi_1 \oplus \chi_2 \oplus \theta \oplus \theta$ that vanishes on~$G$ and such that $\operatorname{Bl}_{\alpha(p,\chi)}(J)$ is not metabolic. This is the content of Subsection~\ref{subsub:Conclusion}. \end{enumerate} \begin{remark} When~$p=2$, Hedden, Kirk and Livingston also use an obstruction based on the Casson-Gordon set-up to show that for an appropriate choice of positive integers~$\{q_n\}_{n=1}^\infty$, the set~$\{T(2,q_n),T(2,3;2,q_n) \}_{n= 1}^\infty$ is linearly independent in~$\mathcal{C}^{\text{top}}$~\cite{HeddenKirkLivingston}. Our work differs from theirs in two main respects: \begin{itemize} \item While~\cite{HeddenKirkLivingston} uses a blend of discriminants and signatures to prove its linear independence result, we use metabelian Blanchfield pairings. In a nutshell, the Blanchfield pairing encapsulates both the discriminant and (most of) the signature invariants, allowing us to both streamline and generalize several of the arguments from~\cite{HeddenKirkLivingston}. \item The result of~\cite{HeddenKirkLivingston} is proved without having to study invariant metabolizers; see also~\cite[Section 9]{BorodzikConwayPolitarczyk}. This is a feature of iterated torus knots~$T(p,Q)$ with~$p=2$ and fails when~$p>2$. \end{itemize} Passing from our outline obstructing the sliceness of~\(J(p,q,q_{1},q_{2})\) to the proof of Theorem~\ref{thm:Main} requires additional steps.
As often in Casson-Gordon theory, the main technical difficulty to overcome concerns the metabolizers of the linking form of the knot in question. Regarding these metabolizers, our strategy can be summarized as follows: \begin{enumerate} \item Given a metabolizer, we isolate certain technical conditions which guarantee that a character obstructs sliceness. This is the content of Lemma~\ref{lemma:constructing-characters}. \item We distinguish a certain family of metabolizers called \emph{graph metabolizers}; see Section~\ref{sub:Graph}. \item For any fixed non-graph metabolizer, the construction of the required character is not overly challenging; see Cases~1 and~2 in the proof of Lemma~\ref{lemma:constructing-characters}. \item Dealing with graph metabolizers requires more work. In Case~3, we show that either there exists a character satisfying the conditions from Lemma~\ref{lemma:constructing-characters}, or the knot in question contains a slice summand~\(K \# -K\) for some knot \(K\). Consequently, once we cancel all the summands of the form \(K \# -K\), we are able to construct the desired obstructing character for any graph metabolizer and finish the proof. \end{enumerate} \end{remark} \subsection{Assumptions and outlook} We conclude this introduction by commenting on the various technical assumptions that appear in Theorem~\ref{thm:Main}. \begin{enumerate} \item The assumption that the integers~$q_i$ are coprime to~$q_\ell$ is used in Proposition~\ref{prop:NotMetabolic} to ensure that certain Witt classes are linearly independent in~$W(\mathds{C}(t),\mathds{C}[t^{\pm 1}])$. This hypothesis has its roots in the notion of \emph{$p$-independence} introduced in~\cite[Definition 6.2]{HeddenKirkLivingston}. \item We assume that~$p$ is a prime power in order to use Casson-Gordon theory~\cite{CassonGordon1,CassonGordon2}.
\item We require that the~$q_i$ be positive mostly because of our interest in Rudolph's conjecture: algebraic knots are iterated torus knots with \emph{positive} cabling parameters. \item We use that the~$q_{i,\ell_i}$ (the last cabling parameters of the knots appearing in a given linear combination) are prime in order to obtain the decomposition in~\eqref{eq:DecompoBranchedCover} and to ensure that~$\mathbb{F}_{q_{i,\ell_i}}$ is a field. \end{enumerate} Summarising, our assumptions are made for technical reasons: we have so far not encountered linear combinations of (algebraically slice) iterated torus knots whose sliceness is not obstructed by some Casson-Gordon invariants. Furthermore, this paper does not fully use the techniques developed in~\cite{BorodzikConwayPolitarczyk} to compute the Casson-Gordon Witt class. Therefore, it would be interesting to study how far these methods can be pushed to investigate Rudolph's conjecture. \subsection{Organisation} This paper is organized as follows. In Section~\ref{sec:branch-covers-torus}, we collect several results on the algebraic topology of the exterior of the torus knot~$T(p,q)$. In Section~\ref{sec:TwistedPolynomial}, we use these results to compute Alexander polynomials of~$T(p,q)$ twisted by metabelian representations. In Section~\ref{sec:Metabolisers}, we review some facts about linking forms. Finally, in Section~\ref{sec:MainTheorem}, we prove Theorem~\ref{thm:Main}. \subsection*{Acknowledgments} AC thanks the MPIM for its financial support and hospitality. MHK was partly supported by the POSCO TJ Park Science Fellowship and by NRF grant 2019R1A3B2067839. WP was supported by the National Science Center grant 2016/22/E/ST1/00040. We wish to thank the CIRM in Luminy for providing excellent conditions where the bulk of this work was carried out. \subsection*{Conventions} Manifolds are assumed to be compact and oriented. Throughout the paper, the~$p$-fold branched cover of a knot~$K$ is denoted~$\Sigma_p(K)$, and~$\lambda_p(K)$ denotes the linking form on~$H_1(\Sigma_p(K))$.
\section{Branched covers of torus knots} \label{sec:branch-covers-torus} The aim of this section is to describe the~$\mathds{Z}[\mathds{Z}_p]$-module structure of~$H_1(\Sigma_p(T(p,q)))$ induced from the~$\mathds{Z}_p$-covering action on~$\Sigma_p(T(p,q))$ when~$q$ is a prime. Let~$E(T(p,q))$ be the complement of the torus knot~$T(p,q)$, and let~$E_p(T(p,q))$ be its~$p$-fold cyclic cover. In Subsection~\ref{sub:KnotGroup}, to set up some notation, we recall the decomposition of~$E(T(p,q))$ coming from the standard genus 1 Heegaard splitting of~$S^3$, as described in \cite[Example 1.24]{Hatcher}. In Subsection~\ref{sub:BranchedCover}, this decomposition of~$E(T(p,q))$ is used to decompose~$E_p(T(p,q))$: after that,~$H_1(\Sigma_p(T(p,q)))$ can be computed via a Mayer-Vietoris sequence argument since~$\Sigma_p(T(p,q))$ is a union of~$E_{p}(T(p,q))$ with a solid torus glued along the torus boundary. \subsection{The homotopy type of~$E(T(p,q))$} \label{sub:KnotGroup} The goal of this subsection is to describe the homotopy type of~$E(T(p,q))$, as well as describe explicit generators for~$\pi_1(E(T(p,q)))$. To achieve this, we follow closely~\cite[Example 1.24]{Hatcher}. \medbreak Consider the standard decomposition~$S^3=S^1\times D^2\cup D^2\times S^1$ and denote~$S^1\times D^2$ and~$D^2\times S^1$ by~$H_1$ and~$H_2$ respectively;~$H_1 \subset \mathds{R}^3$ being the bounded solid torus. We parametrize the~$(p,q)$-torus knot~$T(p,q)$ on the torus~$H_1\cap H_2$ as follows: \begin{equation}\label{equation:torus-knot}T(p,q)=\{(e^{2\pi i pt},e^{2\pi iqt})\mid t\in [0,1]\}\subset S^1\times S^1 = H_1\cap H_2. \end{equation} Using this description of~$T(p,q)$, for each~$x\in S^1$, we see that~$T(p,q)$ intersects~$\{x\}\times D^2\subset H_1$ in~$p$ equi-distributed points of~$\{x\}\times \partial D^2$; see Figure~\ref{figure:equi-distributedpoints} for~$p=3$. 
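The equidistribution of these $p$ intersection points can be checked numerically. In the sketch below (our own illustration, not from the paper), the $p$ parameter values $t$ with $e^{2\pi i p t}=x$ are $t=(t_0+k)/p$ for $k=0,\ldots,p-1$, and we record the corresponding second coordinates $e^{2\pi i q(t_0+k)/p}$ as angles:

```python
from math import gcd

def fibre_angles(p, q, t0=0.17):
    """Angles (as fractions of a full turn) of the p points in which the
    curve {(e^{2 pi i p t}, e^{2 pi i q t})} meets the disc {x} x D^2
    over the point x = e^{2 pi i p t0}."""
    assert gcd(p, q) == 1
    # the p parameters mapping to x are t = (t0 + k)/p for k = 0, ..., p-1
    return sorted((q * (t0 + k) / p) % 1.0 for k in range(p))

def gaps(angles):
    """Cyclic gaps between consecutive angles; these are all equal to 1/p
    exactly when the points are equi-distributed on the circle."""
    return [round((angles[(i + 1) % len(angles)] - angles[i]) % 1.0, 9)
            for i in range(len(angles))]
```

For $p=3$, $q=5$, the three cyclic gaps all equal $1/3$ (up to rounding); since $\gcd(p,q)=1$, the residues $qk \bmod p$ run over all of $\mathds{Z}_p$, which is exactly the equi-distribution visible in Figure~\ref{figure:equi-distributedpoints}.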
\begin{figure}[!htb] \centering \includegraphics{Figure1} \caption{On the left hand side: the intersection~$T(p,q)\cap (\{x\}\times D^2)$. On the right hand side: the complement~$H_1\smallsetminus T(p,q)$ deformation retracts onto a 2-complex~$X_p$.} \label{figure:equi-distributedpoints} \end{figure} As depicted in the right hand side of Figure~\ref{figure:equi-distributedpoints}, the complement~$H_1\smallsetminus T(p,q)$ deformation retracts onto a 2-complex~$X_p \subset H_1$ which is the mapping cylinder of the degree~$p$ map~$f_p\colon S^1\to c_1$, where~$c_1$ is the core circle of~$H_1$. The same argument shows that~$H_2\smallsetminus T(p,q)$ deformation retracts onto the mapping cylinder~$X_q$ of the degree~$q$ map~$f_q\colon S^1\to c_2$, where~$c_2$ is the core circle of~$H_2$. By perturbing~$X_p$ near~$H_1\cap H_2$, we can arrange that~$X_p$ and~$X_q$ match up. Next, let~$X_{p,q}$ be the union of~$X_p$ and~$X_q$. Note that~$X_{p,q}$ is homeomorphic to the double mapping cylinder of the maps~$f_p$ and~$f_q$, defined by \[X_{p,q}:= \bigl(S^1\times[0,1]\sqcup c_1 \sqcup c_2\bigr)/{\sim},\] where~$(z,0)\sim f_p(z)$ and~$(z,1)\sim f_q(z)$ for all~$z\in S^1$ (see Figure~\ref{figure:doublemappingcylinder}). By van Kampen's theorem, \[\pi_1(X_{p,q})\cong \langle c_1,c_2\mid c_1^p=c_2^q\rangle.\] Summarizing, we have the following proposition which is implicit in \cite[Example 1.24]{Hatcher}: \begin{proposition}[{\cite[Example 1.24]{Hatcher}}]\label{prop:complement-torus-knot} There is a deformation retraction~$E(T(p,q))\to X_{p,q}$ sending~$H_1\smallsetminus T(p,q)$ and~$H_2\smallsetminus T(p,q)$ to~$X_p$ and~$X_q$. In particular,~$\pi_1(E(T(p,q)))\cong \langle c_1,c_2\mid c_1^p=c_2^q\rangle$ where~$c_i$ is the core circle of~$H_i$ for~$i=1,2$.
\end{proposition} \subsection{The computation of~$H_1(\Sigma_p(T(p,q)))$ as a~$\mathds{Z}[\mathds{Z}_p]$-module} \label{sub:BranchedCover} In this subsection, we describe the~$\mathds{Z}[\mathds{Z}_p]$-module structure of~$H_1(\Sigma_p(T(p,q)))$. To do so, we first study the~$p$-fold cyclic covering map~$\pi\colon E_p(T(p,q))\to E(T(p,q))$, then we compute~$\pi_1(E_p(T(p,q)))$, and finally we describe~$H_1(\Sigma_p(T(p,q)))$. \medbreak We first use Subsection~\ref{sub:KnotGroup} to describe a deformation retract of~$E_p(T(p,q))$. Using~\eqref{equation:torus-knot}, we see that the torus knot~$T(p,q)$ links the core circles~$c_1$ and~$c_2$ respectively~$q$ and~$p$ times. Consequently,~$c_1$ and~$c_2$ are homologous to~$q\mu$ and~$p\mu$ in~$H_1(E(T(p,q)))$, where~$\mu=c_1^kc_2^l$ is a meridian of~$T(p,q)$ and~$qk+pl=1$. We use~$(X_{p,q})_p$ to denote the pre-image~$\pi^{-1}(X_{p,q})$, and observe that by Proposition~\ref{prop:complement-torus-knot},~$E_p(T(p,q))$ deformation retracts onto~$(X_{p,q})_p$. \begin{figure}[!htb] \centering \includegraphics{Figure2-1.pdf} \caption{The double mapping cylinder~$X_{p,q}$ obtained by gluing~$S^1\times [0,1]$ with the circles~$c_1$ and~$c_2$ by the degree~$p$ and~$q$ maps~$f_p$ and~$f_q$.} \label{figure:doublemappingcylinder} \end{figure} We describe~$\pi_1(E_p(T(p,q)))$ by studying the homotopy type of~$(X_{p,q})_p$. The (restricted) covering map~$\pi\colon (X_{p,q})_p\to X_{p,q}$ corresponds to the homomorphism~$\pi_1(X_{p,q})\to \mathds{Z}_p$ sending~$c_1$ to~$q\in \mathds{Z}_p$ and~$c_2$ to~$0\in \mathds{Z}_p$. We use~$\pi_*\colon \pi_1( (X_{p,q})_p)\to \pi_1(X_{p,q})$ to denote the induced map. Let~$a$ be the pre-image~$\pi^{-1}(c_1)$ and let~$b_0,\ldots,b_{p-1}$ be the components of the pre-image~$\pi^{-1}(c_2)$; we choose the indices of the~$b_i$'s so that \begin{equation} \label{eq:bi} \pi_*(b_i)=\mu^i c_2\mu^{-i} \quad \text{ for } i=0,\ldots, p-1.
\end{equation} Since~$\pi$ is a covering map, the induced map~$\pi_*\colon \pi_1( (X_{p,q})_p)\to \pi_1(X_{p,q})$ is injective. For this reason, we shall often identify~$b_i$ with~$\mu^i c_2\mu^{-i}$. Since~$X_{p,q}$ is a double mapping cylinder, so is~$(X_{p,q})_p$. \begin{figure}[htb!] \centering \includegraphics{Figure2-2.pdf} \caption{The~$p$-fold cyclic cover~$(X_{p,q})_p$ of~$X_{p,q}$ is also a double mapping cylinder, where~$f_1$ and~$f_q$ denote the degree~$1$ and the degree~$q$ maps, respectively.} \label{figure:(X_{p,q})_p} \end{figure} More precisely, as illustrated in Figure~\ref{figure:(X_{p,q})_p}, we have \[(X_{p,q})_p=\Bigl(\bigsqcup_{i=0}^{p-1}S^1_i\times [0,1] \sqcup a\sqcup b_0\sqcup\cdots\sqcup b_{p-1}\Bigr)/{\sim},\] where each~$S^1_i\times \{0\}$ is identified with the circle~$a$ by the identity map, and~$S^1_i\times \{1\}$ is identified with the circle~$b_i$ by the degree~$q$ map. By van Kampen's theorem, we deduce that \[\pi_1((X_{p,q})_p)\cong \langle b_{0},b_{1},\ldots,b_{p-1} \mid b_{i}^{q} = b_{j}^{q} \text{ for } 0 \leq i \neq j \leq p-1 \rangle.\] Since~$E_p(T(p,q))$ deformation retracts onto~$(X_{p,q})_p$, we obtain the following proposition. \begin{proposition} \label{prop:FundamentalGroupCyclicCover} Let~$\pi\colon E_p(T(p,q))\to E(T(p,q))$ be the~$p$-fold cyclic covering and let~$b_0,b_1,\ldots,b_{p-1}$ be the homotopy classes of the components of~$\pi^{-1}(c_2)$ so that~$\pi_*(b_i)=\mu^i c_2\mu^{-i}$. Then \[\pi_{1}(E_p(T(p,q))) = \langle b_{0},b_{1},\ldots,b_{p-1} \mid b_{i}^{q} = b_{j}^{q} \text{ for } 0 \leq i \neq j \leq p-1 \rangle.\] \end{proposition} Next, we use this description of~$\pi_1(E_p(T(p,q)))$ to obtain generators of the finite abelian group~$H_1(\Sigma_p(T(p,q)))=TH_1(E_p(T(p,q)))$. First, note that Proposition~\ref{prop:FundamentalGroupCyclicCover} shows \linebreak that~\(H_{1}(E_p(T(p,q))) \cong \mathds{Z} \oplus \mathds{Z}_{q}^{p-1}\) has generators \(b_0,b_1,\ldots, b_{p-1}\) and relations~$q b_{i} = q b_{j}$ for each~$i,j$.
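As a machine sanity check on this computation (our own verification sketch, not part of the paper's argument), one can confirm that the abelianisation of $\langle b_0,\ldots,b_{p-1}\mid b_i^q=b_j^q\rangle$ is $\mathds{Z}\oplus\mathds{Z}_q^{p-1}$: the relation matrix with rows $q(e_i-e_{i+1})$ has invariant factors $q,\ldots,q$ ($p-1$ times), computed below via determinantal divisors (gcds of $k\times k$ minors):

```python
from itertools import combinations
from math import gcd
from functools import reduce

def det(M):
    # integer determinant by Laplace expansion (fine for small matrices)
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def invariant_factors(M):
    """Invariant factors of an integer matrix via determinantal divisors:
    d_k = gcd of all k x k minors, and the k-th invariant factor is
    d_k / d_{k-1}."""
    rows, cols = len(M), len(M[0])
    divisors = [1]
    for k in range(1, min(rows, cols) + 1):
        minors = [det([[M[r][c] for c in cs] for r in rs])
                  for rs in combinations(range(rows), k)
                  for cs in combinations(range(cols), k)]
        d = reduce(gcd, map(abs, minors))
        if d == 0:  # rank reached: no further nonzero minors
            break
        divisors.append(d)
    return [divisors[k] // divisors[k - 1] for k in range(1, len(divisors))]

def relation_matrix(p, q):
    # rows q*(e_i - e_{i+1}) coming from the relations q b_i = q b_{i+1}
    return [[q if j == i else (-q if j == i + 1 else 0) for j in range(p)]
            for i in range(p - 1)]
```

For instance, `relation_matrix(3, 5)` has invariant factors `[5, 5]`, so the cokernel of the relation matrix is $\mathds{Z}_5\oplus\mathds{Z}_5\oplus\mathds{Z}$, matching the description of $H_1(E_3(T(3,5)))$ above.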
In the remainder of this section, we describe a set of generators that will be more convenient for the twisted Alexander polynomial computations of Section~\ref{sec:TwistedPolynomial}. \begin{remark} \label{rem:LiftMeridians} While the meridian~$\mu$ of~$T(p,q)$ does not lift to~$E_p(T(p,q))$, a loop representing~$\mu^p$ does. Since the projection induced map~$\pi_* \colon \pi_1(E_p(T(p,q))) \to \pi_1(E(T(p,q)))$ is injective, we slightly abuse notations and also write~$\mu^p$ for the homotopy class of this lift in~$\pi_1(E_p(T(p,q)))$. \end{remark} In what follows, we make no notational distinction between elements in~$\pi_1(E_p(T(p,q)))$ and elements in~$H_1(E_p(T(p,q)))$, despite switching from multiplicative to additive notations. In some rare instances, we will also use the multiplicative notation in homology. Keeping this in mind, for~$i=0,\ldots, p-1~$, we consider~$\mu^{-p}b_i$ in~$\pi_1(E_p(T(p,q)))$ and~$x_i:=b_i-\mu^p$ in~$H_1(E_p(T(p,q)))$. The next proposition describes the homology group~$H_{1}(\Sigma_{p}(T(p,q)))$ as a~$\mathds{Z}[\mathds{Z}_p]$-module. \begin{proposition}\label{prop:homology-group-cover} The abelian group \(H_{1}(\Sigma_{p}(T(p,q))) \cong \mathds{Z}_{q}^{p-1}\) is generated by the~$x_{i} =b_i-\mu^p$, and these elements satisfy the following relations: \begin{enumerate} \item \(x_{0}+x_{1}+\cdots+x_{p-1}=0\), \item \(x_{i} = t^{i}x_{0}\), where \(t\) denotes the covering transformation of \(\Sigma_{p}(T(p,q))\). \end{enumerate} In particular, there exists an isomorphism of \(\mathds{Z}[t^{\pm1}]\)-modules \[H_{1}(\Sigma_{p}(T(p,q))) \cong \mathds{Z}_{q}[t]/(1+t+t^{2}+\cdots+t^{p-1}).\] \end{proposition} \begin{proof} The proof has four steps. 
Firstly, we establish a criterion for an element in~$H_1(\Sigma_p(T(p,q)))$ to be torsion; secondly, we prove that the~$x_i$ are torsion; thirdly, we show that the~$x_i$ generate~$TH_1(\Sigma_p(T(p,q)))$ as an abelian group; fourthly and finally, we prove that the~$x_i$ satisfy the two identities stated in the lemma. We assert that an element \(x =\sum_{i=0}^{p-1} a_ib_i\) in~$H_{1}(E_p(T(p,q)))$ is torsion if and only if \(\sum_{i=0}^{p-1} a_i=0\). The map~$\pi_* \colon H_1(E_p(T(p,q))) \to H_1(E(T(p,q)))$ maps~$ TH_{1}(E_p(T(p,q)))$ to zero and maps the infinite cyclic summand isomorphically onto~$p\mathds{Z} \cong \mathds{Z} \langle c_2 \rangle$.\footnote{For any knot~$K$ and prime power~$n$, one has the decomposition~$H_1(E_n(K))=TH_1(E_n(K)) \oplus \mathds{Z}$, where the~$\mathds{Z}$ summand is generated by a lift of the~$n$-fold power of the meridian.} In particular, a class~$x \in H_1(E_p(T(p,q)))$ is torsion if and only if~$\pi_*(x)=0$. On the other hand, using Proposition~\ref{prop:FundamentalGroupCyclicCover}, we deduce that~$\pi$ induces the following map on homology, concluding the proof of the assertion: \begin{align*} \pi_* \colon H_{1}(E_p(T(p,q))) &\to p\mathds{Z} \subset \mathds{Z}=H_1(E(T(p,q)))\\ \sum_{i=0}^{p-1} a_ib_i&\mapsto \sum_{i=0}^{p-1} a_i. \end{align*} We move on to the second step: we prove that the homology classes~$x_0,\ldots,x_{p-1}$ are torsion. Using the criterion, we must show that~$\pi_*(x_i)=0$ for each~$i$. Since~$\pi_*(b_i)=1$, this reduces to showing that~$\pi_*(\mu^p)=1$. We start by computing the abelianization of~$\mu^p$. Since~$\mu=c_1^kc_2^l$, we notice that in \(\pi_1(E_p(T(p,q)))\), the following equality holds: \begin{equation} \label{eq:Mup} \mu^{p} = (c_{1}^{k}c_{2}^lc_{1}^{-k}) \cdot (c_{1}^{2k} c_{2}^{l} c_{1}^{-2k}) \cdots (c_{1}^{(p-1)k} c_{2}^{l} c_{1}^{-(p-1)k}) c_{1}^{pk} c_{2}^{l}.
\end{equation} In order to compute the abelianisation of this expression, we claim that for any~\(0 \leq s \leq p-1\) and any~$k$, the equation~$\mu^{s} c_{2} \mu^{-s} = c_{1}^{ks} c_{2} c_{1}^{-ks}$ holds in~\(H_{1}(E_p(T(p,q))) = \pi_1(E_p(T(p,q)))^{ab}\). This claim is a consequence of the following direct computation in \(\pi_1(E_p(T(p,q)))\): \[\mu^{s} c_{2} \mu^{-s} = \left(\prod_{i=1}^{s-1} c_{1}^{ki} c_{2}^{l} c_{1}^{-ki}\right) \cdot \left( c_{1}^{ks} c_{2} c_{1}^{-ks} \right) \cdot \left(\prod_{i=1}^{s-1} c_{1}^{ki} c_{2}^{-l} c_{1}^{-ki}\right).\] Using consecutively~\eqref{eq:Mup}, the equation~$\mu^{s} c_{2} \mu^{-s} = c_{1}^{ks} c_{2} c_{1}^{-ks}$ that we just established, and the identification~$b_i=\mu^ic_2\mu^{-i}$ from~\eqref{eq:bi} (as well as the presentation in Proposition~\ref{prop:complement-torus-knot} and~\(qk+pl=1\)), we obtain the following sequence of equalities in~$H_1(E_p(T(p,q)))$: \begin{align} \label{eq:mup} \mu^{p} &=(c_{1}^{k}c_{2}^lc_{1}^{-k}) \cdot (c_{1}^{2k} c_{2}^{l} c_{1}^{-2k}) \cdots (c_{1}^{(p-1)k} c_{2}^{l} c_{1}^{-(p-1)k}) c_{1}^{pk} c_{2}^{l} \nonumber \\ &=(\mu c_{2}^l \mu^{-1})(\mu^2 c_{2}^l \mu^{-2})\cdots (\mu^{(p-1)} c_{2}^{l} \mu^{-(p-1)})c_{1}^{pk} c_{2}^{l} \nonumber \\ &= l (b_{0}+b_{1}+\cdots+b_{p-1}) + qkb_{0}. \end{align} As~$\pi_*(b_i)=1$ for each~$i$, this implies that \(\pi_*(\mu^{p})=1\). It follows that \(\pi_*(x_{i}) = \pi_*(b_{i})-\pi_*(\mu^{p})=0\), and therefore each of the~$x_i$ is torsion. This concludes the second step of the proof. Thirdly, we show that every element of~$TH_1(E_p(T(p,q)))$ can be written as a linear combination of the~$x_i$ for~$i=0,1,\ldots,p-1$: given \(x = \sum_{i=0}^{p-1}a_ib_i\), adding and subtracting~$\mu^p$, using~$\sum_{i=0}^{p-1}a_i=0$ (which holds thanks to the first step) and the definition of~$x_i$, we obtain \begin{align*} x&=\sum_{i=0}^{p-1}a_ib_i =\sum_{i=0}^{p-1}a_i\mu^{p} +\sum_{i=0}^{p-1}a_i(b_i-\mu^{p} ) =\sum_{i=0}^{p-1}a_ix_i.
\end{align*} Fourthly and finally, we establish the relations~\(x_{0}+x_{1}+\cdots+x_{p-1}=0\) and~$x_i=t^ix_0$. The latter relation is clear (since~$b_{i} = t^i b_{0}$ and \(t \mu^{p} = \mu^{p}\)) and so we focus on the former. Using consecutively~\eqref{eq:mup}, the relation \(qb_{i}=qb_{j}\), and the fact that~$pl+qk=1$, we notice that the following equation holds in~\(H_{1}(E_p(T(p,q)))\): \begin{align*} p \mu^p &= pl(b_{0}+b_{1}+\cdots+b_{p-1}) + pqkb_{0} \\ &= pl(b_{0}+b_{1}+\cdots+b_{p-1}) + qk(b_{0}+b_{1}+\cdots+b_{p-1}) \\ &= (b_{0}+b_{1}+\cdots+b_{p-1}). \end{align*} The conclusion now promptly follows from the definition of the~$x_i$, establishing the proposition. \end{proof} Assume that~$q$ is a prime. In this case, \(V_{p,q}:=H_{1}(\Sigma_{p}(T(p,q)))\) becomes an~$\mathbb{F}_q$-vector space. The covering action~$t$ is then an~$\mathbb{F}_q$-linear endomorphism of \(V_{p,q}\). \section{Twisted polynomials of torus knots} \label{sec:TwistedPolynomial} In this section, we compute the Alexander polynomial of the~$0$-framed surgery~$M_{T(p,q)}$ twisted by a metabelian representation~$\alpha_{T(p,q)}(p,\chi) \colon \pi_1(M_{T(p,q)}) \to GL_p(\mathbb{C}[t^{\pm 1}])$ that frequently appears in Casson-Gordon theory~\cite{HeraldKirkLivingston}. In Subsection~\ref{sub:MetabRep}, we recall the definition of~$\alpha_K(p,\chi)$ for a general knot~$K$; in Subsection~\ref{sub:MetabForTorusKnot}, we restrict to torus knots, and in Subsection~\ref{sub:TwistedPolynomial}, we compute the relevant twisted Alexander polynomials. \subsection{The metabelian representation~$\alpha_K(p,\chi)$} \label{sub:MetabRep} In this subsection, given a knot~$K$ and a positive integer~$p$, we recall the definition of the representation~$\alpha_K(p,\chi)\colon \pi_1(M_K) \to GL_p(\mathbb{C}[t^{\pm 1}])$ from~\cite{HeraldKirkLivingston}. In what follows,~$E_K$ denotes the exterior of~$K$ and~$M_K$ denotes its~$0$-framed surgery.
Finally, we use~$\xi_m:=e^{2\pi i/m}$ to denote the primitive~$m$-th root of unity. \medbreak We use~$H_1(E_K;\mathds{Z}[t_{K}^{\pm1}])\cong \pi_1(E_K)^{(1)}/\pi_1(E_K)^{(2)}$ to denote the Alexander module of~$K$. In what follows, we shall frequently identify~$H_1(\Sigma_p(K))$ with~$H_1(E_K;\mathds{Z}[t_K^{\pm 1}])/(t_K^p-1)$, as for instance in~\cite[Corollary 2.4]{FriedlEta}. Consider the following composition of canonical projections: \begin{equation} \label{eq:qK} q_K \colon \pi_{1}(M_{K})^{(1)} \to H_1(E_K;\mathds{Z}[t_K^{\pm 1}]) \to H_1(\Sigma_p(K)). \end{equation} Use~$\phi_K \colon \pi_1(E_K) \to H_1(E_K;\mathds{Z})\cong \mathds{Z}=\langle t_K \rangle$ to denote the abelianization homomorphism, and fix an element~$\mu_{K}$ in~$\pi_1(E_K)$ such that~$\phi_K(\mu_{K})=t_K$. Note that for every~$g \in \pi_1(E_K)$, we have~$\phi_K(\mu_K^{-\phi_K(g)}g)=1$. Since~$\phi_K$ is the abelianization map, we deduce that~$\mu_K^{-\phi_K(g)}g$ belongs to~$\pi_1(E_K)^{(1)}$. Combining these notations, we consider the following representation: \begin{align} \label{eq:Matrix} \alpha_K(p,\chi) &\colon \pi_1(E_K) \to ~\operatorname{GL}_p(\mathbb{C}[t^{\pm 1}]) \nonumber \\ \alpha_K(p,\chi)(g)&= \begin{pmatrix} 0& 1 & \cdots &0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ t & 0 & \cdots & 0 \end{pmatrix}^{\phi_K(g)} \begin{pmatrix} \xi_{m}^{\chi(q_K(\mu_K^{-\phi_K(g)}g))} & 0 & \cdots &0 \\ 0 & \xi_{m}^{\chi(t_K \cdot q_K(\mu_K^{-\phi_K(g)}g))} & \cdots &0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \xi_{m}^{\chi(t_K^{p-1} \cdot q_K(\mu_K^{-\phi_K(g)}g))} \end{pmatrix} \nonumber \\ &=:A_p(t)^{\phi_K(g)}\operatorname{diag}\left(\xi_m^{\chi(q_K(\mu_K^{-\phi_K(g)}g))},\ldots,\xi_m^{\chi(t_K^{p-1} \cdot q_K(\mu_K^{-\phi_K(g)}g))}\right).
\end{align} Note that~$\alpha(p,\chi)$ can equally well be defined on~$\pi_1(M_K)$ instead of~$\pi_1(E_K)$: the definition can be adapted \textit{verbatim}, and we use the same notation: $$ \alpha_K(p,\chi) \colon \pi_1(M_K) \to ~\operatorname{GL}_p(\mathbb{C}[t^{\pm 1}]). ~$$ A closely related observation is that~$\alpha(p,\chi)$ is a metabelian representation and therefore vanishes on the longitude of~$K$; this also explains why~$\alpha_K(p,\chi)$ descends to ~$\pi_1(M_K)~$. \subsection{An explicit description of~$\alpha_{T(p,q)}(p,\chi)$.} \label{sub:MetabForTorusKnot} We use the presentation of~$\pi_1(E_{T(p,q)})$ from Proposition~\ref{prop:complement-torus-knot} to describe the representation~$\alpha_{T(p,q)}(p,\chi)$. In this subsection, we will often set~$K:=T(p,q)$ in order to avoid cumbersome notations such as~$q_{T(p,q)}$. \medbreak We recall the definition of the generators~$x_0,\ldots,x_{p-1}$ of~$H_1(\Sigma_p(K)) \cong \mathds{Z}_q^{p-1}$ described in Proposition~\ref{prop:homology-group-cover}, referring to Section~\ref{sec:branch-covers-torus} for further details. Using the notations of that section, we set~$x_i=b_i-\mu^p$, where~$\mu$ is a meridian of~$K$. Thinking of~$x_i$ as the abelianisation of~$\mu^{-p}b_i$, and using Proposition~\ref{prop:FundamentalGroupCyclicCover} to identify~$b_i$ with~$\mu^ic_2\mu^{-i}$, we have \begin{equation} \label{eq:PracticalForCharacter} t_K^iq_K(\mu^{-p}c_2) =q_K(\mu^{-p}\mu^i c_2\mu^{-i}) =q_K(\mu^{-p}b_i) =x_i. \end{equation} Recall furthermore that Proposition~\ref{prop:homology-group-cover} also established the relations~$x_0+\cdots+x_{p-1}=0$ as well as~$t_Kx_i=x_{i+1}$. The next result follows immediately from these considerations. \begin{lemma} \label{lem:Characaters} Let~$p,q>0$ be two coprime integers. 
The abelian group of characters on \(H_1(\Sigma_{p}(T(p,q)))\) is isomorphic to \[\{ \mathbf{a}:=(a_{1},\ldots,a_{p}) \in \mathds{Z}_{q}^{p} \ | \ a_1+\cdots+a_p=0\}.\] The isomorphism maps a character~$\chi$ to~$(\chi(x_0),\ldots,\chi(x_{p-1}))$, and we write~$\chi_{\mathbf{a}}$ for the character associated to~$\mathbf{a}$. \end{lemma} Recall that Proposition~\ref{prop:complement-torus-knot} described a two-generator one-relation presentation for the knot group~$\pi_1(E_{T(p,q)})$: the generators were denoted by~$c_1$ and~$c_2$, and the unique relator was~$c_1^pc_2^{-q}$. The next proposition describes the image of these generators under ~$\alpha(p,\chi):=\alpha_{T(p,q)}(p,\chi)$. This will be useful in Proposition~\ref{prop:TwistedAlexanderPolynomial} when we compute the twisted Alexander polynomial of~$E_{T(p,q)}$. \begin{proposition} \label{prop:alphac1c2} Let~$p,q>0$ be two coprime integers. For a character \(\chi=\chi_{\mathbf{a}}\) on \(H_1(\Sigma_p(T(p,q)))\), the representation~$\alpha(p,\chi)$ is conjugate to a representation~$\alpha'(p,\chi)$ such that \begin{align*} \alpha'(p,\chi)(c_{2}) &= t \cdot \operatorname{diag}(\xi_q^{a_{1}},\ldots,\xi_q^{a_{p}}), \nonumber \\ \alpha'(p,\chi)(c_{1}) &= A_{p}(t)^{q}. \end{align*} \end{proposition} \begin{proof} We first compute~$\alpha(p,\chi)(c_{2})$. We know that~$\phi_K(c_2)=p$ and~$A_p(t)^p=t \cdot \operatorname{id}$. In order to compute the diagonal matrix which appears in the definition of~$\alpha(p,\chi)(c_{2})$ (recall~\eqref{eq:Matrix}), we use~\eqref{eq:PracticalForCharacter} and Lemma~\ref{lem:Characaters} to obtain~$\chi(t_K^{i-1} q_K(\mu^{-p}c_2))=\chi(x_{i-1})=a_i$.
The first assertion follows: \[\alpha(p,\chi)(c_{2}) = t \cdot \operatorname{diag}(\xi_q^{\chi(q_K(\mu^{-p}c_2))},\ldots,\xi_q^{\chi(t_K^{p-1}q_K(\mu^{-p}c_2))})=t \cdot \operatorname{diag}(\xi_q^{a_{1}},\ldots,\xi_q^{a_{p}}).\] Next, we study the conjugacy class of~$\alpha(p,\chi)(c_{1})$: we must find an invertible matrix \(X\) such that \begin{align} X \alpha(p,\chi)(c_{1}) X^{-1} &= A_{p}(t)^{q}, \label{eq:conjugation-c1} \\ X \alpha(p,\chi)(c_{2}) X^{-1} &= t \cdot \operatorname{diag}(\xi_q^{a_{1}},\ldots,\xi_q^{a_{p}}). \label{eq:conjugation-c2} \end{align} For \(v \in H_{1}(\Sigma_{p}(K))\), we define~$\widetilde{\alpha}(v) := \operatorname{diag}(\xi_{q}^{\chi(v)},\xi_{q}^{\chi(t_{K}v)},\ldots,\xi_{q}^{\chi(t_{K}^{p-1}v)})~$. Observe that if we set~\(X := \widetilde{\alpha}(z)\), then~\eqref{eq:conjugation-c2} is satisfied for any \(z \in H_{1}(\Sigma_{p}(K))\): indeed~$\alpha(p,\chi)(c_2)$ commutes with~$X$ since both are diagonal. Therefore, we just have to establish the existence of a \(z \in H_{1}(\Sigma_{p}(K))\) such that~\eqref{eq:conjugation-c1} is satisfied for~$X= \widetilde{\alpha}(z)$. First, for any \(x \in H_{1}(\Sigma_{p}(K))\) a computation shows that the following equation holds: \[\widetilde{\alpha}(x) A_{p}(t)^{q} \widetilde{\alpha}(x)^{-1} = A_{p}(t)^{q} \widetilde{\alpha}((t_{K}^{-q}-1)x).\] Define \(y := q_{K}(\mu^{-q}c_{1}) \) so that~$\alpha(p,\chi)(c_{1}) = A_{p}(t)^{q} \widetilde{\alpha}(y).$ Consequently, if we set~$X:=\widetilde{\alpha}(z)$ (for any~$z \in H_1(\Sigma_p(K))$), use the definition of~$y$, the fact that~$\widetilde{\alpha}(y)$ and~$X$ commute (both are diagonal), and the aforementioned identity, then we obtain \begin{align*} X \alpha(p,\chi)(c_{1}) X^{-1} =X A_{p}(t)^{q} \widetilde{\alpha}(y) X^{-1} &=X A_{p}(t)^{q}X^{-1} \widetilde{\alpha}(y) \\ &= A_{p}(t)^{q} \widetilde{\alpha}((t_{K}^{-q}-1)z+y). \end{align*} Therefore, if we choose \(z := -(t_{K}^{-q}-1)^{-1}y\), then~\eqref{eq:conjugation-c1} holds. 
For this to make sense however, we must argue that~$(t_{K}^{-q}-1)$ is an automorphism of~$H_{1}(\Sigma_{p}(K))$. This is indeed the case: as~$t_K-1$ is an automorphism of~$H_{1}(\Sigma_{p}(K))$, the inverse is given by~$(t_{K}^{-1}-1)^{-1} (1+t_{K}^{-q}+t_{K}^{-2q}+\cdots+t_{K}^{-(k-1)q})$, where~$qk \equiv 1$ mod~$p$. Such a~$k$ exists because~$p$ and~$q$ are coprime. We have therefore found~$X$ such that~\eqref{eq:conjugation-c1} and~\eqref{eq:conjugation-c2} hold, and this concludes the proof of the proposition. \end{proof} \subsection{The computation of the twisted polynomial} \label{sub:TwistedPolynomial} In this subsection, we compute the twisted Alexander polynomial of the~$0$-framed surgery~$M_{T(p,q)}$ with respect to~$\alpha(p,\chi)$. \medbreak Recall that given a space~$X$ and a representation~$\rho \colon \pi_1(X) \to GL_p(\mathbb{C}[t^{\pm 1}])$, the \emph{twisted Alexander polynomial}~$\Delta_{1}^{\rho}(X)$ is defined as the order of the twisted Alexander module~$H_{1}(X; \mathds{C}[t^{\pm1}]^{p}_{\rho})$. More generally, we write~$\Delta_{i}^{\rho}(X)$ for the order of the~$\mathbb{C}[t^{\pm 1}]$-module~$H_{i}(X; \mathds{C}[t^{\pm1}]^{p}_{\rho})$. Recall that the~$\Delta_{i}^{\rho}(X)$ are defined up to multiplication by units of~$\mathbb{C}[t^{\pm 1}]$. The next proposition describes~$\Delta_{1}^{\alpha(p,\chi)}(E_{T(p,q)})$, where~$E_{T(p,q)}$ denotes the exterior of~$T(p,q)$. \begin{proposition} \label{prop:TwistedAlexanderPolynomial} Let~$p,q>0$ be coprime integers. For a character~$\chi=\chi_{\mathbf{a}} \colon H_1(\Sigma_p(T(p,q))) \to \mathds{Z}_q$, the metabelian twisted Alexander polynomial of~$E_{T(p,q)}$ is given by ~$$\Delta_{1}^{\alpha(p,\chi)}(E_{T(p,q)})= \frac{(1-t^{q})^{p-1}}{(t\xi_q^{a_{1}}-1)(t\xi_q^{a_{2}}-1) \cdots (t\xi_q^{a_{p}}-1)}.~$$ \end{proposition} \begin{proof} We use~$\tau^{\alpha(p,\chi)}(E_{K})$ to denote the Reidemeister torsion of a knot exterior~$E_{K}$ twisted by~$\alpha(p,\chi):=\alpha_K(p,\chi)$.
We refer to~\cite{FriedlVidussiSurvey} for further references on the subject, but simply note that~$\tau^{\alpha(p,\chi)}(E_{K})$ is defined since the chain complex~$C_*(E_{K};\mathds{C}(t)^p)$ of left~$\mathds{C}(t)$-modules is acyclic~\cite[Corollary after Lemma~4]{CassonGordon1}. Since~$E_{K}$ has torus boundary, by \cite[Proposition 2, item 5]{FriedlVidussiSurvey}, the twisted Reidemeister torsion and twisted Alexander polynomial are related~by ~$$\tau^{\alpha(p,\chi)}(E_{K})=\frac{\Delta_1^{\alpha(p,\chi)}(E_{K})}{\Delta_0^{\alpha(p,\chi)}(E_{K})}.$$ Since~$\Delta_{0}^{\alpha(p,\chi)}(E_{K})= 1$ for every knot~$K$~\cite[Lemma 8.1]{BorodzikConwayPolitarczyk}, we are reduced to computing~$\tau^{\alpha(p,\chi)}(E_{T(p,q)}).$ By~\cite[Theorem A]{Kitano}, this torsion invariant can be expressed via Fox calculus. In our case, using the presentation of~$\pi_1(E_{T(p,q)})$ resulting from Proposition~\ref{prop:complement-torus-knot}, we obtain \begin{equation} \label{eq:AlexanderFox} \Delta_{1}^{\alpha(p,\chi)}(E_{T(p,q)}) =\tau^{\alpha(p,\chi)}(E_{T(p,q)})= \frac{\det\left(\alpha(p,\chi)\left( \frac{\partial (c_{1}^{p}c_{2}^{-q})}{\partial c_{1}} \right)\right)}{\det\left( \alpha(p,\chi)(c_{2})-\operatorname{id} \right)}. \end{equation} Since this expression does not depend on the conjugacy class of~$\alpha(p,\chi)$, we can work with the representation~$\alpha'(p,\chi)$ described in Proposition~\ref{prop:alphac1c2}. Using the first item of Proposition~\ref{prop:alphac1c2}, the denominator of~\eqref{eq:AlexanderFox} is given by the formula \begin{equation} \label{eq:Denominator} \det(\alpha(p,\chi)(c_{2})-\operatorname{id}) = \det (\operatorname{diag}(t \xi_q^{a_{1}}-1,t \xi_q^{a_{2}}-1,\ldots,t \xi_q^{a_{p}}-1)) = \prod_{i=1}^{p}(t\xi_q^{a_{i}}-1). \end{equation} We will now compute the numerator of~\eqref{eq:AlexanderFox} and show that it equals~$(1-t^q)^{p-1}$.
Recall from~\eqref{eq:Matrix} that for~$g \in \pi_1(E_K)$, the metabelian representation~$\alpha_K(p,\chi)$ is given by~$\alpha_K(p,\chi)(g)=A_p(t)^{\phi_K(g)}D_g$, where~$D_g$ is the diagonal matrix with~$\xi_q^{\chi(t_K^{i-1} \cdot q_K(\mu_{K}^{-\phi_K(g)}g))}$ as its~$i$-th diagonal component. An inductive argument involving the properties of the Fox derivative shows that \begin{align*} \frac{\partial (c_{1}^{p}c_{2}^{-q})}{\partial c_{1}} &= \frac{\partial c_{1}^{p}}{\partial c_{1}} = 1+c_{1}+c_{1}^{2}+\cdots+c_{1}^{p-1}=:g. \end{align*} We will now apply~$\alpha(p,\chi)$ to~$g$. We recall from Proposition~\ref{prop:alphac1c2} that~$\alpha'(p,\chi)(c_1)=A_p(t)^q$, and we now work over~$\mathds{C}[t^{\pm 1/p}]$. Indeed, as observed in~\cite[page 935]{HeraldKirkLivingston}, in this ring, the matrix~\(A_{p}(t)\) is conjugate to the diagonal matrix \[B_{p}(t) := \operatorname{diag}(t^{1/p},\xi_{p}t^{1/p},\xi_{p}^{2}t^{1/p},\ldots,\xi_{p}^{p-1}t^{1/p}).\] Since~\eqref{eq:AlexanderFox} only depends on the conjugacy class of the representation~$\alpha(p,\chi)$, we can work with~$B_p(t)$ instead of~$A_p(t)$. We use~$\sim$ to denote the conjugacy relation. Since~$B_p(t)$ is diagonal, its powers are easy to compute, and as a consequence, we obtain \begin{align*} \alpha'(p,\chi)(g) &\sim \operatorname{id} + B_{p}(t)^{q} + B_{p}(t)^{2q} + \cdots + B_{p}(t)^{(p-1)q} \\ &= \operatorname{diag}\left( \frac{1-t^{q}}{1-t^{q/p}}, \frac{1-t^{q}}{1-\xi_{p}^{q}t^{q/p}}, \frac{1-t^{q}}{1-\xi_{p}^{2q}t^{q/p}}, \ldots, \frac{1-t^{q}}{1-\xi_{p}^{q(p-1)}t^{q/p}} \right). \end{align*} Taking the determinant of this expression, we deduce that \begin{align} \label{eq:Numerator} \det \left(\alpha(p,\chi)\left( \frac{\partial (c_{1}^{p}c_{2}^{-q})}{\partial c_{1}} \right)\right) = \prod_{j=0}^{p-1}\frac{1-t^{q}}{1-\xi_{p}^{jq}t^{q/p}} = \frac{(1-t^{q})^{p}}{1-t^{q}} = (1-t^{q})^{p-1}.
\end{align} Plugging~\eqref{eq:Denominator} and~\eqref{eq:Numerator} into~\eqref{eq:AlexanderFox} concludes the proof of the proposition. \end{proof} Using Proposition~\ref{prop:TwistedAlexanderPolynomial}, we can compute the twisted polynomial of the~$0$-framed surgery~$M_{T(p,q)}$. \begin{corollary} \label{cor:TwistedAlexanderPolynomial} Let~$p,q>0$ be two coprime integers. For a character~$\chi=\chi_{\mathbf{a}} \colon H_1(\Sigma_p(T(p,q))) \to \mathds{Z}_q$, the metabelian twisted Alexander polynomial of~$M_{T(p,q)}$ is given by ~$$\Delta_{1}^{\alpha(p,\chi)}(M_{T(p,q)})= \frac{(-1)^{p-1}(1-t^{q})^{p-1}}{(t\xi_q^{a_{1}}-1)(t\xi_q^{a_{2}}-1) \cdots (t\xi_q^{a_{p}}-1)(t-1)}.~$$ \end{corollary} \begin{proof} By Proposition~\ref{prop:TwistedAlexanderPolynomial}, we need only show that~$(-1)^{p-1}(t-1)\Delta_{1}^{\alpha(p,\chi)}(M_{K})=\Delta_{1}^{\alpha(p,\chi)}(E_{K})$ for every knot~$K$, where~$\alpha(p,\chi):=\alpha_K(p,\chi)$. Using the equality~$\Delta_{1}^{\alpha(p,\chi)}(E_{K}) =\tau^{\alpha(p,\chi)}(E_{K})$ that was obtained in the proof of Proposition~\ref{prop:TwistedAlexanderPolynomial},~\cite[Lemma~3]{FriedlVidussiSurvey}, as well as~\cite[Proposition 2, item~(8)]{FriedlVidussiSurvey},~\cite[Proposition 5]{FriedlVidussiSurvey}, and the fact that~$\Delta_{0}^{\alpha(p,\chi)}(M_{K})=1$ (by~\cite[Lemma~8.1]{BorodzikConwayPolitarczyk}), we obtain the following sequence of equalities: \begin{align*} \Delta_{1}^{\alpha(p,\chi)}(E_{K}) &=\tau^{\alpha(p,\chi)}(E_{K}) =\det(\alpha(p,\chi)(\mu_K)-\operatorname{id})\tau^{\alpha(p,\chi)}(M_{K}) \\ &=\det(\alpha(p,\chi)(\mu_K)-\operatorname{id})\frac{\Delta_{1}^{\alpha(p,\chi)}(M_{K})}{\Delta_{0}^{\alpha(p,\chi)}(M_{K})\Delta_{2}^{\alpha(p,\chi)}(M_{K})}\\ &=\det(\alpha(p,\chi)(\mu_K)-\operatorname{id})\frac{\Delta_{1}^{\alpha(p,\chi)}(M_{K})}{\Delta_{0}^{\alpha(p,\chi)}(M_{K})\overline{\Delta_{0}^{\alpha(p,\chi)}(M_{K})}} \\ &=\det(\alpha(p,\chi)(\mu_K)-\operatorname{id})\Delta_{1}^{\alpha(p,\chi)}(M_{K}).
\end{align*} It thus remains to show that~$\det(\alpha(p,\chi)(\mu_K)-\operatorname{id})=(-1)^{p-1}(t-1)$: this follows from the definition of~$\alpha(p,\chi)$ (recall~\eqref{eq:Matrix}) since~$\alpha(p,\chi)(\mu_K)=A_p(t)$. This concludes the proof of the corollary. \end{proof} \section{Linking forms and their metabolisers} \label{sec:Metabolisers} This section collects some facts about linking forms and their metabolisers. This will be useful in Section~\ref{sec:MainTheorem} since both the metabelian Blanchfield pairing and $\lambda_p(T(p,q))$ are linking forms. In Subsection~\ref{sub:LinkingForms}, we recall some basics on linking forms and their Witt groups. In Subsection~\ref{sub:Graph}, we prove a result on metabolisers of linking forms of the type~$(V_1 \oplus V_2,\lambda_1 \oplus -\lambda_2)$. \subsection{The Witt group of linking forms} \label{sub:LinkingForms} Let~$R$ be a PID with involution, and let~$Q$ denote its field of fractions. This subsection is concerned with linking forms. Firstly, we recall the definition of the Witt group~$W(Q,R)$ of linking forms. Secondly, we collect some facts about~$W(\mathds{C}(t),\mathbb{C}[t^{\pm 1}])$ that are used in Section~\ref{sec:MainTheorem} below. \medbreak A \emph{linking form} over~$R$ is a pair~$(V,\lambda)$, where~$V$ is a torsion~$R$-module, and~$\lambda \colon V \times V \to~Q/R$ is a sesquilinear and Hermitian pairing. A linking form~$(V,\lambda)$ is \emph{non-singular} if its \emph{adjoint}~$\lambda^\bullet \colon V \to V^*,\ x \mapsto \lambda(x,-)$ is an isomorphism. In the sequel, our linking forms will be either over~$\mathds{Z}$ or~$\mathds{C}[t^{\pm 1}]$. From now on, we also assume that all linking forms are non-singular. Given a linking form~$(V,\lambda)$ over~$R$, a submodule~$L \subset V$ is \emph{isotropic} if~$L \subset L^\perp$ and is a \emph{metaboliser} if~$L=L^\perp$. A linking form is \emph{metabolic} if it admits a metaboliser.
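The following standard example, which we include purely for illustration, shows how metabolisers arise from isometries: for any non-singular linking form~$(V,\lambda)$ over~$R$, the direct sum~$(V \oplus V,\lambda \oplus -\lambda)$ is metabolic, a metaboliser being given by the diagonal \[\Delta = \{(v,v) \in V \oplus V \ | \ v \in V\}.\] Indeed,~$\Delta$ is isotropic since~$(\lambda \oplus -\lambda)((v,v),(w,w))=\lambda(v,w)-\lambda(v,w)=0$, while an element~$(v,w)$ of~$\Delta^\perp$ satisfies~$\lambda(v,x)-\lambda(w,x)=0$ for all~$x \in V$, so that~$\lambda^\bullet(v-w)=0$ and therefore~$v=w$ by non-singularity. Note that~$\Delta$ is the graph of the identity of~$V$.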
\begin{definition} \label{def:metabolic_and_so_on} The \emph{Witt group of linking forms}, denoted~$W(Q,R)$, consists of the monoid of linking forms modulo the submonoid of metabolic linking forms. Two linking forms~$(V,\lambda)$ and~$(V',\lambda')$ are called \emph{Witt equivalent} if they represent the same element in~$W(Q,R)$. \end{definition} The Witt group of linking forms is known to be an abelian group under direct sum, where the inverse of the class~$[(V,\lambda)]$ is represented by~$(V,-\lambda)$. Next, we collect some facts on~$W(\mathds{C}(t),\mathbb{C}[t^{\pm 1}])$ that will be used in Section~\ref{sec:MainTheorem} below. \begin{remark} \label{rem:JumpsRoots} The Witt group~$W(\mathds{C}(t),\mathds{C}[t^{\pm 1}])$ is known to be free abelian and is detected by the signature jumps~$\delta \sigma_{(V,\lambda)}$~\cite[Sections 4 and~5]{ BorodzikConwayPolitarczyk}. In particular, a linking form~$(V,\lambda)$ over~$\mathbb{C}[t^{\pm 1}]$ is metabolic if and only if all its signature jumps vanish~\cite[Theorem~5.3]{ BorodzikConwayPolitarczyk}. Reformulating,~$[V,\lambda]=0$ in~$W(\mathds{C}(t),\mathds{C}[t^{\pm 1}])$ if and only if~$\delta \sigma_{(V,\lambda)}(\omega)=0$ for all~$\omega \in S^1$. We refer to~\cite[Sections 4 and~5]{ BorodzikConwayPolitarczyk} for further details regarding signatures of linking forms but note that a linking form~$(V,\lambda)$ will have a trivial jump at~$\omega \in S^1$ if the order~$\operatorname{Ord}(V)$ of the~$\mathbb{C}[t^{\pm 1}]$-module~$V$ does not have a root at~$\omega$. \end{remark} In particular, Remark~\ref{rem:JumpsRoots} implies the following result about linear independence in~$W(\mathds{C}(t),\mathbb{C}[t^{\pm 1}])$.
\begin{proposition} \label{prop:Splitting} If~$(V_1,\lambda_1)$ and~$(V_2,\lambda_2)$ are two linking forms over~$\mathbb{C}[t^{\pm 1}]$ such that~$\operatorname{Ord}(V_1)$ and~$\operatorname{Ord}(V_2)$ have distinct roots, then the following assertions hold: \begin{enumerate} \item if~$(V_1,\lambda_1)$ and~$(V_2,\lambda_2)$ are not metabolic, then the Witt classes~$[V_1,\lambda_1]$ and~$[V_2,\lambda_2]$ are linearly independent in \(W(\mathds{C}(t),\mathds{C}[t^{\pm 1}])\); \item if~$(V_1,\lambda_1) \oplus (V_2,\lambda_2)$ is metabolic, then~$(V_1,\lambda_1)$ and~$(V_2,\lambda_2)$ are both metabolic. \end{enumerate} \end{proposition} \begin{proof} We only prove the first assertion as the second assertion follows immediately. Assume that~$n_1 [V_1,\lambda_1]+n_2 [V_2,\lambda_2]=0$ for some integers~$n_1$ and~$n_2$. Remark~\ref{rem:JumpsRoots} implies that all the signature jumps of~$n_1 \lambda_1 \oplus n_2 \lambda_2$ must vanish. Since~$\lambda_1$ is not metabolic, Remark~\ref{rem:JumpsRoots} also implies that~$\lambda_1$ admits a non-trivial signature jump at some~$\omega_1 \in S^1$. Since~$\operatorname{Ord}(V_1)$ and~$\operatorname{Ord}(V_2)$ have distinct roots,~$\lambda_2$ has no signature jump at~$\omega_1$, and therefore the signature jump of~$n_1 \lambda_1 \oplus n_2 \lambda_2$ at~$\omega_1$ equals~$n_1$ times the signature jump of~$\lambda_1$ at~$\omega_1$. We deduce that~$n_1=0$. The same reasoning shows that~$n_2=0$, thus establishing the linear independence of~$[V_1,\lambda_1]$ and~$[V_2,\lambda_2]$ and concluding the proof of the proposition. \end{proof} \subsection{Graph metabolisers} \label{sub:Graph} Given linking forms~$(V_1,\lambda_1)$ and~$(V_2,\lambda_2)$, we prove a result on metabolisers of linking forms of the type~$(V_1 \oplus V_2,\lambda_1 \oplus -\lambda_2)$. More precisely, Proposition~\ref{prop:DirectSummetaboliser} provides a criterion for when such a metaboliser must be a graph.
This result will be used in Section~\ref{sec:MainTheorem} when we study metabolisers of~$\lambda_p(T(p,q))^N \oplus -\lambda_p(T(p,q))^N.$ \medbreak Given linking forms~$(V_1,\lambda_1)$ and~$(V_2,\lambda_2)$, a \emph{morphism} of linking forms is an~$R$-linear homomorphism~$f \colon V_1 \to V_2$ such that~$\lambda_2(f(x),f(y))=\lambda_1(x,y)$ for all~$x,y \in V_1$. Observe that if the forms are non-singular, then a morphism is necessarily injective. An \emph{isometry} of linking forms is a bijective morphism of linking forms. The graph $$ \Gamma_f= \{(v,f(v)) \in V_1 \oplus V_2 \ | \ v \in V_{1}\}~$$ of a morphism~$f \colon (V_1,\lambda_1) \to (V_2,\lambda_2)$ is an isotropic submodule of~$(V_1 \oplus V_2,\lambda_1 \oplus -\lambda_2)$. If~$f$ is an isometry, then~$ \Gamma_f$ is in fact a metaboliser of ~$(V_1 \oplus V_2,\lambda_1 \oplus -\lambda_2)$. The next proposition provides an assumption under which the converse also holds. \begin{proposition} \label{prop:DirectSummetaboliser} Let~$(V_1,\lambda_1)$ and~$(V_2,\lambda_2)$ be linking forms over~$R$, and let~$L \subset V_1 \oplus V_2$ be a metaboliser of~$\lambda_1 \oplus -\lambda_2$. The following assertions hold: \begin{enumerate} \item if \(L \cap (V_{1} \oplus 0) =0= L \cap (0 \oplus V_{2})\), then ~$L$ is the graph of an isometry~$f \colon V_{1} \to V_{2}$: \[L = \{(v,f(v)) \in V_1 \oplus V_2 \ | \ v \in V_{1}\} ;\] \item if additionally~$R=\mathds{Z}$, the modules \(V_{1}\) and \(V_{2}\) are equipped with an isometric~$\mathds{Z}_p$-action, and~\(L\) is a \(\mathds{Z}_{p}\)-invariant metaboliser, then the isometry \(f\) from the first assertion is \(\mathds{Z}_{p}\)-equivariant. \end{enumerate} \end{proposition} \begin{proof} We prove the first assertion. The isometry~$f$ will be defined by using the canonical projections \(\operatorname{pr}_{i} \colon V_1 \oplus V_2 \to V_{i}\) for \(i=1,2\).
Since \(L \cap (V_{1} \oplus 0) =0= L \cap (0 \oplus V_{2})\), it follows that~\(\operatorname{pr}_{i}|_{{L}}\) is injective, for \(i=1,2\). Set \(W_{i} := \operatorname{pr}_{i}(L)\), for \(i=1,2\), and define~$f$ as the composition \[f \colon W_{1} \xrightarrow{\operatorname{pr}_{1}^{-1},\cong} L \xrightarrow{\operatorname{pr}_{2},\cong} W_{2}.\] Since~$f$ is an isomorphism of~$R$-modules, it remains to check that it is a morphism of linking forms. First however, we use the definition of~$f$ to observe that \begin{equation} \label{eq:NearlyL} L = \{(v,f(v)) \in V_1 \oplus V_2 \ | \ v \in W_{1}\} \subset V_1 \oplus V_2. \end{equation} The fact that~$f$ is a morphism now follows from the fact that~$L$ is isotropic: for any~$v,w \in~W_1$, the pairs~$(v,f(v)), (w,f(w))$ belong to~$L$, and therefore we have \[0 = (\lambda_{1} \oplus -\lambda_{2})((v,f(v)),(w,f(w))) = \lambda_{1}(v,w) - \lambda_{2}(f(v),f(w)).\] Looking at~\eqref{eq:NearlyL}, it only remains to show that~$V_1=W_1$ and~$V_2=W_2$. Since~$f$ is an isomorphism, we have \(\operatorname{ord}(W_{1}) = \operatorname{ord}(W_{2})\) and therefore~\eqref{eq:NearlyL} implies that~$ \operatorname{ord}(L)^{2} = \operatorname{ord}(W_{1}) \operatorname{ord}(W_{2})$. Since~\(L\) is a metaboliser, we deduce that \begin{equation} \label{eq:Proportional} \operatorname{ord}(V_{1}) \operatorname{ord}(V_{2}) = \operatorname{ord}(L)^{2} = \operatorname{ord}(W_{1}) \operatorname{ord}(W_{2}). \end{equation} By way of contradiction, assume that \(\operatorname{ord}(W_{1}) \neq \operatorname{ord}(V_{1})\); since \(W_{1} \subset V_{1}\), this means that \(\operatorname{ord}(W_{1})\) properly divides \(\operatorname{ord}(V_{1})\). A glance at~\eqref{eq:Proportional} then shows that \(\operatorname{ord}(V_{2})\) properly divides \(\operatorname{ord}(W_{2})\), so that~\(\operatorname{ord}(W_{2})~\nmid~\operatorname{ord}(V_{2})\), contradicting the inclusion~$W_2 \subset V_2$.
We conclude that \(\operatorname{ord}(W_{i}) = \operatorname{ord}(V_{i})\) and consequently~\(W_{i} =~V_{i}\), for \(i=1,2\). This concludes the proof of the first assertion. We prove the second assertion. Use~$t$ to denote a generator of~$\mathds{Z}_p$. As the metaboliser~$L$ is~$\mathds{Z}_p$-invariant, observe that if~\((v,f(v)) \in L\), then~\((tv,tf(v)) \in L\) for any~$v \in V_1$. Moreover, as~\((tv,f(tv)) \in L\) and \( L \cap (0 \oplus V_{2})=0\), it follows that~\((tv,f(tv) )= (tv,tf(v))\). We have therefore established that \(f(tv)=tf(v)\) for any~$v \in V_1$, and thus~$f$ is~$\mathds{Z}_p$-equivariant, as desired. This concludes the proof of the proposition. \end{proof} \section{Non-slice linear combinations of iterated torus knots } \label{sec:MainTheorem} This section aims to prove Theorem~\ref{thm:Main} from the introduction, whose statement we now recall. For an integer \(p \geq 2\) and a sequence \(Q = (q_{1},q_{2},\ldots,q_{\ell})\) of integers that are relatively prime to~$p$, we use the following notation for iterated torus knots:~$T(p,Q):= T(p,q_{1};p,q_{2};\ldots;p,q_{\ell}).$ Our main result reads as follows. \begin{theorem} \label{thm:LinIndep} Fix a prime power~$p$. Let \(\mathcal{S}_{p}\) be the set of iterated torus knots \(T(p,q_{1};p,q_{2};\ldots;p,q_{\ell})\), where the sequences~$(q_{1},q_{2},\ldots,q_{\ell})$ of positive integers that are coprime to~$p$ satisfy \begin{enumerate} \item $q_\ell$ is a prime; \item for \(i=1,\ldots,\ell-1\), the integer \(q_{i} \) is coprime to \(q_{\ell}\) when~$\ell >1$; \end{enumerate} The set \(\mathcal{S}_{p}\) is linearly independent in the topological knot concordance group~$\mathcal{C}^{\text{top}}$. \end{theorem} To prove Theorem~\ref{thm:LinIndep}, we must obstruct the sliceness of linear combinations of knots belonging to~$\mathcal{S}_p$. The first step, which is carried out in Subsection~\ref{sub:algebr-slice-line}, is to determine which of these linear combinations are algebraically slice. 
In Subsection~\ref{sub:free-subgr-gener}, we use metabelian twisted Blanchfield pairings to obstruct the sliceness of such algebraically slice linear combinations. \subsection{Algebraically slice linear combinations of algebraic knots} \label{sub:algebr-slice-line} Fix an integer~$p \geq 2$. For~$i=1,\ldots, k$, fix sequences \(Q_{i} = (q_{i,1},q_{i,2},\ldots,q_{i,\ell_i})\) of \(\ell_{i}\) positive integers each of which is coprime to~$p$, and let \(n_{1},\ldots,n_{k} \in \mathds{Z}\). The goal of this subsection is to determine when the following knot is algebraically slice: \begin{equation} \label{eq:LinearCombination} K = n_{1}T(p,Q_{1}) \# n_{2} T(p,Q_{2}) \# \cdots \# n_{k} T(p,Q_{k}). \end{equation} In order to provide a convenient criterion, we define the \(s\)-\emph{level} of \(K\) to be the following knot: $$ \mathcal{K}_{s}(K) :=n_{1} T(p,q_{1,\ell_{1}-s}) \# n_{2} T(p,q_{2,\ell_{2}-s}) \# \ldots \# n_{k} T(p,q_{k,\ell_{k}-s}). $$ Here, it is understood that \(T(p,q_{i,\ell_{i}-s})\) is the unknot \(U\) if \(\ell_{i}-s < 1\). As an example of this notation, we see that if~$Q = (q_{1},\ldots,q_{\ell})$, then~$\mathcal{K}_{s}(T(p,Q)) =T(p,q_{\ell-s})$ for~$0 \leq s \leq \ell-1$ and~$\mathcal{K}_{s}(T(p,Q)) = U$, for~$s \geq \ell.$ In particular, the cabling formula for the classical Blanchfield form implies that \begin{equation} \label{eq:DecompositionTorusKnot} \operatorname{Bl}(T(p,Q)) \cong \bigoplus_{s \geq 0}\operatorname{Bl}(\mathcal{K}_{s}(T(p,Q)))(t^{p^{s}}). \end{equation} Indeed, for a knot~$L$, the cabling formula reads as~$\operatorname{Bl}(L_{p,q})(t)=\operatorname{Bl}(T(p,q))(t)\oplus\operatorname{Bl}(L)(t^p)$~\cite{LivingstonMelvin}. Next, we move on to a slightly more involved example.
\begin{example} \label{ex:Decompo} The~$s$-levels of \(J := T(p,q_{1};p,q_{2}) \# T(p,q_{3}) \# -T(p,q_{1};p,q_{3}) \# -T(p,q_{2}) \) are given~by \[\mathcal{K}_{s}(J) = \begin{cases} T(p,q_{2}) \# T(p,q_{3}) \# -T(p,q_{3}) \# -T(p,q_{2}), & s = 0, \\ T(p,q_{1}) \# -T(p,q_{1}), & s = 1, \\ U & s \geq 2. \end{cases} \] Here for~$s=1$, we used that~$\mathcal{K}_{1}(J)=T(p,q_{1}) \# U \# -T(p,q_{1}) \# -U$ is~$T(p,q_{1}) \# -T(p,q_{1})$. In particular, observe that the formula displayed in~\eqref{eq:DecompositionTorusKnot} also holds for~$J$. As we shall use in Proposition~\ref{prop:algebraic-sliceness} below, this formula also holds for the linear combination~$K$ of~\eqref{eq:LinearCombination}. \end{example} For later use, we note that the~$0$-level of~$K$ is the most important to us: the first homology of its~$p$-fold branched cover equals that of~$K$. \begin{remark} \label{rem:BranchedCover0Level} Since we know that~$H_1(\Sigma_p(J_{p,q}))=H_1(\Sigma_p(T(p,q)))$ for any knot~$J$, we deduce ~$$H_1(\Sigma_p(K))= H_1(\Sigma_p(\mathcal{K}_0(K)))=\bigoplus_{i=1}^k H_1(\Sigma_p(T(p,q_{i,\ell_i}))).$$ The analogous decomposition holds for the linking form~$\lambda_p(K)$~\cite[Lemma 4]{Litherland}. \end{remark} The next proposition uses~$s$-levels to exhibit a criterion for the algebraic sliceness of~$K$. \begin{proposition}\label{prop:algebraic-sliceness} Fix an integer \(p \geq 2\) and choose sequences~\(Q_{i} = (q_{i,1},\ldots,q_{i,\ell_i})\) of positive integers that are relatively prime to \(p\), for \(i=1,2,\ldots,k\). The following statements are equivalent: \begin{enumerate} \item\label{item:algebraic-sliceness-1} the knot ~$K = n_{1}T(p,Q_{1}) \# \cdots \# n_{k} T(p,Q_{k})$ is algebraically slice, \item\label{item:algebraic-sliceness-2} each \(\mathcal{K}_{s}(K)\) is slice. \end{enumerate} \end{proposition} \begin{proof} We first assert that the polynomials~$\Delta_{\mathcal{K}_s(K)}(t^{p^s})$ and~$\Delta_{\mathcal{K}_{u}(K)}(t^{p^{u}})$ have distinct roots if~$s\neq~u$.
For a positive integer~$m$, we set \(\xi_{m} := e^{ 2 \pi i/m }\). The roots of~\(\Delta_{T(p,q)}(t)\) occur at those \(\xi_{pq}^{a}\) where the integer \(1 \leq a \leq pq\) is such that neither~\(p\) nor~\(q\) divides \(a\), i.e. \(\left(\xi_{pq}^{a}\right)^{p} \neq 1\) and~\(\left( \xi_{pq}^{a} \right)^{q} \neq 1\). Consequently, the roots of~$\Delta_{T(p,q)}(t^{p^{s}})$ occur at \(\xi_{p^{s+1}q}^{a}\) such that \(1 \leq a \leq p^{s+1}q\) and neither \(p\) nor \(q\) divides \(a\). We argue that if~$s\neq u$, then~$\Delta_{T(p,q_1)}(t^{p^s})$ and~$\Delta_{T(p,q_2)}(t^{p^{u}})$ have distinct roots. Assume to the contrary that they have a common root. This root must be of the form~$\xi_{p^{s+1}q_1}^a=\xi_{p^{u+1}q_2}^b$ where~$q_1,p$ (resp.~$q_2,p$) do not divide~$a$ (resp.~$b$). Without loss of generality, assume that~$s<u$ so that~$1=(\xi_{p^{s+1}q_1}^a)^{p^{s+1}q_1}=(\xi_{p^{u+1}q_2}^b)^{p^{s+1}q_1}=\xi_{p^{u-s}q_2}^{bq_1}$. This implies that~$p^{u-s}q_2$ divides~$bq_1$. However, by assumption,~$p$ divides neither~$q_1$ nor~$b$, yielding the desired contradiction. Next, recall from the definition of the~$s$-level that ~$$ \mathcal{K}_{s}(K) :=n_{1} T(p,q_{1,\ell_{1}-s}) \# n_{2} T(p,q_{2,\ell_{2}-s}) \# \ldots \# n_{k} T(p,q_{k,\ell_{k}-s}). $$ Thus, if~$s\neq u$, then~$\Delta_{\mathcal{K}_s(K)}(t^{p^s})$ and~$\Delta_{\mathcal{K}_{u}(K)}(t^{p^{u}})$ have distinct roots. This proves the assertion. Assume that~$K$ is algebraically slice. By the cabling formula for the Blanchfield pairing (see Example~\ref{ex:Decompo}), \begin{equation} \label{eq:BlanchfieldLinearCombination} \operatorname{Bl}(K)(t) \cong \bigoplus_{s \geq 0}\operatorname{Bl}(\mathcal{K}_{s}(K))(t^{p^{s}}) \end{equation} is metabolic. By the assertion and Proposition~\ref{prop:Splitting}, we deduce that each \(\operatorname{Bl}(\mathcal{K}_{s}(K))(t^{p^{s}})\) is metabolic.
It follows that the jump function of each \(\operatorname{Bl}(\mathcal{K}_{s}(K))(t^{p^{s}})\) is trivial. This jump function is simply a reparametrization of the jump function of \(\operatorname{Bl}(\mathcal{K}_{s}(K))(t)\), in which the parameter~$\omega \in S^1$ is changed to~$\omega^{p^{s}}$; consequently the jump function of \(\operatorname{Bl}(\mathcal{K}_{s}(K))(t)\) is also trivial. Hence~$\mathcal{K}_s(K)$ is a connected sum of torus knots such that the jump function of~$\sigma_{\omega}(\mathcal{K}_s(K))$ is trivial. Since Litherland showed in \cite[Lemma~1]{Litherland-signature} that the jump functions of~$\sigma_{\omega}(T(p,q))$ are linearly independent,~$\mathcal{K}_s(K)$ is slice as desired. Assume that each \(\mathcal{K}_{s}(K)\) is slice. As a linking form over~$\mathds{Z}[t^{\pm 1}]$,~$\operatorname{Bl}(\mathcal{K}_{s}(K))$ is metabolic. Combining this with the decomposition displayed in~\eqref{eq:BlanchfieldLinearCombination}, we deduce that~$\operatorname{Bl}(K)$ is metabolic, as a linking form over~$\mathds{Z}[t^{\pm 1}]$. This is equivalent to~$K$ being algebraically slice~\cite{Kearton}, completing the proof of Proposition~\ref{prop:algebraic-sliceness}. \end{proof} When~$K$ is algebraically slice, we obtain a convenient description of the~$0$-level of~$K$. \begin{corollary} \label{cor:0Level} Suppose that~$K$, \(p\), \(\ell_{i}\) and \(Q_{i}\), for \(i=1,\ldots,k\), are as in Proposition~\ref{prop:algebraic-sliceness}. If~$K$ is algebraically slice, then~$k$ is even and, after renumbering if necessary, the~$0$-level of~$K$ is ~$$ \mathcal{K}_{0}(K) = \bigsharp_{j=1}^{k/2} m_j \left( T(p,q_{j,\ell_j}) \# -T(p,q_{j,\ell_j}) \right) ~$$ for some integers~$m_j$. \end{corollary} \begin{proof} By Proposition~\ref{prop:algebraic-sliceness},~$\mathcal{K}_{0}(K)$ is a slice linear combination of torus knots. Since torus knots are linearly independent in the knot concordance group, the conclusion follows. \end{proof} \subsection{Linearly independent families of iterated torus knots} \label{sub:free-subgr-gener} Fix a prime power \(p\).
The goal of this section is to prove Theorem~\ref{thm:LinIndep} whose statement we briefly recall. Let~$\mathcal{S}_p$ be the set of iterated torus knots~\(T(p,Q)\), where the sequences~$Q = (q_{1},q_{2},\ldots,q_{\ell})$ of \(\ell\) positive integers are coprime to~$p$ and satisfy \begin{enumerate} \item $q_\ell$ is a prime; \item for \(i=1,\ldots,\ell-1\), the integer \(q_{i} \) is coprime to \(q_{\ell}\) when~$\ell >1$. \end{enumerate} Theorem~\ref{thm:LinIndep} states that~$\mathcal{S}_p$ is linearly independent in the topological knot concordance group~$\mathcal{C}^{\text{top}}$. For~$i=1,\ldots, k$, we therefore choose sequences \(Q_{i} = (q_{i,1},q_{i,2},\ldots,q_{i,\ell_i})\) of positive integers where $q_{i,\ell_i}$ is prime for all~$i$, and the integer~$q_{i,j}$ is coprime to~$p$ and to~$q_{i,\ell_i}$ for all~$j$. We also let \(n_{1},\ldots,n_{k} \in \mathds{Z}\) be integers. We will use metabelian Blanchfield pairings~\cite{MillerPowell,BorodzikConwayPolitarczyk} to obstruct the sliceness of the knot \[K = n_{1}T(p,Q_{1}) \# n_{2} T(p,Q_{2}) \# \cdots \# n_{k} T(p,Q_{k}).\] The sliceness obstruction that we will use, and which is due to Miller-Powell~\cite[Theorem~6.10]{MillerPowell}, reads as follows. If for every~$\mathds{Z}_p$-invariant metaboliser~$G$ of~$\lambda_p(K)$, there exists a prime power order character~$\chi$ that vanishes on~$G$ and such that~$\operatorname{Bl}_{\alpha(p,\chi)}(K)$ is not metabolic, then~$K$ is not slice. Here, we use~$\alpha(p,\chi):=\alpha_K(p,\chi)$ to denote the metabelian representation that was described in Subsection~\ref{sub:MetabRep}.
\begin{remark} The \emph{metabelian Blanchfield pairing} is a linking form $$ \operatorname{Bl}_{\alpha(p,\chi)}(K) \colon H_1(M_K;\mathbb{C}[t^{\pm 1}]_{\alpha(p,\chi)}^p) \times H_1(M_K;\mathbb{C}[t^{\pm 1}]_{\alpha(p,\chi)}^p) \to \mathbb{C}(t)/\mathbb{C}[t^{\pm 1}], $$ where $H_1(M_K;\mathbb{C}[t^{\pm 1}]_{\alpha(p,\chi)}^p) $ denotes the homology of the $0$-framed surgery $M_K$ on $K$, twisted by $\alpha(p,\chi)$. The precise definition of $\operatorname{Bl}_{\alpha(p,\chi)}(K)$ is not needed in this paper (the interested reader can nonetheless find it in~\cite{MillerPowell} and~\cite{BorodzikConwayPolitarczyk}). All we need is the behavior of $\operatorname{Bl}_{\alpha(p,\chi)}(K)$ under satellite operations, and this will be recalled as the argument proceeds. \end{remark} The strategy behind the proof of Theorem~\ref{thm:LinIndep} is as follows. \begin{enumerate} \item Firstly, we study the characters on~$H_1(\Sigma_p(K))$. \item Secondly, we study the consequences of~$\operatorname{Bl}_{\alpha(p,\chi)}(K)$ being metabolic. This will impose substantial restrictions on~$\chi$. \item Thirdly, we build characters that violate these restrictions. \item Finally, we combine these first three steps to conclude the proof. \end{enumerate} The reader who wishes to see how these steps combine might consider starting with a glance at the end of the argument, after the conclusion of the proof of Lemma~\ref{lemma:constructing-characters}; see Subsection~\ref{subsub:Conclusion}. \subsubsection{Characters on~$H_1(\Sigma_p(K))$.} \label{subsub:Charac} Assume that~$K$ is slice. The first step is to study the possible characters on the~$p$-fold branched cover of~$K$.
Since~$K$ is algebraically slice, Corollary~\ref{cor:0Level} implies that~$k$ is even and, after renumbering if necessary, for some prime~$r$ (which is one of the $q_{j,\ell_j}$) and some integers~$m_1,\ldots,m_{k/2}$, we can write \[\mathcal{K}_{0}(K) = m_1 \left( T(p,r) \# -T(p,r) \right) \# \bigsharp_{j=2}^{k/2} m_j \left( T(p,q_{j,\ell_j}) \# -T(p,q_{j,\ell_j}) \right), \] where~$q_{i,\ell_i}= r$ if and only if~$1 \leq i \leq 2m_{1}$. It follows that if we set \(M_{j} = m_{1}+m_{2}+ \cdots + m_{j-1}\), for \(j=2,\ldots,k/2\), then after further possible renumbering, the knot~$K$ can be rewritten as \begin{equation} \label{eq:AlgebraicallySliceForm} K = \bigsharp_{i=1}^{m_{1}}\left( T(p,Q_{2i-1}) \# -T(p,Q_{2i}) \right) \# \bigsharp_{j=2}^{k/2} \bigsharp_{i=1}^{m_{j}} \left( T(p,Q_{2M_{j}+2i-1}) \# -T(p,Q_{2M_{j}+2i}) \right). \end{equation} As Remark~\ref{rem:BranchedCover0Level} implies that~$H_{1}(\Sigma_{p}(K)) \cong H_{1}(\Sigma_{p}(\mathcal{K}_{0}(K)))$, the description of \(\mathcal{K}_{0}(K)\), the primary decomposition, and the fact that the~$q_{i,\ell_i}$ are prime show that \begin{align} \label{eq:DecompoBranchedCover} H_{1}(\Sigma_{p}(K)) =H_1(\Sigma_p(T(p,r)))^{m_{1}} &\oplus H_1(\Sigma_p(-T(p,r)))^{m_{1}} \\ &\oplus \bigoplus_{j=2}^{k/2} \left( H_1(\Sigma_p(T(p,q_{j,\ell_j})))^{m_{j}} \oplus H_1(\Sigma_p(-T(p,q_{j,\ell_j})))^{m_{j}} \right). \nonumber \end{align} The linking form~$\lambda_p(K)$ on~$\Sigma_p(K)$ decomposes analogously. From now on,~$\theta$ denotes the trivial character. Also, since~$H_1(\Sigma_p(T(p,r))) \cong \mathds{Z}_r^{p-1}$, we write characters~$H_1(\Sigma_p(T(p,r))) \to \mathds{Z}_r$ as~$\chi_{\mathbf{a}}$ where~$\mathbf{a} \in \mathds{Z}_r^p$.
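The isomorphism \(H_1(\Sigma_p(T(p,r))) \cong \mathds{Z}_r^{p-1}\) can be sanity-checked numerically: by the classical formula \(|H_1(\Sigma_n(K))| = \prod_{a=1}^{n-1}|\Delta_K(\xi_n^a)|\) (recalled in a footnote below), the order of this group must be \(r^{p-1}\). The sketch below is only a numerical plausibility check; it models \(\Delta_{T(p,r)}\) by the monic polynomial with the roots described at the start of the proof of Proposition~\ref{prop:algebraic-sliceness}.

```python
from cmath import exp, pi

def xi(m, a):
    return exp(2j * pi * a / m)

def delta_torus(p, q, z):
    # Monic model of Delta_{T(p,q)}: simple roots at xi_{pq}^a,
    # where 1 <= a <= pq and neither p nor q divides a.
    val = complex(1.0)
    for a in range(1, p * q + 1):
        if a % p != 0 and a % q != 0:
            val *= (z - xi(p * q, a))
    return val

def branched_cover_order(p, r):
    # |H_1(Sigma_p(K))| = prod_{a=1}^{p-1} |Delta_K(xi_p^a)|.
    out = 1.0
    for a in range(1, p):
        out *= abs(delta_torus(p, r, xi(p, a)))
    return out

# The order of H_1(Sigma_p(T(p,r))) is r^{p-1}, matching Z_r^{p-1}.
for p, r in [(2, 3), (3, 2), (3, 5), (5, 3)]:
    assert abs(branched_cover_order(p, r) - r ** (p - 1)) < 1e-6
```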
Since $r$ is distinct from~$q_{i,\ell_i}$ for~$i>2m_1$, the decomposition of~\eqref{eq:DecompoBranchedCover} implies that any character~$\chi \colon H_{1}(\Sigma_{p}(K)) \to \mathds{Z}_{r}$ must be of the form \begin{equation} \label{eq:Charac} \chi= \bigoplus_{i=1}^{m_1} \left( \chi_{\mathbf{a}^i} \oplus \chi_{\mathbf{b}^i} \right) \oplus \bigoplus_{j=2}^{k/2}\bigoplus_{i=1}^{m_j} \theta \oplus \theta, \end{equation} where~$\lbrace \mathbf{a}^j \rbrace_{j=1}^{m_1}$ and~$\lbrace \mathbf{b}^j \rbrace_{j=1}^{m_1}$ are sequences of~$p$ elements in~$\mathds{Z}_r$. \begin{remark} \label{rem:PrimaryDecompositionMetaboliser} Recall that the Miller-Powell obstruction requires that for every~$\mathds{Z}_p$-invariant metaboliser~$G$ of~$\lambda_p(K)$, we construct a prime power order character~$\chi$ that vanishes on~$G$ and such that~$\operatorname{Bl}_{\alpha(p,\chi)}(K)$ is not metabolic. The primary decomposition implies that every such metaboliser decomposes as a direct sum of metabolisers of the summands in~\eqref{eq:DecompoBranchedCover}. Consequently, thanks to the form of the character in~\eqref{eq:Charac}, it is sufficient to prove the following result: for every~$\mathds{Z}_p$-invariant metaboliser~$L$ of~$\lambda_p(T(p,r))^{m_1} \oplus -\lambda_p(T(p,r))^{m_1}$, there is a prime power order character~$\bigoplus_{i=1}^{m_1} \left( \chi_{\mathbf{a}^i} \oplus \chi_{\mathbf{b}^i} \right)$ that vanishes on~$L$ and such that~$\operatorname{Bl}_{\alpha(p,\chi)}(K)$ is not metabolic, with $\chi$ as in~\eqref{eq:Charac}. \end{remark} \subsubsection{The metabelian Blanchfield pairing of~$K$.} \label{subsub:Blanchfield} We now study the metabelian Blanchfield pairing of~$K$. We first use satellite formulas to decompose it, and we then study the implications of it being metabolic.
The behavior of metabelian Blanchfield pairings under connected sums~\cite[Corollary~8.21]{BorodzikConwayPolitarczyk} implies that~$ \operatorname{Bl}_{\alpha(p,\chi)}(K)$ is Witt equivalent to the following linking form: \begin{align} \label{eq:ApplySatelliteFormula} \operatorname{Bl}_{\alpha(p,\chi)}(K) \sim \bigoplus_{i=1}^{m_1} & \left(\operatorname{Bl}_{\alpha(p,\chi_{\mathbf{a}^i})}(T(p,Q_{2i-1})) \oplus - \operatorname{Bl}_{\alpha(p,\chi_{\mathbf{b}^i})}(T(p,Q_{2i})) \right) \\ & \oplus \bigoplus_{j=2}^{k/2 } \bigoplus_{i=1}^{m_j} \left( \operatorname{Bl}_{\alpha(p,\theta)}(T(p,Q_{2 M_{j} + 2i-1})) \oplus - \operatorname{Bl}_{\alpha(p,\theta)}(T(p,Q_{2 M_{j} + 2i})) \right). \nonumber \end{align} For a sequence~$S=(q_1,\ldots,q_k)$, we use~$T(p,\widehat{S})$ to denote the iterated torus knot~$T(p,q_1;\ldots;p,q_{k-1})$. Next, we apply the satellite formula for the metabelian Blanchfield pairing~\cite[Theorem 8.19]{BorodzikConwayPolitarczyk} to both expressions in~\eqref{eq:ApplySatelliteFormula}. 
As we are working with~$p$-fold covers, and the sequences~$Q_{2i-1}$ and~$Q_{2i}$ (resp.~$Q_{2 M_{j} + 2i-1}$ and~$Q_{2 M_{j} + 2i}$) both have~$r$ (resp.~$q_{j,\ell_j}$) as the prime in last position, we claim \begin{align} \label{eq:TwistedSatelliteApplication} \operatorname{Bl}_{\alpha(p,\chi)}(K) & \sim \bigoplus_{i=1}^{m_1} \left(\operatorname{Bl}_{\alpha(p,\chi_{\mathbf{a}^i})}(T(p,r)) \oplus - \operatorname{Bl}_{\alpha(p,\chi_{\mathbf{b}^i})}(T(p,r)) \right) \nonumber \\ &\oplus \bigoplus_{i=1}^{m_1} \bigoplus_{u=1}^p \left(\operatorname{Bl}(T(p,\widehat{Q}_{2i-1}))(\xi_r^{\mathbf{a}_u^i}t) \oplus - \operatorname{Bl}(T(p,\widehat{Q}_{2i}))(\xi_r^{\mathbf{b}_u^i}t) \right) \\ &\oplus \bigoplus_{j=2}^{k/2} \bigoplus_{i=1}^{m_j} \left( \operatorname{Bl}_{\alpha(p,\theta)}(T(p,q_{j,\ell_j})) \oplus - \operatorname{Bl}_{\alpha(p,\theta)}(T(p,q_{j,\ell_j})) \right) \nonumber \\ &\oplus \bigoplus_{j=2}^{k/2} \bigoplus_{i=1}^{m_j} \bigoplus_{u=1}^p \left( \operatorname{Bl}(T(p,\widehat{Q}_{2 M_{j} + 2i-1}))(t) \oplus - \operatorname{Bl}(T(p,\widehat{Q}_{2 M_{j} + 2i}))(t) \right). \nonumber \end{align} The satellite formula of~\cite[Theorem~8.19]{BorodzikConwayPolitarczyk} involves the expression~$\operatorname{Bl}(K) (\xi_{q_1}^{\chi(t_Q^{i-1}q_Q(\mu_Q^{-w}\eta))} t)$, where~$\mu_Q$ denotes the meridian of the satellite knot~$Q=P_\eta(K)$ with pattern~$P$, companion~$K$ and infection curve~$\eta$; furthermore,~$q_Q\colon \pi_1(M_Q) \to H_1(\Sigma_p(Q))$ denotes the map described in~\eqref{eq:qK}. Recalling the notations of Section~\ref{sec:branch-covers-torus}, we see that in our case,~$\eta$ coincides with the curve~$c_2$, and~$\mu_Q=\mu_{T(p,q)}$. Thus, as explained in~\eqref{eq:PracticalForCharacter} for~$\chi=\chi_{\mathbf{a}}$, we deduce that~$\chi(t_Q^{u-1}q_Q(\mu_Q^{-w}\eta))=\mathbf{a}_u$, and this explains the second summand of~\eqref{eq:TwistedSatelliteApplication}.
The decomposition in~\eqref{eq:TwistedSatelliteApplication} is now justified, concluding the claim. Next, we wish to apply the cabling formula~$\operatorname{Bl}(J_{p,q})(t)=\operatorname{Bl}(T(p,q))(t) \oplus \operatorname{Bl}(J)(t^p)$ for the classical Blanchfield pairing. To make the notation more manageable, for \(s \geq 1\), coprime integers~$p,q$ and~$\mathbf{a} \in \mathds{Z}_r^{p}$, we consider the linking form \[\Lambda(p,q,\chi_\mathbf{a},s) := \bigoplus_{u=1}^{p}\operatorname{Bl}(T(p,q))(\xi_{r}^{p^{s-1}\mathbf{a}_{u}}t^{p^{s-1}}).\] If the character~$\chi_{\mathbf{a}}$ is trivial, then we write~$\Lambda(p,q,s)$ instead of~$\Lambda(p,q,\theta,s)$. These pairings appear as summands of the Blanchfield pairing of a cable. Indeed, using these notations and the aforementioned untwisted cabling formula, we deduce from~\eqref{eq:TwistedSatelliteApplication} that \begin{align} \operatorname{Bl}_{\alpha(p,\chi)}(K) &\sim \bigoplus_{i=1}^{m_1} \left(\operatorname{Bl}_{\alpha(p,\chi_{\mathbf{a}^{i}})}(T(p,r)) \oplus -\operatorname{Bl}_{\alpha(p,\chi_{\mathbf{b}^{i}})}(T(p,r)) \right) \label{eq:term-one}\\ &\oplus \bigoplus_{j=2}^{k/2} \bigoplus_{i=1}^{m_j} \left(\operatorname{Bl}_{\alpha(p,\theta)}(T(p,q_{j,\ell_{j}})) \oplus -\operatorname{Bl}_{\alpha(p,\theta)}(T(p,q_{j,\ell_{j}})) \right) \label{eq:term-two}\\ &\oplus \bigoplus_{i=1}^{m_1} \bigoplus_{s \geq 1} \left( \Lambda(p,q_{2i-1,\ell_{2i-1}-s},\chi_{\mathbf{a}^{i}},s) \oplus - \Lambda(p,q_{2i,\ell_{2i}-s},\chi_{\mathbf{b}^{i}},s) \right) \label{eq:term-three}\\ &\oplus \bigoplus_{j=2}^{k/2} \bigoplus_{i=1}^{m_j} \bigoplus_{s \geq 1} \left( \Lambda(p,q_{2M_j+2i-1,\ell_{2M_j+2i-1}-s},s) \oplus - \Lambda(p,q_{2M_j+2i,\ell_{2M_j+2i}-s},s)\right). \label{eq:term-four} \end{align} To simplify the notation, we respectively use \(B_1^\chi,B_2,B_3^\chi,B_4\) to denote~\eqref{eq:term-one},~\eqref{eq:term-two},~\eqref{eq:term-three} and~\eqref{eq:term-four}.
Now that we have decomposed~$\operatorname{Bl}_{\alpha(p,\chi)}(K)$, we study the consequences of it being metabolic. \begin{claim} \label{claim:T1T3PlusT4Metabolic} If \(\operatorname{Bl}_{\alpha(p,\chi)}(K)\) is metabolic, then \(B_1^\chi\) and \(B_3^\chi \oplus B_{4}\) are metabolic. \end{claim} \begin{proof} As~$\operatorname{Bl}_{\alpha(p,\chi)}(K)$ and~$B_2$ are metabolic,~$B_1^\chi \oplus (B_3^\chi \oplus B_4)$ is metabolic. By Proposition~\ref{prop:Splitting}, it suffices to prove that the orders of \(B_1^\chi\) and \(B_3^\chi \oplus B_{4}\) have distinct roots: the roots of the twisted Alexander polynomial occur at roots of unity of prime power order (by Proposition~\ref{prop:TwistedAlexanderPolynomial}), while this is never the case for the classical Alexander polynomial~\cite[proof of Proposition~3.3, item~(3)]{FriedlEta}.\footnote{Here is a topological proof of this fact: for a knot~$K$ and an integer~$q$, the order of~$H_1(\Sigma_{q}(K))$ is~$\prod_{a=1}^{q-1}\Delta_K(\xi_{q}^a)$~\cite[Corollary~9.8]{LickorishIntroduction}; since~$q$ is a prime power,~$H_1(\Sigma_{q}(K))$ is a finite group, and thus none of the~$\Delta_K(\xi_{q}^a)$ can vanish.} This proves Claim~\ref{claim:T1T3PlusT4Metabolic}. \end{proof} In order to study the consequences of \(B_3^\chi \oplus B_{4}\) being metabolic, for \(s \geq 1\), we set \begin{align*} B_3^\chi(s) &:= \bigoplus_{i=1}^{m_1} \left( \Lambda(p,q_{2i-1,\ell_{2i-1}-s},\chi_{\mathbf{a}^{i}},s) \oplus - \Lambda(p,q_{2i,\ell_{2i}-s},\chi_{\mathbf{b}^{i}},s) \right), \\ B_{4}(s) &:= \bigoplus_{j=2}^{k/2} \bigoplus_{i=1}^{m_j} \left( \Lambda(p,q_{2M_j+2i-1,\ell_{2M_j+2i-1}-s},s) \oplus - \Lambda(p,q_{2M_j+2i,\ell_{2M_j+2i}-s},s)\right). \end{align*} Using these forms, we derive a further consequence of~$\operatorname{Bl}_{\alpha(p,\chi)}(K)$ being metabolic. \begin{claim} \label{claim:Ti(s)Metabolic} If~$B_3^\chi \oplus B_4$ is metabolic, then~$B_3^\chi(s) \oplus B_{4}(s)$ is metabolic for each~$s$.
\end{claim} \begin{proof} By definition, we have the decompositions~$B_3^\chi = \bigoplus_{s \geq 1} B_3^\chi(s)$ and~$B_{4} = \bigoplus_{s \geq 1} B_{4}(s)$. For~$u \neq v$, the order of~$B_3^\chi(u) \oplus B_4(u)$ and the order of~$B_3^\chi(v) \oplus B_4(v)$ have distinct roots. By Proposition~\ref{prop:Splitting}, Claim~\ref{claim:Ti(s)Metabolic} follows. \end{proof} Consequently, it is sufficient to study the linking forms \(B_3^\chi(s) \oplus B_{4}(s)\), for a fixed \(s \geq 1\). To further decompose~$B_3^\chi(s) \oplus B_4(s)$, we want to group these linking forms according to the torus knots that appear. We also need to be attentive to the fact that the torus knot~$T(p,q_{i,\ell_i-s})$ is trivial when~$s \geq \ell_i$. As a consequence, for \(s \geq 1\), we consider the sets \begin{align} \label{eq:I_j(q,s)} \mathcal{I}_{1}(q,s) &:= \{1 \leq i \leq m_{1} \ \big\vert \ \ell_{2i-1} > s, \quad q_{2i-1,\ell_{2i-1}-s}=q\}, \nonumber \\ \mathcal{I}_{2}(q,s) &:= \{1 \leq i \leq m_{1} \ \big\vert \ \ell_{2i} > s, \quad q_{2i,\ell_{2i}-s}=q\}, \\ \mathcal{I}_{3}(q,s) &:= \bigcup_{j=2}^{k/2} \{1 \leq i \leq m_{j} \ \big\vert \ \ell_{2M_{j} + 2i-1} > s, \quad q_{2M_{j}+2i-1,\ell_{2M_{j}+2i-1}-s}=q\}, \nonumber \\ \mathcal{I}_{4}(q,s) &:= \bigcup_{j=2}^{k/2} \{1 \leq i \leq m_{j} \ \big\vert \ \ell_{2M_{j} + 2i} > s, \quad q_{2M_{j} + 2i,\ell_{2M_{j}+2i}-s}=q\}. \nonumber \end{align} Note that for some~$q$, the set~$\mathcal{I}_{j}(q,s)$ may well be empty. However, from now on, we will implicitly assume that we only consider~$q$ for which this is not the case. In order to study the consequences of~$B_3^\chi(s) \oplus B_4(s)$ being metabolic, we set \begin{align*} B_3^\chi(q,s) &:= \bigoplus_{k \in \mathcal{I}_{1}(q,s)} \Lambda(p,q,\chi_{\mathbf{a}^{k}},s) \oplus -\bigoplus_{k \in \mathcal{I}_{2}(q,s)} \Lambda(p,q,\chi_{\mathbf{b}^{k}},s), \\ B_4(q,s) &:= \bigoplus_{k \in \mathcal{I}_{3}(q,s)} \Lambda(p,q,s) \oplus -\bigoplus_{k \in \mathcal{I}_{4}(q,s)} \Lambda(p,q,s).
\end{align*} Note that~$B_4(q,s)$ is not automatically metabolic as the cardinality of~$ \mathcal{I}_{3}(q,s)$ need not agree with that of~$ \mathcal{I}_{4}(q,s)$. Observe however that if \(K\) is algebraically slice, Proposition~\ref{prop:algebraic-sliceness} implies that \begin{equation} \label{eq:1234} \# \mathcal{I}_{1}(q,s) - \# \mathcal{I}_{2}(q,s) + \# \mathcal{I}_{3}(q,s) - \# \mathcal{I}_{4}(q,s) = 0. \end{equation} Indeed, note that the sets \(\mathcal{I}_{i}(q,s)\) record where \(T(p,q)\) appears in the \(s\)-level of \(K\). Using the~$B_i(q,s)$, we now derive a further consequence of~$\operatorname{Bl}_{\alpha(p,\chi)}(K)$ being metabolic. \begin{claim} \label{claim:Ti(q,s)Metabolic} If~$B_3^\chi(s) \oplus B_4(s)$ is metabolic, then~$B_3^\chi(q,s) \oplus B_4(q,s)$ is metabolic for each~$q$. \end{claim} \begin{proof} We have the decompositions~$B_3^\chi(s) = \bigoplus_{q \geq 1} B_3^\chi(q,s)$ and~$B_{4}(s) = \bigoplus_{q \geq 1} B_{4}(q,s)$. Since all the~$q_i$ are positive, for~$u \neq v$, the order of~$B_3^\chi(u,s) \oplus B_4(u,s)$ and the order of~$B_3^\chi(v,s) \oplus B_4(v,s)$ have distinct roots. By Proposition~\ref{prop:Splitting}, Claim~\ref{claim:Ti(q,s)Metabolic} follows. \end{proof} Summarising the content of these claims, we have shown that if the metabelian Blanchfield pairing~$\operatorname{Bl}_{\alpha(p,\chi)}(K)$ is metabolic, then the linking forms~$B_3^\chi(q,s) \oplus B_4(q,s)$ are metabolic for all~$q,s$. This concludes the second part of the proof. 
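The counting identity~\eqref{eq:1234} can be illustrated concretely. In the sketch below, all input data are hypothetical choices (\(p=2\), \(r=3\), \(m_1=2\), and sequences \(Q_1,\ldots,Q_4\) forming two cancelling pairs, with no \(j \geq 2\) summands, so that \(\mathcal{I}_3\) and \(\mathcal{I}_4\) are empty); the functions compute \(\mathcal{I}_1(q,s)\) and \(\mathcal{I}_2(q,s)\) directly from the definition in~\eqref{eq:I_j(q,s)}.

```python
# Hypothetical data: K = T(2,Q_1) # -T(2,Q_2) # T(2,Q_3) # -T(2,Q_4),
# with Q_1 = (5,3), Q_2 = (7,3), Q_3 = (7,3), Q_4 = (5,3).  The 1-level
# T(2,5) - T(2,7) + T(2,7) - T(2,5) cancels, so K is algebraically slice.
Q = {1: (5, 3), 2: (7, 3), 3: (7, 3), 4: (5, 3)}
m1 = 2

def I1(q, s):
    # I_1(q,s) = { 1 <= i <= m_1 : ell_{2i-1} > s and q_{2i-1, ell-s} = q }
    return {i for i in range(1, m1 + 1)
            if len(Q[2 * i - 1]) > s
            and Q[2 * i - 1][len(Q[2 * i - 1]) - s - 1] == q}

def I2(q, s):
    # I_2(q,s) = { 1 <= i <= m_1 : ell_{2i} > s and q_{2i, ell-s} = q }
    return {i for i in range(1, m1 + 1)
            if len(Q[2 * i]) > s
            and Q[2 * i][len(Q[2 * i]) - s - 1] == q}

# With I_3 and I_4 empty, the identity #I_1 - #I_2 + #I_3 - #I_4 = 0
# reduces to #I_1(q,s) = #I_2(q,s) for every q and s.
for s in [1]:
    for q in [5, 7]:
        assert len(I1(q, s)) == len(I2(q, s))
```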
\subsubsection{Building the characters that vanish on metabolisers} \label{subsub:BuildingCharacters} The third part consists in showing that for every~$\mathds{Z}_p$-invariant metaboliser $L$ of~$\lambda_p(T(p,r))^{m_1} \oplus -\lambda_p(T(p,r))^{m_1}$ there are characters $\chi_{\mathbf{a}}=\bigoplus_{i=1}^{m_1} \chi_{\mathbf{a}^i}$ and $\chi_{\mathbf{b}}=\bigoplus_{i=1}^{m_1} \chi_{\mathbf{b}^i}$ such that $\chi_{\mathbf{a}} \oplus \chi_{\mathbf{b}}$ vanishes on $L$, but for which the linking forms~$B_3^\chi(q,s) \oplus B_4(q,s)$ are not all metabolic, where $\chi=\chi_{\mathbf{a}} \oplus \chi_{\mathbf{b}} \oplus \theta$ is as in~\eqref{eq:Charac}. The next proposition describes characters for which~$B_3^\chi(q,s) \oplus B_4(q,s)$ is not metabolic. \begin{proposition} \label{prop:NotMetabolic} Let \(q,s\) be positive integers with~$q$ coprime to~$p$. If a character \linebreak \(\bigoplus_{i=1}^{m_{1}} \chi_{\mathbf{a}^{i}} \oplus \chi_{\mathbf{b}^{i}}\) satisfies one of the following conditions: \begin{enumerate} \item \(\chi_{\mathbf{b}^{k}} = \theta\) for every \(k \in \mathcal{I}_{2}(q,s)\) and \(\chi_{\mathbf{a}^{k_{0}}} \neq \theta\) for some \(k_{0} \in \mathcal{I}_{1}(q,s)\), or, \item \(\chi_{\mathbf{a}^{k}} = \theta\) for every \(k \in \mathcal{I}_{1}(q,s)\) and \(\chi_{\mathbf{b}^{k_{0}}} \neq \theta\) for some \(k_{0} \in \mathcal{I}_{2}(q,s)\), \end{enumerate} then the linking form \(B_{3}^{\chi}(q,s) \oplus B_{4}(q,s)\) is not metabolic. \end{proposition} \begin{proof} We will only consider case~(1); to obtain the proof in case~(2), exchange the roles of \(\chi_{\mathbf{a}}\) and \(\chi_{\mathbf{b}}\). Assume that \(\chi_{\mathbf{b}^{k}} = \theta\) for every \(k \in \mathcal{I}_{2}(q,s)\) and \(\chi_{\mathbf{a}^{k_{0}}} \neq \theta\) for some \(k_{0} \in \mathcal{I}_{1}(q,s)\).
Since \(K\) is algebraically slice, recall from~\eqref{eq:1234} that \[\#\mathcal{I}_{1}(q,s) - \#\mathcal{I}_{2}(q,s) + \#\mathcal{I}_{3}(q,s) - \#\mathcal{I}_{4}(q,s) = 0.\] We thus define \(N := \#\mathcal{I}_{1}(q,s) = \#\mathcal{I}_{2}(q,s) - \#\mathcal{I}_{3}(q,s)+\#\mathcal{I}_{4}(q,s)\), leading to the Witt equivalence \begin{equation} \label{eq:NotMetabolic} B_{3}^{\chi}(q,s) \oplus B_{4}(q,s) \sim \bigoplus_{k \in \mathcal{I}_{1}(q,s)} \Lambda(p,q,\chi_{\mathbf{a}^{k}},s) \oplus - \bigoplus_{i=1}^{p\cdot N} \operatorname{Bl}(T(p,q))(t^{p^{s-1}}). \end{equation} We assert that the orders of the modules underlying the summands of the right hand side of~\eqref{eq:NotMetabolic} have distinct roots. First, note that~$r$ is coprime to~$q$: as~$k \in \mathcal{I}_1(q,s)$, we know that~$q \in Q_i$ for some~$i \leq 2m_1$, and since~$Q_i=(q_{i,1},q_{i,2},\ldots,q_{i,{\ell_i-1}},r)$ for~$i \leq 2m_1$, this follows from the assumption of Theorem~\ref{thm:LinIndep}. It is known that~$\Delta_{T(p,q)}(\xi^{a_1}_rt)$ and~$\Delta_{T(p,q)}(\xi^{a_2}_rt)$ have distinct roots whenever~$a_1 \neq a_2$ and $r$ and $q$ are coprime~\cite[Theorem~7.1]{HeddenKirkLivingston}. This establishes the assertion. Thanks to the assertion, we may apply Proposition~\ref{prop:Splitting}. Indeed, the fact that \(\chi_{\mathbf{a}^{k_{0}}} \neq \theta\) and Proposition~4.3 now guarantee that the linking form on the right-hand side of~\eqref{eq:NotMetabolic} is not metabolic. This concludes the proof of Proposition~\ref{prop:NotMetabolic}. \end{proof} Before constructing the required characters, we introduce some terminology. We say that the knot~\(K\) is \emph{simplified} if, for all~$q,s$, there are no indices~\(k_{1} \in \mathcal{I}_1(q,s) \) and~\(k_{2} \in \mathcal{I}_2(q,s) \) such that \(Q_{2k_{1}-1} = Q_{2k_{2}}\). If~\(K\) is not simplified, then it contains a slice connected summand \(T(p,Q_{2k_{1}-1}) \# -T(p,Q_{2k_{1}-1})\). \begin{lemma} \label{lemma:constructing-characters} Let \(p\) be a prime power.
If the knot \(K\) is simplified, then for any \(\mathds{Z}_{p}\)-invariant metaboliser \(L \subset H_1(\Sigma_p(T(p,r)))^{m_{1}} \oplus H_1(\Sigma_p(T(p,r)))^{m_{1}} \) there exist \(q,s\) and a character \(\chi_{\mathbf{a}} \oplus \chi_{\mathbf{b}}=\bigoplus_{i=1}^{m_1} \chi_{\mathbf{a}^i} \oplus \chi_{\mathbf{b}^i} \) vanishing on \(L\) such that one of the following conditions is satisfied: \begin{enumerate} \item \(\chi_{\mathbf{b}^{k}} = \theta\) for every \(k \in \mathcal{I}_{2}(q,s)\) and \(\chi_{\mathbf{a}^{k_{0}}} \neq \theta\) for some \(k_{0} \in \mathcal{I}_{1}(q,s)\), or, \item \(\chi_{\mathbf{a}^{k}} = \theta\) for every \(k \in \mathcal{I}_{1}(q,s)\) and \(\chi_{\mathbf{b}^{k_{0}}} \neq \theta\) for some \(k_{0} \in \mathcal{I}_{2}(q,s)\). \end{enumerate} \end{lemma} \begin{proof} Fix a metaboliser \(L \subset H_1(\Sigma_p(T(p,r)))^{m_{1}} \oplus H_1(\Sigma_p(T(p,r)))^{m_{1}} \) of~$\lambda_p(T(p,r))^{m_1} \oplus -\lambda_p(T(p,r))^{m_1}$. For~$i=1,2$, consider the projection \(\operatorname{pr}_{i} \colon H_1(\Sigma_p(T(p,r)))^{m_{1}} \oplus H_1(\Sigma_p(T(p,r)))^{m_{1}} \to H_1(\Sigma_p(T(p,r)))^{m_{1}} \) onto the \(i\)-th factor. The proof is divided into three separate cases. \textbf{Case 1:} \textit{\(\operatorname{pr}_{1}(L)\) is a proper subspace of \( H_1(\Sigma_p(T(p,r)))^{m_{1}} \).} In this case, we can define the characters~\(\chi_{\mathbf{a}}\) and \(\chi_{\mathbf{b}}\) as follows: \(\chi_{\mathbf{b}} = \theta\) and \[\chi_{\mathbf{a}} \colon H_1(\Sigma_p(T(p,r)))^{m_{1}} \to H_1(\Sigma_p(T(p,r)))^{m_{1}}/ \operatorname{pr}_{1}(L) \xrightarrow{\text{nontrivial character}} \mathds{Z}_{r}.\] It is not difficult to see that \(\chi_{\mathbf{a}}\) and \(\chi_{\mathbf{b}}\) satisfy~(1) and are such that~$\chi_{\mathbf{a}} \oplus \chi_{\mathbf{b}}$ vanishes on~$L$.
\textbf{Case 2:} \textit{\( \operatorname{pr}_{2}(L)\) is a proper subspace of \( H_1(\Sigma_p(T(p,r)))^{m_{1}}\).} In this case, we exchange the roles of~\(\chi_{\mathbf{a}}\) and~\(\chi_{\mathbf{b}}\) and repeat the argument from the first case. This way, we obtain characters \(\chi_{\mathbf{a}}\) and~\(\chi_{\mathbf{b}}\) that satisfy~(2) and are such that~$\chi_{\mathbf{a}} \oplus \chi_{\mathbf{b}}$ vanishes on~$L$. \textbf{Case 3:} \textit{\( \operatorname{pr}_{1}(L) =H_1(\Sigma_p(T(p,r)))^{m_{1}} \) and \( \operatorname{pr}_{2}(L) = H_1(\Sigma_p(T(p,r)))^{m_{1}}\).} We wish to apply Proposition~\ref{prop:DirectSummetaboliser} in order to prove that~$L$ is a graph. We verify the hypotheses of this proposition. Using the assumption of Case 3 and the definition of the projections, we have \[0 = \ker(\operatorname{pr}_{1}|_{L}) = L \cap (0 \oplus H_1(\Sigma_p(T(p,r)))^{m_{1}}), \quad 0 = \ker(\operatorname{pr}_{2}|_{L}) = L \cap (H_1(\Sigma_p(T(p,r)))^{m_{1}} \oplus 0).\] Consequently, by Proposition~\ref{prop:DirectSummetaboliser}, \(L\) is the graph of an anti-isometry \[g \colon (H_1(\Sigma_p(T(p,r)))^{m_{1}},\lambda_p(T(p,r))^{m_{1}}) \to (H_1(\Sigma_p(T(p,r)))^{m_{1}},\lambda_p(T(p,r))^{m_{1}}).\] For each \(q,s\) and~$j=1,2$, consider the following subsets of~$H_1(\Sigma_p(T(p,r)))^{m_{1}}$: \begin{align*} S_{\mathcal{I}_{j}(q,s)} &= \{(v_{1},v_{2},\ldots,v_{m_{1}}) \in H_1(\Sigma_p(T(p,r)))^{m_{1}} \colon v_{i} = 0 \text{ for } i \not\in \mathcal{I}_{j}(q,s)\} \\ &= \bigoplus_{k \in \mathcal{I}_{j}(q,s)} H_1(\Sigma_p(T(p,Q_k))), \end{align*} where~$\mathcal{I}_j(q,s)$ is defined in \eqref{eq:I_j(q,s)}. Next, we use these sets and the anti-isometry~$g$ to describe a sufficient criterion to obtain the characters \(\chi_{\mathbf{a}},\chi_{\mathbf{b}}\) required by the statement of Lemma~\ref{lemma:constructing-characters}.
\begin{claim}\label{claim:claim} If there exist \(q,s\) such that \(g(S_{\mathcal{I}_{1}(q,s)}) \neq S_{\mathcal{I}_{2}(q,s)}\), then there are characters \(\chi_{\mathbf{a}},\chi_{\mathbf{b}}\) satisfying either~(1) or~(2) and such that~$\chi_{\mathbf{a}} \oplus \chi_{\mathbf{b}}$ vanishes on~$L$. \end{claim} \begin{proof} If \(g(S_{\mathcal{I}_{1}(q,s)}) \setminus S_{\mathcal{I}_{2}(q,s)} \neq \emptyset\), then choose \(v \in S_{\mathcal{I}_1(q,s)}\) such that \(g(v) \not \in S_{\mathcal{I}_2(q,s)}\). Since~$r$ is a prime,~$H_1(\Sigma_p(T(p,r)))^{m_{1}}$ is an~$\mathbb{F}_r$-vector space and so we obtain a direct sum decomposition~\( H_1(\Sigma_p(T(p,r)))^{m_{1}}= \langle v \rangle \oplus W\) for some~$\mathbb{F}_r$-vector space~$W$. We can then define the characters as \[\chi_{\mathbf{a}}(v) = 1, \quad \chi_{\mathbf{a}}|_{W} = \theta, \quad \chi_{\mathbf{b}}(x) =- \chi_{\mathbf{a}}(g^{-1}(x)).\] Such choices of \(\chi_{\mathbf{a}}\) and \(\chi_{\mathbf{b}}\) satisfy condition~(1). We verify that~$\chi_{\mathbf{a}} \oplus \chi_{\mathbf{b}}$ vanishes on~$L$; recall that~$L$ is the graph of~$g$. For an element~$(h,g(h)) \in L$ of this graph, one \linebreak has~$(\chi_{\mathbf{a}} \oplus \chi_{\mathbf{b}})(h,g(h))=\chi_{\mathbf{a}}(h)-\chi_{\mathbf{a}}(g^{-1}(g(h)))=0$. This concludes the proof in this case. If, on the other hand, \(S_{\mathcal{I}_2(q,s)} \setminus g(S_{\mathcal{I}_1(q,s)}) \neq \emptyset\), then the argument is nearly identical. Choose \(v \in S_{\mathcal{I}_2(q,s)} \setminus g(S_{\mathcal{I}_1(q,s)})\), write once more \( H_1(\Sigma_p(T(p,r)))^{m_{1}}= \langle v \rangle \oplus W\), and define the required characters as \[\chi_{\mathbf{b}}(v) = 1, \quad \chi_{\mathbf{b}}|_{W} = \theta, \quad \chi_{\mathbf{a}}(x) =- \chi_{\mathbf{b}}(g(x)).\] These choices of \(\chi_{\mathbf{a}}\) and \(\chi_{\mathbf{b}}\) satisfy condition~(2) and \(\chi_{\mathbf{a}} \oplus \chi_{\mathbf{b}}\) vanishes on $L$.
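The vanishing computation above is elementary linear algebra over \(\mathbb{F}_r\) and can be tested mechanically. In the sketch below, the matrix standing in for \(g\) and the character \(\chi_{\mathbf{a}}\) are arbitrary illustrative choices (in particular, no attempt is made to model the anti-isometry condition); the point is only that \(\chi_{\mathbf{b}} := -\chi_{\mathbf{a}} \circ g^{-1}\) forces \(\chi_{\mathbf{a}} \oplus \chi_{\mathbf{b}}\) to vanish on the graph of \(g\).

```python
import itertools

r = 3                       # the prime r
g = [[1, 1], [0, 1]]        # invertible 2x2 matrix over F_3, standing in for g
g_inv = [[1, 2], [0, 1]]    # its inverse modulo 3

def apply(M, v):
    # Matrix-vector multiplication over F_r.
    return tuple(sum(M[i][j] * v[j] for j in range(2)) % r for i in range(2))

def chi_a(v):
    # An arbitrary nontrivial character F_3^2 -> Z_3.
    return (2 * v[0] + v[1]) % r

def chi_b(v):
    # chi_b := -chi_a o g^{-1}, as in the proof of the claim.
    return (-chi_a(apply(g_inv, v))) % r

# chi_a (+) chi_b vanishes on the graph L = {(h, g(h))}.
for h in itertools.product(range(r), repeat=2):
    assert (chi_a(h) + chi_b(apply(g, h))) % r == 0
```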
This concludes the proof of the claim. \end{proof} By Claim~\ref{claim:claim}, to prove Lemma~\ref{lemma:constructing-characters}, it is enough to show that there always exist \(q,s\) such that~\(g(S_{\mathcal{I}_{1}(q,s)}) \neq S_{\mathcal{I}_{2}(q,s)}\). Assume by way of contradiction that we have~\(g(S_{\mathcal{I}_1(q,s)}) = S_{\mathcal{I}_2(q,s)}\) for all \(q,s\). We will show in Claim~\ref{claim:NonEmpty} below that this assumption implies that~$K$ is not simplified. This is a contradiction since we assumed that~$K$ is simplified. This proves Lemma~\ref{lemma:constructing-characters} modulo Claim~\ref{claim:NonEmpty}. \end{proof} \begin{claim} \label{claim:NonEmpty} If \(g(S_{\mathcal{I}_{1}(q,s)})= S_{\mathcal{I}_{2}(q,s)}\) for all \(q,s\), then~$K$ is not simplified. \end{claim} \begin{proof} We will observe that under the assumption of the claim, $K$ contains a summand of the form~\(T(p,Q_{2k_{0}-1}) \# -T(p,Q_{2k_{0}-1})\) for some integer~$k_0$. To be precise, choose \(1 \leq k_{0} \leq m_{1}\) such that the length~$\ell_{2k_0-1}$ of the sequence~$Q_{2k_0-1}$ is maximal among all the~$\ell_{2k-1}$ for~$k=1,\ldots,m_1$, and define\footnote{Note that without the maximality assumption on~$\ell_{2k_0-1}$, we would have had to replace the condition~$Q_{2k_0-1} = Q_{2k-1}$ by~$Q_{2k_0-1} \subset Q_{2k-1}$.} \begin{align*} X(k_{0}) &=\lbrace 1 \leq k \leq m_1 \ | \ Q_{2k_0-1} = Q_{2k-1} \rbrace = \bigcap_{s=1}^{\ell_{2k_{0}-1}-1} \mathcal{I}_{1} (q_{2k_{0}-1,\ell_{2k_{0}-1}-s},s), \\ Y(k_{0}) &=\lbrace 1 \leq k \leq m_1 \ | \ Q_{2k_0-1} = Q_{2k} \rbrace =\bigcap_{s=1}^{\ell_{2k_{0}-1}-1} \mathcal{I}_{2} (q_{2k_{0}-1,\ell_{2k_{0}-1}-s},s).
\end{align*} We will need the following properties of these sets: \begin{enumerate}[label=(\alph*)] \item\label{item:keyproperty-1} since \(k_{0} \in X(k_{0})\), \(X(k_{0})\) is nonempty; \item\label{item:keyproperty-2} if \(j \in X(k_{0})\), then~\(T(p,Q_{2j-1}) = T(p,Q_{2k_{0}-1})\); \item\label{item:keyproperty-3} if~\(j \in Y(k_{0})\), then~\(T(p,Q_{2j}) = T(p,Q_{2k_{0}-1})\). \end{enumerate} It is enough to show that \(Y(k_{0}) \neq \emptyset\). By~\ref{item:keyproperty-1}--\ref{item:keyproperty-3}, this would imply that \(K\) is not simplified since~$K$ contains a summand of the form \(T(p,Q_{2k_{0}-1}) \# -T(p,Q_{2k_{0}-1})\). To show that \(Y(k_{0}) \neq \emptyset\), consider the following subspaces of \(H_1(\Sigma_p(T(p,r)))^{m_{1}}\): \begin{align*} S_{X(k_{0})} &:= \{(v_1,v_2,\ldots,v_{m_1}) \in H_1(\Sigma_p(T(p,r)))^{m_{1}} \colon v_i = 0 \text{ for } i \not\in X(k_0)\} \\ &= \bigoplus_{k \in X(k_0)} H_1(\Sigma_p(T(p,Q_{2k-1}))), \\ S_{Y(k_{0})} &:= \{(v_1,v_2,\ldots,v_{m_1}) \in H_1(\Sigma_p(T(p,r)))^{m_{1}} \colon v_i = 0 \text{ for } i \not\in Y(k_0)\} \\ &= \bigoplus_{k \in Y(k_0)} H_1(\Sigma_p(T(p,Q_{2k}))). \end{align*} The advantage of writing~$X(k_0)$ and~$Y(k_0)$ as intersections of the~$\mathcal{I}_j(q_{2k_0-1,\ell_{2k_0-1}-s},s)$ is that the action of \(g\) on \(S_{X(k_{0})}\) can be described as \[ g(S_{X(k_{0})}) = \bigcap_{s \geq 1} g(S_{\mathcal{I}_{1}(q_{2k_{0}-1,\ell_{2k_{0}-1}-s},s)}) = \bigcap_{s \geq 1} S_{\mathcal{I}_{2}(q_{2k_{0}-1,\ell_{2k_{0}-1}-s},s)} = S_{Y(k_{0})},\] where the second equality follows from the assumption. As \(g\) is an~$\mathbb{F}_r$-linear automorphism,~$\dim S_{X(k_{0})} = \dim S_{Y(k_{0})}$. Since the $\mathbb{F}_r$-dimension of~$H_1(\Sigma_p(T(p,r)))$ is~$p-1$, we deduce that \[(p-1) \# X(k_{0}) = \dim S_{X(k_{0})} = \dim S_{Y(k_{0})} = (p-1) \# Y(k_{0}).\] It follows that \(\# X(k_{0}) = \# Y(k_{0})\). Since \(X(k_{0}) \neq \emptyset\) by~\ref{item:keyproperty-1}, it follows that \(Y(k_{0}) \neq \emptyset\).
As we mentioned, this implies that~$K$ is not simplified by~\ref{item:keyproperty-1}--\ref{item:keyproperty-3}, and Claim~\ref{claim:NonEmpty} is proved. \end{proof} This concludes the third part of the proof. \subsubsection{Conclusion of the proof} \label{subsub:Conclusion} We can now prove Theorem~\ref{thm:LinIndep}. \begin{proof}[Proof of Theorem~\ref{thm:LinIndep}] Let~$K$ be a (non-trivial) linear combination of iterated torus knots of the form~$T(p,Q_i)$ for~$i=1,\ldots, k$. Here, the~\(Q_{i} = (q_{i,1},q_{i,2},\ldots,q_{i,\ell_i})\) are sequences of~\(\ell_{i}\) positive integers where $q_{i,\ell_i}$ is prime for all~$i$, and the integer~$q_{i,j}$ is coprime to~$p$ and to~$q_{i,\ell_i}$ for all~$j$. To obtain a contradiction, assume that~$K$ is slice. In particular,~$K$ is algebraically slice and, as we saw in~\eqref{eq:AlgebraicallySliceForm}, we can therefore assume without loss of generality that it is of the form \begin{equation} \label{eq:LinearCombination} K = \bigsharp_{i=1}^{m_{1}}\left( T(p,Q_{2i-1}) \# -T(p,Q_{2i}) \right) \# \bigsharp_{j=2}^{k/2} \bigsharp_{i=1}^{m_{j}} \left( T(p,Q_{2M_{j}+2i-1}) \# -T(p,Q_{2M_{j}+2i}) \right). \end{equation} Here we arranged that~$q_{i,\ell_i}= r$ if and only if~$1 \leq i \leq 2m_{1}$. Furthermore, we can assume that~$K$ is simplified by canceling terms of the form~$J \# -J$ if any such term appears in~\eqref{eq:LinearCombination}. We can also assume that there is an index~$i$ such that~$\ell_i>1$: otherwise~$K$ would be a linear combination of torus knots, which is impossible since the latter are linearly independent in~$\mathcal{C}^{\text{top}}$~\cite{Litherland-signature}.
To prove that~$K$ is not slice, we saw that it is enough to show that for every~$\mathds{Z}_p$-invariant metaboliser~$L$ of~$\lambda_{p}(T(p,r))^{m_1} \oplus -\lambda_{p}(T(p,r))^{m_1}$, there is a character~$\chi_{\mathbf{a}} \oplus \chi_{\mathbf{b}}=\bigoplus_{k=1}^{m_1} \left( \chi_{\mathbf{a}^k} \oplus \chi_{\mathbf{b}^k} \right)$ that vanishes on~$L$ such that~$\operatorname{Bl}_{\alpha(p,\chi)}(K)$ is not metabolic, where~$\chi=\chi_{\mathbf{a}} \oplus \chi_{\mathbf{b}} \oplus \bigoplus_{j=2}^{k/2} \bigoplus_{i=1}^{m_j} \theta \oplus \theta$; recall Remark~\ref{rem:PrimaryDecompositionMetaboliser}. We then applied satellite formulas to show that~$\operatorname{Bl}_{\alpha(p,\chi)}(K)$ decomposes (up to Witt equivalence) as \begin{align*} \operatorname{Bl}_{\alpha(p,\chi)}(K) &\sim B_1^\chi \oplus B_2 \oplus B_3^\chi \oplus B_4 \\ &=B_1^\chi \oplus B_2 \oplus \bigoplus_{q,s} B_3^\chi(q,s) \oplus \bigoplus_{q,s} B_4(q,s). \end{align*} Claim~\ref{claim:T1T3PlusT4Metabolic} shows that if~$\operatorname{Bl}_{\alpha(p,\chi)}(K)$ is metabolic, then~$B_1^\chi$ and~$B_3^\chi \oplus B_4$ are metabolic. By Claims~\ref{claim:Ti(s)Metabolic} and~\ref{claim:Ti(q,s)Metabolic}, it follows that~$B_3^\chi(q,s) \oplus B_4(q,s)$ must be metabolic \textit{for all}~$q,s$ and all characters~$\chi_{\mathbf{a}} \oplus \chi_{\mathbf{b}}$.
On the other hand, as the knot~$K$ is simplified, Lemma~\ref{lemma:constructing-characters} implies that for any \(\mathds{Z}_{p}\)-invariant metabolizer \(L \subset H_1(\Sigma_p(T(p,r)))^{m_{1}} \oplus H_1(\Sigma_p(T(p,r)))^{m_{1}} \) there exist \(q,s\) and a character \( \chi_{\mathbf{a}} \oplus \chi_{\mathbf{b}}\) vanishing on \(L\) such that one of the following conditions is satisfied: \begin{enumerate} \item \(\chi_{\mathbf{b}^{k}} = \theta\) for every \(k \in \mathcal{I}_{2}(q,s)\) and \(\chi_{\mathbf{a}^{k_{0}}} \neq \theta\) for some \(k_{0} \in \mathcal{I}_{1}(q,s)\), or, \item \(\chi_{\mathbf{a}^{k}} = \theta\) for every \(k \in \mathcal{I}_{1}(q,s)\) and \(\chi_{\mathbf{b}^{k_{0}}} \neq \theta\) for some \(k_{0} \in \mathcal{I}_{2}(q,s)\). \end{enumerate} Applying Proposition~\ref{prop:NotMetabolic}, we deduce that for such characters and such integers~$q,s$, the linking form~$B_3^\chi(q,s) \oplus B_4(q,s)$ is not metabolic. This is the desired contradiction, and Theorem~\ref{thm:LinIndep} is proved. \end{proof}
\section{Introduction} The analogy between the QCD vacuum and the superconductor is an interesting current topic for the study of the confinement mechanism \cite{YN}. The key point is the use of the maximally abelian (MA) gauge, because QCD reduces to an abelian gauge theory including QCD-monopoles \cite{GtH81}. Remarkably, in the MA gauge, the diagonal part of the gluon field plays a dominant role for nonperturbative quantities like confinement and chiral symmetry breaking \cite{SST}. On the other hand, the off-diagonal part of the gluon field behaves as a charged matter field and does not contribute to the long-range phenomena. This is called the abelian dominance \cite{EI}. The abelian dominance is confirmed by recent lattice QCD simulations \cite{SH91}. However, the origin of the abelian dominance in the MA gauge is not understood yet. As a possible physical interpretation of the abelian dominance, an effective mass of the charged gluon may be induced in the MA gauge, so that the charged gluon propagation is limited to the short-range region, since a massive particle propagates only over distances of order the inverse of its mass. Here, we study the gluon propagator \cite{ANaka} in the MA gauge in terms of the interaction range and strength using lattice QCD Monte-Carlo simulations. \section{Maximally Abelian (MA) Gauge} In the lattice QCD, the MA gauge is defined by maximizing \\ $R_{\mathrm{MA}} \equiv \sum_{s,\mu} \mathrm{Tr}\left[ U_{\mu}(s) \tau^3 U_{\mu}^{\dagger}(s) \tau^3 \right]$ using the SU(2) gauge transformation. In the MA gauge, the SU(2) link variable $U_{\mu}(s)$ becomes U(1)-like due to the suppression of the off-diagonal component. As for the residual U(1) gauge symmetry, we impose the U(1) Landau gauge fixing to extract the most continuous gauge configuration and to compare with the continuum theory. \begin{figure} \epsfbox{AA-r.EPSF} \caption{The gluon propagator in the MA gauge.
In the MA gauge, only the diagonal gluon is dominant in the long-range region, $r \gtrsim 0.4$ fm.} \end{figure} \section{Lattice QCD Results for Gluon Propagator} We calculate the gluon propagator in the MA gauge by the lattice QCD Monte Carlo simulation, particularly considering the scalar combination $G_{\mu\mu}^a(r)\equiv \sum^4_{\mu=1}\langle A_{\mu}^{~a}(r)A_{\mu}^{~a}(0)\rangle~(a=1,2,3)$. Here, the scalar combination $G_{\mu\mu}^a(r)$ is useful to observe the interaction range of the gluon, because it depends only on the four-dimensional Euclidean radial coordinate $r \equiv (x_\mu x_\mu)^{\frac{1}{2}}$. As shown in Fig.~1, in the MA gauge, the off-diagonal (charged) gluon propagates only within the short-range region $r \lesssim 0.4$ fm: the charged gluon behaves as a massive particle and does not contribute to the long-range physics in the MA gauge. On the other hand, the diagonal gluon propagates over long distances and influences the long-range physics. Thus, we find abelian dominance for the gluon propagator: only the diagonal gluon field is relevant for the long-range physics in the MA gauge. This is the origin of the abelian dominance for the long-range physics. The off-diagonal (charged) gluon propagator decreases more strongly than the massless gauge-boson propagator $G_{\mu\mu}(r)=\frac{3}{4\pi^2}\frac{1}{r^2}$. Therefore, the charged gluon is expected to acquire an effective mass in the MA gauge. We estimate the effective mass of the charged gluon from the scalar combination $G_{\mu\mu}(r)$ of the gluon propagator. Since the propagator of a massive gauge boson with mass $M$ behaves as the Yukawa-type function $G_{\mu\mu}(r)=\frac{3}{4\pi^2}\frac{1}{r^2}\exp(-M~r)$, we can estimate the effective gluon mass $M_{\mathrm{eff}}$ from the slope of the logarithmic plot of $r^2G_{\mu\mu}(r)\sim \exp(-M_{\mathrm{eff}}~r)$.
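As an illustrative numerical sketch (not the lattice analysis itself), the slope extraction just described can be carried out as follows; the data here are a synthetic stand-in generated from an exact Yukawa form, and the unit conversion assumes $\hbar c \simeq 0.1973$ GeV$\,$fm:

```python
import numpy as np

# Hedged sketch: extract an effective mass from the slope of
# log(r^2 G(r)), assuming a Yukawa-type propagator
#   G(r) = 3/(4 pi^2) * exp(-M r) / r^2.
# The "data" below are synthetic stand-ins, not lattice results.

M_true = 4.5                                   # fm^-1 (value quoted in the text)
r = np.linspace(0.35, 1.0, 14)                 # fit window in fm
G = 3.0 / (4.0 * np.pi**2) * np.exp(-M_true * r) / r**2

# log(r^2 G(r)) = const - M r, so a linear fit gives -M as the slope
slope, intercept = np.polyfit(r, np.log(r**2 * G), 1)
M_eff = -slope                                 # effective mass in fm^-1

HBARC = 0.1973                                 # GeV fm
M_eff_GeV = M_eff * HBARC                      # ~0.89 GeV for M_eff = 4.5 fm^-1
```

With real lattice data the fit would be restricted to the linear regime of the logarithmic plot and the points weighted by their statistical errors.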
As shown in Fig.~2, the charged gluon correlation $r^2G_{\mu\mu}(r)$ decreases linearly in the long-range region $r \gtrsim 0.4$ fm. We obtain the effective mass of the charged gluon from this slope in the intermediate region $r=0.35 \sim 1.0$ fm as $M_{\mathrm{eff}} \approx 4.5~\mathrm{fm}^{-1} = 0.9~\mathrm{GeV}$. To summarize, an effective mass of the off-diagonal (charged) gluon, $M_{\mathrm{eff}} \simeq 1$ GeV, is induced in the infrared region in the MA gauge. Accordingly, the off-diagonal gluon can be neglected and does not contribute to the long-range physics at $r \gtrsim 0.4$ fm, although its effect appears at short distances, $r \lesssim 0.4$ fm. Thus, only the diagonal gluon propagates over long distances, which is the origin of the abelian dominance for nonperturbative QCD. \begin{figure} \begin{minipage}[t]{4.7in} ~~~~~ \epsfbox{r2gpex.EPSF} \caption{The logarithmic plot of $r^2G_{\mu\mu}(r)$ as a function of the distance $r$ in the MA gauge. The charged gluon propagator behaves as the Yukawa-type function $G_{\mu\mu}(r)\sim \frac{\exp(-Mr)}{r^2}$. The effective mass of the charged gluon can be estimated from the slope of the dotted line. } \end{minipage} \end{figure} \section*{References}
\section{Introduction}\label{sec:intro} The development of optimal, objective, and reproducible methods for the analysis of high-resolution absorption systems is essential for analysing the high quality data now being obtained on new astronomical facilities. Forthcoming facilities like the ELT clearly demand sophisticated analytic tools. Searches for new physics, revealed by temporal or spatial variations in fundamental constants, constitute one of the three main science drivers for the ESPRESSO spectrograph on the European Southern Observatory's VLT \citep{Pepe2014}, and one of the key goals for the forthcoming ELT \citep{Hook2009,ESO_ELTbook2010,ESO_ELTbook2011,ELT_Liske2014,Marconi2016}. \cite{Bainbridge2017,gvpfit17} first developed AI methods that were capable of modelling arbitrarily complex absorption systems without any human interaction. At the heart of the process is VPFIT \citep{VPFIT,web:VPFIT}, a non-linear least squares system with tied parameter constraints. The additional genetic-algorithm code acts as a wrapper, guiding the descent direction to an optimal solution on the basis of a test statistic (the corrected Akaike Information Criterion). Hybrid algorithms of this sort have been coined ``memetic'' \citep{Moscato1989}. In the present paper, we develop these ideas further, the key aims being to find faster algorithms and improve the overall performance, particularly for complex absorption systems/datasets that may be challenging to model, comprising a wide range of line strengths, from heavily saturated to barely detected, allowing for any blended absorption lines from other systems. Performance checking focuses on estimating the fine structure constant in quasar absorption systems, because this is a particularly challenging problem, more so than simple 3-parameter absorption line fitting.
Whilst the newly created methods introduced in this paper have been developed in the context of quasar spectroscopy, they are likely to be useful in any absorption line spectroscopy application, including stellar spectroscopy and terrestrial experiments. In Section \ref{sec:AI} we explain the various stages of the AI algorithm, which we call ``AI-VPFIT''. We partition the process into 6 stages, describing the design of each in detail. Section \ref{sec:synthetic} evaluates the performance of AI-VPFIT using synthetic spectra. Section \ref{sec:Temp-importance} shows that it is important to properly model the absorption line width and that not doing so has a significant impact on parameter estimation. In Section \ref{sec:nonuniqueness}, we discuss how an objective AI method removes biases inherent to previous interactive analyses and show that model non-uniqueness limits the accuracy achievable from a single measurement of variation in the fine structure constant $\Delta\alpha/\alpha$. Extensive use is made of the corrected Akaike Information Criterion for comparing the relative improvement in model development at each stage. The pros and cons of this are discussed in Section \ref{sec:Overfitting}. Finally, in Section \ref{sec:conclusions} we summarise AI-VPFIT's key characteristics and performance, the main findings from extensive testing using synthetic data, and the implications and requirements of future measurements to constrain $\Delta\alpha/\alpha$ using quasar spectroscopy. \section{The AI Method}\label{sec:AI} The new algorithms described in this paper follow the broad principles outlined in \cite{Bainbridge2017,gvpfit17} although there are notable differences and many new algorithms.
These include: (i) as described in the following text, trial line positions are random; (ii) the ordering of the various tasks is different here compared to \cite{Bainbridge2017,gvpfit17}; (iii) Bayesian model averaging is not used here; (iv) GVPFIT was set up to use either fully turbulent or fully thermal broadening, which turns out to be a significant issue (discussed in detail later in this paper); the new method presented here does not make these assumptions; (v) two new stages are included to refine the model and address the overfitting problem. We next describe each stage of the modelling process. \subsection{Stage 1 -- Creating a preliminary model using a primary species or primary transitions}\label{subsec:stage1} In general, the combined dataset will comprise multiple spectral segments, to be fitted simultaneously. In Stage 1, partially for computing speed considerations, we first build a preliminary model prior to modelling the entire dataset. The choice of the {\it primary reference transition} (or {\it transitions}) or a {\it primary reference species} is important. A simple example might be fitting an absorption system comprised of multiple FeII transitions and the MgII 2796/2803 doublet. Let us assume, in this example, that all transitions fall longwards of the Lyman-$\alpha$ forest. If this is a damped Lyman-$\alpha$ system in which both MgII lines are mostly saturated, the sensible primary species choice is FeII. On the other hand, for a non-DLA system, with generally lower column densities, the MgII lines may be strong but unsaturated and the FeII features may be sufficiently weak that some velocity components seen in MgII are below the detection threshold in FeII. In this case, the MgII absorption complex provides the better constraint on velocity structure and may be more suited as the primary species. To give further examples, the requirement may be to use a single species, e.g.
FeII to solve for the fine structure constant $\alpha$, or many H$_2$ lines to solve for the electron to proton mass ratio $\mu$. In these cases, the {\it primary transition} (or transitions) are selected to be the data segment (or segments) that are likely to provide the most reliable initial model. To clarify, consider the following example. Suppose we have two FeII transitions, 1608 and 2383{\AA}, the former falling in the Lyman-$\alpha$ forest (and hence being blended with HI lines at various redshifts), the latter falling longwards of the Lyman-$\alpha$ emission line. In this situation, the FeII 2383{\AA} line should be treated as the {\it primary} transition and the FeII 1608{\AA} line as the secondary transition. The fine structure constant $\alpha$ (and/or if fitting molecular lines, the electron to proton mass ratio $\mu$) may be included as a free parameter in this Stage. However, this can only be done if the primary species or primary transitions are comprised of 2 or more transitions with sufficiently different sensitivities to $\alpha$ variation so as to avoid degeneracy with redshift $z$. It is important to include these parameters from the outset, rather than first deriving a best-fit model for the entire dataset and {\it only then} adding those free parameters, since this would inevitably bias $\alpha$ (or $\mu$) towards the terrestrial value. Having decided on the species to be used as the {\it primary species}, Monte Carlo methods are used to construct the model, simultaneously for all primary species' transitions. Initially, a single model absorption line is placed at random within the complex. Its absorption line parameters ($N$ and $b$) are default initial values, always the same, user-defined and provided by an input parameter file. The algorithm above thus provides an initial first-guess model for the primary species. Non-linear least-squares, VPFIT, \citep{VPFIT,web:VPFIT}, is used to refine the initial parameters.
At this point (initial iteration), we are likely to have a very poor fit to the data, using only a single profile to model an arbitrarily complex dataset. The goodness-of-fit is quantified using the corrected Akaike Information Criterion (AICc), \begin{eqnarray} \label{eq:AICc} AICc = \chi_p^2 + \frac{2 n k}{n-k-1} \,, \end{eqnarray} where $n$ and $k$ are the numbers of data points and model parameters, respectively, and \begin{eqnarray} \chi^2_p = \sum_{i=1}^n \frac{(F_{i,data} - F_{i,model})^2}{ \sigma^2_i} \end{eqnarray} where $F_{i,data}$, $F_{i,model}$, and $\sigma_i$ are the observed spectral flux, the model fit, and the estimated error on $F_{i,data}$ for the $i^{\rm th}$ pixel, and the subscript $p$ indicates a summation for the primary data only. As is well known, the second term in Eq. \eqref{eq:AICc} penalises $\chi^2$, to prevent an arbitrarily large number of free parameters being introduced. Other statistics such as the Bayesian Information Criterion (BIC) perform a similar function but impose a stronger penalty. We defer a comparison between the various options to a forthcoming paper. At the end of this process (Generation 1), we have the best-fit single-profile model and can thus begin to refine the model, i.e. increase its complexity until we have a statistically acceptable representation of the data. This best-fit 3-parameter model thus now becomes the parent for Generation 2, in which a second trial line is placed randomly in redshift $z$ within the fitting range\footnote{In practice, the range is defined by the lowest and highest redshift available over the multiple data segments used.}, assigning the trial line the same initial default parameters as before. Non-linear least-squares refinement is again carried out and AICc computed. If AICc has decreased compared to Generation 1, the current model is accepted as the Generation 2 best-fit, becoming the parent for the next generation.
If AICc has {\it increased} compared to Generation 1, the model is rejected and the process is repeated (i.e. a new trial line is placed, again randomly in redshift) until AICc decreases. We call each iteration of this procedure one generation. Iterating the above gradually increases the model complexity and improves the fit. The loop is terminated only after no reduction in AICc can be found after a user-defined number of trials (parameter $N_{line}$, set in the same initialisation file). \subsection{Stage 2 -- Include secondary species}\label{subsec:stage2} Stage 1 results in a good fit to the primary species alone (MgII in our example). Stage 2 comprises two steps. Firstly, the overall dataset being fitted is increased by adding in all secondary species (FeII in our example). The absorption system structure derived in Stage 1, i.e. the best-fit set of redshifts, is replicated for each secondary species to be fitted. The initial model for the secondary species is relatively crude in that each redshift component for each of the secondary species is assigned default trial values of the column density $N$. The velocity dispersion parameters $b$ can be related either by the limiting cases of entirely {\it ``thermal''} broadening ($b_s = b_p \sqrt{m_p/m_s}$, where the subscripts $p$ and $s$ refer to primary and secondary species), entirely {\it ``turbulent''} broadening ($b_s = b_p$), or in-between (i.e. the gas temperature $T$ is included as a free parameter). This is discussed in detail in Section~\ref{sec:temperature-fit}. Then, non-linear least-squares refines the parameters for all secondary species, after which AICc is again computed and stored. If solving for either $\alpha$ or $\mu$, and if this free parameter was not introduced in Stage 1 (see earlier discussion), it can be included at this Stage. Including $\alpha$ (or $\mu$) {\it prior} to developing a complete model is important to avoid bias.
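The Stage 1 generation loop described above, with AICc as the acceptance test, can be sketched schematically as follows. This is a hedged toy stand-in: `model_flux` replaces the full VPFIT Voigt-profile refinement with fixed-shape Gaussian lines, and the spectrum, line parameters, and `N_line` value are illustrative choices, not AI-VPFIT defaults.

```python
import numpy as np

def aicc(flux_data, flux_model, sigma, k):
    """Corrected AIC: chi^2 + 2nk/(n-k-1) for k free parameters."""
    n = len(flux_data)
    chi2 = np.sum((flux_data - flux_model) ** 2 / sigma ** 2)
    return chi2 + 2.0 * n * k / (n - k - 1)

# Toy spectrum: flat continuum with one absorption feature, SNR = 100.
rng = np.random.default_rng(0)
v = np.linspace(-50.0, 50.0, 400)                    # velocity grid, km/s
sigma = np.full(v.size, 0.01)
flux = np.exp(-0.8 * np.exp(-((v - 5.0) / 4.0) ** 2))
flux = flux + rng.normal(0.0, 0.01, v.size)

def model_flux(centres):
    """Stand-in 'fit': a fixed-shape Gaussian line at each trial centre."""
    tau = np.zeros_like(v)
    for c in centres:
        tau += 0.8 * np.exp(-((v - c) / 4.0) ** 2)
    return np.exp(-tau)

# Generation loop: add one randomly placed trial line per generation and
# accept it only if AICc decreases; terminate after N_line failed trials.
N_line = 100
model = []
best = aicc(flux, model_flux(model), sigma, 0)
while True:
    for _ in range(N_line):
        trial = model + [rng.uniform(-50.0, 50.0)]   # random trial position
        score = aicc(flux, model_flux(trial), sigma, 3 * len(trial))
        if score < best:                             # next generation accepted
            model, best = trial, score
            break
    else:                                            # N_line failures: stop
        break
```

In AI-VPFIT each accepted trial would additionally be refined by non-linear least-squares before its AICc is evaluated; the sketch omits that refinement to stay short.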
The second step within Stage 2 is to increase the model complexity, i.e. further redshift components are added, one at a time, until no descent in AICc can be found for $N_{line}$ trials. Once this happens, the Stage 2 model is complete. \subsection{Stage 3 -- Adding interlopers}\label{subsec:stage3} So far, we have assumed that no unidentified absorption lines are present anywhere in the data. This is generally not the case. Each quasar sightline typically intersects multiple distinct absorption systems, such that blending between lines arising in different redshift systems is common. There are two types of interlopers: (i) an absorption system at some other known redshift. In this case the interloper species and rest-wavelength may be identified and there may be other transitions in the same ion or from other ions at the same redshift that can be modelled simultaneously in order to best constrain the interloper parameters, or (ii) the interloper species/origin is unknown. If the interloper is identified beforehand as being of type (i), then the overall model to be fitted can include the appropriate free parameters from the outset, that is, the set-up prior to Stage 1 indicates that 2 redshift systems are to be simultaneously fitted. If we have an interloper of type (ii), all previous (non-interloper) line parameters from Stages 1 and 2 are initially (here, in Stage 3) {\it fixed}. The reason for doing this is as follows. $\chi^2$ minimisation will act so as to `prefer' the interloper (since the interloper has no tied parameters, unlike the heavy element line) such that the interloper may tend to replace the heavy element line. Physically, this is obviously incorrect. To identify possible interlopers, there are 2 potential procedures: (a) Each data segment is treated separately. 
An interloper is added in a random position within each data segment, one at a time, and VPFIT allowed to iterate to derive the best-fit interloper parameters (or to reject the interloper entirely), or (b) All data segments are dealt with at the same time i.e. one interloper is placed in each data segment simultaneously, with wavelength \begin{eqnarray} \lambda_{interloper}^i = (1-R) \times \lambda_{min}^i + R \times \lambda_{max}^i \end{eqnarray} where $R$ is a random number drawn uniformly from $[0,1]$ and $\lambda_{min (max)}^i$ is the minimum (maximum) wavelength of the $i$th segment. One could generate a different random number for each wavelength segment but this offers no advantage. Superficially, (a) is the favoured option because (b) has a potential systematic trend towards over-fitting the data (i.e. fitting the data with too many absorption components). The reason is that introducing a single interloper at a time, and using AICc to check whether that interloper is justified, results in a final model that only contains parameters that are statistically required. On the other hand, introducing one interloper into each segment simultaneously does not permit individual AICc testing on each interloper (because of tied parameters, it is not possible to define AICc on a region-by-region basis). This argument suggests that procedure (a) is preferable in order to avoid over-fitting. However, procedure (a) is computationally very time consuming -- e.g. if we have 10 data segments to be fitted, the time required is 10 times as long. In terms of computing time, it is more efficient to adopt procedure (b) and subsequently test for over-fitting and repair the model where necessary (Stage 5). \subsection{Stage 4 -- Refine and add new parameters (continuum and zero levels)}\label{subsec:stage4} This Stage comprises two steps.\\ \noindent{\it Step 1:} Firstly, we release all the previously-fixed primary and secondary parameters such that they can be freely varied.
Then, additional primary and secondary components are added, one per generation. No new interlopers are included. As before, this process terminates only when no AICc descent occurs within $N_{line}$ trials. \noindent{\it Step 2:} At this point we have only considered parameters associated with absorption lines themselves. We have not yet considered additional important parameters, such as the continuum level or a possible zero level correction. The second step in Stage 4 is therefore to add in these additional parameters (if they are required), and refine the model again using AICc and $N_{line}$. \subsection{Stage 5 -- Mutation; repeat earlier stages} This Stage allows the model to evolve further following the introduction of the new continuum and zero-level parameters. By the end of Stage 4, a good model has already been achieved. However, since that model has been derived prior to including continuum and zero-level parameters, several procedures need repeating. Therefore: \vspace{-2mm} \begin{enumerate}[leftmargin=0.5cm] \item All continuum and zero-level parameters are temporarily fixed; \item The algorithm is returned to Stage 2, but the Stage 2 starting (i.e. first guess) model parameters are those from the end of Stage 4 (step 2), but all interlopers previously found are removed; \item Stages 2 to 4 are then repeated. The previously-fixed continuum and zero-level parameters are returned to being free parameters at the (repeated) Stage 4/step 2. The loop is closed by a user-set (or default) AI-VPFIT setting, which defines the number of repeats. \end{enumerate} We now explain the reasons behind these three processes. If, prior to this stage, the original continuum placement was slightly wrong, then because we may be fitting multiple transitions from a single species (e.g. several FeII transitions), a continuum error may result in a poor fit once secondary transitions are included. This point can be clarified by example.
Suppose our primary species is FeII2383 and one secondary transition is the weaker (lower oscillator strength) FeII2344. Suppose further that the original continuum for FeII2344 was placed slightly too low. In this example, it could easily be the case that an interloper (coincident with the FeII2344 position) provides a greater reduction in AICc compared to the (real) FeII2344 line. Such an effect clearly produces problems as follows. The lowest AICc may then be found by reducing the FeII column density so as to achieve a good fit at both lines, but an interloper is placed in the FeII2383 line to compensate for the continuum placement error. Slight errors in the original continuum placement can also lead to the following problem. If, again, the original continuum had been slightly too low in places, interlopers which in reality are present in the spectrum may have been missed. Repeating Stages 2 and 3 guards against this possibility. \subsection{Stage 6 -- Check for over-fitting and refine model parameters further}\label{subsec:stage6} \subsubsection{Preparation} \noindent{\bf 1.} Reproduce velocity structure from primary species into all secondary species. Reason: some of the secondary species components could have been dropped. Therefore all secondary components are put back and all dropping criteria removed. Setting dropping criteria requires striking a balance between avoiding insignificant parameters in the final model (and hence ill-conditioned matrices) and, at the same time, avoiding the loss of relevant parameters. One reason for doing this is that in a system with multiple velocity components, during the iterative sequence, a parameter which is ultimately determined to be important may temporarily fall below a dropping criterion and so could be incorrectly removed. However, with dropping criteria removed, the search direction (and step) can become unstable through ill-conditioning. This would cause difficulties in the absence of a temporary control.
Therefore, during this Stage, we place an upper bound on parameter update step sizes. \noindent{\bf 2.} The second action taken at this stage is to fix any continuum and zero level parameters to those best-fit values obtained at the end of Stage 4. This turns out to be helpful for the following reason. Since, at this Stage, we re-initialise the secondary species absorption line parameters (either to a set of default values or otherwise), the model is temporarily placed further away from the best-fit solution. The potential consequence of this can be that any sudden increase in (for example) a continuum parameter level can be compensated by a corresponding increase in absorption line strength (i.e. an increase in either $N$ or $b$ or both). In other words, a satisfactory model may sometimes be found corresponding to a local minimum in $\chi^2$ space. This problem is avoided by preventing (temporarily) further adjustment in either the continuum or zero level parameters. \noindent{\bf 3.} The third action taken at this stage is to temporarily fix all redshifts (for both heavy element lines and interlopers). Again, because this Stage temporarily moves the current model away from the final best-fit model, as in point 2 above, the slight decrease in overall stability increases the chance of a column density being momentarily sent to an artificially low value. When this happens, because that line may become very weak, its Hessian components corresponding to its redshift also become small, and its parameter update can become large. This means it can (mistakenly) be moved far from its current (appropriate) position to some arbitrary (inappropriate) position. Since this is hard to recover from, an easy solution is to temporarily fix all absorption line redshifts, whilst the secondary species column densities `recover' from default-initialisation values to near-optimal values.
\noindent{\bf 4.} After minimising $\chi^2$ to refine all free parameters above, the fourth action taken at this Stage is to remove the temporary parameter holds described above and again to minimise $\chi^2$ to refine all interesting parameters. \subsubsection{Refining} \noindent{\bf 5.} Interlopers are introduced to the model in Stage 3. If the spectral dataset comprises multiple segments (i.e. several transitions from several different atomic species), one trial interloper is added to every segment simultaneously, the parameters are refined, and interlopers are kept if the {\it overall} AICc decreases; in fact, an interloper might increase the {\it local} AICc (i.e. for the spectral segment it lies in). Since that procedure was adopted for computational efficiency, to ensure that spurious interlopers do not remain in the model, each interloper is now removed one at a time and only retained if the overall AICc does not increase. \noindent{\bf 6.} It is possible at this Stage of the whole process that the model contains regions where too many heavy element parameters have been introduced, i.e. over-fitting has occurred. There are several ways in which over-fitting could occur for the heavy element lines: {\it (i)} In Stage 1, the primary model is developed. In Stage 2, the velocity structure from the primary species is replicated to ensure that any velocity components present in the primary species are also present in the secondary species. However, local least-squares fitting in VPFIT could result in one of the secondary species' components falling below a threshold and being rejected from the model. This problem is remedied in the `Preparation' step above, in order to keep the model `physical'. Thus, the replacement of secondary species components is based purely on physical grounds and not on statistical grounds, i.e. in fact the replacement could produce a {\it higher} value of AICc, but this is not considered.
For this reason, the replacement just described (i.e. requiring all velocity components to be present in all species) creates a marginal tendency to overfit the data. {\it (ii)} Another potential way in which slight overfitting can be created is as follows. In Stage 1, when the primary species model is developed, velocity structure is built up by adding one line at a time. This is done using a Monte Carlo process (i.e. trial lines are placed randomly within the data fitting region, repeating the procedure several times and selecting the best (lowest AICc) solution from the set of trials). Because the number of trials is finite, it is occasionally possible to find a local rather than an absolute AICc minimum when accepting a velocity component. In this sense, the {\it order} in which velocity components are added into the model can influence the final best-fit primary species model. This provides another mechanism by which spurious components may remain in the best-fit primary species model, which are carried forward to the next Stages. \subsection{Modelling line broadening; temperature fit} \label{sec:temperature-fit} If the incorrect broadening model is used to generate the absorption profiles (when fitting multiple atomic species simultaneously), this can impact adversely on the derived velocity structure. Consider a simple absorption system comprising only MgII2796 and FeII2383 profiles. Suppose the intrinsic gas broadening is thermal but that we use turbulent broadening to model the observed profiles. In this case, the model MgII profile will be slightly too narrow and the model FeII profile will be slightly too broad. Two mechanisms can easily compensate for this: {\it (a)} a model with 2 velocity components can match the observed profiles by adjusting the relative column densities accordingly, or {\it (b)} an interloper can be blended with the MgII profile and $b_{turb}$ adjusted accordingly to match the data.
When $\alpha$ is an additional fitting parameter, the effects just described can add a significant systematic uncertainty. The discussion above illustrates the importance of using the correct broadening method. The procedure to achieve this is: \begin{enumerate}[leftmargin=0.5cm] \item Where applicable (i.e. where different atomic species with sufficiently different atomic masses are present) the $b$-parameters are changed such that they are related (tied) by $b^2 = b^2_{turb} + b_{th}^2$, where $b_{th}^2 = 2 k T/ m$, $m$ is atomic mass, and $T$ is the gas temperature. Clearly this requires a new free parameter, $T$, for each velocity component in the absorption complex. \item The temperature parameter $T$ can be included, if appropriate, at the beginning of Stage 2. \end{enumerate} \section{Testing AI-VPFIT with synthetic spectra} \label{sec:synthetic} A fully automated modelling process such as the one described in this paper opens new opportunities to scrutinise the limitations present in previous analyses. In particular, we do not know {\it a priori} whether the broadening of any particular absorption component is purely thermal, purely turbulent, or a hybrid. Many earlier analyses of quasar absorption systems assumed the broadening mechanism from the outset (usually taken to be turbulent) and constructed models based on that assumption. The justification for doing so was that the assumption should not bias the measured $\Delta\alpha/\alpha$ one way or the other. Whilst this is true, an important point has been overlooked: fitting the wrong model not only inevitably leads to an incorrect velocity structure but, importantly, also produces systematically over-fitted models. This has important and very undesirable consequences, as we now demonstrate using synthetic spectra. \subsection{Generating synthetic spectra} We base synthetic spectra on the $z_{abs}=1.15$ absorption complex in the spectrum of the bright quasar HE 0515-4414.
The choice is unimportant and some other system could equally have been used. This particular absorption complex spans an unusually large redshift range, $1.14688 < z_{abs} < 1.15176$, corresponding to a velocity range $\sim 700$ km/s. It is not necessary to use such an extensive range for the purposes required here. We therefore use a subset of the system spanning the redshift range $1.14688 < z_{abs} < 1.14742$, corresponding to a velocity range $\sim 100$ km/s. For simplicity we simulate only three transitions, Mg II 2796 and 2803 {\AA} and Fe II 2383 {\AA}. The simulated spectra are Voigt profiles convolved with a Gaussian instrumental profile with $\sigma_{res} = 1.11$ km/s. The pixel size is approximately $0.83$ km/s and the signal-to-noise ratio per pixel is 100 (the noise is taken to be Gaussian). The resolution and pixel size correspond to those of the HARPS instrument on the ESO 3.6m telescope. We derive the input absorption line parameters as follows. In the redshift range $1.14688 < z_{abs} < 1.14742$, using AI-VPFIT to analyse existing HARPS spectra, we find a total of eight heavy element absorption components plus one interloper. We model the (real) data including a temperature parameter for each absorption component in the complex, i.e. each component has both a turbulent and a thermal component to its observed $b$-parameter. This AI-VPFIT model becomes one of the synthetic models. We call this the {\it ``temperature''} model. The detailed parameters used to create the {\it temperature} synthetic spectrum are given in Table~\ref{tab:syn_data}.
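As a numerical check of the tied-$b$ relation $b^2 = b^2_{turb} + 2kT/m$, the sketch below reproduces the component-1 entries of Table~\ref{tab:syn_data} from its $b_{turb}$ and $T$ values. This is a minimal illustration, not part of AI-VPFIT; CODATA constants and standard atomic masses for Mg and Fe are assumed.

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K (CODATA, assumed)
AMU = 1.66053906660e-27  # atomic mass unit, kg (CODATA, assumed)

def b_thermal(temp_k, mass_amu):
    """Thermal b-parameter in km/s: b_th = sqrt(2 k T / m)."""
    return math.sqrt(2.0 * K_B * temp_k / (mass_amu * AMU)) / 1e3

def b_total(b_turb, temp_k, mass_amu):
    """Observed b-parameter from the tied relation b^2 = b_turb^2 + b_th^2."""
    return math.hypot(b_turb, b_thermal(temp_k, mass_amu))

# Component 1 of the temperature model: b_turb = 8.37 km/s, T = 6.52e4 K
b_th_mg = b_thermal(6.52e4, 24.305)       # ~6.68 km/s (cf. Table, column 7)
b_th_fe = b_thermal(6.52e4, 55.845)       # ~4.41 km/s (cf. Table, column 8)
b_obs_mg = b_total(8.37, 6.52e4, 24.305)  # ~10.71 km/s (cf. Table, column 9)
```

The lighter Mg atom receives the larger thermal contribution, which is precisely the mass dependence that allows $T$ and $b_{turb}$ to be solved for simultaneously when both species are fitted.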
\begin{table*} \begin{tabular}{ c c c c c c c c c c c } \hline & & & & \multicolumn{4}{c}{Temperature} & \multicolumn{1}{c}{Turbulent} & \multicolumn{2}{c}{Thermal} \\ \cmidrule(lr){5-8} \cmidrule(lr){9-9} \cmidrule(lr){10-11} \textbf{No.} & \textbf{$\log N (\text{MgII})$} & \textbf{$\log N (\text{FeII})$} & \textbf{$\text{Redshift}$} & \textbf{$b_{\text{turb}}$} & \textbf{$T$/K} & \textbf{$b_{\text{th}} (\textbf{Mg})$} & \textbf{$b_{\text{th}} (\textbf{Fe})$} & \textbf{$b(\textbf{Mg}) = b(\textbf{Fe})$} & \textbf{$b(\textbf{Mg})$} & \textbf{$b(\textbf{Fe})$} \\[0.5ex] \hline 1 & 12.03 & 11.34 & 1.1468830 & 8.37 & 6.52E+04 & 6.68 & 4.41 & 10.71 & 10.71 & 7.07 \\ 2 & 12.76 & 12.33 & 1.1469678 & 1.67 & 1.82E+04 & 3.53 & 2.33 & 3.90 & 3.90 & 2.57 \\ 3 & 12.31 & 11.91 & 1.1470124 & 5.43 & 0.00E+00 & 0.00 & 0.00 & 5.43 & 5.43 & 3.58 \\ 4 & 12.59 & 11.92 & 1.1471152 & 4.35 & 3.81E+04 & 5.11 & 3.37 & 6.71 & 6.71 & 4.43 \\ 5 & 12.36 & 11.77 & 1.1471692 & 2.86 & 2.39E+04 & 4.04 & 2.67 & 4.95 & 4.95 & 3.27 \\ 6 & 12.35 & 11.74 & 1.1472435 & 0.80 & 1.85E+04 & 3.56 & 2.35 & 3.65 & 3.65 & 2.41 \\ 7 & 12.17 & 11.68 & 1.1472894 & 5.73 & 1.19E+04 & 2.85 & 1.88 & 6.40 & 6.40 & 4.23 \\ 8 & 12.15 & 11.97 & 1.1474175 & 3.58 & 1.11E+04 & 2.76 & 1.82 & 4.52 & 4.52 & 2.98 \\ \hline \textbf{Interloper} & \textbf{$\log N$} & & \textbf{$\text{Redshift}$} & \textbf{b} & \\ \cmidrule(lr){1-5} 1 & 12.00 & & 3.9381818 & 0.80 & \\ \cmidrule(lr){1-5} \end{tabular} \caption{Model parameters used to generate the three synthetic spectra discussed in Sections \ref{sec:synthetic} and \ref{sec:Temp-importance}. The model comprises eight heavy element components and one interloper. Columns 5-8 correspond to the {\it temperature} synthetic spectrum, column 9 to the {\it turbulent} one, and columns 10, 11 to the {\it thermal} one. Columns 7 and 8 are the thermal $b$-parameters for MgII and FeII, i.e. $\sqrt{ 2 k T/ m(\text{Mg, Fe})}$.
MgII and FeII have the same $b$-parameter for turbulent broadening, as shown in column 9. Columns 10 and 11 give the thermal $b$-parameters for MgII and FeII with $b (\text{FeII}) = b (\text{MgII}) \sqrt{ m(\text{Mg})/ m(\text{Fe})}$. All $b$-parameters have units of km/s. \label{tab:syn_data}} \end{table*} \subsection{AI-VPFIT models at each Stage}\label{sec:examples} Fig.\ref{fig:model_evol_stg1} illustrates the evolution of the model at various points during Stage 1. Each panel shows a different generation. The initial (Scratch) model is a single randomly-placed line. At the end of Stage 1, the line has broadened and the column density increased to minimise $\chi^2$. The subsequent generations show how the model gradually evolves as more components are added. Only the primary species is shown (MgII in this case), as secondary species are not yet included. By generation 10, the model is already good, although no interlopers have been considered so the feature at $\sim -60$ km/s is (incorrectly, but knowingly) modelled as MgII. This problem is caught at Stage 3. In the example shown, AI-VPFIT continued iterating up to generation 29 but (on the basis of AICc), decided no additional velocity components were required, i.e. the model did not change for the last 19 generations. Fig.\ref{fig:model_evol} shows the development of the model through each Stage. For Stages 1 through 5, the model illustrated is from the end of those stages. In the example used, generation 10 is the end-point of Stage 1 so is replicated as the top-left panel in Fig.\ref{fig:model_evol}. The model at the end of Stage 1 comprises 10 components. In Stage 2, the secondary species (in this case only FeII) is included. The panel illustrating Stage 6 shows the initial condition of that Stage, where all primary species' velocity components present by the end of Stage 5 are included but {\it all} corresponding secondary species components are {\it replaced}. 
In the example used, three weak FeII components (at about $-60$, $+35$, and $+90$ km/s) are replaced as the initial condition for Stage 6. The motivation for doing so is to check that a genuine heavy element velocity component in the secondary species has not been mistakenly replaced by an interloper in Stage 5 (this is possible since the decision making is based purely on AICc). The final best-fit velocity structure is virtually indistinguishable from the true model, labelled ``Fiducial'' and highlighted with a red box to indicate that this is the input and not a fitted model: the number of derived absorption components is 8 with one interloper in the MgII2796 line. \begin{figure*} \centering {\includegraphics[width=0.82\linewidth]{stg1_1p.pdf}} {\caption{Model evolution through Stage 1 of the AI-VPFIT process. The black histogram illustrates the synthetic {\it temperature} spectrum. The signal-to-noise per pixel is 100. The continuous red line in the panel labelled ``Scratch'' shows the initial condition, i.e. the randomly placed first-guess line. No VPFIT generation has taken place. The continuous red line in subsequent panels (i.e. ``Generation $\#$'') shows the best-fit {\it temperature} model after each generation of VPFIT. The $\pm 1\sigma$ normalised residuals are plotted at the top of each panel.} \label{fig:model_evol_stg1}} \end{figure*} \begin{figure*} \centering {\includegraphics[width=0.85\linewidth]{evol_fid_1p.pdf}} {\caption{Model evolution at each Stage in AI-VPFIT. The black histogram shows the synthetic spectrum (temperature line broadening). The signal-to-noise per pixel is 100. The red continuous line illustrates the best-fit {\it temperature} model. For Stages 1 through 5, the end-model is shown.
For Stage 6, we illustrate the model at the commencement of the stage, to show how the current velocity structure of the primary species (Mg in this case) is re-introduced to the secondary species (FeII), for the reasons explained in Section \ref{subsec:stage6}.} \label{fig:model_evol}} \end{figure*} \section{The importance of the free parameter T} \label{sec:Temp-importance} \begin{center} \begin{figure} \centering {\includegraphics[width=1.0\linewidth]{comparison.pdf}} {\caption{Upper panel: FeII 2383 for {\it turbulent} (red continuous curve), {\it thermal} (blue dotted curve) and {\it temperature} (thin black curve) broadening. Lower panel: MgII 2796, where the line broadening is $b_{th}$(Mg) from Table \ref{tab:syn_data}. The figure illustrates the variations between each broadening model and hence highlights the importance of using the correct fitting model. For example, line 2 has a low turbulent $b$-parameter and a relatively high temperature, so the thermal and temperature line centres are similar and much deeper than the turbulent line. Line 8 has a larger turbulent $b$-parameter and a lower temperature, so the turbulent and temperature models are closer, the deeper line centre being the thermal model.} \label{fig:comparison}} \end{figure} \end{center} The decision as to which type of absorption line broadening to use when fitting real data is important. As alluded to earlier, previous analyses have generally assumed turbulent broadening on the basis that the assumption cannot bias estimating $\Delta\alpha/\alpha$. This assumption seems reasonable intuitively and is likely to be correct in a statistical sense. However, it turns out that the choice of line broadening in the modelling procedure is very important. In Section \ref{sec:synthetic}, AI-VPFIT was applied to a synthetic spectrum generated with {\it temperature} broadening. This is of course not the only option.
Most modelling of high resolution quasar spectra has previously been carried out assuming either {\it turbulent} or {\it thermal} broadening or both. Here we show the pitfalls of these assumptions and argue that, for reliable results, it is important to include an additional free parameter, T, for each velocity component in the model. We begin by generating two additional synthetic spectra. To create the {\it turbulent} synthetic model, all parameters are kept the same except for the $b$-parameter of each absorption component, which is now set to be equal to the Mg II turbulent component. To create the {\it thermal} synthetic model, we again use the observed MgII $b$-parameters, although now the FeII $b$-parameters are adjusted according to their atomic mass. The result of the above is the creation of three synthetic spectra: one where the line broadening takes into account both turbulent and thermal components of the $b$-parameter, one where the broadening is assumed to be purely turbulent, and one where the broadening is assumed to be purely thermal. The detailed parameters used to create the {\it turbulent} and {\it thermal} synthetic spectra are given in Table~\ref{tab:syn_data}. Those model spectra (and the corresponding {\it temperature} model) are shown in Fig.\ref{fig:comparison}. Fig. \ref{fig:vel-structure} shows a set of nine results. Each of the three synthetic spectra (turbulent, thermal, and temperature line broadening) was modelled assuming each line broadening mechanism. The spectrum and model in each panel are as described in the figure caption. The continuous orange vertical lines and blue dashed lines indicate the line centres of the heavy element and interloper absorption components. Figs. \ref{fig:turb-stat}, \ref{fig:therm-stat}, and \ref{fig:temp-stat} present the central values and errors for the $\log N$, $z$ and $b$-parameters from each best-fit model in Fig. \ref{fig:vel-structure}.
These figures illustrate that modelling a {\it turbulent} spectrum with a {\it thermal} model leads to errors, and {\it vice versa}. AI-VPFIT sometimes needs two components to achieve the best-fit model for what is in reality a single component. The parameter error estimates are typically poorly determined for these ``double components''. Ensuring that the model solves simultaneously for both thermal and turbulent components of the observed $b$-parameter does, however, come with a cost, and may not always be possible. If the species fitted simultaneously (i.e. the primary and secondary species) have fairly similar atomic masses, $b_{turb}$ and $b_{th}$ approach degeneracy, leading to huge uncertainty estimates on the gas temperature. In the synthetic models explored here, the only two elements ``observed'' are Mg (A=24) and Fe (A=56). Even then, slight degeneracy translates into quite large uncertainties, as is illustrated in the bottom right panel in Figs. \ref{fig:turb-stat}, \ref{fig:therm-stat}, and \ref{fig:temp-stat}. Table \ref{tab:manyalpha} summarises the results discussed above. When the synthetic spectrum broadening and fitted model broadening match, we can see that the number of absorption components needed to satisfactorily fit the data is minimised (top black line in each section in the table). Conversely, when the ``wrong'' model is used, additional components are needed. It is now well-established that the line broadening mechanism seen in quasar absorption systems is generally not (and perhaps never) entirely turbulent or entirely thermal, i.e. it is important to use a temperature model. Importantly, for all three synthetic spectra, a temperature model performs very well in terms of finding the correct number of components (last column), as would be expected. Fig.\ref{fig:manyalpha} illustrates the results obtained from modelling the synthetic spectra described above.
The figure is divided into three panels, corresponding (top to bottom) to the synthetic data being {\it turbulent}, {\it thermal}, and {\it temperature} respectively. Four $\Delta\alpha/\alpha$ measurements are illustrated within each panel. The highest point illustrates the fitted value of $\Delta\alpha/\alpha$ using VPFIT only, where the starting parameter guesses were the input parameters, i.e. the parameters used to generate the synthetic spectrum. The second point down illustrates the result of AI-VPFIT fitting the synthetic data using a {\it turbulent} model. The third point down illustrates a {\it thermal} model and the lowest point a {\it temperature} model. The numerical results are listed in Table \ref{tab:manyalpha}. The results can be summarised as follows: \begin{enumerate}[leftmargin=0.5cm] \item When fitting the synthetic data with the correct model, the results are well-behaved. As expected, the fitted $\Delta\alpha/\alpha$ agrees well with the true value (illustrated by the vertical dashed line in Fig.\ref{fig:manyalpha}). \item When the {\it wrong} model is used, the $\Delta\alpha/\alpha$ uncertainty estimate is magnified by a factor of 3 or 4. This is because imposing the wrong broadening mechanism necessarily results in additional velocity components (heavy elements or interlopers or both) being introduced to achieve a satisfactory fit to the data. The consequence of the additional fitting parameters is that $\Delta\alpha/\alpha$ is more poorly constrained. In the last case ({\it temperature} synthetic data), the thermal model produces a $\Delta\alpha/\alpha$ estimate that is almost 2$\sigma$ away from the correct value. \item For all three synthetic spectra, when a {\it temperature} model is used, the results are good, as would be expected, since the model incorporates the full range of possibilities. \end{enumerate} The important conclusion of the above is that using the wrong broadening mechanism has highly undesirable consequences.
The solution is that wherever possible (i.e. when different species with sufficiently different atomic masses are modelled simultaneously), one should use a {\it temperature} model. \begin{figure*} \centering {\includegraphics[width=0.32\linewidth]{p_tbtb.pdf}} {\includegraphics[width=0.32\linewidth]{p_thtb.pdf}} {\includegraphics[width=0.32\linewidth]{p_tptb.pdf}} \\ {\includegraphics[width=0.32\linewidth]{p_tbth.pdf}} {\includegraphics[width=0.32\linewidth]{p_thth.pdf}} {\includegraphics[width=0.32\linewidth]{p_tpth.pdf}} \\ {\includegraphics[width=0.32\linewidth]{p_tbtp.pdf}} {\includegraphics[width=0.32\linewidth]{p_thtp.pdf}} {\includegraphics[width=0.32\linewidth]{p_tptp.pdf}} {\caption{Synthetic spectra and lowest AICc AI-VPFIT models. Tick marks indicate component positions. The signal to noise per pixel is 100. Other parameters used to generate the synthetic spectra are given in Section \ref{sec:synthetic}. The left hand column is for the {\it turbulent} synthetic spectrum, modelled using turbulent broadening (top), thermal broadening (middle), and temperature broadening (bottom). The middle column illustrates the {\it thermal} synthetic spectrum. The right hand column illustrates the {\it temperature} synthetic spectrum. Normalised residuals are plotted above each spectrum. The horizontal parallel lines indicate the 1$\sigma$ ranges.} \label{fig:vel-structure}} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.98\linewidth]{sum_tb.pdf} {\caption{Results of modelling the synthetic {\it turbulent} spectrum. Top row: the fitted model is a {\it turbulent} model. Middle row: the fitted model is {\it thermal}. Bottom row: the fitted model is {\it temperature}. Blue points (circles) show the best-fit parameters for MgII. Red points (triangles) show the best-fit parameters for FeII. VPFIT error bars for each parameter are shown in all cases. Left: $\log N$ versus velocity. Centre: Redshift uncertainty (from VPFIT) versus velocity. 
Right: $b$-parameter versus velocity.} \label{fig:turb-stat}} \end{figure*} \begin{figure*} \centering {\includegraphics[width=0.98\linewidth]{sum_th.pdf}} {\caption{Same as Fig.\ref{fig:turb-stat} except using the {\it thermal} synthetic spectrum.} \label{fig:therm-stat}} \end{figure*} \begin{figure*} \centering {\includegraphics[width=0.98\linewidth]{sum_tp.pdf}} {\caption{Same as Fig.\ref{fig:turb-stat} except using the {\it temperature} synthetic spectrum.} \label{fig:temp-stat}} \end{figure*} \begin{table} \begin{tabular}{ l l r r c} \hline Spectrum & Model & $\Delta\alpha/\alpha$ & $\sigma(\Delta\alpha/\alpha)$ & \# of lines \\[0.5ex] \hline \textbf{\it Turbulent} & \textbf{\it Turb (vp)} & 1.99E-06 & 2.89E-06 & 8 + 1 \\ & \textbf{\color{blue} \it Turb} & {\color{blue} 2.00E-06} & {\color{blue} 2.56E-06} & {\color{blue} 10 + 2} \\ & \textbf{\color{blue} \it Therm} & {\color{blue} 3.42E-06} & {\color{blue} 1.25E-05} & {\color{blue} 13 + 2} \\ & \textbf{\color{blue} \it Temp} & {\color{blue} 5.40E-06} & {\color{blue} 3.60E-06} & {\color{blue} 8 + 2} \\ \hline \textbf{\it Thermal} & \textbf{\it Therm (vp)} & 9.23E-06 & 3.69E-06 & 8 + 1 \\ & \textbf{\color{blue} \it Turb} & {\color{blue} -2.86E-07} & {\color{blue} 1.43E-05} & {\color{blue} 11 + 3} \\ & \textbf{\color{blue} \it Therm} & {\color{blue} 9.25E-06} & {\color{blue} 3.61E-06} & {\color{blue} 9 + 2} \\ & \textbf{\color{blue} \it Temp} & {\color{blue} 1.03E-05} & {\color{blue} 4.05E-06} & {\color{blue} 9 + 1} \\ \hline \textbf{\it Temperature} & \textbf{\it Temp (vp)} & 1.17E-06 & 2.66E-06 & 8 + 1 \\ & \textbf{\color{blue} \it Turb} & {\color{blue} 1.42E-05} & {\color{blue} 9.94E-06} & {\color{blue} 11 + 4} \\ & \textbf{\color{blue} \it Therm} & {\color{blue} 1.34E-05} & {\color{blue} 4.23E-06} & {\color{blue} 16 + 2} \\ & \textbf{\color{blue} \it Temp} & {\color{blue} 1.08E-06} & {\color{blue} 2.50E-06} & {\color{blue} 8 + 1} \\ \hline \end{tabular} \caption{The fitted $\Delta\alpha/\alpha$ for each 
permutation of synthetic spectrum and model type - illustrated graphically in Fig.\ref{fig:manyalpha}. See the caption to that figure for further details. The last column shows the numbers of heavy element velocity components plus the number of interlopers in the model, within the velocity range of $\pm 80$ km/s, as illustrated in Fig.\ref{fig:vel-structure}. \label{tab:manyalpha}} \end{table} \begin{center} \begin{figure} \centering {\includegraphics[width=1.0\linewidth]{alpha_f.pdf}} {\caption{Comparing the fitted $\Delta\alpha/\alpha$ for each permutation of synthetic spectrum and model type. The left-hand column indicates the synthetic spectrum used. The second column indicates the line broadening mechanism used in modelling. ``Turb (vp)'' means that VPFIT was used with the true model parameters as starting guesses; this point is shown in black in the third column, which also shows the best-fit $\Delta\alpha/\alpha$ for a {\it turbulent}, {\it thermal}, and {\it temperature} model, with corresponding parameter 1$\sigma$ uncertainties (from VPFIT).} \label{fig:manyalpha}} \end{figure} \end{center} \section{Model non-uniqueness and impact on $\Delta\alpha/\alpha$} \label{sec:nonuniqueness} \begin{figure} \centering {\includegraphics[width=0.98\linewidth]{thtb_non-uniq.pdf}} {\caption{Illustration of the model non-uniqueness problem. The input spectrum is {\it thermal} and the fitted model is {\it turbulent}. This model passed all acceptance tests described in Section \ref{sec:AI} and is a minimum AICc result with $\chi^2_n = 0.91$. However, it gives $\Delta\alpha/\alpha = 6.78 \pm 1.63 \times 10^{-5}$, compared to the ``true'' (synthetic spectrum) input value of $\Delta\alpha/\alpha = 5.0 \times 10^{-6}$. 
The discrepancy is caused by a local $\chi^2$ minimum.} \label{fig:wrongmin}} \end{figure} It is important to understand that the modelling process outlined in this paper, in the context of solving for $\Delta\alpha/\alpha$, is {\it not} the same as the interactive process followed by a human being. In previous studies, where no AI methodology has been used, and where a human has interactively constructed an absorption line model, this is done by assuming throughout that $\Delta\alpha/\alpha=0$. Once a good model has been found, the procedure generally adopted in previous analyses has been to only then include $\Delta\alpha/\alpha$ as a free parameter. We show in this Section that in fact very good models can be (and are) found with disparate values of $\Delta\alpha/\alpha$. Given enough computations, different starting points (equivalent to using different random seeds in the AI-VPFIT code) may reach different solutions. Whilst this is a well-known characteristic of non-linear least-squares procedures applied to complex datasets, it has not been studied in any detail in the context of modelling high-resolution quasar absorption spectra. Fig.\ref{fig:wrongmin} shows an AI-VPFIT model, obtained using the {\it thermal} synthetic spectrum fitted using a {\it turbulent} model. The model illustrated gives good normalised residuals and appears to be a good representation of the data. However, the synthetic spectrum was generated using $\Delta\alpha/\alpha = 5 \times 10^{-6}$, yet the best-fit value found for this particular turbulent fit is $\Delta\alpha/\alpha = 68 \pm 16 \times 10^{-6}$. The model is thus 4$\sigma$ away from the ``true'' value in this case. Inspecting the model details reveals how this occurs. Referring to Fig.\ref{fig:wrongmin}, the FeII column density of velocity component AI has decreased relative to its ``true'' value, to permit the interloper 24. The same effect occurs at velocity component AE. The interloper 25 is actually present in the data. 
The strong line under the AM and AA components is actually a single component. However, AI-VPFIT has introduced two components here, such that the column densities of the FeII AA and MgII AM components dominate the line centroid positions. The ``true'' model in fact comprises 8 heavy element velocity components and one interloper (25). However, the model of Fig.\ref{fig:wrongmin} comprises 10 heavy element components and 5 interlopers. In this example, clearly a false minimum in $\chi^2$ has been found. There is no {\it a priori} reason to reject this model. The only potentially tell-tale signs of there being a problem with this model are (i) an apparent excess of close blends (both heavy element-heavy element and heavy element-interloper) in the stronger components, and (ii) the value of $\Delta\alpha/\alpha$ obtained is inconsistent with that obtained using the {\it turbulent} and {\it thermal} models (Fig.\ref{fig:vel-structure}). The effect described above, i.e. finding a statistically acceptable model associated with a false minimum, is a natural consequence of an unbiased modelling process. (An interactive, i.e. human, method may artificially avoid the problem by fixing $\Delta\alpha/\alpha = 0$ throughout the model-building process). The correct solution is to repeat the whole fitting process at least twice (and preferably more), searching for additional solutions with fewer velocity components and a lower AICc. We also note that, from calculations done so far, the number of acceptable models (i.e. the degree of non-uniqueness) is likely to be reduced when fitting a temperature model (as would be expected). \section{Overfitting - AIC and BIC} \label{sec:Overfitting} In \cite{Bainbridge2017,gvpfit17} and in the present paper we have made extensive use of the AICc statistic \citep{Hurvich1989} to select candidate models. In the course of this work, we also compared AICc results with those using the Bayesian Information Criterion, BIC \citep{Bozdogan1987}.
Both statistics add a penalty to the usual $\chi^2$ statistic, the value of which increases with increasing number of model parameters. The purpose is to end up with a final model having the ``right'' number of parameters. In other words, both AICc and BIC try to limit the model complexity to avoid ``over-fitting''. The AICc (Eq.\eqref{eq:AICc}) and BIC penalties are \begin{equation} \label{eq:penalties} 2nk/(n-k-1) \ \ \mathrm{and} \ \ k \log n \,, \end{equation} where $n$, $k$ are the number of data points and free parameters respectively. Fig.\ref{fig:penalties} illustrates the contribution to the penalty from each free parameter in the model. Placing the spectral fitting boundaries (i.e. defining $n$) is to some extent arbitrary. Thus in this application to absorption line spectroscopy, a penalty that is relatively insensitive to $n$ is preferable. However, including pixels far from line centre is required if we want to fit continuum parameters. These two considerations alone show that AICc is preferable to BIC in our application. Nevertheless, even though we have adopted AICc in this paper, it is not entirely suitable. The spectral fitting boundaries influence the AICc penalty and the former are ill-defined. Moreover, for an unsaturated absorption line (for example), pixels far from the line centre have little impact on the line profile whilst pixels near the centre have a strong impact, yet each pixel across the absorption line or complex carries the same weight in the AICc penalty term. Another way of expressing this is that each parameter can influence specific regions within the dataset (but have little or no impact elsewhere); the parameters describing an absorption component at $-$50 km/s in Fig.\ref{fig:comparison}, for example, have virtually no influence on the model intensity at +50 km/s. The AICc (and BIC) fail to take such an effect into account. AICc is thus not optimal for absorption line spectroscopy and some other (new) statistic is needed.
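The different sensitivities of the two penalties to $n$ can be illustrated numerically for the case $n = 646$, $k = 50$ marked in Fig.\ref{fig:penalties}. This is a minimal sketch of Eq.\eqref{eq:penalties}; the factor-of-two change in fitting region is an arbitrary choice for illustration.

```python
import math

def aicc_penalty(n, k):
    # AICc penalty term: 2nk / (n - k - 1)
    return 2.0 * n * k / (n - k - 1)

def bic_penalty(n, k):
    # BIC penalty term: k ln(n)
    return k * math.log(n)

n, k = 646, 50
per_param_aicc = aicc_penalty(n, k) / k   # 2n/(n-k-1), just over 2
per_param_bic = bic_penalty(n, k) / k     # ln(646), about 6.5

# Doubling the fitting region (n -> 2n) barely changes the AICc penalty
# per parameter, but adds ln(2) per parameter to the BIC penalty.
delta_aicc = aicc_penalty(2 * n, k) / k - per_param_aicc
delta_bic = bic_penalty(2 * n, k) / k - per_param_bic
```

The AICc per-parameter penalty even decreases slightly as $n$ grows at fixed $k$, whereas BIC grows logarithmically with $n$, which is why the placement of the (arbitrary) fitting boundaries matters more for BIC.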
Finally, the details of model non-uniqueness (Section \ref{sec:nonuniqueness}) are likely to depend on the statistic used, i.e. AICc or something else. These considerations are beyond the scope of the present paper and will be addressed in subsequent work. \begin{figure} \centering {\includegraphics[width=0.98\linewidth]{penalties.pdf}} {\caption{Penalty per free parameter (Eq.\ref{eq:penalties} divided by $k$) vs. the number of pixels per parameter, $n/k$. The blue (solid), green (dotted), and red (dashed) curves illustrate the BIC penalty term for different $k$. At fixed $n/k$, BIC increases relatively rapidly as more model parameters are introduced. At fixed $n/k$, BIC also increases as the spectral fitting region $n$ is increased. The magenta (continuous) and black (dot-dash) curves show the AICc penalty term. The AICc penalty is insensitive to $k$ and relatively insensitive to $n/k$. The vertical grey (dashed) line indicates $n/k = 646/50$ for the synthetic model of Fig.\ref{fig:comparison}. } \label{fig:penalties}} \end{figure} \section{Discussion and conclusions}\label{sec:conclusions} In this paper we have extended previous work that combined a genetic algorithm with non-linear least-squares to automatically model absorption line data. ``Interactive'' absorption line fitting is subjective and generally not reproducible. The method presented here brings objectivity and repeatability, important requirements for the increasingly high-quality data from new and forthcoming spectroscopic facilities like ESPRESSO/VLT and HIRES/ELT. An AI method provides an unbiased estimate of $\Delta\alpha/\alpha$. The same is not true of any interactive method that holds $\Delta\alpha/\alpha=0$ during the model construction phase, only to ``switch on'' $\Delta\alpha/\alpha$ as a free parameter at the end, once the model is essentially complete. A procedure of this sort preferentially selects local minima closest to $\Delta\alpha/\alpha=0$.
Synthetic spectra, based on a well-known intermediate redshift absorption system, have been created and used to test the method's performance. We have shown that model development goes wrong, i.e. the wrong velocity structure is obtained, if the wrong broadening mechanism is used to model the observational data. High quality observations of quasar absorption systems show that in general the line broadening does indeed contain both thermal and turbulent contributions simultaneously. However, much of the previous work on estimating $\Delta\alpha/\alpha$ has been done assuming a single broadening mechanism. We have shown that there are highly undesirable consequences of this: the wrong velocity structure may easily be obtained, resulting in far less reliable parameter estimates. Fig. \ref{fig:manyalpha} shows this clearly for $\Delta\alpha/\alpha$. The extent to which this model non-uniqueness issue applies even when the {\it correct} model is used is not yet clear. It is possible, even likely, that $\chi^2$--parameter space is sufficiently complex, with multiple local minima, that false minima result even when solving for $b_{th}$ and $b_{turb}$ simultaneously. If so, model non-uniqueness limits the precision achievable from any single measurement, i.e. it is not possible to reach the theoretical statistical limit using a single observation of $\Delta\alpha/\alpha$, no matter how precise the wavelength calibration. The implication is that, by nature, the problem of measuring $\Delta\alpha/\alpha$ is a statistical one, requiring a large sample of measurements to render the non-uniqueness systematic negligible. This principle should perhaps underpin all future scientific efforts to determine whether the fine structure constant varies in time or space. \section*{Acknowledgements} CCL thanks the Royal Society for a Newton International Fellowship during the early stages of this work.
JKW thanks the John Templeton Foundation, the Department of Applied Mathematics and Theoretical Physics and the Institute of Astronomy at Cambridge University for hospitality and support, and Clare Hall for a Visiting Fellowship during this work. \bibliographystyle{mnras}
\section{Introduction} \label{sec:intro} Audio source separation aims to separate one or more target audio sources from mixture signals \cite{Virtanen:07:msssbnmfwtcasc,Stefan:17:imssbdnntdanb}. The separated sources often contain distortions, artifacts, and unwanted signals from the other sources in the mixtures. An evaluation of the quality of the separated sources is essential to guide the development of separation algorithms or to select the most suitable algorithm for a given mixture signal or application type. This requires either perceptual evaluation where experienced listeners judge the quality of the estimated sources according to different perceptual attributes \cite{emiya:11:soqaass,ward:18:bepppsvs,hagen:17:pessrm,Cartwright:16:fecpae,Coleman:18:pebssobap,Cano:16:eqsssahpqm,itu:15:msaiqlas}, or objective metrics that can estimate the proportion of distortions, artifacts, or interference present in the separated sources, by comparing these with the reference clean sources \cite{vincent:06:pmi,huber:06:pemoq}. In experimental situations, the reference sources are usually available for use in evaluating the performance of a certain source separation approach. However, for practical applications of source separation, the mixtures are available but the separate original sources (the reference signals) are not. Without these reference sources being available, the most common objective metrics cannot be employed, and the only way to evaluate the quality of the separated sources is to ask listeners to give scores for the quality of the separated sources. Using listeners to evaluate the quality of the separated sources is time-consuming and often infeasible, and hence an automated system of evaluating the quality of the separated signal using neither listeners nor reference signals would be preferable.
Such an automated referenceless evaluation method could be useful, for example, for selecting the most appropriate source separation algorithms for soloing or karaoke applications for each song, or automatically evaluating whether the separated signals are of sufficient quality or whether extra work is needed to further improve the quality of the separated signals using post-processing or additional separation techniques, e.g. \cite{emad:13:stpemenbscss,Williamson:14:tsaipqss}. The concept of referenceless quality evaluation for processed signals has been introduced in many signal processing domains, including the perceptual evaluation of image enhancement approaches \cite{hossein:17:lpie}, and evaluating the quality and intelligibility of speech signals \cite{szuwei:18:qnetenisqambblstm,Spille:18:psidnn}. In this paper we propose a referenceless evaluation method to evaluate the quality of the separated audio sources without using the reference sources. The main idea of the proposed method is to train a deep neural network (DNN) to map the estimated separated sources to the output of a reference-based evaluation metric. The metric used in this paper is the Sources-to-Artifacts Ratio (SAR) from the Blind Source Separation Evaluation (BSS Eval) toolkit \cite{vincent:06:pmi}. SAR is selected as a case study, but it is intended that the proposed method will be used for other objective metrics, or the results of subjective judgments. The DNN is first trained to map the separated signals from one or more source separation algorithms to their SAR scores. SAR in the training stage of the DNN is calculated by using the reference signals of each source. The trained DNN is then used to estimate the SAR for separated sources without using any reference signals. We consider three different scenarios of using DNNs to estimate the SAR values. 
The first scenario is to evaluate how well a DNN can predict the SAR results for the same single source separation algorithm for which it is trained: we refer to this scenario as a \textit{within-algorithm test}. The second scenario is to evaluate how well a DNN can predict the SAR results for a range of separation algorithms when trained using data from that same set of separation algorithms: we refer to this scenario as an \textit{across-known-algorithms test}. The third scenario is to evaluate how well a DNN can predict the SAR results for a range of separation algorithms when trained using data from a different set of separation algorithms: we refer to this scenario as an \textit{across-unknown-algorithm test}. \section{The Blind Source Separation Evaluation toolkit} The Blind Source Separation Evaluation (BSS-Eval) toolkit \cite{vincent:06:pmi} is the most frequently used tool for evaluating source separation algorithms. BSS-Eval decomposes the error between the reference/target source and the extracted/separated source into a target distortion component reflecting spatial or filtering errors, an artifacts component pertaining to artificial noise, and an interference component associated with the unwanted sources. The salience of these components is quantified using three energy ratios: source Image-to-Spatial distortion Ratio (ISR), Sources-to-Artifacts Ratio (SAR), and Source-to-Interference Ratio (SIR). A fourth metric, the Source-to-Distortion Ratio (SDR), measures the global performance (all impairments combined). Computing these metrics depends mainly on comparing the reference signals and their corresponding estimated signals from the source separation system for each source. Without the reference sources, the BSS-Eval toolkit cannot provide information regarding the quality of the estimated sources. 
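For intuition, the flavour of this decomposition can be sketched for the simplest case of a single estimated source and its reference. The sketch below uses a whole-signal orthogonal projection and ignores the interference and spatial terms, so it is a simplified illustration rather than the BSS-Eval algorithm itself (which uses time-invariant filters and handles multiple sources):

```python
import numpy as np

def sar_single_source(estimate, reference):
    """Simplified SAR sketch for one source: the part of the estimate
    collinear with the reference is treated as target, the residual as
    artifacts (no interference term in this single-source setting)."""
    s_target = (estimate @ reference) / (reference @ reference) * reference
    e_artif = estimate - s_target
    return 10.0 * np.log10((s_target @ s_target) / (e_artif @ e_artif))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 8000)
reference = np.sin(2 * np.pi * 220.0 * t)                  # clean "source"
mild = reference + 0.01 * rng.standard_normal(t.size)      # few artifacts
heavy = reference + 0.3 * rng.standard_normal(t.size)      # many artifacts
print(sar_single_source(mild, reference))    # high SAR (tens of dB)
print(sar_single_source(heavy, reference))   # much lower SAR
```

As expected, heavier additive noise, which the projection cannot absorb into the target component, yields a lower SAR.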
\section{Deep neural network for referenceless SAR prediction} In this paper we use a deep neural network to predict the BSS-Eval SAR scores from the output signals of a source separation system. The DNNs we use are fully connected feed forward neural networks as shown in Fig. \ref{fig:dnn}. SAR was selected as a case study: it has been shown to be an indicator of the magnitude of perceptual artifacts in the separated signals \cite{ward:18:bepppsvs,hagen:17:pessrm}. \begin{figure}[t] \centering \includegraphics[width=0.75\linewidth,height=6.5cm]{DNN2.eps} \caption{\footnotesize{The deep neural network structure that we use in this work. The input is the estimated separated signal and the output is its corresponding quality score.}} \label{fig:dnn} \end{figure} The DNN is trained to map the extracted features of the separated sources to their corresponding SAR values. In this training stage of the DNN, we assume the reference signals are available. Given the reference or clean signals and their corresponding estimated signals from the source separation technique, the SAR is calculated using BSS-Eval \cite{vincent:06:pmi}. We extract features from the separated sources and use these features as input to the DNN. The features we use in this work are mel-frequency spectrograms (MFS), which are calculated by converting the spectrograms of the estimated signals to a mel-frequency scale with 128 frequency channels. The training of the DNN parameters is done by minimizing the mean-square error between the estimated SAR values from the DNN and the corresponding SAR values calculated using BSS-Eval. The trained DNN is then used to estimate the SAR values for a new set of separated sources without using the reference signals. The MFS features are extracted from the separated sources and fed to the trained DNN to estimate the SAR values of the input features.
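As a minimal sketch of this training objective (not the actual network of Fig. \ref{fig:dnn}, which has three 500-unit hidden layers and 5120-dimensional inputs), a one-hidden-layer ReLU regressor with a linear output can be fitted to synthetic feature/score pairs by gradient descent on the mean-square error; all sizes and data here are toy stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hid = 64, 32                  # toy sizes; the paper uses 5120 -> 500
X = rng.standard_normal((256, d_in))  # stand-in feature vectors
w_true = rng.standard_normal(d_in)
y = X @ w_true / np.sqrt(d_in)        # synthetic "SAR" targets

# One hidden ReLU layer and a linear output, trained by gradient descent.
W1 = rng.standard_normal((d_in, d_hid)) * np.sqrt(2.0 / d_in)
b1 = np.zeros(d_hid)
w2 = rng.standard_normal(d_hid) * np.sqrt(1.0 / d_hid)
b2, lr = 0.0, 0.02

losses = []
for _ in range(800):
    h = np.maximum(X @ W1 + b1, 0.0)       # hidden activations
    pred = h @ w2 + b2                     # predicted scores
    err = pred - y
    losses.append(np.mean(err ** 2))       # mean-square-error objective
    g_out = 2.0 * err / len(y)             # backpropagation
    g_h = np.outer(g_out, w2) * (h > 0)
    W1 -= lr * (X.T @ g_h)
    b1 -= lr * g_h.sum(axis=0)
    w2 -= lr * (h.T @ g_out)
    b2 -= lr * g_out.sum()

print(losses[0], losses[-1])               # the MSE drops as training proceeds
```

In practice a deep-learning framework would be used, but the objective, the ReLU hidden layers, and the linear output match the description above.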
\section{Experiments} \label{sec:method} We undertook a pilot study to predict the sources-to-artifact ratio (SAR) as provided by BSS-Eval. The audio data and the source separation algorithms were taken from the SiSEC-2016-MUS-task challenge \cite{Liutkus:17:ssec}. The data consists of 100 stereo songs, though four of them were corrupted and so were removed. Each song is a mixture of vocals, bass, drums, and other musical instruments. The SiSEC-2016-MUS-task involved separating these four sources from each song in the dataset. In total, 24 different source separation algorithms with differing performance were submitted to this challenge. The following submitted source separation algorithms are blind source separation algorithms: DUR \cite{Durrieu:12:ammrpemass}, KAM \cite{Liutkus:15:saslkam}, OZE \cite{Ozerov:12:gffhpias}, RAF \cite{Rafii:13:rpet}, JEO \cite{Jeong:17:svsrpca}, and HUA \cite{Huang:12:svsmrrp}, and the following submitted algorithms are supervised source separation algorithms using deep neural networks: STO \cite{Stoter:16:cfmuss}, UHL \cite{Stefan:17:imssbdnntdanb}, NUG \cite{arie:16:masswdnn}, CHA \cite{chandna:17:massudcnn}, GRA \cite{Emad:16:scassdnne}, and KON \cite{site:sisec17}. The separated signals using the Ideal Binary Mask (IBM) \cite{Liutkus:17:ssec} are also included in this data. More details about each algorithm can be found on the SiSEC-2016 website \cite{site:sisec17}. These source separation algorithms produced separated signals with a wide range of SAR values (from $-10$\,dB to $20$\,dB). In our experiments we aimed to predict the SAR for the vocal separated from each song for all the source separation algorithms that were submitted to this challenge. We tested three different scenarios of varying difficulty: \begin{itemize} \item Test 1: The DNN model was used to predict the SAR for the source separation algorithm for which it had been trained. We call this test a \textit{within-algorithm test}.
This was conducted separately for each separation algorithm to examine any algorithm-dependence in the results. \item Test 2: The DNN model was trained using data from all 24 source separation algorithms simultaneously, then used to predict SAR values of each of the 24 source separation algorithms. We call this test an \textit{across-known-algorithms test}. \item Test 3: The DNN model was trained using data from 17 source separation algorithms simultaneously, then used to predict SAR values for 7 source separation algorithms not used in the training. We call this test an \textit{across-unknown-algorithm test}. \end{itemize} The 96 available songs (non-corrupted) from the SiSEC-2016 dataset were split into 67 training songs and 29 test songs, all processed by the algorithms used in the tests. As the perceptual quality varies over time for musical signals, the SAR was calculated every 117 milliseconds (ms) over a time window of 464 ms on a 116-second (s) excerpt of every song. The goal of the trained DNNs was to predict the time-varying SAR for every song and source separation algorithm in the test data set. The DNNs were deep fully connected feed forward networks as shown in Fig. \ref{fig:dnn}, consisting of three hidden layers using a rectified linear unit (ReLU) activation function for all but the last layer, which used a linear activation function. The number of nodes in each hidden layer was 500. The input features were calculated as follows: the stereo inputs were converted to mono by taking the average between the two channels; the spectrogram was calculated and converted to mel-frequency spectrograms (MFS) with 128 frequency channels. We stacked 40 neighbouring MFS frames to form the inputs of the DNN with dimension $40\times128=5120$ MFS values, where 40 is the number of stacked frames, and each frame contains 128 frequency bands.
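The input construction described above (40 neighbouring 128-band MFS frames flattened into one 5120-dimensional vector) can be sketched as follows; a random array stands in for the mel spectrogram, whose computation is omitted here:

```python
import numpy as np

def stack_frames(mfs, context=40):
    """Stack `context` neighbouring MFS frames into one flat input
    vector per position, giving context * n_mels features each."""
    n_frames, n_mels = mfs.shape
    return np.stack([mfs[i:i + context].ravel()
                     for i in range(n_frames - context + 1)])

rng = np.random.default_rng(0)
mfs = rng.random((1000, 128))   # stand-in mel spectrogram (frames x bands)
inputs = stack_frames(mfs)
print(inputs.shape)             # (961, 5120): 40 * 128 = 5120 per input
```

Each row of `inputs` is one DNN input vector; consecutive rows overlap by 39 frames, matching the dense time-varying SAR targets.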
To evaluate how well the DNNs could predict the SAR values without using the reference signals, we compared the estimated SAR as output from the DNNs with the SAR values calculated from the BSS-Eval toolkit using the reference signals; the average absolute error and the correlation between these were used to evaluate the accuracy of the DNN predictions. \section{Results} \label{sec:results} Table \ref{all_sisec2016} shows the mean absolute error and the mean correlation between the referenceless estimated SAR values using DNNs and the calculated SAR using BSS-Eval with reference signals (reference SAR) for the three scenarios (Test 1 to Test 3). \begin{table}[t] \centering \scalebox{0.8} { \begin{tabular}{||c | c c | c c | c c||} \hline\hline & \multicolumn{2}{|c|}{Test1} & \multicolumn{2}{|c|}{Test2} & \multicolumn{2}{|c||}{Test3} \\ [0.5ex] Algorithm & Error & Corr. & Error & Corr. & Error & Corr. \\ \hline CHA & 1.2 & 0.82 & 1.5 & 0.83 & 0.7 & 0.89 \\ GRA2 & 1.4 & 0.87 & 1.5 & 0.86 & 1.3 & 0.92 \\ GRA3 & 1.3 & 0.80 & 1.6 & 0.81 & 1.7 & 0.89 \\ IBM & 1.3 & 0.90 & 2.9 & 0.86 & 3.1 & 0.93 \\ JEO1 & 0.8 & 0.89 & 1.3 & 0.76 & 0.9 & 0.89 \\ KAM1 & 1.2 & 0.83 & 1.2 & 0.79 & 0.9 & 0.87 \\ KAM2 & 0.9 & 0.81 & 1.0 & 0.75 & 0.6 & 0.85 \\ KON & 1.3 & 0.90 & 1.3 & 0.88 & 1.3 & 0.92 \\ NUG1 & 1.4 & 0.89 & 1.1 & 0.88 & 0.5 & 0.95 \\ NUG2 & 1.3 & 0.89 & 1.1 & 0.88 & 0.5 & 0.96 \\ NUG3 & 1.4 & 0.89 & 1.2 & 0.89 & 0.8 & 0.95 \\ OZE & 1.0 & 0.72 & 1.1 & 0.73 & 0.9 & 0.80 \\ RAF1 & 0.9 & 0.75 & 1.3 & 0.72 & 1.2 & 0.78 \\ STO1 & 1.1 & 0.90 & 1.0 & 0.87 & 0.5 & 0.94 \\ UHL3 & 1.5 & 0.86 & 1.8 & 0.85 & 1.5 & 0.93 \\ NUG4 & 1.5 & 0.89 & 1.2 & 0.89 & 1.6 & 0.92 \\ UHL2 & 1.5 & 0.84 & 1.7 & 0.85 & 1.5 & 0.90 \\ \hline DUR & 1.2 & 0.75 & 1.7 & 0.72 & 3.7 & 0.74 \\ HUA & 0.8 & 0.66 & 1.1 & 0.61 & 4.4 & 0.30 \\ JEO2 & 0.8 & 0.95 & 1.1 & 0.93 & 1.6 & 0.93 \\ RAF2 & 1.0 & 0.77 & 1.1 & 0.73 & 1.4 & 0.70 \\ RAF3 & 1.0 & 0.82 & 1.4 & 0.78 & 2.0 & 0.79 \\ STO2 & 1.1 & 0.90 & 1.0 & 0.88 & 1.1 &
0.88 \\ UHL1 & 1.4 & 0.85 & 1.3 & 0.86 & 1.5 & 0.86 \\ \hline \end{tabular} } \\ [1ex] \caption{\footnotesize{The mean absolute error in dB and the mean correlation between the referenceless estimated SAR values using DNNs and the calculated SAR using BSS-Eval with reference signals (reference SAR) for each source separation algorithm. The horizontal line separates the algorithms used for training (above the line) and those used for testing (below the line) in Test 3.}} \label{all_sisec2016} \end{table} \subsection{Test 1: the within-algorithm test} Test 1 was intended to be a case where a DNN could be trained individually for a given separation algorithm, and hence should give the most favourable results as the DNN is customised for a single case. For this, we independently trained 24 DNNs: one for each source separation algorithm. Each DNN in this case was used to estimate the SAR for the separation algorithm for which it was trained. The same set of training songs and the same set of test songs were used for each algorithm, with no overlap between the two sets of songs. The error in the predictions was calculated as the difference between the predicted SAR from each DNN, and the reference SAR for the same separated signal. The mean absolute error between the predicted and reference SAR was $1.2$\,dB, and ranged from $0.8$\,dB to $1.5$\,dB for each separation algorithm. The correlation between the predicted and measured SAR ranged from $0.66$ to $0.95$ for each algorithm, with an average over the 24 algorithms of $0.84$. Compared to the range of SAR values of $-10$\,dB to $20$\,dB, the mean absolute error of $1.2$\,dB represents 4\% of the range. This suggests that the SAR values estimated without using a reference could be used to discriminate between the performance of some combinations of algorithm and song.
However, it may not be able to discriminate between the average results of some of the algorithms in the SiSEC-2016-MUS-task \cite{Liutkus:17:ssec}, and hence further refinement is required. \subsection{Test 2: the across-known-algorithms test} Test 2 was intended to be a case where a single DNN was trained using a set of separation algorithms, which was then used to predict the results of any separation algorithm included in its training set. This requires a more generalised set of predictions compared to Test 1, and hence was intended to be a more challenging test. The single DNN was trained using the same training set of songs employed in Test 1, though this time using the results from all 24 source separation algorithms. The trained DNN was then used to evaluate the separated vocal signals from the test set songs individually for each of the same 24 source separation algorithms. The results are shown in Table \ref{all_sisec2016}: the mean absolute error between the predicted and reference SAR was $1.4$\,dB, and ranged from $1.0$\,dB to $2.9$\,dB for each separation algorithm. The correlation between the predicted and measured SAR ranged from $0.61$ to $0.93$ for each algorithm, with an average over the 24 algorithms of $0.82$. \begin{figure}[t] \includegraphics[width=1\columnwidth]{Corr.eps} \caption{\footnotesize{The correlation between the estimated and reference SAR values for a song separated by source separation algorithm GRA2.}} \label{fig:exp2} \end{figure} As an example of the correlation between the estimated and actual SAR results, Fig. \ref{fig:exp2} shows the correlation between the estimated and reference SAR values for a song separated by source separation algorithm GRA2. As can be seen from the figure, the estimated SAR values are highly correlated with the reference SAR. Compared to the range of SAR values of $-10$\,dB to $20$\,dB, the mean absolute error of $1.4$\,dB represents nearly 5\% of the range.
Though the performance is less accurate for this more challenging test, even the worst-case mean absolute error of $2.9$\,dB indicates that the referenceless SAR prediction could be used to discriminate between the performance of some combinations of algorithm and song, but again further refinement is required. \subsection{Test 3: the across-unknown-algorithm test} Test 3 was intended to be a case where a single DNN was trained using a set of separation algorithms, which was then used to predict the results of any separation algorithm, including those not in its training set. This requires further generalisation of the results, to both songs and algorithms outside of the training set, and is the most challenging of the tests used. For this, the first 17 source separation algorithms in Table \ref{all_sisec2016} were used for training and validation, and the last 7 algorithms (separated by a horizontal line in Table \ref{all_sisec2016}) were used for testing; the training and testing were again undertaken using separate sets of songs. In addition, the DNN was tested separately for each source separation algorithm using solely the songs from the test set, with the results shown in Table \ref{all_sisec2016}. The mean absolute error between the predicted and reference SAR was $2.3$\,dB, and ranged from $1.1$\,dB to $4.4$\,dB for each separation algorithm in the test set, and from $0.5$\,dB to $3.1$\,dB for each separation algorithm in the training set. The average correlation between the predicted and measured SAR time series was $0.74$, with a range of $0.3$ to $0.93$ for the test set and $0.78$ to $0.96$ for the training set. As expected, the performance was less accurate for this test, though the worst-case error would still allow discrimination between some combinations of algorithm and song.
\section{CONCLUSIONS} \label{sec:CONCLUSIONS} In this paper we introduced a novel referenceless evaluation method to assess a range of audio source separation systems without the need for the original sources. We used a deep neural network to predict the sources-to-artifacts ratio (SAR) \cite{vincent:06:pmi} of singing-voice recordings extracted from music mixtures of varying genres. Our experimental results show that the DNNs were capable of predicting the SAR without the reference signals, in most cases resulting in an error that was low enough (mostly $<1.5$\,dB) to allow discrimination between the performance of some combinations of algorithm and song, and with a high correlation (mostly $>0.80$) with the SAR computed by BSS-Eval using the reference signals. This work indicates that the idea of using DNNs to predict the output of objective source separation evaluation toolkits without the use of reference signals produces useful results, and can be extended by training the DNNs to predict the other BSS-Eval metrics or perceptually related quality scores. \ninept \section*{Acknowledgment} This work is supported by grant EP/L027119/2 from the UK Engineering and Physical Sciences Research Council (EPSRC). \bibliographystyle{IEEEbib}
\section{Introduction} The problem of determining whether a Boolean formula is unsatisfiable is called the Boolean unsatisfiability (UNSAT) problem. Its complement, the Boolean satisfiability (SAT) problem, is famous in mathematical logic and computing theory, and was one of the first problems proven NP-complete \cite{Cook1971, Levin1973}. SAT is widely studied because of its well-known significance in both theory and practice \cite{SATSurvey96,cookPvsNP,Kautz2007Survey,Aaronson2016}. Despite the worst-case exponential running time of all known algorithms for SAT, a lot of impressive progress has been made in solving practical SAT problems with up to a million variables \cite{LargeVariablesApplication03,LargeVariableApplication2003}. Based on the DPLL method \cite{DP1960, Davis1962}, a large number of high-performance algorithms for SAT have been developed: local search algorithms \cite{Selman95localsearch, Li03qingting, UnitWalk2005, StochasticLocalSearch2014}, stochastic algorithms \cite{PPSZ1998, ProbabilisticAlgorithm1999, PPSZ2005}, conflict-driven clause learning algorithms \cite{GRASP1999, chaff2001, CDLZhang2001}, and so on. These algorithms are essentially logic search methods. A second category of interesting methods is related to constraint satisfaction problems and employs various optimization strategies, for example, Lagrangian techniques \cite{Chang1995, Shang98adiscrete}, and Newton's method and descent methods for Universal SAT \cite{Gu1999}. A third typical method is based on statistical physics analysis, which suggests new effective heuristic algorithms for finding satisfying assignments for random $k$-SAT problems \cite{yedidia2001generalized, maneva2007a}. There is too much elegant work to mention; we apologize for missing references, and for more related work please refer to survey papers such as \cite{Gu1999,Kautz2007Survey,Aaronson2016} and the references therein. Many of these methods can also be used to study UNSAT indirectly.
Compared with the large number of studies on SAT, there has been little direct work \cite{GUNSAT2007} on UNSAT. Note that a formula being unsatisfiable is logically equivalent to its negation being valid. So UNSAT amounts to the tautology problem, which is co-NP-complete. When an unsatisfiable conjunctive normal form (CNF) formula contains too many clauses, its search space is intractably huge. Naturally, any DPLL-based method for such an unsatisfiable formula will require a huge amount of time. In practice, for a long time there was no local search algorithm for UNSAT, until GUNSAT \cite{GUNSAT2007} was proposed. It was therefore asked \cite{Kautz2007Survey} whether a procedure dramatically different from DPLL can be found for handling UNSAT. This study presents a novel method called linear algebra formulation (LAF) to address this issue efficiently. In the study of SAT, researchers often restrict SAT to special categories such as $k$-SAT, XOR-SAT, Horn-SAT, 1-in-3-SAT and so on. By Schaefer's dichotomy theorem \cite{Schaefer1978}, each restriction is either in $\mathbf{P}$ or $\mathbf{NP}$-complete. Among the above categories, 1-in-3-SAT received our attention because we can establish a natural relation between it and a system of linear equations. A $3$-CNF formula is called 1-in-3 satisfiable if there is a truth assignment to its Boolean variables such that each clause has exactly one true literal, and otherwise 1-in-3 unsatisfiable. The 1-in-3-SAT problem is to determine whether a $3$-CNF formula is 1-in-3 satisfiable, which is $\mathbf{NP}$-complete \cite{Schaefer1978}. Similarly, the 1-in-3-UNSAT problem is to determine whether a $3$-CNF formula is 1-in-3 unsatisfiable. The basic idea of LAF for UNSAT is as follows. It first converts the UNSAT problem into a 1-in-3-UNSAT problem.
Then it converts the 1-in-3-UNSAT problem into a Boolean solution (BoS) problem for the corresponding system of linear equations, where a BoS is a solution composed merely of $0$ and $1$. For the resulting linear system, we develop a linear algebra formulation to efficiently test whether it has any BoS. Through this approach, we obtain some sufficient conditions for UNSAT. Let us explain the idea with the following toy example. \begin{exam} Consider the following $3$-CNF formula \begin{eqnarray} (X_1\vee X_2\vee X_3) \wedge (X_2\vee X_3\vee X_4) \wedge (X_1\vee X_4) \label{fm:unsatexam} \end{eqnarray} which is 1-in-3 unsatisfiable. First, Formula (\ref{fm:unsatexam}) is transformed into a linear system \begin{eqnarray} X_1+X_2+X_3 & = & 1 \nonumber\\ X_2+X_3+X_4 & = & 1 \label{eq:unslb} \\ X_1+X_4 & = & 1 \nonumber \end{eqnarray} with a slight abuse of notation, denoting a Boolean variable and the corresponding equation variable by the same symbol. As a result, formula (\ref{fm:unsatexam}) is 1-in-3 satisfiable iff system (\ref{eq:unslb}) has some BoS. To restrict $X_1,\cdots, X_4$ to being $0$ or $1$, we impose the following quadratic constraints \begin{eqnarray}\label{eq:qdct} X_1 = X_1^2, X_2 = X_2^2, X_3 = X_3^2, X_4 & = & X_4^2 \label{eq:unslbs} \end{eqnarray} Now, formula (\ref{fm:unsatexam}) is 1-in-3 satisfiable iff the polynomial system consisting of (\ref{eq:unslb}) and (\ref{eq:unslbs}) has a real solution. Unfortunately, so far there is no fast (polynomial time) algorithm for deciding if a quadratic system has a real solution.
We then turn to a quadratic system which has the same real solutions and contains only degree-two monomials, as follows \begin{eqnarray}\label{exam:inerprod} X_1^2+X_2^2+X_3^2 & = & 1 \label{eq:cd} \\ X_2^2+X_3^2+X_4^2 & = & 1 \\ X_1^2+X_4^2 & = & 1 \label{eq:sqeqn} \\ (X_1+X_2+X_3)^2 & = & 1 \\ (X_2+X_3+X_4)^2 & = & 1 \\ (X_1+X_4)^2 & = & 1 \\ (X_1+X_2+X_3)\cdot(X_2+X_3+X_4) & = & 1 \label{eq:intps}\\ (X_1+X_2+X_3)\cdot(X_1+X_4) & = & 1 \\ (X_2+X_3+X_4)\cdot(X_1+X_4) & = & 1 \label{eq:intpd} \\ X_1\cdot(X_2+X_3+X_4) & = & X_1^2 \label{eq:intprod} \\ X_2\cdot(X_1+X_4) & = & X_2^2 \\ X_3\cdot(X_1+X_4) & = & X_3^2 \\ X_4\cdot(X_1+X_2+X_3) & = & X_4^2 \label{eq:szd} \\ X_1\cdot(X_1+X_2+X_3) & = & X_1^2 \label{eq:zqm} \\ X_2\cdot(X_1+X_2+X_3) & = & X_2^2 \\ X_3\cdot(X_1+X_2+X_3) & = & X_3^2 \\ X_2\cdot(X_2+X_3+X_4) & = & X_2^2 \\ X_3\cdot(X_2+X_3+X_4) & = & X_3^2 \\ X_4\cdot(X_2+X_3+X_4) & = & X_4^2 \\ X_1\cdot(X_1+X_4) & = & X_1^2 \\ X_4\cdot(X_1+X_4) & = & X_4^2 \label{eq:zud} \end{eqnarray} Similarly, system (\ref{eq:unslb}) has a BoS iff the quadratic system consisting of equations (\ref{eq:cd}-\ref{eq:zud}) has a BoS. Now, we relinearize system (\ref{eq:cd}-\ref{eq:zud}) by substituting all monomials $X_iX_j$ and $X_jX_i$ with a single new variable, for $i\leq j$. Solving the linearized system yields \begin{eqnarray} X_1X_2=X_1X_3 =X_1X_4=X_2X_3=X_2X_4 =X_3X_4= 0 \label{exam:cdxz} \end{eqnarray} From equations (\ref{eq:intprod}-\ref{eq:szd}) and (\ref{exam:cdxz}), we must have \begin{eqnarray}\label{exam:sqz} X_1^2 =X_2^2=X_3^2=X_4^2= 0 \end{eqnarray} This evidently contradicts the system composed of equations (\ref{eq:cd}-\ref{eq:sqeqn}). That is, the relinearized system of the quadratic system (\ref{eq:cd})-(\ref{eq:zud}) is inconsistent. Thus, the quadratic system (\ref{eq:cd})-(\ref{eq:zud}) has no real solution. As a result, system (\ref{eq:unslb}) has no BoS. Therefore, formula (\ref{fm:unsatexam}) is 1-in-3 unsatisfiable.
\end{exam} The above example shows that the inconsistency of the relinearized system can be utilized to infer 1-in-3-UNSAT of a $3$-CNF formula. In the following, we formalize and generalize the idea and techniques of the above toy example. For the sake of clarity, throughout the article we use notation roughly as follows: capital letters with subscripts are used for Boolean variables and equation variables; script letters $\ell $, $\mathcal C$ and $\mathcal F$, etc, stand for literals, clauses and formulas; lower case letters $f, g, h$ and capital Greek letters $\Delta, \Theta$, etc, are used for Boolean or real functions; $\mathbb B=\{ 0, 1\}$ is the set of Boolean values, and accordingly $\mathbb B^n$ is the $n$-dimensional Boolean space; $\mathbb R$/$\mathbb N$ is the set of real/natural numbers. \section{Formal Linear Algebra Formulation}\label{sec:basic} Because the Boolean unsatisfiability problem can be efficiently reduced to 1-in-3-UNSAT, which is co-NP-complete, it suffices to study efficient methods for 1-in-3-UNSAT. To resolve 1-in-3-SAT/UNSAT, our basic idea is to convert it into consistency problems for the related linear system, over various fields. The approach consists of several crucial steps, as follows \begin{enumerate} \item The first is the linear transformation (LT), which converts a Boolean formula into a linear system such that the Boolean formula is 1-in-3 satisfiable iff the linear system has some BoS; \item The second is the quadratic propagation (QP), which extends the transformed linear system into a quadratic system such that the two systems have the same BoSs; \item The third is the relinearization (ReL), which abstracts the quadratic system as a linear system such that they have the same BoSs. \end{enumerate} The above procedure contains linearization twice: once in the conversion from a Boolean formula into a linear system, and once in abstracting a quadratic system as a linear system.
So this approach is called the {\it linear algebra formulation} (LAF), to highlight the central role of linearization. In the following, the above procedure is formulated formally. To this end, we follow the standard concepts of propositional logic formulas in terms of {\it literals} and {\it clauses}, and conjunctive normal form (CNF). Given $n$-many Boolean variables $X=\{X_1, \cdots, X_n\}$, a CNF formula $\mathcal F$ is defined as \begin{eqnarray}\label{fm:CNF} \mathcal F & = & \wedge_{i\leq m} \mathcal C_i = \wedge_{i\leq m} \vee_{j\leq j_i}\ell_{ij}(X) \label{fm:CNFC} \end{eqnarray} where $\mathcal C_i= \vee_{j\leq j_i}\ell_{ij}(X)$ are clauses, and each literal $\ell_{ij}(X)$ is of the form $X_u$ or $\neg X_u$ for some $1\leq u\leq n$. By convention, $\mathcal F$ has {\it pure polarity} if at most one of $X_u$ and $\neg X_u$ occurs in $\mathcal F$ for any $1\leq u\leq n$; $\mathcal F$ is a {\it positive formula} if only positive literals $X_u$ occur in $\mathcal F$; it is a {\it $k$-CNF formula\footnote{In this article, a $k$-CNF formula is a CNF formula in which each clause has at most $k$ literals. In some other literature, a $k$-CNF formula is a CNF formula in which each clause has exactly $k$ literals.}} if $j_i\leq k$ for all $i\leq m$. The notion of 1-in-3-SAT/UNSAT is crucial to LT, and will be extended to general CNF formulas. A CNF formula $\mathcal F$ of form (\ref{fm:CNF}) is called {\it exactly one satisfiable} (EOS) if there is a truth assignment $X^*$ to the variables $X$ such that each clause $\mathcal C_i$ has exactly one true literal, and otherwise {\it exactly one unsatisfiable} (EOU).
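On small instances, the EOS/EOU definition can be checked directly by brute force. The sketch below assumes a clause encoding as lists of signed integers ($+u$ for $X_u$, $-u$ for $\neg X_u$), which is not notation used in the paper:

```python
from itertools import product

def is_eos(clauses, n):
    """Exactly-one satisfiability by exhaustive search: the formula is
    EOS iff some assignment makes exactly one literal true per clause.
    Literals are signed integers: +u for X_u, -u for its negation."""
    for assign in product([False, True], repeat=n):
        def lit(l):
            value = assign[abs(l) - 1]
            return value if l > 0 else not value
        if all(sum(lit(l) for l in c) == 1 for c in clauses):
            return True
    return False

# The toy formula of the introductory example is EOU ...
print(is_eos([[1, 2, 3], [2, 3, 4], [1, 4]], 4))   # False
# ... while dropping its last clause leaves an EOS formula.
print(is_eos([[1, 2, 3], [2, 3, 4]], 4))           # True
```

The exhaustive search takes $2^n$ assignments, which is exactly the cost the LAF machinery below is designed to avoid.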
For a given positive formula $\mathcal F$ of form (\ref{fm:CNF}), its EOS can be reduced to the existence of a BoS of the linear system defined by \begin{eqnarray}\label{eq:ATLBS} \sum_{j\leq j_i} \ell_{i,j}(X) & = & 1 \quad \quad \quad 1\leq i\leq m, \label{eq:TLBS} \end{eqnarray} where the positive literals $\ell_{i,j}(X)=X_k$ of $\mathcal F$ become equation variables. Let $\textbf{True}\leftrightarrow 1, \textbf{False}\leftrightarrow 0$ be the one-to-one relation between truth values $\{\textbf{True}, \textbf{False}\}$ and Boolean values $\{1, 0\}$, then \begin{prop}\label{prop:BoS-EOS} A positive CNF formula $\mathcal F$ of form (\ref{fm:CNF}) is EOS iff the corresponding linear system (\ref{eq:ATLBS}) has a BoS. \end{prop} As a result, we can study the EOS of a positive formula by investigating the BoS of the corresponding linear system. Two formulas $\mathcal F$ and $\mathcal H$ are said to be {\it equi-exactly-one-satisfiable} (equi-EOS) if $\mathcal F$ is exactly one satisfiable whenever $\mathcal H$ is and vice versa. If the $\mathcal F$ defined by (\ref{fm:CNF}) is of pure polarity, then we can construct an equi-EOS positive formula $\mathcal F^*$ by simply substituting every negative literal $\neg X_u$ with $X_u$. As $X_u$ itself does not occur in $\mathcal F$ wherever $\neg X_u$ does, $\mathcal F$ and $\mathcal F^*$ must be equi-EOS. Therefore, we can conclude that \begin{prop}\label{prop:PP2P} Each pure polarity CNF formula is equi-EOS to a positive CNF formula. \end{prop} Therefore, the EOS of each pure polarity formula can also be studied through some linear system. In general, a CNF formula $\mathcal F$ may not have the pure polarity property. In this case, we introduce auxiliary variables $Y_u$ for all $\neg X_u$. Then we can construct an equi-EOS positive formula $\mathcal F^*$ as follows. Let $\mathcal F(Y/\neg X)$ be the formula resulting from substituting all negative literals $\neg X_u$ with $Y_u$ in $\mathcal F$.
Let \begin{eqnarray}\label{eq:G2PF} \mathcal F^* & = & \wedge_u (X_u\vee Y_u) \wedge \mathcal F(Y/\neg X) \label{eq:GtoPF} \end{eqnarray} where $u$ ranges over the index set $I$ of all $u$ such that $\neg X_u$ occurs in $\mathcal F$. Such an $\mathcal F^*$ is called a {\it positivization} of $\mathcal F$, also denoted by $\mathcal P(\mathcal F)$. Now \begin{prop}\label{prop:G2P} Each CNF formula $\mathcal F$ is equi-EOS to a positive formula $\mathcal P(\mathcal F)$. \end{prop} Accordingly, the EOS of $\mathcal F$ can be converted into the existence of a BoS of the following linear system \begin{eqnarray}\label{eq:GATLBS} \sum_{j\leq j_i} \ell_{i,j}^*(X) & = & 1, \quad \quad \quad 1\leq i\leq m \label{eq:GLBS} \label{eq:PL} \\ X_u+Y_u & = & 1 \quad \quad \quad u\in I \label{eq:PLC} \end{eqnarray} where $\ell_{i,j}^*(X)=X_u$ if $\ell_{i,j}(X)=X_u$ and $\ell_{i,j}^*(X)=Y_u$ if $\ell_{i,j}(X)=\neg X_u$. As a consequence, the EOS problem of a generic CNF formula $\mathcal F$ can be converted into the existence of a BoS of the related linear system. The transformation from a CNF formula $\mathcal F$ into a linear system like (\ref{eq:ATLBS}) or (\ref{eq:GATLBS}) is the so-called {\it linearizing transformation}. Without loss of generality, in the sequel we study the BoS of linear system (\ref{eq:ATLBS}) for positive formulas instead of generic ones. In general, it is $\mathbf{NP}$-hard to decide whether a linear system (\ref{eq:GATLBS}) has a BoS. Nevertheless, we can exploit the idea behind the toy example in the previous section to approximate BoS. The basic idea is to resort to some easily solvable BoS-equisolvable linear system, where two linear systems $\mathcal L_1$ and $\mathcal L_2$ are BoS-equisolvable if $\mathcal L_1$ has a BoS iff $\mathcal L_2$ has one. In light of this, we extend a given linear system $\mathcal L$ to some BoS-equisolvable linear system $\mathcal L^*$ containing $\mathcal L$ as a subsystem.
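The positivization step just described is mechanical; a minimal sketch, where literal $+u$ encodes $X_u$, $-u$ encodes $\neg X_u$, and each auxiliary $Y_u$ gets a fresh index above $n$ (the index scheme is an encoding choice of this sketch):

```python
def positivize(clauses, n):
    """Positivization P(F): replace every negative literal -u by a fresh
    variable Y_u and add a coupling clause (X_u or Y_u), which in the
    associated linear system enforces X_u + Y_u = 1.

    Returns (positive clauses, new variable count).
    """
    negated = sorted({-l for c in clauses for l in c if l < 0})
    y = {u: n + k + 1 for k, u in enumerate(negated)}  # fresh index for Y_u
    positive = [[l if l > 0 else y[-l] for l in c] for c in clauses]
    couplings = [[u, y[u]] for u in negated]
    return couplings + positive, n + len(negated)
```

For instance, $(X_1\vee\neg X_2)\wedge(X_2\vee X_3)$ becomes $(X_2\vee Y_2)\wedge(X_1\vee Y_2)\wedge(X_2\vee X_3)$ with $Y_2$ encoded as variable 4.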
To this end, the equations in $\mathcal L^*$ are obtained by two consecutive algebraic operations on $\mathcal L$: QP and ReL. Herein, the QP over (\ref{eq:ATLBS}) consists of two sorts of operations. One multiplies pairs of equations of $\mathcal L$ side by side, which is called {\it inner quadratic propagation} (IQP); the other multiplies equations of $\mathcal L$ side by side with the following quadratic constraints \begin{eqnarray}\label{eq:QCT} X_u & = & X_u^2, \quad \quad \quad 1\leq u\leq n \label{eq:QC} \end{eqnarray} which is called {\it constraint quadratic propagation} (CQP). Formally, IQP and CQP are carried out respectively by \begin{eqnarray} (\sum_{j\leq j_i} \ell_{i,j}(X))\cdot (\sum_{j\leq j_t} \ell_{t,j}(X)) & = 1, & 1\leq i\leq t\leq m \label{eq:QPF} \label{eq:QP} \label{eq:IQP} \\ X_u \cdot \sum_{j\leq j_i} \ell_{i,j}(X) & = X_u^2, &\quad 1\leq u\leq n, \quad 1\leq i\leq m \label{eq:CQP} \end{eqnarray} ReL substitutes all quadratic monomials $X_iX_j$ in the quadratic system (\ref{eq:QP}-\ref{eq:CQP}) with new variables, say $Z=(Z_1, \cdots, Z_v)$ with $v= \frac{n(n+1)}{2}$, in order to transform this quadratic system into a linear system. Let $ReL(Z)$ denote this linear system; then, for a positive $k$-CNF formula, \begin{thm}\label{thm:main} The following three assertions are equivalent: \begin{enumerate} \item The linear system (\ref{eq:ATLBS}) has a BoS; \item The quadratic system (\ref{eq:QP}-\ref{eq:CQP}) has a BoS; \item $ReL(Z)$ has a BoS. \end{enumerate} \end{thm} \begin{proof} It is easy to show the implications from 1) to 2) and from 2) to 3). In fact, $ReL(Z)$ can be simplified so that it contains (\ref{eq:ATLBS}) as a subsystem. Therefore, 3) naturally implies 1). \end{proof} Based on this theorem and Proposition \ref{prop:BoS-EOS}, we obtain a sufficient condition for a positive formula $\mathcal F$ being EOU, as follows.
\begin{prop}\label{prop:sound} If $ReL(Z)$ has no solution over either $\mathbb R^v$ or $\mathbb N^v$, then it has no BoS. Therefore, $\mathcal F$ is EOU. \end{prop} As a result, the 1-in-3-SAT/UNSAT problem can be converted into a consistency problem of a certain linear system over some field or ring. Sometimes, a satisfying assignment can even be obtained for a 1-in-3-SAT instance. \begin{exam}\label{exam:unqsat} Consider the following $3$-CNF formula \begin{eqnarray} (X_1\vee X_2 \vee X_3)\wedge (X_2\vee X_4 \vee X_5) \wedge (X_2\vee X_6) \wedge (X_3\vee X_4 \vee X_6) \label{fm:toySAT} \end{eqnarray} Applying LAF, we get its $ReL(Z)$ \begin{eqnarray} Y_{11}+Y_{22}+Y_{33} & = & 1 \nonumber\\ & \vdots & \nonumber \\ Y_{33}+Y_{44}+Y_{66} & = & 1 \label{eq:rlq}\\ & \vdots & \nonumber \\ Y_{36}+Y_{46}+Y_{66} & = & Y_{66} \nonumber \end{eqnarray} It is easy to verify that system (\ref{eq:rlq}) has only one solution \begin{eqnarray} Y_{11}=Y_{15}=Y_{16}=Y_{55}=Y_{56}=Y_{66}=1 \label{eq:BoSu}\\ Y_{12}=Y_{13}=Y_{14}=Y_{22}=Y_{23}=Y_{24}=Y_{25}=Y_{26}=\cdots=Y_{45}=0 \label{eq:BoSz} \end{eqnarray} Accordingly, $(X_1,X_2,X_3,X_4,X_5,X_6)=(1,0,0,0,1,1)$ must be the unique BoS of the linear system associated with (\ref{fm:toySAT}), and correspondingly, $(T,F,F,F,T,T)$ is the unique truth assignment by which formula (\ref{fm:toySAT}) is 1-in-3 satisfiable. \end{exam} \section{Algorithms and Experiments}\label{sec:alg} In the previous section, the LAF for EOS/EOU was established on a mathematically rigorous foundation. It is the core of LAF for general SAT. Hence, we present it as Algorithm \ref{alg:LAT4EOS}. Herein, the outcome `EOS' means that $\mathcal F$ is exactly one satisfiable; `EOU' means that it is not; `Unk' means that the method cannot decide. The soundness of Algorithm \ref{alg:LAT4EOS} is guaranteed by Propositions \ref{prop:BoS-EOS} and \ref{prop:sound}.
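The assignment claimed in Example \ref{exam:unqsat} is easy to check directly; a sketch for positive formulas, with clauses given as lists of 1-based variable indices:

```python
def one_in_three(clauses, x):
    """True iff assignment x (a 0/1 tuple) makes exactly one literal
    true in every clause of a positive formula."""
    return all(sum(x[u - 1] for u in c) == 1 for c in clauses)

# formula (fm:toySAT), positive literals only
F = [[1, 2, 3], [2, 4, 5], [2, 6], [3, 4, 6]]
print(one_in_three(F, (1, 0, 0, 0, 1, 1)))  # True
```

Each clause indeed contains exactly one true variable under $(1,0,0,0,1,1)$, e.g. only $X_6$ in the last clause $X_3\vee X_4\vee X_6$.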
\begin{algorithm}[t] \scriptsize \caption{Kernel Algorithm: LAT for EOU} \label{alg:LAT4EOS} \Input A 3-CNF formula $\mathcal F(X) = \wedge_i \mathcal C_i(X) =\wedge_i \vee_j \ell_{ij} (X)$ with variables $X = \{X_1,\cdots, X_n\}$; \\ \Initial Answer= `Unk'; \\ \Output Answer=`EOS', or `EOU', or `Unk'. \begin{algorithmic}[1] \State Do LT on $\mathcal F(X)$ and obtain a linear system (\ref{eq:ATLBS}) \label{code:FL-LBS} \State Decide whether (\ref{eq:ATLBS}) is consistent over $\mathbb R^n$ and $\mathbb N^n$ \If {it is inconsistent over $\mathbb R^n$ or $\mathbb N^n$} \State Set Answer=`EOU', return \ElsIf {(\ref{eq:ATLBS}) has a unique solution $X^*$ over $\mathbb R^n$ or $\mathbb N^n$} \label{code:US} \If {$X^*$ is a BoS} \State Set Answer= `EOS', return \Else \State Set Answer=`EOU', return \EndIf \EndIf \State Do QP on (\ref{eq:ATLBS}), and obtain a quadratic system (\ref{eq:QP}-\ref{eq:CQP}) \label{code:QP} \State Do relinearization by substitution on (\ref{eq:QP}-\ref{eq:CQP}), and obtain a linear system, say $ReL(Z)$ \label{code:LA} \State Decide whether $ReL(Z)$ is consistent over $\mathbb R^v$ and $\mathbb N^v$ \label{code:lz} \If {$ReL(Z)$ is inconsistent over $\mathbb R^v$ or $\mathbb N^v$} \State Set Answer=`EOU', return \Else \State Set Answer= `Unk', return \EndIf \\ \Return Answer \label{code:end} \end{algorithmic} \end{algorithm} Table \ref{tab:eos} reports the experimental results obtained with Algorithm \ref{alg:LAT4EOS}. In the experiments, the instances are randomly generated $3$-CNF formulas.
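The construction of $ReL(Z)$ and the consistency test over $\mathbb R$ (lines \ref{code:QP}--\ref{code:lz} of Algorithm \ref{alg:LAT4EOS}) can be sketched as follows. The indexing of the $Z$-variables by pairs $(u,v)$, $u\leq v$, is an encoding choice of this sketch, and the consistency test over $\mathbb N$ via HNF is not shown:

```python
import numpy as np
from itertools import combinations_with_replacement

def relinearized_system(rows, n):
    """Build ReL(Z) for a positive formula given as `rows` (clauses as
    lists of 1-based variable indices) over n variables.  Z-variable
    Z_{uv} (u <= v) stands for the monomial X_u X_v.  Returns the
    coefficient matrix A and right-hand side b of A z = b."""
    pairs = list(combinations_with_replacement(range(1, n + 1), 2))
    col = {p: k for k, p in enumerate(pairs)}

    def z(u, v):
        return col[(min(u, v), max(u, v))]

    A, b = [], []
    # IQP: products of pairs of clause equations, each equal to 1
    for ri, rt in combinations_with_replacement(rows, 2):
        row = np.zeros(len(pairs))
        for u in ri:
            for v in rt:
                row[z(u, v)] += 1
        A.append(row)
        b.append(1.0)
    # CQP: X_u times each clause equation equals X_u^2,
    # i.e. sum_v Z_{uv} - Z_{uu} = 0 after relinearization
    for u in range(1, n + 1):
        for r in rows:
            row = np.zeros(len(pairs))
            for v in r:
                row[z(u, v)] += 1
            row[z(u, u)] -= 1
            A.append(row)
            b.append(0.0)
    return np.array(A), np.array(b)

def consistent_over_R(A, b):
    """Rouche-Capelli test: A z = b is consistent over the reals iff
    rank(A) equals the rank of the augmented matrix [A | b]."""
    aug = np.hstack([A, b.reshape(-1, 1)])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)
```

For the single clause $X_1\vee X_2$ the resulting system is consistent; adding the contradictory unit clauses $X_1$ and $X_2$ makes it inconsistent already over $\mathbb R$.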
In Table \ref{tab:eos}, {\bfseries $\#$T} stands for the number of instances with {\bfseries $\#$V} many variables and {\bfseries $\#$C} many clauses; {\bfseries $\#$Unk}, {\bfseries $\#$EOS} and {\bfseries $\#$EOU} denote the numbers of the corresponding answers. The experimental results confirm that LAF is effective for establishing EOU. However, they also show that LAF rarely settles EOS instances. Nevertheless, the experiments provide some insight into 1-in-3-SAT. For example, a 3-CNF formula is often 1-in-3-UNSAT when the number of its clauses exceeds 90 percent of the number of its variables. \begin{table}[!h] \caption{Implementation of Kernel Algorithm} \label{tab:eos} \centering \begin{tabular}{c|c|c|c|c|c} \hline ~~\bfseries $\#$V~~ &~~ \bfseries $\#$C~~ &~~ \bfseries $\#$T ~~ & \bfseries $\#$Unk & \bfseries $\#$EOS &\bfseries $\#$EOU \\ \hline 50 & 41 & 100 & 12 & 9 & 79 \\ \hline 50 & 46 & 100 & 0 & 0 & 100 \\ \hline 70 & 58 & 100 & 8 & 0 & 92 \\ \hline 70 & 66 & 100 & 0 & 1 & 99 \\ \hline 90 & 74 & 100 & 11 & 0 & 89 \\ \hline 90 & 82 & 100 & 0 & 0 & 100 \\ \hline 130 & 109 & 100 & 15 & 0 & 85 \\ \hline 130 & 118 & 100 & 0 & 0 & 100 \\ \hline 150 & 125 & 100 & 36 & 0 & 64 \\ \hline 150 & 136 & 100 & 0 & 0 & 100 \\ \hline \end{tabular} \end{table} As 1-in-3-SAT is $\mathbf{NP}$-complete \cite{Schaefer1978}, SAT can be reduced to 1-in-3-SAT in polynomial time. Therefore, LAF can also be applied to general SAT/UNSAT. The recipe is to perform a series of equisatisfiable transformations as follows. Given a general formula $\mathcal G(X)$ with variables $X=\{X_1,\cdots, X_n\}$, we carry out the following process: \begin{enumerate} \item First, we transform $\mathcal G(X)$ into an equisatisfiable CNF formula, say $\mathcal G^*(X,Y)$. \item Second, from $\mathcal G^*(X,Y)$ we compute an equisatisfiable 3-CNF formula $\mathcal T(X,Y,Z)$.
\item Based on $\mathcal T(X,Y,Z)$, a positive formula $\mathcal F(X,Y,Z,U)$ is computed such that $\mathcal T(X,Y,Z)$ is satisfiable if and only if $\mathcal F(X,Y,Z,U)$ is 1-in-3 satisfiable. \item Algorithm \ref{alg:LAT4EOS} is applied to $\mathcal F(X,Y,Z,U)$. If Algorithm \ref{alg:LAT4EOS} outputs `EOS' for $\mathcal F(X,Y,Z,U)$, then $\mathcal G(X)$ is satisfiable; similarly, if `EOU' is output, then $\mathcal G(X)$ is unsatisfiable. Otherwise, the answer is `Unk', that is, the satisfiability of $\mathcal G(X)$ is not decided by LAF. \end{enumerate} The whole procedure is summarized formally in Algorithm \ref{alg:LAT4SAT}. \begin{algorithm}[t] \scriptsize \caption{LAF Test for SAT} \label{alg:LAT4SAT} \Input A CNF formula $\mathcal G(X)$ with variables $X$; \\ \Initial Answer= `Unk'; \\ \Output Answer=`SAT', or `UNSAT', or `Unk'. \begin{algorithmic}[1] \State Transform $\mathcal G(X)$ into an equisatisfiable CNF formula $\mathcal G^*$ \label{code:CNFTrans} \State Compute an equisatisfiable 3-CNF formula $\mathcal T$ for $\mathcal G^*$ \label{code:3-SATTrans} \State Compute a 3-CNF positive formula $\mathcal F$ for $\mathcal T$ \label{code:PFTrans} \State Call Algorithm \ref{alg:LAT4EOS} for $\mathcal F$ \If {Output is `EOS'} \State Set Answer=`SAT', return \ElsIf {Output is `EOU'} \State Set Answer=`UNSAT', return \Else \State Set Answer=`Unk', return \EndIf \\ \Return Answer \label{code:satend} \end{algorithmic} \end{algorithm} Another major concern about an algorithm is its computational complexity. A short complexity analysis of the two algorithms follows. The complexity of Algorithm \ref{alg:LAT4EOS} is dominated by deciding the consistency of linear systems and by the implementation of QP.
Given a 3-CNF positive formula $\mathcal F$ with $m$ clauses and $n$ variables, $m\leq n$, the final linear system $ReL(Z)$ has $\frac{n(n+1)}{2}$ variables and $mn+\frac{m(m+1)}{2}+n$ linear equations. For the consistency of linear systems, there is an algorithm \cite{Kaltofen2015} of complexity $O(MNr)$ to decide whether a linear system of $M$ linear equations and $N$ variables is consistent over $\mathbb R$, where $r$ is the rank of the coefficient matrix. For consistency over $\mathbb N$, the coefficient matrix must be brought into full-row-rank form in order to compute its Hermite normal form (HNF) \cite{Schrijver1998}. For HNF, the algorithm in \cite{Micciancio2001} can convert an integer $s\times t$ matrix ($s\leq t$) into HNF with complexity $O(st^4)$. Therefore, the complexity of Algorithm \ref{alg:LAT4EOS} is about $O(n^{10})$. Similarly, Algorithm \ref{alg:LAT4SAT} terminates in polynomial time, since its additional steps for converting a general CNF formula into a 3-CNF positive formula are of polynomial cost in the numbers of variables and clauses. Interestingly, if the inconsistency of $ReL(Z)$ over the real number field were also a necessary condition for the corresponding 3-CNF positive formula being 1-in-3 unsatisfiable, then Algorithm \ref{alg:LAT4EOS} could be modified by substituting `Unk' with `EOS'. Accordingly, 1-in-3-SAT would be decided in time $O(n^4m^2+m^4n^2+m^3n^3)$. As a result, SAT would be solved in polynomial time.
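As a concrete check of these counts, the $mn$ term matches the $n\cdot m$ CQP equations (\ref{eq:CQP}) and the $\frac{m(m+1)}{2}$ term the IQP equations (\ref{eq:IQP}); attributing the remaining $n$ equations to the relinearized constraints (\ref{eq:QC}) is an assumption of this sketch:

```python
def rel_size(n, m):
    """Variable and equation counts of ReL(Z) as stated in the text for
    a positive 3-CNF with n variables and m clauses."""
    variables = n * (n + 1) // 2              # one Z_uv per monomial X_u X_v, u <= v
    equations = m * n + m * (m + 1) // 2 + n  # CQP + IQP + n (see lead-in)
    return variables, equations

print(rel_size(50, 41))  # (1275, 2961)
```

For the first experimental setting in Table \ref{tab:eos} ($n=50$, $m=41$), the system thus already has 1275 unknowns and 2961 equations.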
Unfortunately, the inconsistency of $ReL(Z)$ over $\mathbb R$ is not such a necessary condition; here is a counterexample: \begin{eqnarray} \mathcal F & = &(X_1 \vee X_2 \vee X_3) \wedge (X_2\vee X_4\vee X_5) \wedge (X_2\vee X_6) \wedge (X_3\vee X_4 \vee X_6) \nonumber \\ & & \wedge (X_1 \vee X_7 \vee X_8) \wedge (X_1\vee X_{9}\vee X_{10}) \wedge (X_1\vee X_{11}\vee X_{12}) \nonumber \\ & & \wedge (X_7 \vee X_{13} \vee X_{14}) \wedge (X_{9}\vee X_{13} \vee X_{15}) \wedge (X_{11}\vee X_{14} \vee X_{15}) \end{eqnarray} However, its corresponding $ReL(Z)$ has no solution over $\mathbb N$, which still shows that $\mathcal F$ is 1-in-3-UNSAT. Several such cases occurred in our experiments. Therefore, it is interesting to ask \begin{ques}\label{question} Given a 3-CNF formula $\mathcal F$, is the inconsistency of $ReL(Z)$ over $\mathbb N$ also a necessary condition for $\mathcal F$ being 1-in-3-UNSAT? \end{ques} If the answer is `yes', then Algorithm \ref{alg:LAT4SAT} can be modified accordingly to obtain a definite answer `SAT' or `UNSAT' for each input formula. In that case, both SAT and UNSAT could be solved in polynomial time, which would imply $\mathbf P=\mathbf{NP}=\textbf{co-}\mathbf{NP}$. However, $\mathbf P\neq \mathbf{NP}$ is currently the overwhelmingly held opinion \cite{cookPvsNP,Aaronson2016}, so the most likely answer is `no'. In that case, it is natural to ask: what is the class of Boolean formulas whose satisfiability can be determined by the inconsistency of $ReL(Z)$ over $\mathbb N$? \section{Summary}\label{sec:con} This study proposes LAF, a novel method for SAT/UNSAT. The method establishes an equivalence between the satisfiability of Boolean formulas and the Boolean solvability of linear systems, and introduces a new approach to finding Boolean solutions of linear systems. As can be seen, LAF is a procedure dramatically different from DPLL. Hence, it gives an affirmative answer to the question at the end of \textbf{Challenge 1} in \cite{Kautz2007Survey}.
More importantly, we developed two polynomial time algorithms for unsatisfiability testing based upon LAF. However, LAF remains an incomplete method for SAT unless Question \ref{question} has an affirmative answer. Nevertheless, LAF has been employed to successfully prove 1-in-3-UNSAT for many nontrivial cases in our experiments. So far, LAF has mainly been used to show EOU, especially 1-in-3-UNSAT. In addition to Question \ref{question}, it is also interesting to study how to develop LAF further to compute a satisfying assignment for satisfiable formulas. \bibliographystyle{splncs03}
\section{Introduction} The periconception window, which runs from 14 weeks before conception until the end of the first trimester, is crucial for pre- and postnatal health \citep{SteegersTheunissen2013}. Prenatal growth and development during this period is predominantly monitored via ultrasound imaging \citep{Liu2019}. This is done by manual or semi-automatic measurements of the length and volume of the embryo and detection of standard planes \citep{ISUOG}. However, these measurements of the embryo are time consuming and prone to human error \citep{Carneiro2008}. Automating this process would reduce investigation time and human error. Despite its challenges, 3D first trimester ultrasound is suitable for learning based methods, as the whole embryo, thanks to its limited size, can be imaged in one dataset. Here, we propose a deep learning based framework to automatically segment and spatially align the embryo to a standard orientation. These two tasks form the basis for automatic monitoring of growth and development: placing the embryo in a standard orientation enables derivation of the standard planes, and the segmentation of the embryo provides the embryonic volume and simplifies automation of other measurements such as crown-rump length, head circumference and trans-cerebellar diameter \citep{ISUOG}. To achieve segmentation and spatial alignment simultaneously, we take an atlas-based registration approach using deep learning. We register all data to an atlas in which the embryo has been segmented and put in a standard orientation. To make our framework applicable to the full first trimester and to increase its robustness to variations in appearance of the embryo across pregnancies, we take a multi-atlas approach by using data of multiple pregnancies with longitudinally acquired images.
To address how to combine data from multiple pregnancies, we compared three strategies, namely: 1) training the framework using atlas images from a single pregnancy, 2) training the framework with data of all available atlases, and 3) ensembling of the frameworks trained per pregnancy. Per ultrasound image, we selected the atlas images closest in gestational age (GA), and we investigated the influence of the number of selected atlas images. By taking an atlas-based registration approach we circumvent the need for ground truth segmentations during training, as atlas-based registration can be fully unsupervised. Manual segmentations of the embryo are laborious to obtain and typically not available, since in clinical practice mainly the length of the embryo and various diameters are used \citep{ISUOG}. Due to the rapid development of the embryo in the first trimester, the wide variation in its spatial position and orientation, and the presence of other structures such as the placenta, umbilical cord and uterine wall, we chose to add supervision using landmarks to our framework. We propose to use the crown and rump landmarks of the embryo, since the crown-rump length measurement is relatively easy to obtain and a standard measure in clinical practice \citep{Rousian2018}. The main contributions of the research presented in this paper are: \begin{enumerate} \item we propose the first automated method to segment and simultaneously spatially align the embryo in 3D ultrasound images acquired during the first trimester; \item we compare different strategies to incorporate data of multiple atlases and address the question how many atlas images to select; \item we circumvent the need for manual segmentations for training of our framework by relying only on minimal supervision based on two landmarks.
\end{enumerate} \section{Related work} \subsection{Segmentation and spatial alignment of prenatal ultrasound} In prenatal ultrasound imaging, most published studies focus on performing spatial alignment and segmentation separately; for a comprehensive overview see \cite{Torrents-barrena2019}. When both spatial alignment and segmentation are of interest, performing them sequentially raises the question of the optimal order of operations, especially since some studies focusing on segmentation require prior knowledge about the spatial orientation of the embryo \citep{Gutierrez2013,Yaqub2013} and some studies focusing on standard plane detection require the segmentation or detection of structures within the embryo \citep{Ryou2016,Yaqub2015}. To circumvent this, a few studies perform both tasks at once \citep{Chen2012,Kuklisova-Murgasova2013,Namburete2018}. \cite{Chen2012} achieved both tasks for the fetal head using spatial features from eye detection and image registration techniques, for ultrasound images acquired at 19-22 weeks GA. However, the eye is not always clearly detectable in 3D first trimester ultrasound, making this method not applicable in our case. \cite{Kuklisova-Murgasova2013} achieved both tasks for the fetal brain, using a magnetic resonance imaging (MRI) atlas and block matching using image registration techniques, for ultrasound images acquired at 23-28 weeks GA. However, such an atlas or reference MRI image is not available for the first trimester \citep{OISHI2019}. \cite{Namburete2018} achieved both alignment and segmentation of the fetal head with a supervised multi-task deep learning approach, using slice-wise annotations containing a priori knowledge of the orientation of the head, and manual segmentations, for ultrasound images acquired at 22-30 weeks GA.
However, this method assumes that during acquisition the ultrasound probe is positioned such that the head is imaged axially, which is not always the case for 3D first trimester ultrasound. Although none of these studies are applicable to first trimester ultrasound, they all employ image registration techniques to perform segmentation and spatial alignment simultaneously. In line with this, we take an image registration approach and tailor our method to be applicable to first trimester ultrasound. \subsection{Deep learning for image registration} Recently, deep learning methods for image registration have been developed and show promising results in many areas; for an extensive overview see \cite{Boveiri2020}. The common assumption for most of the deep learning approaches for image registration is that the data is already affinely registered. However, due to the rapid development of the embryo and the wide variation in position and spatial orientation, the affine registration is challenging to obtain and therefore has to be incorporated in the framework. There are a few studies focusing on achieving both affine and nonrigid registration. For example, \cite{DeVos2019} proposed a multi-stage framework, dedicating the first network to learning the affine transformation and the second network to learning the voxelwise displacement field. These networks are trained in stages. \cite{Shen2019} proposed a similar approach, where the main difference is that they used a diffeomorphic model for the nonrigid deformation. We also take a multi-stage approach with a dedicated network for the affine and nonrigid deformation. We based the design of both networks on VoxelMorph by \cite{Balakrishnan2018}, which was developed for deep learning-based non-rigid image registration and is publicly available. \subsection{Multi-atlas segmentation with deep learning} Using multiple atlases is a common and successful technique for biomedical image segmentation.
Adding data of multiple atlases generally gives better and more robust results and is widely used in classical, non-learning-based methods \citep{Iglesias2015}. However, learning-based methods can also benefit from using multiple atlases \citep{Ding2019,Fang2019,Lee2019}. \cite{Fang2019} omits the registration step by directly guiding the training of a segmentation network using multiple atlases as ground truth. \cite{Ding2019} and \cite{Lee2019} both employ deep learning for the image registration step, but they use different training strategies. \cite{Ding2019} trained a network that warps all available atlas images to the target image at once. On the other hand, \cite{Lee2019} trains the network by presenting it with a randomly chosen atlas each epoch to register to the target image. Both approaches train the network to be able to register all available atlases to every image. In our case, the atlases are longitudinally acquired, and due to the rapid development of the embryo, not every atlas matches every image. Hence, we compared different fusion strategies that take this into account. \subsection{Extension of preliminary results} Preliminary results of this framework were published in \cite{Bastiaansen2020} and \cite{Bastiaansen2020a}. In \cite{Bastiaansen2020} we proposed a fully unsupervised atlas-based registration framework using deep learning for the alignment of the embryonic brain acquired at 9 weeks GA. We showed that having a separate network for the affine and nonrigid deformations improved the results. Subsequently, in \cite{Bastiaansen2020a} our previous work was extended by adding minimal supervision using the crown and rump landmarks to align and segment the embryo. Here, we substantially extend our previous work by making it applicable to the full first trimester. We achieve this by taking a multi-atlas approach with atlases consisting of longitudinally acquired ultrasound images from multiple pregnancies.
We compare different fusion strategies to combine data from multiple pregnancies and address the influence of differences in image quality of the atlas images. Compared to our previous work, we performed a more in-depth hyperparameter search for our deep learning framework and improved the division of the data into training, validation and test sets, taking into account the distribution of GA and the available information used for evaluation, such as embryonic volume. Finally, in the proposed model the inverse of the nonrigid deformation is needed to obtain the segmentation of the embryo. To make our deformations invertible, we adopt a diffeomorphic deformation model as proposed by \cite{Dalca2019}, using the stationary velocity field. \section{Method} \subsection{Overview} We present a deep learning framework to simultaneously segment and spatially align the embryo in image $I$. We propose to achieve this by registering every image $I$ to an atlas image via deformation $\phi$: \begin{equation}A(x) \approx I \left(\phi(x)\right) \quad \forall x \in \Omega, \end{equation} with $\Omega$ the 3D domain of $A,I$. We assume that the atlas image $A$ is in a predefined standard orientation and that its segmentation $S_A$ is available. Now, $I \circ \phi$ is in standard orientation and the segmentation of image $I$ is given by: \begin{equation} S_{I} := S_{A} \circ \phi^{-1}. \end{equation} The deep learning framework learns to estimate the deformation $\phi$ given the atlas image $A$ and the image $I$. We propose to use a multi-atlas approach by registering every image to the $M$ atlases that are closest in GA. In Fig. \ref{fig:network_archi} a detailed overview of our framework can be found; all components are explained in the remainder of this section. \begin{figure} \centering \includegraphics[width=\textwidth]{overview_framework_network_only.jpg} \caption{Overview of our framework to register image $I$ to atlas $A_{i,j}$.
Atlas $A_{i,j}$ corresponds to pregnancy $i$ with an ultrasound image acquired in week $j$ of pregnancy. The network learns the affine and nonrigid deformations $\phi_a$ and $\phi_{d_{i,j}}$. Using the inverse deformations and majority voting we obtain the segmented images $S_I$. $I\circ\phi_a$ gives the image $I$ in standard orientation. } \label{fig:network_archi} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{aligned_atlas.jpg} \caption{From left to right: example of an atlas at 8, 9, 10, 11 and 12 weeks GA, with their crown and rump landmarks. Actual size of the embryo: 16 mm, 22 mm, 32 mm, 44 mm, 58 mm.} \label{fig:aligned_atlas} \end{figure} \subsubsection{Learning to estimate the deformation $\phi$} Firstly, note that our framework consists of two networks, one dedicated to learning an affine transformation and the other to learning a nonrigid deformation. In previous work we showed that separating these tasks was needed due to the wide variety in positions and orientations of the embryo \citep{Bastiaansen2020}. Therefore, the deformation $\phi$ consists of an affine transformation $\phi_{a}$ and a nonrigid deformation $\phi_{d}$, such that: \begin{equation} \phi:=\phi_{a} \circ \phi_{d}. \end{equation} The affine network to estimate $\phi_a$ is trained in two stages: the first stage uses the crown and rump landmarks for supervision, while in the second stage the network is trained in an unsupervised way. The nonrigid network to estimate $\phi_d$ is trained in a fully unsupervised way.
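The composition $\phi=\phi_a \circ \phi_d$ and the label propagation $S_I = S_A \circ \phi^{-1}$ can be illustrated with a toy sketch, modeling deformations as coordinate maps; the nearest-neighbour resampling on an integer grid is a simplification of this sketch, not the framework's actual interpolation:

```python
import numpy as np

def compose(phi_a, phi_d):
    """phi := phi_a o phi_d, with deformations modeled as coordinate maps."""
    return lambda x: phi_a(phi_d(x))

def propagate_segmentation(S_A, phi_inv, shape):
    """S_I := S_A o phi^{-1}: the label of target voxel x is the atlas
    label at phi^{-1}(x), looked up with nearest-neighbour rounding;
    voxels mapping outside the atlas stay background (0)."""
    S_I = np.zeros(shape, dtype=S_A.dtype)
    for x in np.ndindex(*shape):
        y = np.round(phi_inv(np.asarray(x, dtype=float))).astype(int)
        if all(0 <= yi < si for yi, si in zip(y, S_A.shape)):
            S_I[x] = S_A[tuple(y)]
    return S_I
```

With the identity deformation, the atlas segmentation is returned unchanged, which is a convenient sanity check for any resampling implementation.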
$\phi_{a}$ and $\phi_{d}$ are obtained by two convolutional neural networks (CNNs), which model the functions $g_{\theta_{\text{affine}}}$: $(\phi_{a},I \circ \phi_a)=g_{\theta_{\text{affine}}}(I)$, with $\theta_{\text{affine}}$ the network parameters, and $g_{\theta_{\text{nonrigid}}}$: $(\phi_{d},I_{\phi_{a}} \circ \phi_d)=g_{\theta_{\text{nonrigid}}}(I_{\phi_{a}},A)$, with $I_{\phi_{a}}=I \circ \phi_a$ and $\theta_{\text{nonrigid}}$ the network parameters. \subsubsection{The atlas $A$} To deal with the rapid growth and variation in appearance of the embryo, we propose to use a multi-atlas approach. We define $A_{i,j}$ to be an atlas image, with $i=1,...,n_p$, where $n_p$ represents the number of pregnancies, and $j \in W$, with $W$ the set of weeks in which an ultrasound image is available. For example, atlas $A_{1,8}$ represents the atlas image of pregnancy 1 acquired at 8 weeks GA. Define $\mathcal{A}:=\{A_{i,j} |i = 1,...,n_p, \forall j \in W \}$ to be the set of all available atlases. We manually rigidly aligned every atlas $A_{i,j}$ to the same standard orientation. Subsequently, all atlas images were scaled to the same size by matching the annotated crown and rump landmarks, as shown in Fig. \ref{fig:aligned_atlas}. We define the coordinates of the crown and rump landmarks in the standard orientation as $x^{\mathcal{A}}_c$ and $x^{\mathcal{A}}_r$. Finally, the segmentation $S_{A_{i,j}}$ was obtained manually. \subsubsection{Atlas selection and fusion strategies} For every image $I$, the $M$ atlas images closest in GA are selected. Furthermore, at most one atlas image is selected from every pregnancy; therefore $M\leq n_p$. We define three different atlas fusion strategies: \begin{description} \item [\textbf{Single subject strategy}] The framework is trained using atlas images from a single\\ pregnancy: \begin{align} \mathcal{A}_{k}^{s}&:=\{A_{i,j} | i=k, j \in W\} \end{align} for pregnancy $k$.
\item [\textbf{Multi-subject strategy}] The framework is trained with data of all available atlases: \begin{equation} \mathcal{A}^m:=\{A_{i,j}|i=1,...,n_p, j \in W\}. \end{equation} \item [\textbf{Ensemble strategy}] The ensemble $\mathcal{A}^{e}$ of the results of the single subject strategy for $k=1,...,n_p$. \end{description} \subsubsection{Output of framework} The framework outputs the segmentation of the embryo and the embryo in standard orientation. The segmentation $S_I$ is defined as the voxelwise majority vote of \begin{equation}\label{eq:final_seg} S_{I}^{i,j} : = S_{A_{i,j}} \circ (\phi^{-1}_{d_{i,j}} \circ \ \phi^{-1}_{a}) \end{equation} over all $M$ selected atlas images. The deformation that puts the embryo in standard orientation is given by $\phi_a$, and the image in standard orientation is given by: \begin{equation} I_{\phi_a} := I \circ \phi_a. \end{equation} \subsection{Affine registration network} The input of the affine registration network is the image $I$. The affine registration network gives as output the estimated affine transformation $\phi_a$ and the corresponding affinely registered image $I \circ \phi_a$. Recall that we affinely registered every atlas image to the same standard orientation. Hence, the affine transformation $\phi_a$ registers the image $I$ to all atlas images in $\mathcal{A}$. Furthermore, this allows us to directly compare the image similarity of $I \circ \phi_a$ and all selected atlas images in the loss function. Hence, as shown by the arrows in Fig. \ref{fig:network_archi}, in contrast to the nonrigid network, the atlas image $A_{i,j}$ is not given as input to the network. \subsubsection{Network architecture} The affine registration network consists of an encoder, followed by a global average pooling layer. The encoder consists of convolutional layers with a stride of 2, which down-sample the images. The numbers of filters $(f_1,f_2)$ are hyperparameters.
The global average pooling layer gives as output one feature per feature map, which forces the network to encode position and orientation globally. The pooling layer is followed by four fully connected layers with 1000 neurons and ReLU activation. The output layer consists of the affine transformation $\phi_{a}$ and is defined as a $12$-dimensional vector containing the coefficients of part of the affine transformation matrix $T\in\mathbb{R}^{4 \times 4}$. \subsubsection{Loss function} The loss function for the first training stage of the affine registration network is defined as: \begin{align}\begin{split}\label{eq:loss1} \mathcal{L}\left(\mathcal{A}, I,\phi_a,x^\mathcal{A}, x^I\right)= \frac{1}{M}\sum_{i,j}\delta_{i,j}\mathcal{L}_{\text{sim}}\left(A_{i,j},I \circ \phi_a\right) \\ + \lambda_{l} \mathcal{L}_{\text{landmark}}\left(\phi_a \left(x^\mathcal{A}\right), x^I\right), \end{split}\end{align} with $\delta_{i,j}$ defined as: \begin{equation} \delta_{i,j} = \begin{cases} 1 & \quad \text{if } A_{i,j} \text{ selected}\\ 0 & \quad \text{else.} \end{cases} \end{equation} The first term of the loss function promotes similarity between the image $I$ after alignment and the atlas. The similarity loss is only calculated within a mask $\tilde\Omega$, since there are other objects in the 3D ultrasound images besides the embryo. Mask $\tilde\Omega$ is obtained by dilating the union of all segmentations $S_{A_{i,j}}$ for all $i,j$ using a ball with radius 1.
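The $12$-dimensional network output can be assembled into $T$ and applied to the landmark coordinates used in the landmark term of Eq. \ref{eq:loss1}. A minimal numpy sketch, assuming the 12 coefficients are the top three rows of $T$ (the helper names are ours):

```python
import numpy as np

def affine_matrix(theta12):
    """Assemble T in R^{4x4} from the 12-dimensional network output,
    taken here as the top three rows of T; the last row is [0,0,0,1]."""
    T = np.eye(4)
    T[:3, :] = np.asarray(theta12, dtype=float).reshape(3, 4)
    return T

def transform_points(T, pts):
    """Apply phi_a to an (n, 3) array of points in homogeneous
    coordinates, e.g. the crown/rump landmarks x^A."""
    pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
    return (pts_h @ T.T)[:, :3]

def landmark_loss(T, x_atlas, x_image):
    """Mean squared landmark error, cf. L_landmark with n_l = 2."""
    diff = x_image - transform_points(T, x_atlas)
    return float(np.mean(np.sum(diff ** 2, axis=1)))
```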
$\mathcal{L}_{\text{sim}}$ is chosen as the masked local (squared) normalized cross-correlation (NCC), which is defined as follows: \begin{align} \text{NCC}\left(A,Y\right)= \label{eq:ncc}\frac{1}{|\tilde\Omega|} \sum_{p \in \tilde\Omega} \frac{\left ( \sum_{q} [A(q)-\bar{A}(p)][Y(q)-\bar{Y}(p)]\right)^2}{\left ( \sum_{q} [A(q)-\bar{A}(p)]^2\right)\left(\sum_{q} [Y(q)-\bar{Y}(p)]^2\right)} ,\end{align} where $\bar{A}$ (and similarly $\bar{Y}$) denotes: $\bar{A}(p)=A(p)-\frac{1}{j^3} \sum_{q} A(q)$, where $q$ iterates over a $j^3$ volume around $p\in \Omega$ with $j=9$ as in \cite{Balakrishnan2018}. The second term in Eq. \ref{eq:loss1} minimizes the mean squared error in voxels between the landmarks after registration and is defined as: \begin{equation} \mathcal{L}_{\text{landmark}}\left(\phi_a\left(x^\mathcal{A}\right), x^I\right)= \frac{1}{n_l} \sum_{i=1}^{n_l} \left(x^I_i-\phi_a\left(x_i^\mathcal{A}\right)\right)^2,\end{equation} with $n_l$ the number of annotated landmarks ($n_l=2$ in our case). During the second stage of training, the weights $\theta_{\text{affine}}$ learned in the first stage are used as initialization of the network. The loss function for the second training stage of the affine registration network is defined as: \begin{align}\begin{split}\label{eq:loss2} \mathcal{L}(\mathcal{A},I,\phi_a)=& \frac{1}{M}\sum_{i,j}\delta_{i,j}\mathcal{L}_{\text{sim}}\left(A_{i,j},I \circ \phi_a\right)\\ &+ \lambda_{s} \mathcal{L}_{\text{scaling}}\left(\phi_a\right), \end{split}\end{align} where $\mathcal{L}_{\text{sim}}=-\text{NCC}(A_{i,j},I \circ \phi_a)$ as defined in Eq. \eqref{eq:ncc}. When objects in the background are present, penalizing extreme zooming is beneficial, as was shown in \cite{Bastiaansen2020}.
Hence, $\mathcal{L}_{\text{scaling}}$ is defined as: \begin{equation} \mathcal{L}_{\text{scaling}}\left(\phi_a\right)=\sum_{i=1}^3 \log(s_i)^2, \end{equation} following \cite{Ashburner1999}, with $s_i$ the scaling factors of $\phi_a$ obtained using the singular value decomposition. \subsection{Nonrigid registration network} The input of the nonrigid registration network is an affinely registered image $I_{\phi_a}:=I \circ \phi_a$ together with a selected atlas image $A_{i,j}$. During training we provide as input to the network the image $I$ and one randomly chosen selected atlas image $A_{i,j}$. During inference we give as input to the network all the possible pairs of image and selected atlas image. The output of the network consists of $\phi_{d_{i,j}}$, along with the registered image $I_{\phi_a} \circ \phi_{d_{i,j}}$. To obtain the segmentation in Eq. \ref{eq:final_seg}, the deformation $\phi_{d_{i,j}}$ must be inverted. To ensure invertibility, we adopt a diffeomorphic deformation for $\phi_{d_{i,j}}$. Following \cite{Dalca2019}, we use a stationary velocity field (SVF) representation, meaning that $\phi_{d_{i,j}}$ is obtained by integrating the velocity field $\nu$ \citep{Ashburner2007}. The inverse $\phi_{d_{i,j}}^{-1}$ is then obtained by integrating the negated velocity field $-\nu$. \subsubsection{Network architecture} For the nonrigid registration network we used the architecture of VoxelMorph proposed by \cite{Balakrishnan2018}. VoxelMorph consists of an encoder, with convolutional layers with a stride of 2, a decoder with convolutional layers with a stride of 1, skip-connections, and an up-sampling layer. This is followed by convolutional layers at full resolution, to refine the velocity field $\nu$. The numbers of filters $(f_3, f_4)$ are hyperparameters. The velocity field $\nu$ is integrated in the integration layer to give the dense displacement field.
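The scaling penalty $\mathcal{L}_{\text{scaling}}$ defined above can be computed from the $3\times3$ linear part of the affine matrix, taking the singular values as the scaling factors $s_i$. A minimal numpy sketch:

```python
import numpy as np

def scaling_loss(T):
    """L_scaling: the scaling factors s_i of phi_a are taken as the
    singular values of the 3x3 linear part of T, and extreme zooming
    is penalized by sum_i log(s_i)^2."""
    s = np.linalg.svd(T[:3, :3], compute_uv=False)
    return float(np.sum(np.log(s) ** 2))
```

The penalty is zero for any rotation or rigid transformation (all $s_i=1$) and grows symmetrically for zooming in or out.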
In both networks, all convolutional layers have a kernel size of 3 and a LeakyReLU activation with parameter $0.2$. \subsubsection{Loss function} The loss function for the nonrigid registration network is: \begin{equation}\label{eq:loss3} \mathcal{L}(A_{i,j},I_{\phi_a},\phi_{d_{i,j}})=\mathcal{L}_{\text{sim}}\left(A_{i,j},I_{\phi_a}\circ \phi_{d_{i,j}} \right) +\lambda_{\text{d}} \mathcal{L}_{\text{diffusion}}\left(\phi_{d_{i,j}} \right), \end{equation} where $I_{\phi_a}:=I\circ\phi_a$ is the output of the affine network. $\mathcal{L}_{\text{sim}}$ is again defined as the NCC in Eq. \ref{eq:ncc}. $\phi_{d_{i,j}}$ is regularized by: \begin{equation} \mathcal{L}_{\text{diffusion}}\left(\phi_{d_{i,j}}\right)= \frac{1}{|\tilde \Omega|}\sum_{p \in \tilde\Omega} \|\nabla d_{i,j}(p)\|^2. \end{equation} This loss term penalizes local spatial variations in $\phi_{d_{i,j}}$ to promote smooth local deformations \citep{Balakrishnan2018}. \subsection{Implementation details} For training, the Adam optimizer was used with a learning rate of $10^{-4}$, as in \cite{Balakrishnan2018}. For the second training stage of the affine registration network, we lowered the learning rate to $10^{-5}$, since the network has already largely converged in the first stage. We trained each part of the network for 300 epochs with a batch size of one. In the first training stage of the affine network data augmentation was applied: each scan in the training set was either randomly flipped along one of the axes or rotated 90, 180 or 270 degrees. For the second stage and the nonrigid network, preliminary experiments showed that data augmentation did not improve the results. The framework is implemented using Keras \citep{chollet2015keras} with Tensorflow as backend \citep{Abadi2016}.
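To make the loss terms concrete, the local squared NCC of Eq. \ref{eq:ncc} and the diffusion regularizer can be sketched in plain numpy. This is a slow, direct illustration, not the Keras implementation used in the framework, which computes the local sums via convolutions:

```python
import numpy as np

def local_squared_ncc(A, Y, mask, win=9):
    """Masked local squared NCC: at every voxel p in the mask, the
    squared normalized cross-correlation of A and Y over a win^3
    window centred on p, averaged over the mask."""
    r, eps = win // 2, 1e-8
    vals = []
    for p in np.argwhere(mask):
        sl = tuple(slice(max(c - r, 0), c + r + 1) for c in p)
        a = A[sl] - A[sl].mean()        # window centred on p, demeaned
        y = Y[sl] - Y[sl].mean()
        vals.append((a * y).sum() ** 2 /
                    ((a * a).sum() * (y * y).sum() + eps))
    return float(np.mean(vals))

def diffusion_loss(d, mask):
    """L_diffusion: mean squared spatial gradient of the displacement
    field d (shape (X, Y, Z, 3)) inside the mask, penalizing local
    spatial variation to promote smooth deformations."""
    total = 0.0
    for c in range(d.shape[-1]):          # each displacement component
        for g in np.gradient(d[..., c]):  # finite differences per axis
            total += np.sum((g ** 2)[mask])
    return float(total / mask.sum())
```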
The code is publicly available at:\\ \href{https://github.com/wapbastiaansen/multi-atlas-seg-reg}{https://github.com/wapbastiaansen/multi-atlas-seg-reg} \section{Experiments} \subsection{The Rotterdam Periconceptional Cohort} The Rotterdam Periconceptional Cohort (Predict study) is a large hospital-based cohort study conducted at the Erasmus MC, University Medical Center Rotterdam, the Netherlands. This prospective cohort focuses on the relationships between periconceptional maternal and paternal health and embryonic and fetal growth and development \citep{Rousian2021, Steegers-Theunissen2016}. 3D ultrasound scans are acquired at multiple points in time during the first trimester. Ultrasound examinations were performed on a Voluson E8 or E10 (GE Healthcare, Austria) ultrasound machine. Furthermore, maternal and paternal food questionnaires are administered and other relevant data such as weight, height and blood samples are collected. \subsubsection{In- and exclusion criteria} Women were eligible to participate in the Rotterdam Periconceptional Cohort if they were at least 18 years of age, with an ongoing singleton pregnancy of less than 10 weeks GA. Pregnancies were only included if the GA could be determined reliably and the pregnancy did not end in miscarriage, discontinuation, intra-uterine fetal death, stillbirth, or postpartum death. GA was calculated according to the first day of the last menstrual period (LMP), and in cases with an irregular menstrual cycle, unknown LMP or a discrepancy of more than seven days, GA was determined by the crown-rump length measurements performed in the first trimester. In case of conception by means of in vitro fertilisation or intracytoplasmic sperm injection, the conception date was used to calculate the GA. We included pregnancies regardless of the mode of conception. We included images that had sufficient quality to manually annotate the crown and rump landmarks.
We included pregnancies with ultrasound images acquired between 8+0 and 12+6 weeks GA. This range was selected since previous research by \cite{Rousian2013} showed that, firstly, the crown and rump landmarks can be reliably determined up to and including the 12th week of pregnancy, and secondly, other measurements relevant for more in-depth insight into growth and development, such as head circumference, trans-cerebellar diameter, biparietal diameter and abdominal circumference, can be measured reliably starting at a GA of 7+4 weeks. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{all_atlas.png} \caption{Overview of all atlases $A_{i,j}$ with $i=1,...,8$ the number of the pregnancy and $j=8,...,12$ the week of pregnancy in which the ultrasound image was acquired. Below each atlas the gestational age (GA) is given, along with the image quality score given during EV measurements in Virtual Reality. ++: good quality, +: sufficient quality, -: poor quality.} \label{fig:all_atlas} \end{figure} \subsubsection{Manual annotations and image pre-processing} In the Predict study the embryonic volume (EV) is semi-automatically measured using Virtual Reality techniques \citep{Rousian2018}. We chose to use the available EV measurements for evaluation and refer to them as $EV_{gt}$. Additionally, for 186 images the corresponding segmentations were saved. We used these segmentations, referred to as $S_I^{gt}$, for evaluation as well. The included images have isotropic voxel sizes varying between 0.06 mm and 0.75 mm. The resolution varied between 121 and 328 voxels in the first dimension, between 83 and 257 voxels in the second dimension and between 109 and 307 voxels in the third dimension. To speed up training all images were re-scaled to $64 \times 64 \times 64$ voxels. To achieve this while keeping isotropic voxel sizing, all images were padded with zeros to a square shape and subsequently rescaled to the target size.
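The padding and rescaling step can be sketched as follows; we use nearest-neighbour resampling as a minimal stand-in, since the interpolation scheme is not specified here:

```python
import numpy as np

def pad_and_resize(img, size=64):
    """Zero-pad the volume to a cube (keeping isotropic voxels) and
    resample it to size^3 with nearest-neighbour interpolation."""
    n = max(img.shape)
    cube = np.zeros((n, n, n), dtype=img.dtype)
    # centre the original volume in the cube
    start = [(n - s) // 2 for s in img.shape]
    cube[start[0]:start[0] + img.shape[0],
         start[1]:start[1] + img.shape[1],
         start[2]:start[2] + img.shape[2]] = img
    # pick size evenly spaced source indices along each axis
    idx = np.clip(np.round(np.linspace(0, n - 1, size)).astype(int), 0, n - 1)
    return cube[np.ix_(idx, idx, idx)]
```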
\subsubsection{Atlas, training, validation and test set} We included data of 1282 pregnancies collected between December 2009 and August 2018. Eight pregnancies were used to create the atlas images; the remaining 1274 pregnancies were divided over the training set (874, 68\%), the validation set (200, 16\%), used for hyperparameter optimization and fusion strategy comparison, and the independent test set (200, 16\%). The eight pregnancies that formed the atlas images were selected because they had weekly ultrasound images and the highest overall image quality rating; this rating was assigned during the measurements in Virtual Reality \citep{Rousian2013}. We chose to use eight atlases since adding more did not lead to improved performance in preliminary experiments. The choice to consider only pregnancies with weekly images was made to ensure a good representation of every week GA among the atlases. In Fig. \ref{fig:all_atlas} a visualization of all atlases along with their GA in days and quality score can be found. We divided pregnancies over the training, validation and test set by following the rules below in the given order: \begin{enumerate} \item all segmentations $S_I^{gt}$ must be divided equally over the validation and test set; \item all data of a pregnancy should be in the same set; \item the majority of the EV measurements should correspond to images in the validation or test set; \item within every week GA the division of unique pregnancies is approximately 60\%--20\%--20\% over the training, validation and test set. \end{enumerate} In Tab. \ref{tab:charsplit} the characteristics of the final split can be found. \begin{table}[b!] \caption{\label{tab:charsplit} Characteristics of the splitting of the data.
Per gestational week the number of unique pregnancies, images, images with embryonic volume measurement (EV) and images with a ground truth segmentation (seg) is given.} \centering \begin{tabular}{c|rrrr|rrrr|rrrr} & \multicolumn{4}{|c|}{Training} & \multicolumn{4}{|c|}{Validation}& \multicolumn{4}{|c}{Test}\\ week & preg & img & EV & seg & preg & img & EV & seg& preg & img & EV & seg\\ \hline 8 & 252 & 457 & 373 & & 64 & 142 & 130 & & 63 & 156 & 130 & \\ 9 & 605 & 1333 & 1112 & & 195 & 404 & 394 & 34 & 193 & 396 & 394 & 34 \\ 10 & 237 & 551 & 376 & & 81 & 150 & 140 & & 83 & 156 & 150 & \\ 11 & 463 & 1190 & 865 & & 193 & 420 & 387 & 58 & 194 & 428 & 408 & 60 \\ 12 & 185 & 407 & 193 & & 68 & 122 &121 & & 67 & 113 & 113 & \\ \hline all & 874 & 3938 & 2919 & & 200 & 1238 & 1172 & 92 & 200 & 1249 & 1205 & 94 \\ \end{tabular} \end{table} \subsection{Evaluation metrics} The following two metrics were used to evaluate the results. Firstly, we report the relative error between the volume calculated from the estimated segmentation $S_I$, referred to as $EV(I)$, and the ground truth embryonic volume $EV_{\text{gt}}(I)$: \begin{equation} \label{eq:ev}\text{EV}_{\text{error}}(I):= \frac{\left |EV(I)-EV_{\text{gt}}(I)\right|}{EV_{\text{gt}}(I)}.\end{equation} Secondly, we report the Dice similarity coefficient between the available manual segmentations $S_{I}^{\text{gt}}$ and the estimated segmentation $S_I$ of the image $I$ \citep{Dice}. To compare the results of different experiments, we performed the two-sided Wilcoxon signed-rank test with a significance level of 0.05; we report the p-values for both metrics as $p_{EV}$ and $p_{Dice}$. \subsection{Experiments} We performed the following six experiments. \\ \\ \textbf{Experiment 1: sensitivity metrics}\\ Since no comparable studies are available in the literature, we assessed the sensitivity of the evaluation metrics to perturbations to gain insight into the performance of the presented framework.
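The two evaluation metrics defined above can be computed directly from binary masks; a minimal numpy sketch, where EV is taken as the voxel count of the mask times the voxel volume:

```python
import numpy as np

def ev_error(seg, seg_gt, voxel_volume=1.0):
    """Relative embryonic-volume error of Eq. (eq:ev)."""
    ev = seg.sum() * voxel_volume
    ev_gt = seg_gt.sum() * voxel_volume
    return abs(ev - ev_gt) / ev_gt

def dice(seg, seg_gt):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(seg, seg_gt).sum()
    return 2.0 * inter / (seg.sum() + seg_gt.sum())
```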
We performed four operations on the 186 available ground truth segmentations: dilation using a ball with a radius of one voxel; erosion using a ball with a radius of one voxel; dilation using a ball with a radius of two voxels; erosion using a ball with a radius of two voxels. We compared the four obtained segmentations with the original segmentation and report the mean and standard deviation for every operation.\\ \\ \textbf{Experiment 2: influence of hyperparameters}\\ Using the single subject strategy $\mathcal{A}_{1}^{s}$ we performed a hyperparameter search on the validation set for $\lambda_l \in \{0.01,1,100\}$ in Eq. \ref{eq:loss1}, $\lambda_s \in \{0, 0.005, 0.05, 0.5, 5\}$ in Eq. \ref{eq:loss2}, $\lambda_d \in \{1, 10, 100\}$ in Eq. \ref{eq:loss3}, the number of filters $(f_1, f_2)$ with $f_2=2 f_1$ and $f_1 \in \{16,32,64\}$ in the encoder of the affine network, and $(f_3, f_4)$ with $f_4= 2 f_3$ and $f_3 \in\{1,2,4,8,16\}$ in the encoder and decoder of the nonrigid network. Furthermore, we omitted training stage 2 of the affine network to evaluate its necessity; we call this strategy $\mathcal{A}_{1}^{s*}$. \\ \\ \textbf{Experiment 3: single subject strategy}\\ We used the best hyperparameters from experiment 2 to train $\mathcal{A}_k^s$ for $k=2,3,...,8$ and compared the results on the validation set to the results for $\mathcal{A}^s_1$, to evaluate whether the choice of pregnancy $k$ influences the results. Finally, we applied $\mathcal{A}^s_k$ for the best-performing pregnancy $k$ to the test set.\\ \\ \textbf{Experiment 4: comparison of the multi-subject and ensemble strategy}\\ To evaluate the best approach for combining data of multiple pregnancies, we compared the results on the validation set of the multi-subject strategy $\mathcal{A}^{m}$ for $M=1$, $M=2$, $M=4$ and $M=8$ to the ensemble strategy $\mathcal{A}^e$.
\\ \\ \textbf{Experiment 5: influence of atlas image quality}\\ We investigated the influence of the quality of the ultrasound images on the results by excluding pregnancy $i=8$, since this pregnancy had the lowest quality score (see Fig. \ref{fig:all_atlas}). We repeated experiment 4 without pregnancy $i=8$ on the validation set and compared the results to the original results of experiment 4.\\ \\ \textbf{Experiment 6: Analysis of the best models}\\ We compared the best strategy using data of multiple pregnancies to the best single subject strategy on the test set to investigate whether utilizing data of multiple pregnancies improves the results. For the best performing model, we analyzed the results per week GA and visualized the resulting segmentations $S_I$ and aligned images $I\circ \phi_a$. Finally, to gain more insight into why our framework might fail, we visually inspected all images in the test set with an $\text{EV}_{\text{error}}$ higher than $Q_3$ (the third quartile). \section{Results} \subsection{Experiment 1: sensitivity metrics} We performed four operations on the 186 available ground truth segmentations to investigate the sensitivity to perturbations; in Tab. \ref{tab:dileroexp} the mean $\text{EV}_{\text{error}}$ and Dice score can be found. \begin{table} \caption{Mean $\text{EV}_{\text{error}}$ and Dice score for the four operations. Standard deviation is given between brackets. The radius of the ball is given in voxels.
} \begin{subtable}{0.45\textwidth} \caption{$\text{EV}_{\text{error}}$} \centering \begin{tabular}{c | l | l} Radius & Dilation & Erosion \\ \hline 1 & 0.37 (0.06) & 0.31 (0.05) \\ 2 & 0.83 (0.17) & 0.55 (0.07) \\ \end{tabular} \label{tab:evdilero} \end{subtable} \hfill \begin{subtable}{0.45\textwidth} \caption{Dice} \centering \begin{tabular}{c | l | l} Radius & Dilation & Erosion \\ \hline 1 & 0.84 (0.02) & 0.81 (0.03) \\ 2 & 0.71 (0.04) & 0.61 (0.07) \\ \end{tabular} \label{tab:evdildice} \end{subtable} \label{tab:dileroexp} \end{table} \subsection{Experiment 2: influence of hyperparameters} In Fig. \ref{fig:exp1a_combined} the mean $\text{EV}_{\text{error}}$ and mean Dice score over the validation set can be found for different hyperparameters. For the first stage of the affine registration network, the best results were found for $\lambda_l=1$ and $(f_1,f_2)=(32,64)$. Setting $\lambda_l=0.01$ gave worse results, since then the supervision by the landmarks is neglected in the optimization. For $\lambda_l=100$ the results were also worse, since in this case the image similarity term in Eq. \eqref{eq:loss1} is neglected in the optimization. Furthermore, Fig. \ref{fig:exp1a_combined} shows that applying the second training stage of the affine network in general improved the results. Only for $\lambda_s=5$ did the results deteriorate, which is caused by penalizing scaling too much. The best results were found for $\lambda_s=0.05$. Regarding the results of training the nonrigid registration network, Fig. \ref{fig:exp1a_combined} shows that setting $\lambda_d=100$ results in low flexibility of the deformation field, which leads to no improvement of the results after nonrigid registration. Setting $\lambda_d=1$ resulted in irregular deformation fields. The best results were found for $\lambda_d=10$ and $(f_3,f_4)=(8,16)$. \begin{figure}[b!]
\centering \includegraphics[width=\textwidth]{hyperparam_heatmap.jpg} \caption{The mean $\text{EV}_{\text{error}}$ and mean Dice score over the validation set for different values of $\lambda_l$ in \eqref{eq:loss1}, $\lambda_s$ in \eqref{eq:loss2}, $\lambda_d$ in \eqref{eq:loss3} and the number of filters $(f_1,2f_1)$ in the affine network and $(f_3,2f_3)$ in the nonrigid network. The result for the best combination of hyperparameters is underlined. Hyperparameter combinations that did not converge are blank.} \label{fig:exp1a_combined} \end{figure} \newpage \begin{figure}[h] \centering \includegraphics[width=\textwidth]{exp1_all_atlas_combined.jpg} \caption{Results for all eight single subject strategies evaluated over the validation set. $\mathcal{A}^{s*}_1$ refers to the single subject strategy where stage 2 of the affine network is omitted. The dotted line represents the mean.} \label{fig:exp1_boxplot} \end{figure} \begin{figure}[b!] \centering \includegraphics[width=\textwidth]{combined_4.jpg} \caption{Boxplots of the results for single subject strategy $\mathcal{A}_4^s$ on the validation and test set. The dotted line represents the mean.} \label{fig:exp1_test_boxplot} \end{figure} \newpage Finally, we compared using the results of stage 2 ($\mathcal{A}_1^s$) or the results of stage 1 ($\mathcal{A}_1^{s*}$) as input for the nonrigid registration network. Comparing the results for $\mathcal{A}_1^s$ and $\mathcal{A}^{s*}_1$ in Fig. \ref{fig:exp1_boxplot} shows that the results are better for both metrics for $\mathcal{A}_1^s$. The statistical test confirmed these observations ($p_{EV}<0.001$, $p_{Dice}<0.01$). \subsection{Experiment 3: single subject strategy} Using the best hyperparameters determined in experiment 2, we trained the framework using the single subject strategy for every pregnancy $k=1,...,8$. Fig.
\ref{fig:exp1_boxplot} shows for the validation set that for $\mathcal{A}_k^s$, $k=1,...,8$, the results improved between the stages of the affine network and after nonrigid registration. Furthermore, a wide spread is observed in the results, originating from cases where the affine registration fails and the subsequent nonrigid registration network cannot compensate for this. In general our approach gives similar results independent of the choice of atlas. However, there are differences in results between the different pregnancies that are consistent with the variation in atlas image quality. We applied the framework to the test set using the best hyperparameters found and the best single subject strategy. Fig. \ref{fig:exp1_boxplot} shows that $\mathcal{A}_4^s$ performs best in terms of Dice score and comparably in terms of $\text{EV}_{\text{error}}$. Fig. \ref{fig:exp1_test_boxplot} shows that the results on the test set are comparable to those on the validation set for $\mathcal{A}_4^s$. \begin{table}[b!] \caption{P-values of the two-sided Wilcoxon signed-rank test for both metrics over the validation set.
Statistically significant results are boldface.} \label{tab:test_exp2} \centering \begin{tabular}{c|ccccc} \diagbox{Dice}{$EV_{\text{error}}$} & $\mathcal{A}^m, M=1$ & $\mathcal{A}^m, M=2$ &$\mathcal{A}^m, M=4$ & $\mathcal{A}^m, M=8$ & $\mathcal{A}^e$ \\ \hline \\[-1em] $\mathcal{A}^m, M=1$ & & $\mathbf{<0.001}$& $\mathbf{<0.001}$ & $0.064$ & $\mathbf{<0.001}$\\ $\mathcal{A}^m,M=2$ & $\mathbf{0.001}$ & & $0.780$ & $\mathbf{0.005}$ & $\mathbf{<0.001}$\\ $\mathcal{A}^m,M=4$ & $\mathbf{<0.001}$ & $\mathbf{0.016}$ & & $0.051$ & $\mathbf{<0.001}$ \\ $\mathcal{A}^m,M=8$ & $\mathbf{<0.001}$ & $0.147$ & $0.304$ & & $\mathbf{<0.001}$ \\ $\mathcal{A}^e$ &$\mathbf{<0.001}$ & $\mathbf{0.006}$ & $0.471$ & $0.052$ & \\ \end{tabular} \end{table} \newpage \subsection{Experiment 4: comparison of the multi-subject and ensemble strategy} In experiment 4 we compared the multi-subject strategy $\mathcal{A}^m$, for different values of $M$, with the ensemble strategy $\mathcal{A}^e$. Fig. \ref{fig:exp34_boxplot}a shows that the multi-subject strategy $\mathcal{A}^m$ gives better results than the ensemble strategy $\mathcal{A}^e$ for all tested values of $M$. This is confirmed by the statistical test for the $\text{EV}_{\text{error}}$ (see Tab. \ref{tab:test_exp2}). Overall, the results for $\mathcal{A}^m$ with $M=4$ are the best, with a median $\text{EV}_{\text{error}}=0.32$ and median $\text{Dice score} = 0.72$. \begin{figure}[b!] \centering \includegraphics[width=\textwidth]{exp3_4_combined.jpg} \caption{(a) Results for the validation set for experiment 4, where the multi-subject strategy $\mathcal{A}^m$ for different values of $M$ is compared to the ensemble strategy $\mathcal{A}^e$.
(b) Results for the validation set for experiment 5, where experiment 4 is repeated without the data of pregnancy $i=8$, which has the lowest image quality.} \label{fig:exp34_boxplot} \end{figure} \subsection{Experiment 5: influence of atlas image quality} After excluding pregnancy $i=8$ with the lowest image quality, experiment 4 was repeated. Fig. \ref{fig:exp34_boxplot}b shows that for the multi-subject strategy $\mathcal{A}^m$ with $M=1$ and the ensemble strategy $\mathcal{A}^e$ the results are significantly better when excluding the pregnancy with the lowest image quality ($\mathcal{A}^m$, $M=1$: $p_{EV}=0.001$, $p_{Dice}<0.001$, $\mathcal{A}^e$: $p_{EV}<0.001$, $p_{Dice}<0.001$). This can be explained by the fact that if fewer atlases are taken into account, atlas images with worse quality have more influence. When comparing the multi-subject strategy $\mathcal{A}^m$ with $M=7$ against the multi-subject strategy $\mathcal{A}^m$ with $M=8$, only the $\text{EV}_{\text{error}}$ was significantly better for $M=7$ ($p_{EV}=0.01$, $p_{Dice}=0.43$). For the multi-subject strategy with $M=2$ and $M=4$ the results were not significantly different ($M=2$: $p_{EV}=0.22$, $p_{Dice}=0.12$, $M=4$: $p_{EV}=0.40$, $p_{Dice}=0.45$). \subsection{Experiment 6: Analysis of the best models} The results for the best single subject strategy $\mathcal{A}^s_4$ and the best multi-subject strategy $\mathcal{A}^m$ with $M=4$ were compared on the test set and no significant differences were found ($p_{EV}=0.75$, $p_{Dice}=0.71$). Taking the multi-subject strategy with $M=4$ avoids having to choose the right pregnancy for the atlas, and in experiment 5 we saw that the results are not affected by the presence of atlas images with lower image quality. Therefore, we conclude that the multi-subject strategy with $M=4$ is a robust method. Next, the multi-subject strategy $\mathcal{A}^m$ with $M=4$ was analyzed in more detail. Fig.
\ref{fig:M_4_week}a shows that the distributions of the estimated EV and the ground truth $EV_{gt}$ per week GA are similar. This indicates that our method is robust to the variation due to rapid growth and development over time. Fig. \ref{fig:M_4_week}b shows for every gestational week the midsagittal plane of an image from the test set. GA, $\text{EV}_{\text{error}}$ and Dice score are given per image. Finally, we visually inspected all images in the test set with an $\text{EV}_{\text{error}}$ above $Q_3$. We observed that for $8\%$ of the images the embryo was lying against the border of the image, for $47\%$ of the images the embryo was lying against the uterine wall, for $34\%$ the image quality was very low due to noise, low contrast or motion artefacts, and for $11\%$ of the images there was no apparent visual cue why the affine registration might fail. Fig. \ref{fig:failed_affine} shows examples of an embryo lying against the border of the image, of an embryo lying against the uterine wall and of low image quality. \begin{figure}[b!] \centering \includegraphics[scale = 0.275]{exp5_combined.jpg} \caption{Results of the multi-subject strategy $\mathcal{A}^m$ with $M=4$ for the test set. (a) Joint plot of $EV$ and $EV_{gt}$ in $\text{mm}^3$, per gestational week. (b) For every week GA the midsagittal plane of an image from the test set is shown. The slice shown for $S_I$ corresponds to the slice of the midsagittal plane in the standard orientation.} \label{fig:M_4_week} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{visual_affine_failed.jpg} \caption{Examples of original images for which the affine registration network failed. (a) Example of an image where the embryo lies against the border of the image. (b,c) Examples of images where the embryo lies against the uterine wall and there is no clear distinction between embryo and uterine wall.
(d,e) Examples of images with low quality.} \label{fig:failed_affine} \end{figure} \newpage \section{Discussion and conclusion} Monitoring first trimester embryonic growth and development using ultrasound is of crucial importance for the current and future health of the fetus. A first key step in automating this monitoring is achieving automatic segmentation and spatial alignment of the embryo. Automation of these tasks will result in less investigation time and fewer human errors. Here we present the first framework to perform both tasks simultaneously for first trimester 3D ultrasound images. We have developed a multi-atlas segmentation framework and showed that we can accurately segment and spatially align embryos captured between 8+0 and 12+6 weeks GA. We evaluated three atlas fusion strategies for deep learning based multi-atlas segmentation, namely: 1) the single subject strategy: training the framework using atlas images from a single pregnancy, 2) the multi-subject strategy: training the framework with data of all available atlas images, and 3) the ensemble strategy: ensembling of the frameworks trained per pregnancy. The results on the test set for the best single subject strategy were a median relative embryonic volume error of 0.29 and a median Dice score of 0.73. The multi-subject strategy significantly outperformed the ensemble strategy and the best results on the test set were found when taking the $M=4$ atlas images closest in gestational age into account (median $\text{EV}_{\text{error}}$ = 0.32, median Dice = 0.72). The best single subject strategy and the best multi-subject strategy did not significantly outperform each other. However, the multi-subject strategy circumvents the need for selecting the most suitable pregnancy to create atlas images and is therefore a more robust method.
Regarding the values of the metrics for the multi-subject approach with $M=4$, we observed that dilation and erosion of the ground truth segmentations with a ball with a radius of one voxel gave mean Dice scores of 0.84 and 0.81 and mean $\text{EV}_{\text{error}}$ values of 0.37 and 0.31. The embryonic volume errors found in the experiments were in the same range. We took a deep learning approach to image registration. Our framework consists of two networks, the first one dedicated to learning the affine transformation and the second one to learning a nonrigid registration. The affine network is trained in two stages: the first stage is trained supervised using the crown and rump landmarks, and subsequently in the second stage the result is refined in an unsupervised manner. In our experiments using the single subject strategy we showed that this second stage significantly improves the final result over skipping this stage. The nonrigid network is trained fully unsupervised. We performed a rigorous search to select the hyperparameters. Training the framework for all eight single subject strategies gave similar results on the validation set for the selected hyperparameters. The multi-subject strategy outperformed the ensemble strategy. The key difference between these two techniques is the way they were trained. In the multi-subject strategy one network was trained using all available atlases. In the ensemble strategy eight individual networks were trained. We saw that, when excluding the pregnancy that gave the worst individual results, the ensemble significantly outperformed the original one. Hence, in this set-up the ensemble approach is sensitive to the individual results. The multi-subject strategy is robust against this, since all atlas images were presented during training. For the multi-subject strategy we evaluated how many atlas images $M$ should be registered to every image $I$.
A possible explanation why $M<4$ was not optimal is that this may not provide enough variation to find a suitable match for every image $I$. Taking $M>4$ leads to a comparison with atlases covering a wide age range. This happens because the atlas images cover almost every day between 8+0 and 12+6 weeks gestation, see Fig. \ref{fig:all_atlas}. This wide age range might result in some atlases not matching the image, due to the rapid development of the embryo. The main limitation of our work is that when the affine registration fails, the nonrigid network cannot correct for this. This resulted in a wide spread in the results that was propagated from the affine to the nonrigid network, for both metrics and all atlas fusion strategies. We observed that cases with a high relative embryonic volume error ($\text{EV}_{\text{error}}>Q_3$) had a clear visual explanation for the failure of the registration: either the embryo was lying against the image border or the uterine wall, or the image had low quality. A good direction for further research might be to use the embryonic volume error during training. Having the embryonic volume available during training might simplify finding the embryo. Finally, another interesting topic for further research is applying our framework to other problems. We created a flexible framework that can easily be adapted to work with or without landmarks and with or without multiple atlas images. Furthermore, these atlas images could be longitudinal, as presented here, but the framework can also be applied to cross-sectional atlas images. The atlas selection is currently based on gestational age, but could be based on any other relevant meta-data. We conclude that the presented multi-atlas segmentation and alignment framework enables the first key steps towards automatic analysis of first trimester 3D ultrasound.
Automated segmentation and spatial alignment pave the way for automated biometry and volumetry, standard plane detection, abnormality detection and spatio-temporal modelling of growth and development. This will allow for more accurate and less time-consuming early monitoring of growth and development in this crucial period of life. \newpage \acks{The authors thank the Rotterdam Periconception Cohort team of the department of Obstetrics and Gynaecology for data acquisition and the participating couples and researchers for their contributions. This research was funded by the Erasmus MC Medical Research Advisor Committee (grant number: FB 379283).} \ethics{This study was approved by the local Medical Ethical and Institutional Review Board of the Erasmus MC, University Medical Center, Rotterdam, The Netherlands. Prior to participation, all participants provided written informed consent.} \coi{Wiro Niessen is founder, shareholder and scientific lead of Quantib BV. The authors declare no other conflicts of interest.}
\section{Introduction} Over the years, the study of thermal properties in quantum field theories has been of great interest. In the last decade, the discovery of the gauge/gravity correspondence \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj} has provided tools for the study of a large class of strongly-coupled non-abelian gauge theories. Aside from pure theoretical motivation, one of the main reasons for pursuing such studies is the possible connection with the phenomenology of ultra-relativistic heavy ion collisions and/or strongly correlated condensed matter systems. An excellent account of these efforts can be found in the reviews \cite{Gubser:2009md,CasalderreySolana:2011us,Hartnoll:2009sz,McGreevy:2009xe} and the references therein. The simplest and best-understood example of the correspondence relates four-dimensional ${\mathcal{N}}=4$ Super-Yang-Mills (SYM) to type-IIB string theory (or supergravity) in asymptotically AdS$_5 \times {\rm S}^5$. In particular, the inclusion of a black hole in the bulk is known to be dual to a strongly-coupled gauge theory at finite temperature. The prescription to compute correlation functions using the gauge/gravity duality was developed a decade ago in \cite{Gubser:1998bc,Witten:1998qj} and was subsequently generalized to the real-time finite-temperature formalism in \cite{Son:2002sd,Herzog:2002pc}. This tool opened the possibility of computing the decay rates and timescales for the approach to thermal equilibrium of certain disturbances \cite{Horowitz:1999jd,Birmingham:2001pj,Kovtun:2005ev}, which on the gravity side are mapped to the computation of quasinormal frequencies. Perhaps the most intriguing result on this front has been the well-known result of $1/4\pi$ for the shear viscosity to entropy density ratio at strong coupling \cite{Policastro:2001yc,Kovtun:2004de}, which in turn led to the holographic calculation of many other transport coefficients \cite{Son:2007vk,Rangamani:2009xk}.
More recently, it has been possible to gain further insight into the previously uncharted out-of-equilibrium regime by considering geometries that evolve in time \cite{Hubeny:2010ry,Balasubramanian:2010ce,Balasubramanian:2011ur,Caceres:2012em}. In addition, other thermal properties have been inferred by considering various types of partonic probes, and analyzing the manner in which the plasma damps their motion. These studies include quarks \cite{Herzog:2006gh,Gubser:2006bz,CasalderreySolana:2006rq}, mesons \cite{Peeters:2006iu,Liu:2006nn,Chernicoff:2006hi}, baryons \cite{Chernicoff:2006yp,Krishnan:2008rs}, gluons \cite{Chernicoff:2006yp,Gubser:2008as}, k-quarks \cite{Chernicoff:2006yp} and various types of defects \cite{Janiszewski:2011ue}. Although all these probes provide different information regarding the nature of the plasma, in this paper we focus on the dynamics of heavy quarks and their interactions with the thermal bath. In the context of this duality, a heavy quark in the boundary theory corresponds to the endpoint of an open string that, at finite temperature, stretches between the boundary and the black hole horizon. The seminal works \cite{Herzog:2006gh,Gubser:2006bz} focused on the energy loss of a quark that is either moving with constant velocity as a result of being pulled by an external force, or is unforced but moving nonrelativistically and about to come to rest. Further analyses \cite{Gubser:2006nz,CasalderreySolana:2007qw,Chernicoff:2008sa} made it clear that this mechanism of energy loss is closely related to the appearance of a worldsheet horizon (not to be confused with the spacetime horizon). The study of quark fluctuations due to its interaction with the thermal bath involves going beyond the classical description of the string. As customary, small perturbations about the average embedding are described by free scalar fields propagating on the corresponding induced worldsheet geometry.
These fields can be excited due to Hawking radiation emitted by the worldsheet horizon, which in turn populates the various modes of oscillation of the string. It was then found that, once these modes are quantized, the induced motion of the string endpoint is correctly described in terms of Brownian motion and its associated Langevin equation \cite{brownian,Son:2009vu}. This result was obtained by two different approaches: the authors of \cite{brownian} reached this conclusion by assuming (following \cite{Lawrence:1993sg,Frolov:2000kx}) that the state of the quantized fields is the usual Hartle-Hawking vacuum, which describes the black hole in equilibrium with its own thermal radiation. The authors of \cite{Son:2009vu} followed a different but equivalent route, employing the relation between the Kruskal extension of the Schwarzschild-AdS geometry and the Schwinger-Keldysh formalism \cite{Maldacena:2001kr,Herzog:2002pc}, together with the known connection between the latter and the generalized Langevin equation. These calculations were later elaborated on in \cite{Giecold:2009cg,CasalderreySolana:2009rm,deboer2,Caceres:2010rm,Ebrahim:2010ra,CaronHuot:2011dr}. In this paper, we generalize the original computation of \cite{brownian} to the case of non-commu\-ta\-ti\-ve Super-Yang-Mills (NCSYM). Non-commutative theories are known to lead to many qualitatively new phenomena, both classically and quantum mechanically; in particular, the existence of non-local interactions reminiscent of the UV/IR mixing found in string theory \cite{Matusis:2000jf}\footnote{For a review of non-commutative quantum field theories, see for example \cite{Minwalla:1999px, Douglas:2001ba}.}. A simple model of non-commutativity was described by Bigatti and Susskind \cite{Bigatti:1999iz}, where they considered a pair of opposite charges moving in a strong magnetic field.
In the limit of large magnetic field, the charges are frozen into the lowest Landau levels and the interactions of such particles include the Moyal bracket characteristic of field theories on noncommutative space. Similarly, in the context of string theory, it was shown that the endpoints of open strings constrained to a D-brane in the presence of a constant Neveu-Schwarz $B$-field\footnote{By gauge invariance, this is equivalent to a constant magnetic field on the brane.} also satisfy a non-commutative algebra. A holographic realization of NCSYM was given a short time afterwards in \cite{Hashimoto:1999ut,Maldacena:1999mh}. One of the main motivations to study Brownian motion in the presence of non-\-commuta\-ti\-vi\-ty is the idea that non-local interactions might lead to significant deviations in the behavior of the thermal properties of the theory \cite{Fischler:2000fv,Fischler:2000bp}. In \cite{Edalati:2012jj}, for instance, it was found that the rate of decay of a fluctuation propagating in this thermal bath is remarkably larger than in the case of ordinary SYM, which leads to faster thermalization. Such a property is possibly linked to the connection between the ultraviolet and infrared regimes of the theory, which implies in particular that the transverse size of dipoles grows with their longitudinal momentum \cite{Bigatti:1999iz}. In order to investigate further properties of this non-commutative system, we study in this paper the holographic realization of the Brownian motion of a heavy quark. More specifically, our aim is to formulate a Langevin equation that accounts for the effects of non-commutativity and to study diffusion processes within the plasma. The structure of this paper is as follows: in section \ref{section2}, we describe the gravity dual to NCSYM presented in \cite{Hashimoto:1999ut,Maldacena:1999mh} and, following \cite{brownian}, we review the generalities of Brownian motion in the context of the gauge/gravity correspondence.
In section \ref{secSYM} we study the case of Brownian motion in ordinary SYM with a magnetic field, which is achieved by the introduction of a gauge field in the open string sector\footnote{Here, we should emphasize that we do not expect to recover the non-commutative results in the strong $B$ limit. On one hand, there is a critical magnetic field at which Schwinger pair production is energetically favored \cite{Bolognesi:2012gr}. On the other hand, for large magnetic fields the backreaction on the geometry is unsuppressed.}. Apart from being an interesting problem in its own right\footnote{Indeed, many holographic systems present interesting features when a background magnetic field is turned on. For a recent review on this topic see \cite{Bergman:2012na}.}, this exercise helps us to gain some intuition and to set the grounds for our computations. In section \ref{sec4} we turn to the study of Brownian motion in NCSYM. We begin by postulating a Langevin equation for the non-commutative plasma, similar to the one describing the quark in the presence of a magnetic field. We show that this equation correctly captures the behavior of the Brownian particle. This equation is naturally expressed in terms of matrices and, given its structure, it automatically implies that fluctuations along different directions are correlated. We then compute holographically the drag coefficient from the response of the Brownian particle to an external force and, finally, we study the diffusion process of the quark within the plasma, which turns out to be unaffected by non-commutativity. In section \ref{conclusions} we make some comments about our results and close with conclusions. \section{Background and String Action\label{section2}} \subsection{A gravity dual of NCSYM} Quantum field theories on noncommutative spaces have been studied intensively in the last few years.
The essential postulate of non-commutativity is the commutation relation \begin{equation} \left[x^\mu,x^\nu\right]=i\theta^{\mu\nu}, \end{equation} where $\theta^{\mu\nu}$ is a real and antisymmetric rank-2 tensor. The algebra of functions on a non-commutative space can be viewed as an algebra of ordinary functions with the product deformed to the non-commutative Moyal product, defined by \begin{equation} \left(\phi_1\star\phi_2\right)(x)\equiv e^{\frac{i}{2}\theta^{\mu\nu}\partial_\mu^y\partial_\nu^z}\phi_1(y)\phi_2(z)\big|_{y=z=x}. \end{equation} Non-commutative field theories emerge in string theory, as the worldvolume theory of D-branes with a constant Neveu-Schwarz $B$-field provided that one takes a special limit to decouple the open and closed string sectors \cite{Douglas:1997fm, Ardalan:1998ce,Seiberg:1999vs}. Basically, one scales the string tension to infinity and the closed string metric to zero while keeping the $B$-field fixed. For the discussion in the present paper, we will focus specifically on the four-dimensional $SU(N)$ NCSYM at finite temperature, taking the non-commutativity parameter to be non-zero only in the $(x^2,x^3)$-plane, that is $[x^2,x^3]\sim i \theta$. In the spirit of the gauge/gravity correspondence, the dynamics of this theory at large $N$ and at strong 't Hooft coupling should be described by a bulk gravitational system. 
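To make the deformation concrete, note that for linear functions the expansion of the star product truncates at first order in $\theta$, so the coordinate commutation relation can be checked directly. The following sketch (our own illustration, not part of the original construction; all function names are ours) implements the Moyal product to first order in $\theta$ with numerical derivatives and verifies that $x^2\star x^3-x^3\star x^2=i\theta$:

```python
# Illustrative check (ours): for linear functions the Moyal star product
# truncates at first order in theta, so the star commutator of the
# coordinates, [x2, x3]_* = i*theta, can be verified with finite differences.

def star_product(f, g, theta, x2, x3, h=1e-5):
    """Moyal product to first order in theta:
    (f*g)(x) ~ f g + (i theta/2)(d2 f d3 g - d3 f d2 g)."""
    d2f = (f(x2 + h, x3) - f(x2 - h, x3)) / (2 * h)
    d3f = (f(x2, x3 + h) - f(x2, x3 - h)) / (2 * h)
    d2g = (g(x2 + h, x3) - g(x2 - h, x3)) / (2 * h)
    d3g = (g(x2, x3 + h) - g(x2, x3 - h)) / (2 * h)
    return f(x2, x3) * g(x2, x3) + 0.5j * theta * (d2f * d3g - d3f * d2g)

def moyal_bracket(f, g, theta, x2, x3):
    return (star_product(f, g, theta, x2, x3)
            - star_product(g, f, theta, x2, x3))

theta = 0.7
x2fun = lambda a, b: a   # the coordinate function x^2
x3fun = lambda a, b: b   # the coordinate function x^3

# independent of the evaluation point, the bracket equals i*theta
bracket = moyal_bracket(x2fun, x3fun, theta, x2=1.3, x3=-0.4)
```

For linear functions the finite differences are exact up to rounding, so `bracket` reproduces $i\theta$ to machine precision.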
Indeed, a gravity dual for this theory was proposed in \cite{Hashimoto:1999ut,Maldacena:1999mh} which, in the string frame, reads \begin{eqnarray}\label{backg} ds^2 &=& G_{mn}dx^mdx^n=\ap R^2 \left[u^2\left(-f(u) dx_0^2+dx_1^2+h(u)\left(dx_2^2+dx_3^2\right)\right)+ \frac{du^2}{u^2f(u)}+ d\mathrm{\Omega}_5^2\right]\nonumber\\ e^{2\Phi} &=& {\hat g}^2h(u)\nonumber\\ B_{23} &=& \ap R^2a^2u^4h(u)\\ C_{01} &=& \frac{\ap a^2R^2}{\hat g}u^4,\nonumber\\ \tilde F_{0123u} &=& \frac{4\ap ^2R^4}{{\hat g}}u^3h(u)\nonumber \end{eqnarray} where $R^4 = 4 \pi \hat g N=\lambda$ and $\hat g$ is the value of the string coupling in the IR. In the expressions above, \begin{equation} f(u)=1-\frac{u_h^4}{u^4} \end{equation} is the usual Schwarzschild factor and \begin{equation} h(u)=\frac{1}{1+a^4u^4}. \end{equation} The parameter $a$, which appears in the above expressions, is related to the non-commutativity parameter $\theta$ of the boundary theory through $a=\lambda^{1/4}\sqrt{\theta}$. Here, $\lambda$ is the 't Hooft coupling of the boundary large-$N$ NCSYM theory. The radial direction $u$ is mapped holographically into an energy scale in the gauge theory, in such a way that $u\rightarrow\infty$ and $u\rightarrow 0$ are respectively the UV and IR limits\footnote{Throughout this paper, we use the terms ``UV'' and ``IR'' with respect to the boundary energy. Then, in bulk terms, UV means near the boundary whereas IR means near the horizon.}. The directions $x^\mu\equiv(t,\vec{x})$ are parallel to the boundary and are directly identified with the gauge theory directions. The five-sphere coordinates are associated with the global $SU(4)$ internal symmetry group, but they will play no role in our discussion. The Hawking temperature of the above solution, which is interpreted as the temperature of the non-commutative boundary theory, is given by $T=u_h/\pi$.
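As a quick consistency check (our own, with an arbitrary numerical value of $u_h$), the quoted temperature follows from the surface gravity of the $(t,u)$ part of the metric, which takes the form $-F(u)\,dt^2+du^2/F(u)$ with $F(u)=u^2f(u)$, so that $T=F'(u_h)/4\pi$; the noncommutative factor $h(u)$ does not enter $g_{tt}$ or $g_{uu}$, which is why the result matches Schwarzschild-AdS$_5$:

```python
# Sketch (ours): Hawking temperature from the surface gravity of the (t,u)
# part of the metric, ds^2 ~ -F(u) dt^2 + du^2/F(u) with F(u) = u^2 f(u),
# for which T = F'(u_h)/(4*pi).  Expected result: T = u_h/pi.
import math

u_h = 2.3  # arbitrary horizon radius

def F(u):
    f = 1.0 - (u_h / u) ** 4      # Schwarzschild factor
    return u * u * f              # -g_tt = u^2 f(u) (overall alpha' R^2 cancels)

# central-difference derivative of F at the horizon (F is smooth there)
eps = 1e-6
Fprime = (F(u_h + eps) - F(u_h - eps)) / (2 * eps)

T = Fprime / (4 * math.pi)
```

Analytically $F'(u_h)=2u_hf(u_h)+u_h^2f'(u_h)=4u_h$, so the numerical value of `T` reproduces $u_h/\pi$.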
Notice that this temperature is the same as the temperature one obtains for the Schwarzschild-AdS$_5$ solution, which is dual to a thermal state of four-dimensional SYM. Indeed, it is easy to show that all the thermodynamic quantities obtained from \eqref{backg} are the same as the ones obtained from the Schwarzschild-AdS$_5$ solution \cite{Maldacena:1999mh}. In the limit of vanishing temperature ($u_h\rightarrow0$), we are left with a geometry that is dual to the vacuum of the NCSYM. The closed string sector describing small or large fluctuations on top of this background captures the nonperturbative gluonic (+ adjoint scalar and fermionic) physics. The black hole geometry (\ref{backg}) is a special case of such a fluctuation, and is dual to a thermal ensemble of the theory. For $u \ll a^{-1}$, the background \eqref{backg} goes over to the AdS$_5 \times {\rm S}^5$ solution. This observation just reflects the fact that the non-commutative boundary theory goes over to the ordinary commutative SYM theory at length scales much greater than $\lambda^{1/4}\sqrt{\theta}$. On the other hand, for $u\gg a^{-1}$ the background \eqref{backg} shows significant deviation from the AdS$_5 \times {\rm S}^5$ solution and the bulk spacetime is no longer asymptotically AdS. In the boundary theory, this just means that the effect of non-commutativity becomes pronounced for length scales which are on the order of, or smaller than, $\lambda^{1/4}\sqrt{\theta}$ \footnote{Since the reliability of the solution \eqref{backg} requires $R^4=\lambda$ to be large, the effect of non-commutativity in the boundary theory is visible even at large length scales \cite{Maldacena:1999mh}. This should be compared with the weak-coupling regime of the theory where the threshold length scale (beyond which the theory becomes effectively commutative) is roughly on the order of $\sqrt{\theta}$.}.
To summarize, the background \eqref{backg} represents a flow in the boundary theory from a UV fixed point which is NCSYM at large $N$ and large $\lambda$ to an IR fixed point given by the ordinary commutative SYM (again, at large $N$ and large $\lambda$). \subsection{Heavy quarks and holographic Brownian motion\label{holbrom}} From the gauge theory perspective, the introduction of an open string sector associated with a stack of $N_f$ D7-branes in the geometry (\ref{backg}) is equivalent to the addition of $N_f$ hypermultiplets in the fundamental representation of the gauge group $SU(N)$, and these are the degrees of freedom that we will refer to as quarks. For $N_f\ll N$, we can neglect the backreaction of the D7-branes on the geometry; in the gauge theory perspective, this corresponds to working in a ``quenched'' approximation that ignores quark loops. For simplicity, we will take $N_f=1$. The probe brane covers the four gauge theory directions $x^\mu$, and is spread along the radial direction from $u\rightarrow\infty$ to $u=u_m$ where it ends smoothly\footnote{At this position the $\mathrm{S}^3\subset\mathrm{S}^5$ that it is wrapped on shrinks down to zero size.}. An isolated quark is dual to an open string that extends radially from the flavor brane at $u=u_m$ to the horizon at $u=u_h$\footnote{For a review of quark dynamics in the context of the gauge/gravity correspondence see \cite{Chernicoff:2011xv}.}. The string couples to both the metric and the $B$-field so its dynamics follows from the action $S=S_{\text{NG}}+S_{\text{B}}$,\footnote{The string also couples to the dilaton but this coupling is suppressed by a factor of the string length.} \begin{equation}\label{nambugoto} S\equiv\int_\Sigma d\tau d\sigma\, \mathcal{L} = \frac{1}{2\pi\ap}\int_\Sigma d\tau d\sigma\left(\sqrt{-\det g_{ab}}+B_{mn}\partial_\tau X^m\partial_\sigma X^n\right) \end{equation} where $g_{ab}=G_{mn}\partial_aX^m\partial_bX^n$ is the induced metric on the worldsheet $\Sigma$. 
We choose to work in the static gauge, where $\tau=t$, $\sigma=u$. One can easily verify that the embedding $X^m=(t,0,0,0,u)$ is a trivial solution; this corresponds to a quark that is in equilibrium in the thermal bath. Notice that the string is being described in first-quantized language and, as long as it is sufficiently heavy, we are allowed to treat it semiclassically. In gauge theory language, we are coupling a first-quantized quark to the SYM fields, and then carrying out the full path integral over the strongly-coupled fields, treating the path integral over the quark trajectory in a saddle-point approximation. The mass of the quark $m$ is related to the position of the flavor brane $u_m$ and can be obtained by a straightforward computation: \begin{equation}\label{mass} m= {1\over 2\pi\ap}\int_{u_h}^{u_m} du\,\sqrt{g_{tt}\,g_{uu}} ={R^2\over 2\pi}(u_m-u_h)\approx{R^2\over 2\pi}u_m,\qquad\text{for}\quad u_m\gg u_h. \end{equation} Now, we want to study fluctuations about the above embedding. In particular, we are interested in motion on the Moyal plane, so our ansatz for the perturbations will be $X^m=(t,0,X_2(t,u),X_3(t,u),u)$. Using this, the induced metric has the following components: \begin{eqnarray} g_{tt}&=&\ap R^2u^2\left[-f+h\left(\dot{X}_2^2+\dot{X}_3^2\right)\right],\\ g_{uu}&=&\ap R^2u^2\left[h\left(X_2'^2+X_3'^2\right)+\frac{1}{u^4f}\right],\\ g_{tu}&=&\ap R^2u^2h\left(\dot{X}_2X'_2+\dot{X}_3X'_3\right), \end{eqnarray} where $\dot{X}_i\equiv\partial_tX_i$ and $X_i'\equiv\partial_uX_i$. Up to quadratic order in the perturbations, the action can be written as \begin{equation}\label{action} S\approx\frac{R^2}{4\pi}\int dt du\left[u^4fh\left(X_2'^2+X_3'^2\right)-\frac{h}{f}\left(\dot{X}_2^2+\dot{X}_3^2\right)+2a^2u^4h\left(\dot{X}_2X_3'-\dot{X}_3X_2'\right)\right]. \end{equation} Note that we dropped the constant term that does not depend on $X_i$.
We can also consider the situation in which one has a forced motion due to an electromagnetic field in the background. This can be easily realized by turning on a world-volume $U(1)$ gauge field on the flavor brane. Since the endpoint of the string is charged, this amounts to adding the minimal coupling to the action $S=S_{\text{NG}}+S_{\text{B}}+S_{\text{EM}}$, where \begin{equation}\label{actem} S_{\text{EM}}=\int_{\partial\Sigma}\left(A_t+A_i \dot{X}_i\right) dt. \end{equation} This will exert the desired force on our heavy quark. However, this coupling is just a boundary term, so it will not play any role for the string dynamics in the bulk, other than modifying the boundary condition. We shall ignore this part of the action for now but we will come back to it later on. Because time translation is an isometry of the background (\ref{backg}), we can set \begin{equation}\label{fourier} X(t,u)\sim e^{-i\omega t}g_\omega(u) \end{equation} and use the frequency $\omega$ to label the basis of solutions to the equations of motion. Since the action (\ref{action}) is quadratic in the perturbations, we expect linear differential equations. The solutions are particularly easy to obtain in the near-horizon limit $u\sim u_h$, where the action reduces to \begin{eqnarray} S\!&\approx&\! u_h^2 h(u_h){R^2\over 4\pi}\int dt du_* \left[\left(X_2'^2+X_3'^2\right)-\left(\dot{X}_2^2+\dot{X}_3^2\right)+2a^2u_h^2\left(\dot{X}_2X_3'-\dot{X}_3X_2'\right)\right]\nonumber\\ \!&\approx&\! u_h^2 h(u_h){R^2\over 4\pi}\int dt du_* \left[\left(X_2'^2+X_3'^2\right)-\left(\dot{X}_2^2+\dot{X}_3^2\right)\right].\label{action_NH} \end{eqnarray} Here, the primes denote derivatives with respect to the tortoise coordinate $u_*$, which is defined by \begin{equation} du_*={du\over u^2f(u)}.\label{tortoise} \end{equation} Note also that the last term drops out because it is a total derivative.
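As an aside, the near-horizon behavior of the tortoise coordinate can be checked numerically: since $u^2f(u)\rightarrow 4u_h(u-u_h)$ as $u\rightarrow u_h$, the integrand in \eqref{tortoise} approaches $1/4u_h(u-u_h)$, whose integral produces the logarithmic behavior quoted below. A minimal sketch (ours, with $u_h$ set to an arbitrary value):

```python
# Sketch (ours): near the horizon u^2 f(u) -> 4 u_h (u - u_h), so the
# tortoise integrand 1/(u^2 f) approaches 1/(4 u_h (u - u_h)), whose
# integral gives u_* ~ log(u/u_h - 1)/(4 u_h).
u_h = 1.0

def integrand(u):
    f = 1.0 - (u_h / u) ** 4
    return 1.0 / (u * u * f)

ratios = []
for delta in (1e-2, 1e-3, 1e-4):
    u = u_h + delta
    near_horizon = 1.0 / (4.0 * u_h * (u - u_h))
    ratios.append(integrand(u) / near_horizon)
# the ratio approaches 1 as u -> u_h
```

The factor $4u_h$ is the same surface-gravity combination $\partial_u(u^2f)|_{u_h}$ that fixes the Hawking temperature.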
The equations of motion are then \begin{align} (\partial_{u_*}^2-\partial_{t}^2)X_i= 0,\label{eom_NH} \end{align} which show that in this region $X_i$ behave like massless Klein-Gordon scalars in flat space. The two independent solutions are \begin{equation} X^{(\text{out})}_i(u)=e^{-i\omega t}g_i^{(\text{out})}(u)\sim e^{-i\omega(t-u_*)}\label{solout} \end{equation} and \begin{equation} X^{(\text{in})}_i(u)=e^{-i\omega t}g_i^{(\text{in})}(u)\sim e^{-i\omega(t+u_*)},\label{solin} \end{equation} corresponding to outgoing and ingoing waves respectively. Near the horizon one finds that \begin{align} u_*\sim {1\over 4u_h}\log\left({u-u_h\over u_h}\right) \label{tortoise_NH} \end{align} up to an additive numerical constant, so \begin{equation}\label{normal} g^{(\text{out/in})}(u)\sim\left(\frac{u}{u_h}-1\right)^{\pm i\omega/4u_h}. \end{equation} Away from the horizon, these solutions will have a complicated dependence, but it still holds that $g^{(\text{out})}=g^{(\text{in})}\,\!^*$ (see appendix \ref{appsols}). Standard quantization of quantum fields in curved spacetime \cite{Birrell} leads to a mode expansion of the form \begin{equation}\label{modex} X_i(t,u)= \int_0^\infty {d\omega\over 2\pi} [a_{\omega} u_{\omega}(t,u)+a_{\omega}^\dagger u_{\omega}(t,u)^*], \end{equation} where the functions $u_{\omega}(t,u)$ form a basis of positive-frequency modes. These modes can be expressed as a linear combination of outgoing and ingoing waves with arbitrary coefficients, i.e., \begin{equation}\label{defab} u_\omega(t,u)= A \left[g^{(\text{out})}(u)+B \, g^{(\text{in})}(u)\right]e^{-i\omega t}. \end{equation} The constant $B$ is fixed through the boundary condition at $u=u_m$, but one generally obtains that it is a pure phase $B=e^{i\theta}$ (see section \ref{hawk1}). The outgoing and ingoing modes then have the same amplitude, and this implies that the black hole, which emits Hawking radiation, can be in thermal equilibrium \cite{Hemming:2000as}.
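The equilibrium statement can be made quantitative: for any pure phase $B=e^{i\theta}$, the radial energy flux on the worldsheet, proportional to $\mathrm{Im}(\bar{X}\partial_{u_*}X)$, vanishes identically for the near-horizon superposition of outgoing and ingoing waves of equal amplitude. A small numerical sketch (our own check; the parameter values are arbitrary):

```python
# Sketch (ours): with |B| = 1 the mode is a superposition of outgoing and
# ingoing near-horizon waves of equal amplitude, and the radial flux
# ~ Im(conj(X) dX/du_*) vanishes identically -- the worldsheet horizon
# emits as much Hawking radiation as it absorbs.
import cmath

omega, theta_B = 1.7, 0.9          # arbitrary frequency and phase of B
B = cmath.exp(1j * theta_B)        # pure phase

def mode(t, u_star):
    return cmath.exp(-1j * omega * t) * (
        cmath.exp(1j * omega * u_star) + B * cmath.exp(-1j * omega * u_star))

fluxes = []
h = 1e-6
for u_star in (-5.0, -1.0, 0.3):
    X = mode(0.4, u_star)
    dX = (mode(0.4, u_star + h) - mode(0.4, u_star - h)) / (2 * h)
    fluxes.append((X.conjugate() * dX).imag)
# every entry of fluxes is ~ 0, at every u_* and t
```

Analytically, $\bar X\partial_{u_*}X=-2\omega\sin(2\omega u_*-\theta)$ is purely real, so the imaginary part vanishes point by point, not just on average.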
The constant $A$ on the other hand, is obtained by requiring the normalization of the modes through the standard Klein-Gordon inner product. For any functions $f_i(t,u)$ and $g_j(t,u)$ satisfying the equations of motion, the Klein-Gordon inner product is defined by \begin{equation}\label{inner} (f_i,g_j)_\sigma=-{i\over 2\pi\ap}\int_\sigma \sqrt{\tilde g}\, n^\mu G_{ij}\, (f_i\partial_\mu g_j^*-\partial_\mu f_i\, g_j^*), \end{equation} where $\sigma$ is a Cauchy surface in the $(t,u)$ part of the metric, $\tilde g$ is the induced metric on $\sigma$ and $n^\mu$ is the future-pointing unit normal to $\sigma$. It can be shown that this inner product is independent of the choice of $\sigma$ \cite{Birrell}, but for simplicity we take it as a constant-$t$ surface. We want to normalize $u_\omega$ using (\ref{inner}). However the main contribution to the integral comes from the IR region \cite{deboer2}, which in terms of the tortoise coordinate is just \begin{equation} (f_i,g_j)_\sigma= -i\delta_{ij}{R^2 \over 2\pi}u_h^2h(u_h)\int_{u_*\sim-\infty}\!\!\!\!\!\!\!\!\!\!\!\! du_* (f_i\, \dot{g}_j^*- \dot{f}_i\, g_j^*).\label{inner_prod_NH} \end{equation} Of course, there is a contribution to the inner product from regions away from the horizon, but because the near-horizon region is semi-infinite in the tortoise coordinate $u_*$, the normalization of solutions is completely determined by this region. After some algebra, we find that \begin{equation} A=\sqrt{\frac{\pi}{\omega R^2u_h^2h(u_h)}}, \end{equation} so that $(u_\omega,u_\omega)=1$, ensuring that the canonical commutation relations are satisfied: \begin{align} [a_{\omega},a_{\omega'}]= [a_{\omega}^\dagger,a_{\omega' }^\dagger]=0,\qquad [a_{\omega},a_{\omega'}^\dagger]=2\pi\delta(\omega-\omega').\label{CCR} \end{align} In the semiclassical approximation, the string modes are thermally excited by Hawking radiation emitted by the worldsheet horizon. 
In particular, they satisfy the Bose-Einstein distribution: \begin{equation} \left\langle a_\omega^\dagger\, a_{\omega'}\right\rangle ={2\pi\delta{(\omega-\omega')}\over e^{\beta \omega}-1}. \end{equation} Using this and the mode expansion given in (\ref{modex}), we can derive a general formula for the displacement squared of the Brownian particle. First of all, let us identify the position of the heavy quark as the string endpoint at the boundary $u=u_m$, i.e., \begin{equation} x_i(t)\equiv X_i(t,u_m)= \int_0^\infty {d\omega\over 2\pi} \left[a_\omega u_\omega(t,u_m) + a_\omega^\dagger u_\omega(t,u_m)^*\right]. \end{equation} Then, it follows that \begin{equation} \left\langle x_i(t)x_i(0)\right\rangle=\int_0^\infty {d\omega d\omega'\over (2\pi)^2}\left[\langle a_\omega a_{\omega'}^\dagger\rangle u_\omega(t,u_m)u_{\omega'}(0,u_m)^*+\langle a_\omega^\dagger a_{\omega'}\rangle u_\omega(t,u_m)^*u_{\omega'}(0,u_m)\right]. \end{equation} This has an IR divergence that comes from the zero point energy, which exists even at zero temperature. To avoid this we simply regularize it by implementing the normal ordering ${:\! a_\omega a_\omega^\dagger \!:} \equiv {:\! a_\omega^\dagger a_\omega\! :}$, and after doing so we get\footnote{Another way to regularize it is to use the canonical correlator introduced as in \cite{brownian}. However, this does not change the late-time or low-frequency behavior of the correlator.} \begin{eqnarray}\label{xxcorr} \left\langle:\! x_i(t)x_i(0)\! :\right\rangle&=&\int_0^\infty{d\omega\over 2\pi}\frac{1}{e^{\beta\omega}-1}\left[u_\omega(t,u_m)u_{\omega}(0,u_m)^*+u_\omega(t,u_m)^*u_{\omega}(0,u_m)\right],\nonumber\\ &=&\int_0^\infty{d\omega\over 2\pi}\frac{2|A|^2\cos(\omega t)}{e^{\beta\omega}-1}\left|g^{(\text{out})}(u_m)+B \, g^{(\text{in})}(u_m)\right|^2. 
\end{eqnarray} From this correlator, we can compute the displacement squared of the quark as: \begin{equation}\label{disquared} s_i^2(t)\equiv\left\langle:\!\left[x_i(t)-x_i(0)\right]^2\!:\right\rangle=\frac{4}{R^2u_h^2h(u_h)}\int_0^\infty \frac{d\omega}{\omega}\frac{\sin^2\left(\tfrac{\omega t}{2}\right)}{e^{\beta\omega}-1}\left|g^{(\text{out})}(u_m)+B \, g^{(\text{in})}(u_m)\right|^2, \end{equation} where we have inserted the explicit value of $A$. For future reference we also compute the general form of the momentum correlator, \begin{eqnarray}\label{ppcorr} \left\langle:\! p_i(t)p_i(0)\! :\right\rangle&=&-m^2\partial_t^2\left\langle:\! x_i(t)x_i(0)\! :\right\rangle,\nonumber\\ &=&\int_0^\infty{d\omega\over 2\pi}\frac{2m^2\omega^2|A|^2\cos(\omega t)}{e^{\beta\omega}-1}\left|g^{(\text{out})}(u_m)+B \, g^{(\text{in})}(u_m)\right|^2. \end{eqnarray} \section{Brownian Motion in SYM with a Magnetic Field\label{secSYM}} Non-commutative SYM theory has a dual interpretation in terms of ordinary SYM with a large and constant magnetic field. We start by studying Brownian motion in this second system which, aside from being an interesting problem in its own right, helps us to gain some intuition and to set the physical grounds of our computations. Although we find that there are some similarities between these two configurations, our final results show that there are some features that are qualitatively different. \subsection{Langevin dynamics in the presence of a magnetic field\label{BroM}} The problem of the Brownian motion of a charged particle in an external magnetic field was first investigated almost fifty years ago in the seminal papers \cite{BM1,BM2}. This is an old topic that has generated a lot of interest and is of great importance in the description of diffusion and transport of plasmas and heavy ions.
Nowadays, together with free Brownian motion, it is widely used as a classic textbook example of how transport properties and correlation functions should be computed in generic situations governed by the Langevin equation. The discussion in this section centers on the field theory description of Brownian motion in the presence of a magnetic field. Later on, we will show how to realize this phenomenon at strong coupling, in terms of a probe string living in a black hole background. Let us consider the Langevin equation of a charged particle of mass $m$ and unit charge $q$, in the presence of a magnetic field $\mathbf{B}$: \begin{equation}\label{langeB} \dot{\mathbf{p}}(t)=-\gamma_o\, \mathbf{p}(t)+\mathbf{v}(t)\times\!\mathbf{B}+\mathbf{R}(t), \end{equation} where $\mathbf{p}(t)=m\,\mathbf{v}(t)$ is the momentum of the Brownian particle and $\mathbf{v}(t)=\dot{\mathbf{x}}(t)$ its velocity. The terms on the right-hand side of (\ref{langeB}) correspond to the friction, Lorentz force and random force, respectively, and $\gamma_o$ is a constant called the friction coefficient. One can think of the particle as moving under the influence of the magnetic field, losing energy to the medium and, at the same time, getting random kicks as modeled by the random force. As a first approximation, we can assume that the random force is white noise, with the following averages: \begin{equation} \langle R_i(t)\rangle=0,\qquad\langle R_i(t)R_j(t')\rangle=\kappa_o\delta_{ij}\delta(t-t'), \end{equation} where $\kappa_o$ is a constant which, due to the fluctuation-dissipation theorem, is related to the friction coefficient through \begin{align}\label{flucdis} \gamma_o&={\kappa_o\over 2\,m\,T}. \end{align} This is because the frictional and random forces have the same origin at the microscopic level, i.e., collisions with the particles of the thermal bath.
If the magnetic field $\mathbf{B}=B\hat{x}$ is pointing along the $x$-direction, we can write (\ref{langeB}) in the matrix form \begin{equation}\label{langeB2} \dot{p}_i(t)=-\mathrm{\Lambda}_{ij}p_j(t)+R_i(t), \end{equation} where \begin{equation} \mathrm{\Lambda}_{ij}=\left( \begin{array}{ccc} \gamma_o & 0 & 0 \\ 0 & \gamma_o & -\omega_o \\ 0 & \omega_o & \gamma_o \\ \end{array} \right). \end{equation} Here $\omega_o=B/m$ denotes the Larmor frequency. Since the magnetic field is oriented along the $x$-direction, it affects the motion in the $y$ and $z$ directions only. Fluctuations along the $x$-direction decouple and are unaffected by the presence of the magnetic field. We thus restrict our attention to the fluctuations in the transverse plane. In order to decouple the remaining equations we have to diagonalize the above matrix. The normal modes are $p_{\pm}=p_2\pm i p_3$ with corresponding eigenvalues $\lambda_{\pm}=\gamma_o\pm i \omega_o$. Thus, defining $x_\pm=x_2\pm i x_3$ and $R_{\pm}=R_2\pm i R_3$ we get \begin{equation} \dot{p}_{\pm}(t)=-\lambda_\pm p_{\pm}(t)+R_{\pm}(t), \end{equation} whose formal solution is given by \begin{equation}\label{momentum} p_{\pm}(t) = e^{-\lambda_{\pm} t}p_\pm(0) + \int_0^t e^{-\lambda_{\pm} (t-t')}R_{\pm}(t')dt', \end{equation} \begin{equation}\label{momentumx} x_\pm(t) - x_\pm(0) = {{1}\over{m \lambda_{\pm}}}\left( (1-e^{-\lambda_{\pm} t}) p_\pm(0) +\left[ \int_0^t{R_\pm(t') dt'} -\int_0^t {e^{-\lambda_{\pm} (t-t')} R_{\pm}(t') dt'}\right] \right). \end{equation} Using the above, we can immediately obtain $x_2(t)$ and $x_3(t)$ by taking the real and imaginary parts of $x_{\pm}(t)$; $p_2(t)$ and $p_3(t)$ can be obtained in a similar way.
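As a cross-check of the normal-mode decomposition (a sketch of ours; the signs of the complex rates follow from the matrix $\mathrm{\Lambda}$ as written above), one can integrate the noiseless equation $\dot{p}_i=-\mathrm{\Lambda}_{ij}p_j$ in the transverse plane and compare with the exponential decay of $p_+=p_2+ip_3$ at the complex rate $\gamma_o+i\omega_o$:

```python
# Sketch (ours): integrating p' = -Lambda p for the transverse block of
# Lambda reproduces the normal-mode solution p_+(t) = exp(-(gamma_o +
# i omega_o) t) p_+(0).  Noise switched off; parameter values arbitrary.
import cmath

gamma_o, omega_o = 0.5, 2.0
p2, p3 = 1.0, 0.0                      # initial momenta
dt, steps = 1e-3, 1000                 # integrate to t = 1

def rhs(p2, p3):
    # p' = -Lambda p with transverse block [[gamma, -omega], [omega, gamma]]
    return (-gamma_o * p2 + omega_o * p3, -omega_o * p2 - gamma_o * p3)

for _ in range(steps):                 # classical RK4
    k1 = rhs(p2, p3)
    k2 = rhs(p2 + 0.5 * dt * k1[0], p3 + 0.5 * dt * k1[1])
    k3 = rhs(p2 + 0.5 * dt * k2[0], p3 + 0.5 * dt * k2[1])
    k4 = rhs(p2 + dt * k3[0], p3 + dt * k3[1])
    p2 += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    p3 += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6

t = dt * steps
p_plus = (1.0 + 0.0j) * cmath.exp(-(gamma_o + 1j * omega_o) * t)
# p2, p3 match the real and imaginary parts of p_plus
```

The damped rotation visible here (a spiral in the transverse plane) is exactly what produces the cross-correlation between $p_2$ and $p_3$ discussed next.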
In thermal equilibrium, the two-point correlation functions of $p_2(t)$ and $p_3(t)$ are given by \begin{eqnarray}\label{twopnt1} \left \langle p_2(t) p_2(0) \right \rangle & = & \frac{\kappa_o}{2\gamma_o} e^{-\gamma_o t} \cos(\omega_o t),\\ \left \langle p_3(t) p_3(0) \right \rangle & = & \left \langle p_2(t) p_2(0) \right \rangle,\label{twopnt2}\\ \left \langle p_2(t) p_3(0) \right \rangle & = & \frac{\kappa_o}{2\gamma_o} e^{-\gamma_o t} \sin(\omega_o t)\label{twopnt3}. \end{eqnarray} The relevant point here is that, unlike in the case of free Brownian motion, the components $p_2(t)$ and $p_3(t)$ are now correlated. However, the autocorrelator $\left\langle p_i(t)^2\right\rangle$ of each individual component has the same value $ \sim\kappa_o/2\gamma_o= mT$ as in the case of zero magnetic field. This can be easily understood by recognizing that $\left\langle p_i(t)^2\right\rangle$ is twice the kinetic energy multiplied by the mass of the particle and that this quantity does not change under the application of a magnetic field. Thus, the time scale associated with the energy loss due to drag, $t_{\text{relax}}\sim1/\gamma_o$, is independent of the magnetic field. Another important scale related to the Brownian motion of the quark is the time associated with diffusion. This can be derived by computing the two-point correlation function of $x_2(t)$ and $x_3(t)$, from which one can infer the following late-time behavior for the displacement squared: \begin{equation}\label{difct} s_i^2(t)=\left\langle\left[x_i(t)-x_i(0)\right]^2\right\rangle\sim 2Dt,\qquad\text{for}\quad t\gg1/\gamma_o, \end{equation} where $D$ is called the diffusion constant. In the presence of a magnetic field, the fluctuation-dissipation theorem leads to \begin{equation} D=\frac{1}{2m^2}\frac{\kappa_o}{\gamma_o^2+\omega_o^2}=\frac{T}{m}\frac{\gamma_o}{\gamma_o^2+\omega_o^2}, \end{equation} which decreases with increasing magnetic field.
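The diffusion constant quoted above can be cross-checked (a sketch, not part of the original derivation) through the equivalent Green-Kubo form $D=\int_0^\infty\langle v_i(t)v_i(0)\rangle\,dt$, using the velocity autocorrelator $(T/m)\,e^{-\gamma_o t}\cos(\omega_o t)$ implied by (\ref{twopnt1}); the parameter values below are arbitrary:

```python
import math

m, T, gamma_o, omega_o = 1.0, 2.0, 0.8, 1.5   # arbitrary illustrative values

def corr(t):
    """Velocity autocorrelator <v_2(t) v_2(0)> = (T/m) exp(-gamma_o t) cos(omega_o t)."""
    return (T / m) * math.exp(-gamma_o * t) * math.cos(omega_o * t)

# Composite Simpson rule on [0, 40/gamma_o]; the integrand is negligible beyond.
a, b, n = 0.0, 40.0 / gamma_o, 200_000        # n must be even
h = (b - a) / n
s = corr(a) + corr(b)
for k in range(1, n):
    s += (4 if k % 2 else 2) * corr(a + k * h)
D_numeric = s * h / 3.0

# Closed form: D = (T/m) gamma_o / (gamma_o^2 + omega_o^2)
D_formula = (T / m) * gamma_o / (gamma_o**2 + omega_o**2)
assert abs(D_numeric - D_formula) < 1e-8 * D_formula
```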
Thus, the diffusion constant is reduced by the magnetic field, which implies that the diffusion process becomes less efficient. The Langevin equation, as presented in (\ref{langeB}), captures the essential properties of the stochastic processes in Brownian motion, but it fails to give a physically consistent picture for sufficiently short times $t$, over which the particle suffers only a few or no impacts. It is a general feature of any dynamical system that the dynamical coherence becomes predominant on short time scales, or at high frequencies. Thus, we are led to a natural extension of the Langevin equation in the form \cite{mori,kubo} \begin{equation} \label{genlangeB} \dot{\mathbf{p}}(t)=-\int_{-\infty}^t dt'\, \gamma(t-t')\, \mathbf{p}(t')+\mathbf{v}(t)\times\!\mathbf{B}+\mathbf{R}(t)+\mathbf{E}(t), \end{equation} where \begin{equation}\label{RR} \langle R_i(t)\rangle=0,\qquad\langle R_i(t)R_j(t')\rangle=\kappa_{ij}(t-t'). \end{equation} The main difference between the generalized Langevin equation (\ref{genlangeB}) and the usual one (\ref{langeB}) is that the friction now depends on the past history of the particle through $\gamma(t)$, called the memory kernel, and that the random forces at different times are no longer independent. Note that we have also included a fluctuating external force $\mathbf{E}(t)$ that can be applied to the system (e.g., an external electric field). For a magnetic field pointing in the $x$-direction, $\mathbf{B}=B\hat{x}$, and focusing on the transverse fluctuations, we get \begin{equation} \dot{p}_{\pm}(t)=-\int_{-\infty}^t dt'\,\gamma(t-t') p_{\pm}(t')\pm i \omega_o p_{\pm}(t)+R_{\pm}(t)+E_{\pm}(t), \end{equation} where $p_{\pm}=p_2\pm i p_3$, $R_{\pm}=R_2\pm iR_3$, $E_{\pm}=E_2\pm i E_3$ and $\omega_o=B/m$.
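To make the memory kernel concrete, consider a hypothetical exponential kernel $\gamma(t)=(\gamma_o/\tau)\,e^{-t/\tau}$ for $t>0$ (an illustrative choice, not taken from the text), whose Fourier-Laplace transform $\gamma[\omega]=\int_0^\infty \gamma(t)\,e^{i\omega t}dt=\gamma_o/(1-i\omega\tau)$ reduces to the constant $\gamma_o$ as $\omega\rightarrow0$, recovering the local equation (\ref{langeB}). A short numerical check:

```python
import cmath, math

gamma_o, tau = 1.2, 0.5                  # illustrative friction and memory time

def kernel(t):
    # Hypothetical exponential memory kernel; gamma(t) = 0 for t < 0 (causality)
    return (gamma_o / tau) * math.exp(-t / tau)

def laplace_fourier(omega, tmax=40.0, n=100_000):
    """gamma[omega] = int_0^inf gamma(t) e^{i omega t} dt via composite Simpson."""
    h = tmax / n
    f = lambda t: kernel(t) * cmath.exp(1j * omega * t)
    s = f(0.0) + f(tmax)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3.0

for omega in (0.0, 0.7, 2.0):
    exact = gamma_o / (1 - 1j * omega * tau)
    assert abs(laplace_fourier(omega) - exact) < 1e-6

# Low-frequency limit: the memory kernel reduces to ordinary friction
assert abs(laplace_fourier(0.0) - gamma_o) < 1e-6
```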
In the frequency domain, the above equation is simply\footnote{Causality imposes that $\gamma(t)=0$ for $t<0$ so $\gamma[\omega]$ in this expression denotes the Fourier-Laplace transform, $$\gamma[\omega]=\int_{0}^\infty dt\, \gamma(t) \,e^{i\omega t},$$ while $p_{\pm}(\omega)$, $R_{\pm}(\omega)$, and $E_{\pm}(\omega)$ are Fourier transforms, e.g., $$p_{\pm}(\omega)=\int_{-\infty}^\infty dt\, p_{\pm}(t)\,e^{i\omega t}.$$} \begin{equation}\label{langefour} p_\pm(\omega)={R_\pm(\omega)+E_\pm(\omega)\over \gamma[\omega]-i\omega\mp i\omega_o}, \end{equation} and taking the statistical average (using $\langle R_\pm\rangle=0$) we get \begin{equation} \left\langle p_{\pm}(\omega)\right\rangle= \mu_\pm(\omega)E_\pm(\omega),\qquad\text{with}\qquad \mu_\pm(\omega)\equiv{1\over \gamma[\omega]-i\omega\mp i\omega_o}. \end{equation} The quantity $\mu_\pm(\omega)$ is known as the admittance. We can thus determine the admittance $\mu_\pm(\omega)$, and thereby $\gamma[\omega]$, by measuring the response $\left\langle p_\pm(\omega)\right\rangle$ to an external fluctuating force. In particular, if the external force is taken to be \begin{equation} E_\pm(t)=e^{-i\omega t}K_{\pm}, \end{equation} then $\left\langle p_\pm(t)\right\rangle$ is just \begin{equation} \left\langle p_\pm(t)\right\rangle=\mu_\pm(\omega) e^{-i\omega t}K_\pm=\mu_\pm(\omega) E_\pm(t). \end{equation} For late times (or low frequencies) the generalized Langevin equation reduces to its local progenitor (\ref{langeB}), and the time scales associated with the decay of the two-point function of the momentum as well as the displacement squared are the same as discussed before. In particular, note that \begin{equation} \mu_\pm(0)={1\over \gamma_o\mp i\omega_o} \end{equation} so \begin{equation} \gamma_o=t^{-1}_{\text{relax}}=\mathbf{Re}\left(\frac{1}{\mu_\pm(0)}\right).
\end{equation} For a quantity ${\mathcal{O}}$, the power spectrum $I_{\mathcal{O}}(\omega)$ is defined as \begin{equation} I_{\mathcal{O}}(\omega)\equiv\int_{-\infty}^\infty dt\left\langle{\mathcal{O}}(t_0){\mathcal{O}}(t_0+t)\right\rangle e^{i\omega t}, \label{pwrspctr_def} \end{equation} and it is related to the two-point function through the Wiener-Khintchine theorem \begin{equation}\label{thm} \left\langle{\mathcal{O}}(\omega){\mathcal{O}}(\omega')\right\rangle=2\pi \delta(\omega+\omega')I_{\mathcal{O}}(\omega). \end{equation} For stationary systems (\ref{pwrspctr_def}) is independent of $t_0$, so one can set $t_0=0$ in such situations. When the external force is set to zero, from (\ref{langefour}) it follows that \begin{equation} p_\pm(\omega)={R_\pm(\omega)\over \gamma[\omega]-i\omega\mp i\omega_o}=\mu_\pm(\omega)R_\pm(\omega), \end{equation} and, using (\ref{thm}), one finds that \begin{equation} I_{p_\pm}(\omega)=\left|\mu_\pm(\omega)\right|^2I_{R_\pm}(\omega). \end{equation} Therefore, the random force correlator appearing in (\ref{RR}) can be evaluated as \begin{equation}\label{kapa} \kappa_\pm(\omega)=I_{R_\pm}(\omega)=\frac{I_{p_\pm}(\omega)}{\left|\mu_\pm(\omega)\right|^2}. \end{equation} This will be important in the next section to check the validity of the fluctuation-dissipation theorem. \subsection{Bulk dynamics and the drag coefficient\label{matchingB}} We now turn to the holographic realization of Brownian motion in the presence of a magnetic field. Let us begin by considering the action (\ref{action}), which in the commutative limit $a\rightarrow 0$ reduces to \begin{equation} S\approx\frac{R^2}{4\pi}\int dt du\left[u^4f\left(X_2'^2+X_3'^2\right)-\frac{1}{f}\left(\dot{X}_2^2+\dot{X}_3^2\right)\right]. \end{equation} This action describes the dynamics of a string in Schwarzschild-AdS$_5$ and is dual to a quark in ordinary SYM at finite temperature.
We then turn on a gauge field in the flavor brane of the form \begin{equation} \vec{A}=\frac{B}{2}\left(y\hat{z}-z\hat{y}\right), \end{equation} thus getting the desired magnetic field $\vec{B}=B\hat{x}$. As explained before, this only appears as a boundary term (\ref{actem}), so it will not affect the bulk dynamics of the string. The equations of motion coming from the above action are: \begin{equation}\label{eomb} 0=f\partial_u\left(u^4fX_i'\right)-\ddot{X}_i. \end{equation} We now proceed by expanding $X_i$ in modes as in (\ref{fourier}), i.e., \begin{equation} X_i(t,u)=e^{-i\omega t}g_i(u). \end{equation} Then the equations of motion (\ref{eomb}) can be written as \begin{equation}\label{eomdeboer} 0=g_i''(y)+\frac{4y^3}{y^4-1}g_i'(y)+\frac{\nu^2y^4}{(y^4-1)^2}g_i(y) \end{equation} where we defined dimensionless quantities \begin{equation} y=\frac{u}{u_h},\quad \nu=\frac{\omega}{u_h}, \end{equation} and where primes now denote derivatives with respect to $y$. The wave equation for the modes (\ref{eomdeboer}) is independent of the magnetic field and is exactly the same as the equation considered in \cite{brownian} for $d=4$. We need to find the solutions of the equation (\ref{eomdeboer}). In general, it is not possible to do this analytically for arbitrary frequencies $\nu$, and hence we employ a low-frequency approximation $\nu\ll 1$ by means of the so-called matching technique. Here, we only write down the final result, relegating the details of the computation to appendix \ref{appsols}. The two solutions that correspond to outgoing and ingoing waves at the horizon behave asymptotically as \begin{equation}\label{solsec} g^{(\text{out/in})}(y)\sim \left(1\mp\frac{i}{8} \nu (\pi -\log(4))\right) \left(1+\frac{\nu^2}{2y^2}\right)\mp \frac{i\nu}{3y^3}+\mathcal{O}(1/y^4). \end{equation} We now consider the forced motion of our Brownian particle due to a fixed external magnetic field and a fluctuating electric field.
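As a sanity check on (\ref{solsec}) (a sketch with hand-computed derivatives, not part of the original text), one can verify numerically that the truncated ingoing solution satisfies the mode equation (\ref{eomdeboer}) up to the neglected orders at large $y$:

```python
import math

nu = 0.01
A = 1 + 1j * (nu / 8) * (math.pi - math.log(4))   # overall constant of g^(in)

def g(y):
    # Truncated ingoing solution: A (1 + nu^2/(2 y^2)) + i nu/(3 y^3)
    return A * (1 + nu**2 / (2 * y**2)) + 1j * nu / (3 * y**3)

def dg(y):
    return A * (-nu**2 / y**3) - 1j * nu / y**4

def d2g(y):
    return A * (3 * nu**2 / y**4) + 4j * nu / y**5

def residual(y):
    """Left-hand side of g'' + 4y^3/(y^4-1) g' + nu^2 y^4/(y^4-1)^2 g."""
    return (d2g(y) + 4 * y**3 / (y**4 - 1) * dg(y)
            + nu**2 * y**4 / (y**4 - 1)**2 * g(y))

for y in (5.0, 10.0, 20.0):
    # The residual is higher order, O(nu/y^9) and O(nu^2/y^8),
    # i.e. parametrically smaller than the individual terms.
    assert abs(residual(y)) < 1e-2 * abs(d2g(y))
```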
As mentioned in section \ref{holbrom}, this amounts to the addition of the boundary term (\ref{actem}) to the action (\ref{nambugoto}), which imposes a boundary condition of the form \begin{equation} \mathrm{\Pi}^u_i\big|_{\partial\Sigma}\equiv\frac{\partial\mathcal{L}}{\partial X'_i}\bigg|_{\partial\Sigma}=F_i. \end{equation} Here, $F_i=-(F_{it}+F_{ij}\dot{X}^j)$ is the usual Lorentz force and \begin{equation} F_{\mu\nu}\equiv\partial_\mu A_\nu-\partial_\nu A_\mu=\left( \begin{array}{cccc} 0 & E_1 & E_2 & E_3 \\ -E_1 & 0 & -B_3 & B_2 \\ -E_2 & B_3 & 0 & -B_1 \\ -E_3 & -B_2 & B_1 & 0 \\ \end{array} \right). \end{equation} Then, for a magnetic field pointing in the $x$-direction, $B_1=B$, and transverse electric fields $E_2=E_2(t)$ and $E_3=E_3(t)$ we find a set of boundary conditions that inevitably mix the fluctuations along the transverse directions: \begin{equation}\label{bcb} \frac{R^2}{2\pi}u^4fX'_2-B\dot{X}_3\bigg|_{u=u_m}=E_2,\quad \frac{R^2}{2\pi}u^4fX'_3+B\dot{X}_2\bigg|_{u=u_m}=E_3, \end{equation} where $u_m$ denotes the position of the flavor brane. Our goal is now to compute the thermal expectation value (or one-point function) of the momentum, and then extract the admittance. The general solution for $X_i$ is the sum of ingoing and outgoing modes at the horizon $X_i=A_i^{(\text{out})}X^{(\text{out})}+A_i^{(\text{in})}X^{(\text{in})}$, where $X^{(\text{out/in})}=e^{-i\omega t}g^{(\text{out/in})}$. In the semiclassical approximation, outgoing modes are always thermally excited because of Hawking radiation, while the ingoing modes can be arbitrary. However, because the radiation is random, the phase of $A_i^{(\text{out})}$ takes random values and, on average, $\langle A_i^{(\text{out})}\rangle=0$. Then, we can write $\langle X_i\rangle=\langle A_i^{(\text{in})}\rangle e^{-i\omega t}g^{(\text{in})}(u)$, where $g^{(\text{in})}(u)$ corresponds to the normalized solution given by (\ref{solsec}).
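As a quick check (with arbitrary illustrative numbers) that the matrix $F_{\mu\nu}$ above reproduces the familiar Lorentz force, one can compare $F_i=-(F_{it}+F_{ij}\dot{X}^j)$ with $\mathbf{E}+\mathbf{v}\times\mathbf{B}$ componentwise:

```python
# Arbitrary sample field and velocity components
E = [0.3, -1.1, 0.7]
B = [2.0, 0.5, -0.4]
v = [0.2, -0.6, 0.9]

# F_{mu nu} with index order (t, x1, x2, x3), as in the matrix above
F = [[0.0,   E[0],  E[1],  E[2]],
     [-E[0],  0.0, -B[2],  B[1]],
     [-E[1], B[2],   0.0, -B[0]],
     [-E[2], -B[1], B[0],   0.0]]

def lorentz_from_F(i):
    # spatial index i = 1, 2, 3;  F_i = -(F_{it} + F_{ij} v^j)
    return -(F[i][0] + sum(F[i][j] * v[j - 1] for j in (1, 2, 3)))

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

vxB = cross(v, B)
for i in (1, 2, 3):
    # Agrees with the standard Lorentz force E + v x B (unit charge)
    assert abs(lorentz_from_F(i) - (E[i - 1] + vxB[i - 1])) < 1e-12
```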
For the remaining part of this section we will denote $\langle A_i^{(\text{in})}\rangle=A_i$ and $g^{(\text{in})}=g$ for simplicity. In the Brownian motion literature, it is customary to work in a circular basis when an external magnetic field is included. Thus, we define \begin{equation} X_{\pm}\equiv X_2\pm i X_3=e^{-i\omega t}g_{\pm}(u)\quad\text{and}\quad E_{\pm}\equiv E_2\pm i E_3=e^{-i\omega t}K_{\pm}. \end{equation} In fact, the equations (\ref{bcb}) decouple in this basis. After some algebra we get \begin{equation}\label{bcb2} \frac{R^2}{2\pi}u^4fg'A_{\pm}\pm\omega B g A_{\pm}\bigg|_{u=u_m}=K_{\pm}, \end{equation} where $A_{\pm}=A_2\pm i A_3$. Inverting this relation we obtain \begin{equation} A_{\pm}=\frac{K_{\pm}}{\frac{R^2}{2\pi}u^4fg'\pm\omega B g}\bigg|_{u=u_m}, \end{equation} from which we can read the average position of the heavy quark, $\left\langle x_{\pm}(t)\right\rangle\equiv\left\langle X_{\pm}(t,u_m)\right\rangle=e^{-i\omega t}g(u_m)A_{\pm}$, \begin{equation} \left\langle x_{\pm}(t)\right\rangle=e^{-i\omega t}K_{\pm}\frac{g}{\frac{R^2}{2\pi}u^4fg'\pm\omega B g}\bigg|_{u=u_m}, \end{equation} and \begin{equation} \left\langle p_{\pm}(t)\right\rangle =E_{\pm}(t)\frac{-i\omega m g}{\frac{R^2}{2\pi}u^4fg'\pm\omega B g}\bigg|_{u=u_m}. \end{equation} This expression allows us to read off the admittance, \begin{equation}\label{muB} \mu_\pm(\omega) =\frac{-i\omega m g}{\frac{R^2}{2\pi}u^4fg'\pm\omega B g}\bigg|_{u=u_m}. \end{equation} In the zero frequency limit, and for a heavy quark $u_m\gg u_h$, we obtain \begin{equation} \mu_\pm(0)=\frac{2 m }{\pi\sqrt{\lambda} T^2\pm 2 i B}, \end{equation} from which we can infer \begin{equation} \gamma_o=\frac{\pi\sqrt{\lambda} T^2}{2m} \quad\text{and}\quad\omega_o=\pm\frac{B}{m}. \end{equation} As expected, the friction coefficient is not modified by the presence of the magnetic field, which is consistent with the fact that the magnetic field does not do work.
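The zero-frequency results above can be verified numerically (a sketch with illustrative parameter values): the inverse admittance $1/\mu_\pm(0)$ has real part $\gamma_o=\pi\sqrt{\lambda}T^2/2m$, independent of $B$, while the magnitude of its imaginary part gives the Larmor frequency $B/m$:

```python
import math

m, lam, T, B = 2.0, 9.0, 0.8, 1.5      # illustrative values (lam = 't Hooft coupling)

mu_plus = 2 * m / (math.pi * math.sqrt(lam) * T**2 + 2j * B)
mu_minus = 2 * m / (math.pi * math.sqrt(lam) * T**2 - 2j * B)

gamma_o = math.pi * math.sqrt(lam) * T**2 / (2 * m)
for mu in (mu_plus, mu_minus):
    inv = 1 / mu
    # Re(1/mu) gives the friction coefficient, unaffected by B ...
    assert abs(inv.real - gamma_o) < 1e-12
    # ... while |Im(1/mu)| gives the Larmor frequency B/m
    assert abs(abs(inv.imag) - B / m) < 1e-12

# The relaxation time is therefore also unaffected by the magnetic field
t_relax = 1 / gamma_o
assert abs(t_relax - 2 * m / (math.pi * math.sqrt(lam) * T**2)) < 1e-12
```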
Also, we find that \begin{equation} t_{\text{relax}}=\frac{1}{\gamma_o}=\frac{2m}{\pi \sqrt{\lambda} T^2}. \end{equation} This time scale dominates the late-time decay of the one-point function of $\mathbf{p}(t)$, for a quark that traverses the plasma, in agreement with previous works \cite{Herzog:2006gh,Gubser:2006bz}. The late-time\footnote{There should be a smooth crossover with the early-time regime or high frequency limit. See for example \cite{early}.} behavior is dominated by the low frequency limit of the generalized Langevin equation, in which case (\ref{genlangeB}) reduces to its local progenitor (\ref{langeB}). From (\ref{momentum}) we can thus infer that \begin{equation}\label{exppb} \left\langle p_\pm(t)\right\rangle\sim e^{-\gamma_o t}e^{\pm i\omega_o t}. \end{equation} This is exactly what is expected for the Brownian motion of a charged particle in the presence of a magnetic field. To obtain the thermal averages of $p_2(t)$ and $p_3(t)$ we can simply take the real and imaginary parts of (\ref{exppb}). \subsection{Diffusion and the fluctuation-dissipation theorem\label{hawk1}} The purpose of this section is to compute holographically the displacement squared of the heavy quark and to extract from it the diffusion constant $D$. The upshot of the computation is summarized in equation (\ref{disquared}), which is valid for an arbitrary background. However, the functions $f^{(\pm)}_\omega(u)$ as well as the details for the computation of the constant $B$ vary according to each situation. Before proceeding with the direct calculation of this quantity, it is useful to understand the boundary conditions we want to impose on the fields. Although we are interested in the world-sheet theory of the probe string, in the static gauge the induced metric on the string inherits the geometric characteristics of the spacetime background. This means that the usual rules for correlators in the gauge/gravity correspondence apply in our case.
First, we need to impose a UV cutoff in order to have a quark with finite mass. The natural place to impose the cutoff is given by the location of the flavor brane $u_m$, which can be related to the mass $m$ of the quark through (\ref{mass}). The mass of the quark is chosen to be the dominant scale of the system, so usually one would push the cutoff up to the boundary $u_m\rightarrow\infty$ and choose normalizable boundary conditions for the modes. However, in our case that would correspond to an infinitely massive quark and there would be no Brownian motion. A Neumann boundary condition at $u=u_m$ also does not work in our case because we would go back to the case of free Brownian motion. Instead, we use a mixed boundary condition in which the external magnetic field is on but the fluctuating electric fields are turned off. According to (\ref{bcb}), this is \begin{equation} \frac{R^2}{2\pi}u^4fX'_2-B\dot{X}_3\bigg|_{u=u_m}=0,\quad \frac{R^2}{2\pi}u^4fX'_3+B\dot{X}_2\bigg|_{u=u_m}=0, \end{equation} or in terms of the modes $X_\pm$, \begin{equation}\label{bcbc} \frac{R^2}{2\pi}u^4fX'_{\pm}\pm i B \dot{X}_\pm\bigg|_{u=u_m}=0. \end{equation} Recall that $X_\pm$ can be expressed as the sum of outgoing modes and ingoing modes found previously, with arbitrary coefficients. Following the convention introduced in (\ref{defab}), let us write \begin{equation} X_\pm(t,u)=A_\pm\left[g^{(\text{out})}(u)+B_\pm\, g^{(\text{in})}(u)\right]e^{-i\omega t}. \end{equation} From (\ref{bcbc}) it follows then that \begin{equation} B_\pm=-\frac{\frac{R^2}{2\pi}u^4fg^{(\text{out})}\,\!' \pm \omega B g^{(\text{out})}}{\frac{R^2}{2\pi}u^4fg^{(\text{in})}\,\!'\pm \omega B g^{(\text{in})}}\bigg|_{u=u_m}\equiv e^{i\theta_\pm}. \end{equation} The fact that $B_\pm$ is a pure phase is self-evident, given that $g^{(\text{out})}=g^{(\text{in})\,\!^*}$.
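The pure-phase property can be illustrated numerically (arbitrary sample values below): with $g^{(\text{out})}=(g^{(\text{in})})^*$ and real coefficients, the numerator of $B_\pm$ is the complex conjugate of its denominator, so $|B_\pm|=1$ automatically:

```python
# Arbitrary sample values: g_in and g_in' at u = u_m, plus real coefficients
g_in, dg_in = 0.8 - 0.3j, -0.2 + 1.1j
c1, c2 = 2.7, 0.9            # stand-ins for (R^2/2pi) u^4 f and omega B at u_m

g_out, dg_out = g_in.conjugate(), dg_in.conjugate()

for sign in (+1, -1):
    B_pm = -(c1 * dg_out + sign * c2 * g_out) / (c1 * dg_in + sign * c2 * g_in)
    # numerator = conjugate of denominator  =>  |B_pm| = 1 (a pure phase)
    assert abs(abs(B_pm) - 1.0) < 1e-12
```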
To leading order in frequency one finds that \begin{equation} B_\pm=\frac{\pi R^2 T^2 (y_m^4-1)\mp 2i B y_m^4}{\pi R^2 T^2 (y_m^4-1)\pm 2 i B y_m^4}+\mathcal{O}(\nu)=\left(\frac{\pi R^2 T^2 \mp 2i B}{\pi R^2 T^2 \pm 2 i B}+\mathcal{O}(1/y_m^4)\right)+\mathcal{O}(\nu), \end{equation} from which one gets \begin{equation} \left|g^{(\text{out})}+B_\pm g^{(\text{in})}\right|^2=\frac{4 \pi ^2 R^4 T^4}{4 B^2+\pi ^2 R^4 T^4}+\mathcal{O}(\nu). \end{equation} The late-time behavior of the displacement squared can then be inferred from the low-frequency limit of (\ref{disquared}), i.e., \begin{equation} s^2_{\pm}(t)=\frac{16 \sqrt{\lambda} T^3}{4 B^2+\pi^2 \lambda T^4}\int_{0}^\infty \!\!d\omega\frac{\sin^2\left(\frac{\omega t}{2}\right)}{\omega^2}\sim\frac{4 \pi \sqrt{\lambda} T^3}{4 B^2+\pi^2 \lambda T^4}t. \end{equation} Thus, as expected, we find that the diffusion constant defined as in (\ref{difct}) is given by \begin{equation} D=\frac{2 \pi \sqrt{\lambda} T^3}{4 B^2+\pi^2 \lambda T^4}=\frac{T}{m}\frac{\gamma_o}{\gamma_o^2+\omega_o^2}. \end{equation} Finally, in order to give an explicit check of the fluctuation-dissipation theorem (\ref{flucdis}) we compute the random force autocorrelation appearing in (\ref{kapa}) to extract the coefficient $\kappa_o$. From (\ref{ppcorr}) we can evaluate the two-point correlator of the momentum $p_\pm$ as follows: \begin{eqnarray} \left\langle:\! p_\pm(t)p_\pm(0)\! :\right\rangle&=&\int_0^\infty{d\omega\over 2\pi}\frac{2m^2\omega^2|A|^2\cos(\omega t)}{e^{\beta\omega}-1}\left|g^{(\text{out})}(u_m)+B_\pm \, g^{(\text{in})}(u_m)\right|^2,\nonumber\\ &=&\int_{-\infty}^\infty{d\omega\over 2\pi}\frac{m^2}{\sqrt{\lambda}\pi T}\frac{\beta|\omega|}{e^{\beta|\omega|}-1}\left|g^{(\text{out})}(u_m)+B_\pm \, g^{(\text{in})}(u_m)\right|^2e^{-i\omega t}.
\end{eqnarray} Therefore, \begin{equation} I_{p_\pm}(\omega)=\frac{m^2}{\sqrt{\lambda}\pi T}\frac{\beta|\omega|}{e^{\beta|\omega|}-1}\left|g^{(\text{out})}(u_m)+B_\pm \, g^{(\text{in})}(u_m)\right|^2, \end{equation} and combining this with (\ref{muB}) one finds that \begin{equation} I_{R_\pm}(\omega)=\pi \sqrt{\lambda } T^3+{\mathcal{O}}(\omega). \end{equation} This gives us precisely a coefficient $\kappa_o=\pi \sqrt{\lambda } T^3$ which agrees with (\ref{flucdis}), providing an explicit check of the fluctuation-dissipation theorem in the presence of a magnetic field. \section{Brownian Motion in NCSYM\label{sec4}} We now turn our attention to the study of Brownian motion in the non-commutative setup. The main difference here is that the closed string sector is modified by the inclusion of an antisymmetric $B$-field. In this setup, and after the appropriate decoupling limit, the effective field theory is described by a gauge theory living in a non-commutative space. It is interesting to study the similarities and differences with our previous computation, in which the magnetic field was introduced in the open string sector. \subsection{Langevin dynamics in the non-commutative plasma\label{BroM2}} To start, let us postulate a generalized Langevin equation of a particle in a non-commutative thermal bath: \begin{equation}\label{langenc} \dot{p}_i(t)=-\int_{-\infty}^t dt'\,\mathrm{\Gamma}_{ij}(t-t')p_j(t')+R_i(t)+E_i(t), \end{equation} where \begin{equation} \langle R_i(t)\rangle=0,\qquad\langle R_i(t)R_j(t')\rangle=\kappa_{ij}(t-t'). \end{equation} In this case, the $B$-field does not appear explicitly in the Langevin equation, though its effect should somehow be present through the coefficients $\mathrm{\Gamma}$ and $\kappa$. We propose that in this case $\mathrm{\Gamma}$ is a matrix that encodes the effects of the non-commutativity.
In particular, for non-commutativity in the $(x^2,x^3)$-plane, we propose to write \begin{equation}\label{matrix} \mathrm{\Gamma}_{ij}(t)=\left( \begin{array}{ccc} \gamma(t) & 0 & 0 \\ 0 & \gamma(t) & -\mathrm{\Omega}(t) \\ 0 & \mathrm{\Omega}(t) & \gamma(t) \\ \end{array} \right). \end{equation} In the low frequency limit, the above equation becomes local in time, allowing us to write \begin{equation} \dot{p}_i(t)=-\mathrm{\Gamma}_{ij}p_j(t)+R_i(t)+E_i(t), \end{equation} where $\gamma(t-t')=\gamma_o\delta(t-t')$, $\mathrm{\Omega}(t-t')=\mathrm{\Omega}_o\delta(t-t')$ and $\kappa(t-t')=\kappa_o\delta(t-t')$. Note that this has exactly the same structure as (\ref{langeB2}), with $\gamma_o$ being the usual friction coefficient and $\mathrm{\Omega}_o$ playing the role of the Larmor frequency. Furthermore, if the fluctuation-dissipation theorem applies, we expect that the relation (\ref{flucdis}) also holds for the present configuration. Solutions (\ref{momentum}) and (\ref{momentumx}) hold in the low frequency limit. This means that the two-point correlators (\ref{twopnt1})-(\ref{twopnt3}), as well as the diffusive behavior of the displacement squared (\ref{difct}), take exactly the same form, but now with new coefficients $\gamma_o$, $\mathrm{\Omega}_o$ and $\kappa_o$ which might depend on the non-commutative parameter $\theta$. The fluctuations along the $x^1$-direction are unaffected by the presence of the non-commutativity. We thus restrict our attention to the fluctuations on the Moyal plane. These fluctuations can be decoupled by working in the circular basis $p_{\pm}=p_2\pm i p_3$, $R_{\pm}=R_2\pm i R_3$ and $E_{\pm}=E_2\pm i E_3$. The eigenvalues of (\ref{matrix}) are found to be $\lambda_{\pm}=\gamma\mp i\mathrm{\Omega}$ so we can rewrite (\ref{langenc}) as \begin{equation} \dot{p}_{\pm}(t)=-\int_{-\infty}^t dt'\,\lambda_\pm(t-t') p_{\pm}(t')+R_{\pm}(t)+E_{\pm}(t).
\end{equation} In the frequency domain this equation can be written as \begin{equation}\label{langefournc} p_\pm(\omega)={R_\pm(\omega)+E_\pm(\omega)\over \lambda_\pm[\omega]-i\omega}, \end{equation} and taking the statistical average we obtain \begin{equation} \left\langle p_{\pm}(\omega)\right\rangle= \mu_\pm(\omega)E_\pm(\omega),\qquad\text{with}\qquad \mu_\pm(\omega)\equiv{1\over \lambda_{\pm}[\omega]-i\omega}. \end{equation} Then, by measuring the response $\left\langle p_\pm(\omega)\right\rangle$ due to an external force we can determine the admittance $\mu_\pm(\omega)$ and thereby $\lambda_\pm[\omega]$. In particular, if the external force is taken to be \begin{equation} E_\pm(t)=e^{-i\omega t}K_{\pm}, \end{equation} then \begin{equation} \left\langle p_\pm(t)\right\rangle=\mu_\pm(\omega) e^{-i\omega t}K_\pm=\mu_\pm(\omega) E_\pm(t). \end{equation} With this at hand, one can extract $\gamma$ and $\mathrm{\Omega}$ from the real and imaginary parts of $1/\mu_\pm$, respectively. Note also that in the zero frequency limit \begin{equation} \mu_\pm(0)\equiv\frac{1}{\lambda_\pm[0]}={1\over \gamma_o\mp i\mathrm{\Omega}_o}. \end{equation} The analysis of the power spectrum and two-point functions is the same as the one done in section \ref{BroM}. In particular, the equation (\ref{kapa}) relating the random force correlations with the momentum correlations should apply in this case, which will be useful to check the validity of the fluctuation-dissipation theorem for the current setup. \subsection{Bulk dynamics and the drag coefficient\label{matchingB2}} For the action given by (\ref{action}) we can derive the following equations of motion: \begin{eqnarray} 0&=&\frac{f}{h}\partial_u\left(u^4fhX_2'-a^2u^4h\dot{X}_3\right)-\partial_t\left(\dot{X}_2-a^2u^4fX_3'\right),\\ 0&=&\frac{f}{h}\partial_u\left(u^4fhX_3'+a^2u^4h\dot{X}_2\right)-\partial_t\left(\dot{X}_3+a^2u^4fX_2'\right).
\end{eqnarray} The term with mixed derivatives cancels out in both cases and one ends up with \begin{eqnarray} 0&=&\frac{f}{h}\partial_u\left(u^4fhX_2'\right)-4a^2u^3fh\dot{X}_3-\ddot{X}_2,\\ 0&=&\frac{f}{h}\partial_u\left(u^4fhX_3'\right)+4a^2u^3fh\dot{X}_2-\ddot{X}_3. \end{eqnarray} These are two coupled partial differential equations. We now proceed by expanding $X_i$ in modes by setting \begin{equation} X_2(t,u)=e^{-i\omega t}g_2(u),\quad\text{and}\quad X_3(t,u)=e^{i(\varphi-\omega t)}g_3(u). \end{equation} We have introduced a phase difference for reasons that will become clear below. The equations of motion for the modes are \begin{eqnarray} 0&=&\frac{f}{h}\partial_u\left(u^4fhg_2'\right)+4i\omega a^2u^3fh g_3e^{i\varphi}+\omega^2g_2,\\ 0&=&\frac{f}{h}\partial_u\left(u^4fhg_3'\right)-4i\omega a^2u^3fh g_2e^{-i\varphi}+\omega^2g_3. \end{eqnarray} If we choose $e^{i\varphi}=\pm i$, or equivalently $\varphi=\pm\pi/2$, the two equations of motion turn out to be the same. This motivates us to consider the linear combinations $X_{\pm}=X_2\pm iX_3=e^{-i\omega t}g_{\pm}(u)$, where \begin{equation} g_{2}=\frac{g_{+}+g_{-}}{2}\quad\text{and}\quad g_{3}=\frac{g_{+}-g_{-}}{2i}. \end{equation} Not surprisingly, this is completely equivalent to the circular basis introduced in section \ref{secSYM} for the case of Brownian motion in a magnetic field. In this basis the equations of motion decouple, and can be rewritten as: \begin{equation}\label{eom} 0=g_{\pm}''(y)+\frac{4(1+b^4)y^3}{(y^4-1)(1+b^4 y^4)}g_{\pm}'(y)+\left(\frac{\nu^2 y^4}{(y^4-1)^2}\pm\frac{4\nu b^2y^3}{(y^4-1)(1+b^4y^4)}\right)g_{\pm}(y) \end{equation} where we have defined dimensionless quantities \begin{equation} y\equiv\frac{u}{u_h},\quad \nu \equiv \frac{\omega}{u_h},\quad b \equiv au_h, \end{equation} and the primes denote derivatives with respect to $y$. The normal modes $X_{\pm}$ correspond to fluctuations with circular polarization, rotating clockwise or counterclockwise, respectively.
Explicit solutions to the above equations can be found in appendix \ref{appsol2}. The final expressions for outgoing and ingoing modes are \begin{equation}\label{solrealo} g^{(\text{out})}_{\pm}(y)\sim \frac{i b^2 \nu y}{b^2\mp i}\left(1-\frac{\nu^2}{2y^2}\right)+ \left(1-\frac{i b^2 \nu}{b^2\mp i}-\frac{1}{8} i \nu (\pi -\log(4))\right) \left(1-\frac{\nu^2}{6y^2}\right)+\mathcal{O}(1/y^3), \end{equation} and \begin{equation}\label{solreal} g^{(\text{in})}_{\pm}(y)\sim -\frac{i b^2 \nu y}{b^2\pm i}\left(1-\frac{\nu^2}{2y^2}\right)+ \left(1+\frac{i b^2 \nu}{b^2\pm i}+\frac{1}{8} i \nu (\pi -\log(4))\right) \left(1-\frac{\nu^2}{6y^2}\right)+\mathcal{O}(1/y^3), \end{equation} respectively. We now exert an external fluctuating force $\vec{E}(t)$ on the string endpoint by turning on an electric field $F_{ti}=E_i$ on the flavor brane. Variation of the whole action implies the standard dynamics for all interior points of the string, but now with boundary condition \begin{equation} \mathrm{\Pi}_i^u\big|_{\partial\Sigma}\equiv \frac{\partial \mathcal{L}}{\partial X_i'}=E_i, \end{equation} where $E_i$ is the external force. From (\ref{action}), it follows that \begin{equation}\label{bcnc1} \frac{R^2}{2\pi}\left(u^4fhX_2'-a^2u^4h\dot{X}_3\right)\bigg|_{u=u_m}=E_2,\quad \frac{R^2}{2\pi}\left(u^4fhX_3'+a^2u^4h\dot{X}_2\right)\bigg|_{u=u_m}=E_3, \end{equation} where $u_m$ denotes the position of the D7-brane. Our goal is to find the admittance of the system for which we need the one-point function of the momentum. Again, it is convenient to work in the circular basis, i.e. $X_{\pm}=X_2\pm iX_3$ and $E_{\pm}=E_2\pm iE_3$. The general solution for $X_{\pm}$ is the sum of outgoing and ingoing modes $X_{\pm}=A_{\pm}^{(\text{out})}X_{\pm}^{(\text{out})}+A_{\pm}^{(\text{in})}X_{\pm}^{(\text{in})}$. However, as discussed before, the phase of $A_{\pm}^{(\text{out})}$ takes random values and on average $\langle A_{\pm}^{(\text{out})}\rangle=0$. 
Then, we can write $\langle X_{\pm}\rangle=\langle A_{\pm}^{(\text{in})}\rangle e^{-i\omega t}g^{(\text{in})}_{\pm}(u)$ and $E_{\pm}=e^{-i\omega t}K_{\pm}$ but, for simplicity, we will denote $\langle A_\pm^{(\text{in})}\rangle=A_\pm$ and $g^{(\text{in})}_\pm=g_\pm$ in the remaining part of this section. In this basis the boundary conditions decouple: \begin{equation}\label{bcnc} \frac{R^2}{2\pi}\left(u^4fh A_{\pm} g_{\pm}'(u)\pm \omega a^2u^4h A_{\pm} g_{\pm}(u)\right)\bigg|_{u=u_m}=K_{\pm}, \end{equation} or, in terms of the dimensionless quantities $y$, $\nu$ and $b$ as defined previously, \begin{equation} A_{\pm}=\frac{2\pi K_{\pm}}{R^2u_h^3}\frac{1+b^4y^4}{(y^4-1) g_{\pm}'(y)\pm \nu b^2y^4g_{\pm}(y)}\bigg|_{y=y_m}. \end{equation} The average position of the heavy quark, $\left\langle x_{\pm}(t)\right\rangle\equiv\left\langle X_{\pm}(t,y_m)\right\rangle$ is then \begin{equation} \left\langle x_{\pm}(t)\right\rangle =\frac{2\pi}{R^2u_h^3}\frac{(1+b^4y^4)g_{\pm}(y)}{(y^4-1)g_{\pm}'(y)\pm \nu b^2y^4 g_{\pm}(y)}\bigg|_{y=y_m}K_{\pm}e^{-i\omega t}, \end{equation} and \begin{equation} \left\langle p_{\pm}(t)\right\rangle =\frac{-2\pi i \omega m}{R^2u_h^3}\frac{(1+b^4y^4)g_{\pm}(y)}{(y^4-1)g_{\pm}'(y)\pm \nu b^2y^4 g_{\pm}(y)}\bigg|_{y=y_m}E_{\pm}(t), \end{equation} from which we can read the admittance, \begin{equation}\label{adnc} \mu_\pm(\omega) =\frac{-2\pi i \omega m}{R^2u_h^3}\frac{(1+b^4y^4)g_{\pm}(y)}{(y^4-1)g_{\pm}'(y)\pm \nu b^2y^4 g_{\pm}(y)}\bigg|_{y=y_m}. \end{equation} In the zero frequency limit and for large mass\footnote{In the large mass expansion, we set the non-commutative parameter to a fixed value and then take the limit $y_m\rightarrow\infty$.} we get, \begin{equation} \mu_\pm(0)=\frac{2 m \left(1+b^4 y_m^4\right)}{b^2 R^2 \pi T^2}\frac{1\mp i b^2}{b^2 y_m^4\pm i}\sim\frac{2m}{R^2 \pi T^2}(1\mp i b^2)+{\mathcal{O}}(1/y_m^4).
\end{equation} As we can see, the real part coincides with the expected value for the commutative case, but now there is an additional imaginary part (which is independent of the temperature, given that $b=a u_h=a\pi T$). For a quark that traverses the plasma one can read off the following evolution at late times: \begin{equation}\label{exppnc} \left\langle p_\pm(t)\right\rangle\sim e^{-\gamma_o t}e^{\pm i\mathrm{\Omega}_o t}, \end{equation} where \begin{equation} \gamma_o=\frac{\pi\sqrt{\lambda} T^2}{2m(1+\pi^4\lambda\theta^2T^4)}\quad\text{and}\quad\mathrm{\Omega}_o=\frac{ \pi^3 \lambda\theta T^4}{2m(1+\pi^4\lambda\theta^2T^4)}. \end{equation} The friction coefficient is modified by the presence of the non-commutativity, and in this case \begin{equation} t_{\text{relax}}=\frac{1}{\gamma_o}=\frac{2m(1+\pi^4\lambda\theta^2T^4)}{\pi\sqrt{\lambda} T^2}. \end{equation} This agrees with a previous computation of the drag force in the Maldacena-Russo background \cite{Matsuo:2006ws,Roy:2009sw} (in the non-relativistic regime) and implies that the non-commutative plasma is less viscous in comparison to the commutative one. \subsection{Diffusion and the fluctuation-dissipation theorem\label{hawk2}} Now we turn to the computation of the displacement squared in the Maldacena-Russo background. First of all, we need to understand the boundary conditions we want to impose on the fields. In this case, the effect of non-commutativity is already present in the background itself, so the free Brownian motion is realized by imposing a Neumann boundary condition at $u=u_m$. According to (\ref{bcnc1}), this means \begin{equation} \frac{R^2}{2\pi}\left(u^4fhX_2'-a^2u^4h\dot{X}_3\right)\bigg|_{u=u_m}=0,\quad\frac{R^2}{2\pi}\left(u^4fhX_3'+a^2u^4h\dot{X}_2\right)\bigg|_{u=u_m}=0, \end{equation} or simply \begin{equation}\label{bcnc2} f X_\pm' \pm ia^2\dot{X}_\pm\bigg|_{u=u_m}=0, \end{equation} where again, we defined $X_\pm=X_2\pm i X_3$.
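The drag coefficients quoted above follow from inverting $\mu_\pm(0)$. A numerical sketch (illustrative values; the identification $a^2=\sqrt{\lambda}\,\theta$, so that $b^4=\pi^4\lambda\theta^2T^4$, is an assumption inferred here from matching the quoted formulas, with $b=a\pi T$):

```python
import math

# Illustrative values; a^2 = sqrt(lam) * theta is the assumed identification
m, lam, T, theta = 1.5, 16.0, 0.7, 0.3
a2 = math.sqrt(lam) * theta
b2 = a2 * math.pi**2 * T**2              # b^2 = (a u_h)^2 with u_h = pi T

# 1/mu_pm(0) = (pi sqrt(lam) T^2 / 2m) (1 +- i b^2)/(1 + b^4), from mu_pm(0)
inv_mu = (math.pi * math.sqrt(lam) * T**2 / (2 * m)) * (1 + 1j * b2) / (1 + b2**2)

denom = 1 + math.pi**4 * lam * theta**2 * T**4
gamma_o = math.pi * math.sqrt(lam) * T**2 / (2 * m * denom)
Omega_o = math.pi**3 * lam * theta * T**4 / (2 * m * denom)

# Real and imaginary parts reproduce the quoted friction and Larmor-like frequency
assert abs(inv_mu.real - gamma_o) < 1e-12
assert abs(inv_mu.imag - Omega_o) < 1e-12

# Relaxation time grows with the non-commutative parameter theta
t_relax = 1 / gamma_o
assert abs(t_relax - 2 * m * denom / (math.pi * math.sqrt(lam) * T**2)) < 1e-9
```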
The general solution is then written as a linear combination of outgoing and ingoing modes, \begin{equation} X_\pm(t,u)=A_\pm\left[g_\pm^{(\text{out})}(u)+B_\pm\, g_\pm^{(\text{in})}(u)\right]e^{-i\omega t}, \end{equation} and from the boundary condition (\ref{bcnc2}) we get that \begin{equation} B_\pm=-\frac{f g_\pm^{(\text{out})}\,\!' \pm \omega a^2 g_\pm^{(\text{out})}}{f g_\pm^{(\text{in})}\,\!'\pm \omega a^2 g_\pm^{(\text{in})}}\bigg|_{u=u_m}\equiv e^{i\theta_\pm}. \end{equation} Here, $B_\pm$ is also a pure phase given that, in this case, it still holds that $g_\pm^{(\text{out})}=g_\pm^{(\text{in})\,\!^*}$. To leading order in frequency, we obtain \begin{equation} B_\pm=-\frac{(1\mp i b^2)(1\pm i b^2 y_m^4)}{(1\pm i b^2)(1\mp i b^2 y_m^4)}+\mathcal{O}(\nu). \end{equation} With this at hand, we can also compute \begin{equation} \left|g_\pm^{(\text{out})}+B_\pm g_\pm^{(\text{in})}\right|^2=\left(\frac{4}{(1+b^4)}+\mathcal{O}(1/y_m^4)\right)+\mathcal{O}(\nu), \end{equation} and finally, by taking the low-frequency limit of (\ref{disquared}) we compute the late-time behavior of the displacement squared: \begin{equation} s^2_{\pm}(t)=\frac{16 }{\pi^2 \sqrt{\lambda} T}\int_{0}^\infty \!\!d\omega\frac{\sin^2\left(\frac{\omega t}{2}\right)}{\omega^2}\sim\frac{4}{\pi \sqrt{\lambda} T}t. \end{equation} Note that the factor that depends on $b$ exactly cancels with the $h(u_h)$ term that appears in the normalization constant $A$. We find that, surprisingly, the diffusion constant is not affected by non-commutativity, but its relation to $\gamma_o$ and $\mathrm{\Omega}_o$ is still the same: \begin{equation} D=\frac{2}{\pi \sqrt{\lambda} T}=\frac{T}{m}\frac{\gamma_o}{\gamma_o^2+\mathrm{\Omega}_o^2}. \end{equation} This suggests that the fluctuation-dissipation theorem holds even in the presence of non-commutativity. To check this explicitly, we compute the random force correlator in order to extract the coefficient $\kappa_o$.
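Before doing so, we note, as a quick algebraic cross-check of the diffusion relation above (a one-line computation using the expressions for $\gamma_o$ and $\mathrm{\Omega}_o$ derived in the previous subsection), that the $\theta$-dependent factors cancel between numerator and denominator:
\begin{equation}
\gamma_o^2+\mathrm{\Omega}_o^2=\frac{\pi^2\lambda T^4}{4m^2\left(1+\pi^4\lambda\theta^2T^4\right)},
\qquad\text{so that}\qquad
\frac{T}{m}\,\frac{\gamma_o}{\gamma_o^2+\mathrm{\Omega}_o^2}=\frac{2}{\pi\sqrt{\lambda}\,T}=D.
\end{equation}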
From (\ref{ppcorr}) it follows that \begin{equation} \left\langle:\! p_\pm(t)p_\pm(0)\! :\right\rangle=\int_{-\infty}^\infty{d\omega\over 2\pi}\frac{m^2(1+b^4)}{\sqrt{\lambda}\pi T}\frac{\beta|\omega|}{e^{\beta|\omega|}-1}\left|g^{(\text{out})}(u_m)+B_\pm \, g^{(\text{in})}(u_m)\right|^2e^{-i\omega t}, \end{equation} and hence \begin{equation} I_{p_\pm}(\omega)=\frac{m^2(1+b^4)}{\sqrt{\lambda}\pi T}\frac{\beta|\omega|}{e^{\beta|\omega|}-1}\left|g^{(\text{out})}(u_m)+B_\pm \, g^{(\text{in})}(u_m)\right|^2. \end{equation} At leading order, we find through (\ref{kapa}) that \begin{equation} \kappa_o=\frac{\pi \sqrt{\lambda } T^3}{1+\pi^4\lambda\theta^2T^4}. \end{equation} This agrees with (\ref{flucdis}), thus providing an explicit check of the fluctuation-dissipation theorem for the non-commutative plasma. \section{Discussion\label{conclusions}} In this paper we carried out an analytical study of the dynamics of a heavy quark in two strongly-coupled systems at finite temperature: SYM in the presence of a magnetic field and NCSYM. The former was realized by studying the fluctuations of a string living in an AdS black hole background and turning on a gauge field in the open string sector. The latter was achieved by replacing the background with one that incorporates the effects of non-commutativity through the introduction of an antisymmetric $B$-field in the closed string sector. For both systems, we found that the Langevin equation that describes the dynamics of such a quark has matrix coefficients, and this fact induces correlations along the relevant directions. This is in complete agreement with the classical theory of Brownian motion in a magnetic field \cite{BM1,BM2}. We then displayed the basic properties of these equations by computing holographically the admittance and the random force autocorrelator, and we showed that these two quantities are related through the usual fluctuation-dissipation theorem.
The existence of such a theorem is due to the fact that, at the microscopic level, friction and random forces have the same origin, i.e., interactions with the degrees of freedom of the thermal bath. Finally, we studied the diffusion of the quark in both systems and we showed that, although the non-commutative plasma is less viscous, the late-time behavior of the displacement squared is unaffected by the non-commutativity. As explained in the introduction, one of the main motivations that led to this work was to establish whether or not the fast thermalization found in \cite{Edalati:2012jj} holds in more general situations. An important difference between the approach of this paper and that of \cite{Edalati:2012jj} is that here we studied the non-commutative plasma with a local probe. In the previous work, on the other hand, we considered composite non-local operators that are obtained by smearing the usual gauge covariant operators over open Wilson lines\footnote{In non-commutative field theories, this modification makes the operators gauge-invariant. In fact, this class of operators is known to couple to the linearized supergravity fields \cite{Das:2000ur,Liu:2001ps}.}. The fast thermalization and large decay rates of the modes are then possibly related to the non-local character of the probes\footnote{It is well known that the presence of the open Wilson lines dominates the UV behavior of the two (and higher) point functions of the gauge invariant operators \cite{Gross:2000ba}.}. It would be interesting to further explore this question by probing the theory with probes of different `size' and to study the associated timescales for the approach to thermal equilibrium. Two interesting possibilities to consider are Wilson loops and entanglement entropy. It is important to emphasize that all of our computations were performed in the low frequency limit of the theory, in which the analytical computations were under control.
Going beyond the hydrodynamical regime might also offer new insights, but it requires a numerical approach. For example, in \cite{Brattan:2012uy} it was shown that a large class of holographic quantum liquids exhibit novel collective excitations that appear due to the presence of a magnetic field. At high frequency, the dominant peak in the spectral function is associated with a sound mode similar to the zero-sound mode in the collisionless regime of a Landau Fermi liquid. The study of Brownian motion within this regime is beyond the scope of this paper, but is left for future work. In conclusion, the results obtained in this paper shed additional light on the thermal nature of non-commutative gauge theories and suggest future directions of research. The present study constitutes yet another illustration of the usefulness of the gauge/gravity correspondence. \section*{Acknowledgements} This material is based upon work supported by the National Science Foundation under Grant No. PHY-0969020 and by the Texas Cosmology Center. W.T.G. is also supported by a University of Texas fellowship. We are grateful to M. Shigemori for a clarification about the overall normalization of the solutions and for pointing out a useful reference.
\section{Introduction}\label{Intro} In recent years, with the rapid technological progress in the production of micro- and nano-channels, the understanding of fluid flow on the nanoscale\cite{Drake,Meller} has become crucial for modern nanotechnology (such as the ``lab on a chip'' and related microfluidic devices) as well as for various applications of porous materials, flow in biomembranes, etc. A key problem is the description of fluid flow in narrow channels with wettable walls. Such channels are ubiquitous in cells and living matter but have also been successfully produced from synthetic materials in recent years\cite{Alvine}. Thus, planar nanochannels, fabricated by silicon-based technology, can provide an attractive configuration for fundamental studies like filling kinetics\cite{Haneveld} and hydrodynamics in confinement, and for molecular separation processes in biology\cite{Han}. Indeed, besides practical applications, microfluidics also raises a challenge to basic research, since the continuum description of fluid flow is called into question whenever the discreteness of matter comes into play, that is, at length scales comparable to the molecular size. To gain insight into the transport mechanisms involved in fluid flows, many researchers have studied the problem using a variety of computer simulation methods, most notably Computational Fluid Dynamics (CFD), Lattice-Boltzmann Equations (LBE), and Molecular Dynamics (MD) methods. The classical continuum theory based on Navier-Stokes (NS) equations assumes that state variables do not vary appreciably on a length scale comparable to the molecular free path. This conjecture has been challenged by both experiment\cite{Nagy} and the earliest MD computer simulations\cite{Cieplak}, which indicate the existence of significant density fluctuations of the liquid normal to the solid wall.
Against expectations, however, some MD studies\cite{Bitsanis,Evans} on Poiseuille flow demonstrated that the classical NS description may be used for modeling capillary flow in channels with diameters of several molecular sizes and greater, while this has been challenged by another study\cite{Travis}. To these controversial results one should add the data produced by the LBE method\cite{shanchen}, which has gained increased prominence in recent years due to its efficiency and proximity to the basic assumptions of the NS constitutive equations. Results from the LBE approach indicate that there are several microfluidic situations in which molecular details, although non-negligible, can still be given a mesoscopic, rather than fully atomistic, representation, without affecting the basic physics \cite{SBRAG, yeomans1, yeomans2,karlin,dupin,chibbaro,Fabiana}. On general grounds, this can be expected to be the case whenever molecular fluctuations do not play any major role. However, as the physics of micro/nanoflows progresses towards increasingly demanding standards, qualitative expectations need to be complemented and possibly tested against quantitative assessments. In view of the diversity of methods and the plethora of controversial results from the computational modeling of flow in the sub-micrometer range, a test of the adequacy and reliability of the principal approaches is highly warranted. In the present investigation we perform such a test by comparing the results from three of the most broadly used methods - CFD, LBE, and MD - focusing on a generic case, the capillary filling of a wettable narrow channel by spontaneous imbibition of a simple fluid.
The aim of this comparative study is as follows: (i) we test the reliability of the three simulation methods with respect to the capillary filling of a wettable nanochannel, by comparing the results against the Lucas-Washburn (LW) theoretical prediction; (ii) we analyse the effect of the presence of an isolated corrugation on the flow dynamics and test the numerical results against the theoretical Concus-Finn criterion; (iii) the direct comparison between fluctuating (MD) and non-fluctuating (CFD and LB) approaches makes it possible to highlight the role of thermal fluctuations in the universality of the CF criterion. \section{Theoretical background} In a horizontal capillary under steady-state conditions (i.e., in the absence of gravity and transient inertial effects), the capillary imbibition pressure is balanced by the viscous drag of the liquid. Simple analysis of this process leads to the Lucas-Washburn (LW) equation\cite{Lucas,Washburn}, which relates the distance of penetration $z(t)$ of the fluid front at time $t$ to the capillary size $H$, the viscosity $\eta$ and surface tension $\gamma$ of the liquid, and the static contact angle $\theta$ between the liquid and the capillary wall. If slip velocity and deviations from Poiseuille flow are neglected, in the late asymptotic regime the distance $z(t)$ travelled by the moving interface in the channel (having $z_0$ as initial coordinate) is given by \begin{equation}\label{LW} z(t)^2 - z_0^2 = \frac{\gamma H \cos(\theta)}{\eta} ct, \end{equation} where $H$ denotes the height of the channel and the constant $c=1/3$ for a slit-like capillary. Equation \ref{LW} can be recast in dimensionless form, with $\tilde{z}=z/H$ and $\tilde{t}=t/t_{cap}$, where $t_{cap}=H \eta/\gamma$: \begin{equation} \label{LW_dimless} \tilde{z}(\tilde{t})^2 - \tilde{z}_0^2 = \cos(\theta) c \tilde{t}.
\end{equation} It is apparent that in these dimensionless units the time-penetration law depends only on the value of the contact angle $\theta$, regardless of other fluid parameters and geometrical details. This facilitates the comparison between different simulation methods. \begin{figure}[htb] \begin{center} \includegraphics[width=6cm,angle=0]{fab.eps} \caption{\label{fig1} Scheme of the microchannel geometry} \end{center} \end{figure} In this comparative study we explore capillary filling in the presence of topological obstacles (rectangular ridges) on the channel wall (\ref{fig1}). Since capillary filling is mainly determined by the three-phase boundary (``contact line'') between liquid, wall and vapor, the motion of the contact line offers a much more stringent test of the various simulation methods and provides an opportunity to assess their shortcomings and advantages. A key problem in this regard is the pinning of the contact line, due to a geometric singularity in the meniscus stability, such as the presence of obstacles whose sides make an angle $2\alpha$ with respect to the wall, broadly known as the Concus-Finn condition \cite{Concus, Concus2, Concus3}. According to the Concus-Finn criterion, there is no filling, due to meniscus instability, whenever the contact angle fulfills the following condition: \begin{equation} \theta > \frac{\pi}{2} - \alpha. \end{equation} For a rectangular obstacle, this means that a contact line will be pinned at its trailing edge once $\theta > 45^\circ$, regardless of the obstacle height. While this condition follows from the detailed mathematical analysis\cite{Concus} of the liquid surface stability, in fact it goes back to the thermodynamic foundations of wetting, as established in the early works of Gibbs\cite{Gibbs}.
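For the rectangular obstacle considered here, the $45^\circ$ threshold can also be made plausible by a simple estimate (a heuristic sketch combining Gibbs' pinning argument with the capillary pressure of a slit; the rigorous statement is the stability analysis of \cite{Concus}). At the trailing edge the contact line stays pinned while the apparent contact angle $\theta_d$, measured from the top face of the obstacle, sweeps the range $\theta \le \theta_d \le \theta+\pi/2$. Meanwhile, the meniscus spanning the local gap of width $H'$ between the edge and the opposite wall sustains a driving pressure
\begin{equation}
\Delta P \simeq \frac{\gamma}{H'}\left(\cos\theta+\cos\theta_d\right),
\end{equation}
which vanishes once $\theta_d=\pi-\theta$. Depinning requires $\theta_d$ to reach $\theta+\pi/2$ while the drive is still positive, i.e., $\theta+\pi/2<\pi-\theta$, which reproduces the threshold $\theta<45^\circ$.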
The possibility for capillary driven flow is of major importance for numerous applications, e.g., modern fuel cells\cite{Zengerle}; therefore it is not surprising that this criterion has been tested even in space experiments on the Shuttle-Mir complex. \section{Modelling microfluids: from continuum mechanics to molecular dynamics}\label{model} Currently, intensive efforts to gain deeper insight into flow in microchannels are carried out by researchers using a variety of computer modeling approaches. In principle, continuum fluid mechanics provides the most economical description of microflows. However, this approach fails to describe a series of phenomena, such as near-wall slip-flow, in which the continuum assumption is called into question due to the onset of molecular effects, especially near solid walls. Molecular Dynamics provides a high degree of physical fidelity, at the price, however, of a very substantial computational burden. The lattice kinetic approach is well positioned to offer a good compromise between the physical realism of molecular dynamics and the computational efficiency of continuum mechanics \cite{STATPHYS,Gladrow}. It is finding increasing applications to microfluidic problems, as it permits handling fluid-wall interactions at a more microscopic level than the Navier-Stokes equations, while simultaneously reaching up to much larger scales than Molecular Dynamics \cite{SBRAG,Yeom,Harting,Popescu}. However, the LB approach is subject to a number of limitations, such as the existence of spurious currents near curved interfaces, as well as enhanced evaporation/condensation effects due to the finite width of the interface \cite{EPJB09b}. Ideally, one would combine the three methods within a synergistic multiscale procedure, using each of them wherever/whenever appropriate, in different regions of the microflow.
Various two-level coupling schemes (continuum-MD, LB-MD and continuum-LB) are already available in the literature~\cite{Bird}, while, to the best of our knowledge, three-level couplings are just beginning to appear \cite{MU3}. A detailed and comparative assessment of the merits and downsides of the three levels of description is therefore of great importance for the ultimate goal of developing efficient and robust multiscale coupling methods for complex microfluidic flows. In this work, we shall present a comparative study of capillary filling in the presence of wall corrugations, using all three methods mentioned above, namely a finite-volume CFD software package, an LB model with non-ideal fluid interactions, and an MD computer program. Since all of these methods are by now standard, in the following we shall provide just a cursory description, leaving the details to the vast literature on the subject. \subsection{Computational Fluid Dynamics}\label{model_CFD} Our CFD approach is based on the CFD-ACE+ software package (release 2008), a multiphysics commercial tool including geometry modeling, grid generation and results visualization \cite{dapp1}. The spontaneous capillary filling in micro/nano channels is reproduced by means of the VOF (Volume of Fluid) scheme, based on the hypotheses of incompressibility, no-interpenetration and negligible localized relative slip of the two fluids in contact. In order to describe the transport of the volume fraction $\phi$ of one of the two fluids in each cell ($\phi$ thus ranging from 0 to 1), the Navier-Stokes equations for the fluid velocity are augmented with a scalar transport equation for the volume fraction: \begin{equation}\label{CFD_eq} \frac{\partial \phi}{\partial t} + \vec{\nabla} \cdot (\bm{u} \phi) = 0 \end{equation} where $t$ is time, $\vec{\nabla}$ is the standard spatial gradient operator, and $\bm{u}$ is the velocity vector of the fluid.
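To make the role of Eq. (\ref{CFD_eq}) concrete, the following minimal sketch advects a one-dimensional volume-fraction profile with a first-order upwind scheme (an illustration only: the function and parameter names are ours, and the actual CFD-ACE+ solver uses the PLIC interface reconstruction described below precisely to suppress the numerical diffusion that such a low-order scheme produces):

```python
# Minimal 1D sketch of the VOF transport equation d(phi)/dt + u d(phi)/dx = 0,
# discretized with first-order upwind differences and periodic boundaries.
# Illustrative only; real VOF solvers add interface reconstruction (e.g. PLIC).

def upwind_advect(phi, u, dx, dt, steps):
    """Advect the volume fraction phi with constant velocity u > 0."""
    assert 0.0 < u * dt / dx < 1.0, "CFL condition violated"
    phi = list(phi)
    n = len(phi)
    for _ in range(steps):
        # phi_new[i] is a convex combination of phi[i] and phi[i-1], so
        # 0 <= phi <= 1 is preserved; phi[-1] gives the periodic wrap-around.
        phi = [phi[i] - u * dt / dx * (phi[i] - phi[i - 1]) for i in range(n)]
    return phi

# A step-like liquid column occupying the left quarter of the domain.
n = 100
phi0 = [1.0 if i < n // 4 else 0.0 for i in range(n)]
phi = upwind_advect(phi0, u=1.0, dx=1.0, dt=0.5, steps=40)
```

The scheme conserves the total liquid volume $\sum_i\phi_i$ exactly (with periodic boundaries) and keeps $\phi\in[0,1]$, but it smears the front over several cells, which is why interface-capturing schemes such as PLIC are needed in practice.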
The composition of the two fluids in the mixture determines for each computational cell the averaged value of physical properties such as density and viscosity. Any volume-specific quantity is evaluated in accordance with the following expression: \begin{equation} \bar \omega=\phi \omega_l + (1-\phi) \omega_g \end{equation} where $\bar \omega$ is the volume-averaged quantity and $\omega_l$ (resp. $\omega_g$) is the value of the property for the liquid (resp. gas). However, since $\phi$ is the averaged volume fraction of fluid in each cell, the definition of the interface between the two fluids in that cell is not unique and must be dynamically reconstructed throughout the simulation. In CFD-ACE+, this is accomplished through an upwind scheme with PLIC (the Piecewise Linear Interface Construction method \cite{dapp2,dapp3}), taking into account surface tension to determine the interface curvature accurately. As to boundary conditions, a zero static pressure is imposed at both inlet (liquid only) and outlet (gas only). At the initial time both fluids are at rest, with a short portion of the channel at the inlet filled with liquid. This specific configuration allows the liquid-vapour interface to assume the curvature corresponding to the hydrophilic partial wetting condition imposed at walls through the contact angle $\theta$, thus generating the capillary pressure that drives the fluid motion. The CFD simulation set-up consists of a $2D$ straight channel with a height $H=800\; $nm and overall length $L=80\; \mu$m. At the bottom wall, a square post of side $h=400\; $nm is located. The computational domain has been discretized with a structured non-uniform grid of square cells, consisting of $185000$ cells and $190000$ nodes.
\subsection{Lattice Boltzmann method for multiphase flows}\label{geom_LBE} The Lattice Boltzmann (LB) method is based on a minimal form of the Boltzmann kinetic equation describing the time evolution of the discrete particle distribution function $f_i(\bm{x},t)$, denoting the probability of finding a particle at lattice site $\bm{x}$ and time $t$ moving along the lattice vector $\bm{c}_i$ (\ref{d2q9}), where the index $i$ labels the discrete directions of motion. \begin{figure}[htb] \includegraphics[width=3cm,angle=0]{d2q9.eps} \caption{\label{d2q9} The two-dimensional, nine-speed LB scheme.} \end{figure} In mathematical terms, the LB equation reads as follows \cite{Gladrow,SS_92}: \begin{equation} \label{eq:LB} f_{i}(\bm{x}+\bm{c}_{i}\Delta t,t+\Delta t)-f_i(\bm{x},t)=-\frac{\Delta t}{\tau}\left( f_{i}(\bm{x},t)-f_{i}^{(eq)}(\rho,\rho{\bm u}) \right) + F_i \; \Delta t \end{equation} where $\rho$ is the fluid density, ${\bm u}$ is the fluid velocity and $i=0-8$ labels the nine-speed, two-dimensional $2DQ9$ model \cite{Gladrow}. The first term on the right hand side is a model collision operator $\Omega_i$, describing the relaxation towards a local equilibrium on a time scale $\tau$. Finally, $F_i$ describes the effect of external/internal forces on the discrete particle distribution. The macroscopic density and momentum are obtained as weighted averages of the discrete distributions \cite{Gladrow}: \begin{equation} \rho({\bm x},t)=\sum_{i} f_{i}({\bm x},t); \qquad \rho {\bm u}({\bm x},t)=\sum_{i}{\bm c}_{i}f_{i}({\bm x},t).\end{equation} For the case of a two-phase fluid, the interparticle force reads as follows: \begin{equation} \label{forcing} \bm{F}=-{\cal G} c^{2}_{s}\sum_i w_i \psi({\bm x},t) \psi ({\bm x}+{\bm c}_i\Delta t,t) \bm{c}_i \quad i=0-8.
\end{equation} Here $\psi({\bm x},t)=\psi(\rho({\bm x},t))=(1-\exp(-\rho(\bm x,t)))$ is the pseudo-potential functional, describing the fluid-fluid interactions triggered by inhomogeneities of the density profile, and ${\cal G}$ is the strength of the coupling (see \cite{shanchen,prenoi2} for details). Finally, $F_i=w_i \;{\bm F} \cdot {\bm c}_i/c_s^2$, where $w_i=4/9,1/9,1/36$ are the standard weights of the nine-speed two-dimensional lattice 2DQ9 \cite{Gladrow} and $c_s=1/\sqrt 3$ is the sound speed in lattice units. Besides introducing a non-ideal excess pressure $p^{\star}=\frac{1}{2} {\cal G} c_s^2 \psi^2$, the model also provides a surface tension $\gamma \sim {\cal G} c_s^4 \frac{(\delta \psi)^2}{\delta_w}$, where $\delta \psi$ is the drop of the pseudo-potential across the interface of width $\delta_w$. By tuning the value of the pseudo-potential at the wall $\psi(\rho_w)$, this approach allows one to define a static contact angle, spanning the full range of values $\theta \in [0^o:180^o]$ \cite{Kang}. As to the boundary conditions, the standard bounce-back rule is imposed: any flux of particles that hits a boundary simply reverses its velocity, so that the average velocity at the boundary is automatically zero. This rule can be shown to yield no-slip boundary conditions up to second order in the Knudsen number in the hydrodynamic limit of single-phase flows. \begin{figure}[htb] \begin{center} \includegraphics[width=8cm,angle=0]{Fabiana_fig1_bis.eps} \caption{\label{fig_fab} Geometrical set-up of the LB simulations.} \end{center} \end{figure} In this work we consider a $2D$ channel composed of two parallel plates separated by a distance $H= 40 \Delta$, where $\Delta$ is the space discretization unit. This two-dimensional geometry, with length $2 L= 3000 \Delta$, is divided into two regions, as shown in \ref{fig_fab}.
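As a brief aside, the relaxation step (\ref{eq:LB}) and the moment evaluation can be sketched in a few lines for a single lattice site (a minimal single-phase BGK illustration in lattice units, with $\Delta t=1$; the variable names are ours, and the forcing term $F_i$ and the pseudo-potential coupling of the actual multiphase code are omitted):

```python
# Single-site D2Q9 BGK collision step: f_i -> f_i - (f_i - f_i^eq)/tau.
# Lattice weights w_i and velocities c_i of the nine-speed scheme; cs^2 = 1/3.
# Forcing and streaming are omitted in this sketch.

W = [4/9] + [1/9] * 4 + [1/36] * 4
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]
CS2 = 1.0 / 3.0

def moments(f):
    """Macroscopic density and velocity as sums over the distributions."""
    rho = sum(f)
    ux = sum(fi * c[0] for fi, c in zip(f, C)) / rho
    uy = sum(fi * c[1] for fi, c in zip(f, C)) / rho
    return rho, ux, uy

def feq(rho, ux, uy):
    """Second-order local equilibrium distribution."""
    u2 = ux * ux + uy * uy
    out = []
    for w, (cx, cy) in zip(W, C):
        cu = cx * ux + cy * uy
        out.append(w * rho * (1.0 + cu / CS2 + cu * cu / (2.0 * CS2 * CS2)
                              - u2 / (2.0 * CS2)))
    return out

def collide(f, tau=0.8):
    """Relax f towards the local equilibrium on a time scale tau."""
    fe = feq(*moments(f))
    return [fi - (fi - fei) / tau for fi, fei in zip(f, fe)]
```

By construction the collision conserves mass and momentum, since $f^{(eq)}$ carries the same $\rho$ and $\rho{\bm u}$ as $f$; dissipation enters only through the relaxation towards equilibrium.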
The left part has top and bottom periodic boundary conditions, so as to support a perfectly flat gas-liquid interface, mimicking an ``infinite reservoir''. The actual capillary channel resides on the right half, of length $L$. The top and bottom boundary conditions are those of a solid wall, with a given contact angle $\theta$. Periodic boundary conditions are also imposed at the west and east sides. The square obstacle, of side $h=H/2=20 \Delta$, is placed on the lower wall. \subsection{Molecular Dynamics} During the last few decades Molecular Dynamics has proved to be one of the most broadly used simulation techniques, capable of reproducing many details of macroscopic fluid dynamics. It has been extensively used to study dynamic wetting problems and to shed more light on the molecular processes close to the contact line region~\cite{Robins,Gentner}. An appealing feature of MD (e.g., compared with Monte Carlo or molecular statics) is that it follows the actual dynamical evolution of the system. We perform MD simulations of a simple generic model at a coarse-grained level. In our MD simulation, see Fig. 4, the fluid particles interact with each other through a Lennard-Jones (LJ) potential, \begin{equation} U_{LJ}(r)=4\epsilon[(\sigma/r)^{12} -(\sigma/r)^{6}] \end{equation} where $\epsilon=1.4$ and $\sigma=1.0$. The capillary walls are represented by particles forming a triangular lattice with spacing $1.0$ in units of the liquid atom diameter $\sigma$. The wall atoms may fluctuate around their equilibrium positions, subject to a finitely extensible non-linear elastic (FENE) potential \begin{equation} U_{FENE}(r) = -15 \epsilon_w R_0^2\ln(1-r^2/R_0^2) \end{equation} with $R_0 = 1.5$ and $\epsilon_w = 1.0 k_BT$, where $k_B$ denotes the Boltzmann constant and $T$ is the temperature of the system; $r$ is the distance between the particle and the virtual point which represents its equilibrium position in the wall structure.
The FENE potential acts like an elastic string between the wall particles and their equilibrium positions in the lattice and keeps the wall structure densely packed and hexagonal. In addition, the wall particles interact with each other via an LJ potential with $\epsilon_{ww} = 1.0$ and $\sigma_{ww} = 0.8$. This choice of interactions guarantees no penetration of liquid through the wall, while at the same time the mobility of the wall particles corresponds to the system temperature. Molecules are advanced in time via the velocity-Verlet algorithm with integration time step $\delta t = 0.01 t_0$, where $t_0= (\sigma^2m / 48 \epsilon_{LJ})^{1/2} = 1/\sqrt{48}$ is the basic time-unit, and we have taken particle mass $m=1$ and $k_BT=1$. The temperature is maintained by a Dissipative-Particle-Dynamics (DPD) thermostat, with friction parameter $\xi = 0.5$, thermostat cutoff $r_c = 2.5 \sigma$ and step-function-like weight functions \cite{HK,Allen}. Fluid properties and flow boundary conditions then arise as a consequence of the collision dynamics and the local friction controlled by the DPD thermostat. An advantage of the DPD method in comparison with other MD schemes is that the local momentum is conserved, so that the hydrodynamic behavior of the liquid at large scales is correctly reproduced. It is worth emphasizing that all contact angles $\theta$ used in the simulations have been determined by measuring the mean static contact angle of a sessile meniscus in a slit formed by two parallel atomistic walls, for a few given liquid-wall interaction strengths $\epsilon$. Specific contact angles are then chosen by interpolation between the relevant values of $\epsilon$. Therefore, the actual value of the contact angle when the fluid front is located at an obstacle edge, for instance, may incidentally deviate from the nominal value of $\theta$.
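The interaction model just described can be collected in a short sketch (reduced units, with the parameter values quoted above; this is an illustration, not the production MD code):

```python
import math

EPS, SIGMA = 1.4, 1.0      # fluid-fluid Lennard-Jones parameters
EPS_W, R0 = 1.0, 1.5       # FENE tether parameters (k_B T = 1)

def u_lj(r, eps=EPS, sigma=SIGMA):
    """12-6 Lennard-Jones pair potential between fluid particles."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def u_fene(r, eps_w=EPS_W, r0=R0):
    """FENE tether energy of a wall particle at distance r < r0 from its
    equilibrium lattice site; diverges as r -> r0."""
    return -15.0 * eps_w * r0 * r0 * math.log(1.0 - (r / r0) ** 2)

r_min = 2.0 ** (1.0 / 6.0) * SIGMA   # location of the LJ minimum
```

At $r=2^{1/6}\sigma$ the LJ potential attains its minimum value $-\epsilon$, while the FENE energy vanishes at $r=0$ and diverges as $r\to R_0$, which is what keeps the wall particles tethered to their lattice sites.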
A recent MD study~\cite{Dimitrov_PRL} has shown that the LW law, Eq.~(\ref{LW}), holds almost quantitatively down to nanoscale tube diameters. In this study we use an obstacle shaped as a rectangular ridge which runs perpendicular to the flow direction in the slit and has the same atomic composition as that of the slit walls. The height and width of the ridge are chosen as $10 \sigma$, so that the obstacle takes up half of the slit thickness. \section{Numerical results}\label{results} We have performed simulations of spontaneous capillary filling in the presence of a square obstacle, for each of the three computational methods described above. The test case is the same: the filling of a channel of height $H$ with a square post of side $h=H/2$. In LB and MD, the post is placed at the central position on the bottom wall (\ref{fig1}), at a distance $l=25H$ and $l=5H$ from the inlet, respectively, due to computational costs. Since the LW equation does not depend on the channel length, this is not a limiting factor for the present study (see below). Since the inertial transient with the CFD method is much longer than the estimated inertial time, as shown also in \ref{CFD_snap}, in this case the channel length has been taken $l \approx 100H$, so as to guarantee that the front meets the post once the asymptotic regime is attained. We have performed a series of four simulations (for each method), varying the contact angle $\theta$ from $30^\circ$ to $64^\circ$. The specific values of the physical parameters for each simulation method are reported in Table 1.
\begin{table} \begin{center} \begin{tabular}{|l|l|l|l|} \hline \em Parameter & \em CFD value & \em LB value & \em MD value\\\hline Channel height $H$ (m) & $8 \cdot 10^{-7}$ & $8 \cdot 10^{-7}$ & $8\cdot 10^{-7}$\\ Water density $\rho_l$ (kg/m$^3$)& $10^3$ &$10^3$ &$10^3$ \\ Water kinematic viscosity $\nu$ (m$^2$/s) & $10^{-6}$ & $10^{-6}$ & $10^{-6}$\\ Water dynamic viscosity $\eta$ (kg/(m s))& $10^{-3}$ & $10^{-3}$ & $10^{-3}$ \\ Liquid/gas surface tension $\gamma$ (N/m)& $7.2 \cdot 10^{-2}$ &$9.6 \cdot 10^{-2}$ & $3.4 \cdot 10^{-4}$\\ Vapour density $\rho_g$ (kg/m$^3$)& $1.167$ & $28.12$ &$10.6$ \\ \hline \end{tabular} \caption{Physical quantities for the three different cases. Units have been chosen as follows: LB) $\Delta x=H/NY$, $\Delta t=\nu_{LB}\; \Delta x^2 /\nu$, $\Delta m= \rho_l\; \Delta x^3 /\rho_{LB}$; MD) $\sigma = H/20$, $\tau= \nu_{MD} \; \sigma^2/\nu$, $\delta m=\rho_l\; \sigma^3 /\rho_{MD}$. The different values of the gas density and surface tension reflect the computational constraints of the different methods.} \end{center} \end{table} Some comments are in order. By construction, the CFD approach is able to reproduce exactly all the physical properties of the flow. On the other hand, both LB and MD show some discrepancies in the value of the surface tension and vapour density. However, these discrepancies have been found to be irrelevant to the purpose of investigating the macroscopic features of capillary imbibition~\cite{Dimitrov_PRL,EPJB09}. \begin{figure}[htb] \includegraphics[scale=0.6]{all_data_single.eps} \caption{ \label{all} Log-log plot of the dimensionless front coordinate $z(t)/H$ vs the dimensionless time $t/t_{cap}$. Here $z$ denotes the centerline position ($y=H/2$) of the front, $y$ being the cross-flow coordinate. All simulations show superposition with the LW prediction before reaching the obstacle, at all inspected contact angles.
The arrows indicate the pinning points for the case of $50^\circ$ and $60^\circ$, not visible on the scale of this figure. See also \ref{CFD_snap}, \ref{LBE_snap} and \ref{MD_snap} for a close-up of the dynamics around the obstacle region only. } \end{figure} As is well known, depending on the value of the contact angle $\theta$, two scenarios may appear according to the Gibbs, or Concus-Finn, criterion \cite{Concus, Concus2, Concus3}: a) for small contact angles the front is able to climb the obstacle, walk on it, and eventually pass it; b) for large contact angles the front climbs the obstacle, but pins at its back edge, thus stopping the fluid motion. We wish to emphasize that the Concus-Finn, or Gibbs, criterion is based upon thermodynamic arguments and, consequently, it has been rigorously proven only at a macroscopic level. \subsection{Recovering the Lucas-Washburn regime} The Lucas-Washburn asymptotic regime sets in once inertial effects die out and steady propagation results from the sole balance between capillary drive and dissipation. During the transient regime, the dynamic contact angle, which depends on the front speed itself, changes in time until its static value is attained \cite{Martic,Quere}. In \ref{all}, the centerline ($y=H/2$) front position is reported as a function of time and compared with the dimensionless LW law, Eq. (\ref{LW_dimless}). As one can see, all three methods exhibit satisfactory agreement with the theoretical LW prediction, although on a different range of time-scales. This is due to the different values of the parameters used in each method, and particularly to the fact that the MD capillary speed $V_{cap}=\gamma/\eta$ is $20 \div 30$ times higher than in the other methods.
Indeed, it should be noted that the LW asymptotic solution sets in after a typical transient time of the order of $\tau \sim H^2/(12 \nu_l)$, or, in units of capillary times, $\tau/t_{cap} = \frac{\rho_l \gamma H}{12 \eta^2}$. Based on the data in Table I, one can readily check that $\tau/t_{cap} \sim 1$ for LB and CFD, and $\tau/t_{cap} \sim 10^{-2}$ for MD. One can therefore appreciate the anomalously long transient in the CFD case, of the order of $\tau/t_{cap} \sim 100$, see \ref{all}. However, since CFD and LB simulations last over $10^3$ capillary times before reaching the obstacle, and MD simulations about $10$, the superposition with the LW regime is free from transient phenomena. \subsection{Front morphology while crossing the obstacle} In \ref{CFD_snap}, \ref{LBE_snap} and \ref{MD_snap}, snapshots of the fluid front during the surmounting of the obstacle are reported for the three simulation methods (the figures report density contours). As one can see, the front dynamics and morphology are strongly affected by the presence of the obstacle: the liquid impinging on the obstacle must adjust to a $90^{\circ}$ discontinuity of the contact angle, which clearly causes a significant deformation of the front, before the static value is recovered again on the flat-top surface of the obstacle. These changes of shape are clearly visible in figures \ref{CFD_snap} and \ref{LBE_snap}, for both CFD and LB. The case of MD is less clear-cut, due to the absence of a well-defined interface and to molecular fluctuations. Indeed, although the fluid meniscus is clearly visible, its surface appears rough and strongly fluctuating in time. From \ref{MD_snap} one can see that some atoms evaporate from the liquid and overcome the slit. Moreover, long before the fluid meniscus has passed the obstacle, vapour condensation and partial filling of the wedge, formed by the rear wall of the ridge and the slit wall, take place (see also \ref{MD_snap}a).
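As a side note, the transient-to-capillary time ratio quoted above can be cross-checked directly from the parameter values of Table I. The following minimal Python sketch is ours, not part of the original analysis; it assumes $t_{cap} = H\eta/\gamma$ (i.e.\ $t_{cap} = H/V_{cap}$ with $V_{cap}=\gamma/\eta$), which is consistent with the ratio formula $\tau/t_{cap} = \rho_l \gamma H/(12\eta^2)$:

```python
# Cross-check of the transient-time estimate tau/t_cap from Table I (SI units).
# Assumption (ours): t_cap = H*eta/gamma, consistent with V_cap = gamma/eta.
H = 8e-7      # channel height (m)
nu = 1e-6     # water kinematic viscosity (m^2/s)
eta = 1e-3    # water dynamic viscosity (kg/(m s))

tau = H**2 / (12 * nu)  # viscous transient time H^2/(12 nu)

for method, gamma in [("CFD", 7.2e-2), ("LB", 9.6e-2), ("MD", 3.4e-4)]:
    t_cap = H * eta / gamma  # capillary time for this method's surface tension
    print(f"{method}: tau/t_cap = {tau / t_cap:.3g}")
```

With the tabulated surface tensions this gives $\tau/t_{cap}\approx 4.8$ (CFD), $6.4$ (LB) and $2.3\times 10^{-2}$ (MD), reproducing the quoted orders of magnitude.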
Since fluid imbibition in a capillary is accompanied by the faster motion of a precursor film far ahead of the meniscus \cite{Bonn,Kav_03,sergio}, it is also conceivable that the wedge is filled by this atomically thin precursor. Since the meniscus position at time $t$ is difficult to locate precisely, we rather measure the volume of fluid in the capillary, which is readily obtained from the total number of fluid atoms residing in the slit at time $t$ during the filling process. For an incompressible fluid, this volume is directly proportional to the distance $z(t)$ travelled by the fluid front at time $t$. The curve giving $z(t)$ is shown in \ref{MD_snap}b for several values of the contact angle, $30^{\circ} \le \theta \le 64^{\circ}$. The shape changes also entail a substantial change of the front speed, as documented in \ref{CFD_snap}b, \ref{LBE_snap}b and \ref{MD_snap}b, from which the front speed can be read off simply as the slope of the curves reporting the front position as a function of time. These figures show evidence of a significant acceleration of the front (centerline position $y=H/2$) in the climbing stage, followed by a deceleration in the stage where the front approaches the rear corner, where the risk of pinning is highest. Here the fate of the front becomes critical. According to the CF analysis, if the contact angle is below $45^{\circ}$, there is enough drive from the upper wall to pull the front away from the rear edge of the obstacle. Otherwise, the front stops moving. The CFD and LB simulations confirm this picture, showing evidence of pinning only for the two angles above $45^{\circ}$. More precisely, they show a significant front acceleration in the climbing stage, followed by a coasting period, once the rear edge is reached. For $\theta=30^{\circ}$ and $\theta=40^{\circ}$, the coasting period exhibits a finite lifetime, after which the front regains its motion.
For $\theta=50^{\circ}$ and $\theta=60^{\circ}$, the coasting period does not seem to come to an end (within the simulation time), and the front is pinned. After the obstacle, the (unpinned) front is slowed down, as one can appreciate from the slightly reduced slope of the front dynamics, see \ref{CFD_snap}b (only for $\theta=40^{\circ}$) and \ref{LBE_snap}b. The MD simulations, on the other hand, tell a different story, in that even at $\theta = 50^{\circ}$ the front proves capable of overcoming the obstacle, although with a drastically reduced speed. Note that, while both CFD and LB show evidence of a strong front acceleration as they approach the obstacle, MD simply shows a monotonic deceleration. This is due to the fact that, while in CFD and LB simulations we measure the distance travelled by the interface midpoint, in MD, as already pointed out, we measure the total volume of fluid in the capillary. Given the qualitatively different outcome of CFD(LB) versus MD simulations at $\theta > 45^{\circ}$, we next discuss the dependence of the filling dynamics on the different contact angles, case by case. \begin{figure}[htb] \includegraphics[scale=0.4]{CFD_snapshots_final.eps} \vspace{1cm} \hspace{1cm} \includegraphics[scale=0.35]{CFD_data_new.eps} \caption{\label{CFD_snap} a) Snapshots of the front dynamics at different stages of a post surmounting CFD simulation. The various columns from left to right correspond to contact angles $\theta = 30^{\circ},\; 40^{\circ},\;50^{\circ},\;60^{\circ}$, as indicated. b) Time evolution of the interface midpoint while crossing the obstacle in CFD simulations. The inset shows the instantaneous propagation speed of the front for the case $\theta=30^{\circ}$. } \end{figure} \begin{figure}[htb] \includegraphics[scale=0.5]{LBE_snapshots_final.eps}\\ \vspace{1truecm} \includegraphics[scale=0.35]{LB_data_new.eps} \caption{\label{LBE_snap} a) Snapshots of the front dynamics at different stages of a post surmounting LB simulation.
From left to right, the columns correspond to contact angles $\theta = 30^{\circ},\; 40^{\circ},\;50^{\circ},\;60^{\circ}$. b) Time evolution of the interface midpoint while crossing the obstacle in LB simulations. The inset shows the instantaneous propagation speed of the front for the case $\theta=30^{\circ}$.} \end{figure} \begin{figure}[htb] \includegraphics[scale=0.8]{snapshot-new.eps}\vspace{1cm} \\ \includegraphics[scale=0.3]{MD_data_new.eps} \caption{\label{MD_snap} a) Variation of the number of fluid particles $N$ in the capillary with time $t$ for contact angles $\theta = 32^{\circ},\; 40^{\circ},\;50^{\circ},\;64^{\circ}$. Symbols represent MD results, whereas lines denote the predicted behavior according to the LW-equation. The vertical grey-shaded column indicates an extension in the time axis. b) Time evolution of the interface, computed through the fluid volume measurement, while crossing the obstacle in MD simulations. The inset shows the instantaneous propagation speed of the front for the case $\theta=32^{\circ}$. } \end{figure} \begin{itemize} \item{Contact angles $30^{\circ}$ and $40^{\circ}$} \label{3040}\\ As already mentioned, in these cases all three methods give the same qualitative outcome: in the proximity of the ridge, the front deforms in response to the geometrical discontinuity, climbs up the obstacle and walks over its top. Once the rear edge is reached, the bottom end of the front pins at the corner, while the upper end keeps moving on the top wall. Then, according to the CF criterion, the front overcomes the obstacle and completely fills up the entire channel. Manifestly, the standard $z$ versus $t$ relationship, described by the LW-law (\ref{LW}), is strongly violated in the vicinity of the obstacle. After overcoming the obstacle, the usual LW regime is recovered, although after a transitional period of time, which depends on the wettability $\theta$~\cite{chibbaro}, and with a reduced velocity.
In particular, in all cases, the front is slowed down right after passing the obstacle, see \ref{CFD_snap}b, \ref{LBE_snap}b and \ref{MD_snap}b. However, at $\theta=30^{\circ}$, the asymptotic behaviour is quite different for the three methods: (i) In CFD simulations, \ref{CFD_snap}b, after a short transient time, the velocity basically regains the value it had before passing the obstacle. (ii) LB shows a similar behaviour, with only a slight reduction of the front velocity after passing the obstacle. (iii) In MD, however, the front undergoes a substantial velocity decrease. This might be due to the details of the fluid-solid interaction during the obstacle surmounting (in which the contact angle is bound to undergo drastic changes), as well as to the different time-scales. A detailed analysis of this effect is deferred to a future study. \item{Contact angle $50^{\circ}$} \label{50} \\In this case, while results from CFD and LB confirm the CF criterion, MD simulations show a different behaviour for the front motion: interestingly, the Concus-Finn (Gibbs) criterion for contact line pinning at the edge of the ridge is found to break down. Indeed, for $\theta = 50^{\circ}$, the fluid front overcomes the obstacle in manifest violation of the Concus-Finn criterion. To illustrate this feature, we show snapshots in \ref{FigMD4050} and movies of the front dynamics in videos 1 and 2, for the cases $\theta = 40^{\circ}$ and $50^{\circ}$, respectively. These observations suggest that, on the nanoscale, the overcoming of topological obstacles is strongly affected by interface fluctuations, thus undermining the deterministic nature of the imbibition process. Indeed, both the CFD and LB methods work in the absence of fluctuations, which would explain the difference with the MD simulations.
The problem of contact line pinning during capillary imbibition thus acquires a stochastic character and is most probably governed by the size of the obstacles around the CF critical point. In fact, depending on the height of the ridge obstacle, a coalescence of the pinned meniscus with the molecules ahead of the obstacle, in the vicinity of the edge, may occur at later times. \item{Contact angle $60^{\circ}$ }\label{60} \\Again, in this case all three methods give the same result: the front deforms, climbs the obstacle, walks on its top, but pins at the back edge and permanently stops moving. \begin{figure}[htb] \includegraphics[scale=1.2]{fig8.eps} \caption{\label{FigMD4050} Snapshots from MD simulations with $\theta=40^{\circ}$ (on the left) and $\theta=50^{\circ}$ (on the right). It is clearly seen that for both values of the contact angle the front overcomes the obstacle, although at different times. This shows that the Concus-Finn criterion is violated in the case $\theta=50^{\circ}$.} \end{figure} \subsection{Further discussion} In order to gain a better understanding of the previous results, and particularly of the violation of the CF criterion for mildly super-critical angles in MD simulations, additional runs have been performed. More precisely, we have run MD simulations at $\theta=50^{\circ}$ with a taller obstacle, $h=15 \sigma$, in order to inspect whether the front is still capable of surmounting the obstacle in violation of the CF criterion (in this case, the total slit height was correspondingly increased, so as to leave the same clearance above the obstacle as in the previous simulations). Moreover, we have also run additional CFD and LB simulations at $\theta=50^{\circ}$ with shorter obstacles, $h=H/4$, in order to inspect whether even CFD(LB) would show violations of the CF criterion upon reducing the obstacle size.
The main outcome is as follows: MD simulations with $h=15 \sigma$ {\it do} show front pinning, indicating that violations of the CF criterion disappear once the obstacle is made sufficiently tall (see Fig. \ref{FigMDnew}). This corroborates our previous conjecture of a (possibly Arrhenius-like) dependence of the CF criterion on the obstacle height, in the presence of molecular fluctuations. At the same time, CFD and LB simulations with $h=H/4$ at $\theta=50^{\circ}$ keep showing evidence of pinning, thereby lending further support to the idea that the CF criterion remains insensitive to obstacle size, so long as molecular fluctuations can be neglected. \begin{figure}[htb] \includegraphics[scale=1.0]{fig8a.eps} \includegraphics[scale=1.]{fig8b.eps} \caption{\label{FigMDnew} Snapshots (a) and centerline front coordinate (b), from MD simulations with $\theta=50^{\circ}$ and $h=15 \sigma$. Unlike the same case with $h=10 \sigma$, the front is now pinned in accordance with the CF criterion. } \end{figure} Before concluding, a few words of caution are in order concerning the fact that, being diffuse-interface methods, both CFD and LB propagate a finite-width interface. As a result, one may wonder whether and to what extent finite-width effects could interfere with the physics of post surmounting. Indeed, as shown in previous studies by these and other authors \cite{EPJB09,yeomans1}, a finite interface width eventually leads to mild deviations from the LW law. However, these deviations do not affect the qualitative outcome of a pinning/no-pinning test, unless the interface width becomes comparable to the characteristic obstacle width. Both CFD and LB simulations are visibly far from this critical limit, which is why we are confident that the qualitative conclusions of the present work are not significantly affected by finite-width effects. One may wonder whether such deviations might be paralleled to the effect of molecular fluctuations.
In this regard, it is worth underlining that such a parallel has to be taken very cautiously, since molecular fluctuations stem from the microscopic physics of the problem, while finite-width effects are due to a lack of numerical resolution. Sometimes a mapping between the two can be established, but this can by no means be taken as a general rule. Finally, another potential source of discrepancy between CFD(LB) and MD is the fact that the latter allows slip motion while CFD and LB (in this set-up) do not. Although any solid statement on the effect of slip flow on the dynamics of post surmounting must necessarily be deferred to a detailed quantitative analysis, we observe that, due to the weakness of inertial effects, hydrodynamic boundary conditions have little or no effect on the dynamics/energetics of the obstacle surmounting. This is confirmed by the fact that slip flow effects would manifest themselves through visible deviations from the LW regime, of which we have no evidence, at least in the parameter regime investigated in this work. This situation can drastically change in the presence of superhydrophobic effects, although we shall not be concerned with these problems in the present work. \end{itemize} \section{Conclusions}\label{summary} Summarizing, we have studied the effect of geometrical obstacles in microchannels on the process of capillary filling, by means of three distinct simulation methods: Computational Fluid Dynamics (CFD), Lattice-Boltzmann Equations (LBE) and Molecular Dynamics (MD). The numerical results of these approaches have been compared and tested against the Concus-Finn (CF) criterion, which predicts pinning of the contact line at rectangular ridges perpendicular to the flow for contact angles $\theta > 45^\circ$.
While for $\theta = 30^\circ$ and $\theta = 40^\circ$ (flow) and for $\theta = 60^\circ$ (no flow) all methods are found to produce data consistent with the CF criterion, at $\theta = 50^\circ$ the numerical experiments provide different outcomes. While pinning of the liquid front is observed in both LB and CFD simulations, the MD simulations show that the moving meniscus overcomes the obstacle and the filling goes on, for a sufficiently small obstacle. This result indicates that the macroscopic picture underlying the CF criterion and the hydrodynamic approach needs to be amended near the critical angle. Furthermore, while in CFD and LB simulations the front re-emerges from the obstacle surmounting with a nearly unchanged velocity, in the MD case the post-surmounting velocity appears considerably reduced. These results suggest that, away from the critical value $\theta=45^\circ$, the issue of front-pinning in a corrugated channel can be quantitatively described by a kinetic Boltzmann approach or by the macroscopic CFD method. While the CFD software used in this work is well-suited to handle complex geometries, it also shows some physical and computational limitations, namely anomalously long transients and the need for a large computational grid to ensure the required accuracy, which entails a correspondingly long computational time, much closer to the MD requirements than to the LB ones. In the vicinity of the critical angle, the motion of the front exhibits a strong sensitivity to molecular fluctuations which cannot be accounted for by standard (non-fluctuating) LB methods, let alone continuum methods. In particular, the MD simulations show that molecular fluctuations allow front propagation slightly above the critical value predicted by the deterministic CF criterion, thereby introducing a sensitivity to the obstacle height (the CF criterion is restored for sufficiently tall obstacles).
On the basis of the present results, it would indeed be of interest to explore whether fluctuating hydrodynamic methods, either in the form of stochastic hydrodynamics or fluctuating LB, would prove capable of reproducing the results of MD simulations~\cite{Landman}. Whether the probability of ``tunnelling'' into the deterministically forbidden region $\theta>45^\circ$ shows an Arrhenius-like dependence on the obstacle height stands as an interesting topic for future research. \begin{acknowledgement} The authors acknowledge support from the project ``INFLUS'', NMP-031980 of the VI-th FW programme of the EC. \end{acknowledgement}
\section{Introduction}\label{intro} Let $X/{\mathbb C}$ be a smooth complex variety, and $M = (E,\nabla)$ a vector bundle with an integrable connection on $X.$ Recall that the Grothendieck $p$-curvature conjecture \cite[I]{Kat72} predicts that $M$ has finite monodromy if it has a full set of algebraic solutions mod $p$ for almost all primes $p.$ More precisely, we can descend $(X,M)$ to a finitely generated ${\mathbb Z}$-algebra $R \subset {\mathbb C}$ and consider its reductions $(X_s, M_s)$ at closed points $s \in {\rm Spec \,} R.$ We consider the following condition: \begin{quote} $\frak{P}:$ there is a dense open subscheme $U\hookrightarrow {\rm Spec} \ R$ such that for all closed points $s\in U$, $M_s$ has $p$-curvature $0$. \end{quote} The conjecture says that this condition implies the existence of a finite \'etale cover $h: Y\to X$ such that $h^*M$ is trivial as a connection; that is, $M$ is isotrivial. The conjecture is known to be true when the monodromy representation of $M$ is solvable (\cite[Thm.~8.5]{Chu85}, \cite[Thm.~2.9]{Bos01}, \cite[Cor.~4.3.2]{And04}), and for Gau{\ss}-Manin connections \cite[Thm.~5.1]{Kat72}.
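For orientation, the following standard rank-one example (a classical illustration, not drawn from the results of this paper) shows what the conjecture amounts to in the simplest case:

```latex
% Classical rank-one illustration (not part of this paper's argument).
Consider $X = {\mathbb G}_{m,{\mathbb Q}}$ and the rank-one connection
$M_\lambda = ({\mathcal O}_X,\, d - \lambda\, \tfrac{dx}{x})$, whose horizontal
sections are spanned by $x^\lambda$. Writing $\theta = x\,\partial/\partial x$,
one has $\nabla_\theta = \theta - \lambda$, and since $\theta^p = \theta$ as a
differential operator in characteristic $p$, the $p$-curvature is the scalar
\[
  \psi_p(\theta) \;=\; (\nabla_\theta)^p - \nabla_{\theta^{[p]}}
  \;=\; (\theta - \lambda^p) - (\theta - \lambda)
  \;=\; \lambda - \lambda^p .
\]
For $\lambda \in {\mathbb Q}$ this vanishes modulo almost every prime $p$, and
indeed $x^\lambda$ is then algebraic with finite monodromy, so $M_\lambda$ is
isotrivial. For $\lambda$ algebraic irrational, Chebotarev's theorem yields a
positive density of primes with $\lambda^p \not\equiv \lambda$, consistent with
the infinite monodromy of $x^\lambda$.
```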
The condition $\frak{P}$ means that, Zariski locally, $M_s = (E_s,\nabla_s)$ is spanned by the kernel of $\nabla_s.$ This is equivalent to asking that the action of derivations on $E_s,$ given by $\nabla_s$, extends to an action of differential operators of the form $\frac{(\partial /\partial x)^p}{p}$ where $x$ is a local co-ordinate on $X.$ Of course, if $M$ becomes trivial over a finite cover then this condition holds, but in that case one has, in fact, a stronger condition: the action of the derivations extends to an action of the full ring of differential operators ${\mathcal D}_{X_s}.$ With this motivation, we consider in this paper the stronger condition \begin{quote} $\frak{D} :$ there is a dense open subscheme $U\hookrightarrow {\rm Spec} \ R $ such that for all closed points $s\in U$, $M_s$ underlies a ${\mathcal D}_{X_s}$-module. \end{quote} We denote by $MIC^{\frak{D}}(X/{\mathbb C})$ the category of vector bundles with integrable connections on $X$ which satisfy $\frak{D}.$ Unfortunately, we were not able to show that $\frak{D}$ implies that $M$ is isotrivial.\footnote{In fact one can show that $\frak{D}$ implies the isotriviality of $M$ by combining Theorem~\ref{ithm:finitenessats} below with \cite[Thm.~3.3]{Mat06}. Unfortunately, there is a mistake in the proof of \cite[Thm.~3.1]{Mat06}, so the proof of \cite[Thm.~3.3]{Mat06} is incomplete.} However, we show some partial results. We assume for the rest of the introduction that $X$ is projective. By \cite[Thm.~8.1~(1),~(3)]{Kat82}, the $p$-curvature conjecture can be reduced to this case. See also \S \ref{ss:MvdP} for a discussion of the projectivity assumption. Our first theorem shows that one can deduce isotriviality if we add a finiteness condition on the underlying vector bundle $E.$ \begin{ithm}\label{ithm:Nori} The forgetful functor $(E,\nabla) \mapsto E$ from $MIC^{\frak{D}}(X/{\mathbb C})$ to the category of vector bundles on $X$ is fully faithful.
In particular, if $E$ is Nori finite, then $M$ is isotrivial. \end{ithm} Recall that Nori finiteness means that the class of $E$ in the Grothendieck group associated with the monoid of vector bundles on $X$ (see \cite[Section~2.3]{Nor82}) is integral over ${\mathbb Z}$, or equivalently, as we are in characteristic $0$, that there is a finite \'etale cover $h: Y\to X$, such that $h^*E$ is trivial as a vector bundle; that is, $E$ is isotrivial. Our next result is an analogue of Katz's theorem on the Gau\ss-Manin connection. \begin{ithm}\label{ithm:katz} If $M$ in $MIC^{\frak{D}}(X/{\mathbb C})$ underlies a polarizable ${\mathbb Z}$-variation of Hodge structure, then $M$ is isotrivial. \end{ithm} To prove these results we use arguments involving stability of vector bundles, together with the following theorem, set purely in characteristic $p.$ \begin{ithm}\label{ithm:finitenessats} Let $X_0$ be a smooth, projective, geometrically connected scheme over a finite field $k,$ and let $M_0$ be a coherent ${\mathcal D}_{X_0}$-module on $X_0.$ Then $M_0$ is isotrivial. \end{ithm} This is proved using the existence of the coarse moduli space of stable vector bundles, and a finiteness argument. A consequence of this last theorem is that any $M_0$ as in the theorem defines a finite \'etale Tannakian group scheme. Returning to $M$ over $X/{\mathbb C}$ satisfying $\frak{D}$, we define for $s$ in a non-trivial open in ${\rm Spec \,} R$ the corresponding \'etale group scheme $G_s = G(M_s).$ We denote by $k(s)$ the residue field of $s.$ Our final result is \begin{ithm}\label{ithm:finiteness} If there is a Zariski dense set of closed points $s \in {\rm Spec \,} R$ such that ${\rm char} \, k(s)$ does not divide the order of $G_s,$ then $M \in MIC^{\frak{D}}(X/{\mathbb C})$ is isotrivial.
\end{ithm} In particular, this proves a projective analogue of a conjecture of Matzat-van der Put \cite[p.~51]{MP03}, which predicted isotriviality assuming the order of the $G_s$ was bounded independently of $s.$ In fact, we explain in \S \ref{ss:MvdP} that the original conjecture in {\it loc.~cit.}, which dealt with a Zariski open in the affine line, is not correct. The paper is organized as follows. In \S \ref{sec:p}, we prove Theorem~\ref{ithm:finitenessats}. The method has already been used in \cite{BK08} (see Remark~\ref{rmk:BK}) and \cite{EM10}. We push it further to prove isotriviality of the whole ${\mathcal O}$-coherent ${\mathcal D}$-module. From it we deduce in \S \ref{sec:0} that the underlying vector bundle $E$ of $M$ satisfying $\frak{D}$ is semistable, and is stable if $M$ is irreducible (Proposition~\ref{prop:ss}). We then deduce Theorem \ref{ithm:Nori} and use Hitchin-Simpson theory to deduce Theorem~\ref{ithm:katz}. To prove isotriviality in Theorem~\ref{ithm:finiteness}, one uses an idea of Andr\'e, who applied Jordan's theorem \cite{Jor78} in \cite[7.1.3.~Cor.]{And04} to reduce the $p$-curvature conjecture to the case of a number field. This idea was carried over to equal characteristic $p>0$ in \cite[Thm.~5.1]{EL13}, where the coprimality to $p$ appeared as a necessary condition (\cite[Section~4]{EL13}). The difficulty of the mixed characteristic version presented here is eased by the fact that the groups in characteristic $p>0$ are finite. {\it Acknowledgements:} We thank Yves Andr\'e, Antoine Chambert-Loir and Johan de Jong for their interest and for discussions. We especially thank Sinan \"Unver for a close and perceptive reading of the manuscript, and Jo\~ao Pedro dos Santos for mentioning \cite{Mat06} to us. The first named author thanks the department of mathematics of Harvard University for hospitality during the preparation of this work. \section{ ${\mathcal O}$-coherent ${\mathcal D}$-modules over finite fields.
} \label{sec:p} \begin{para} Let $X$ be a smooth, geometrically connected, projective variety over a field $k.$ We fix an ample line bundle ${\mathcal O}_X(1)$ on $X.$ For a coherent sheaf $E$ on $X,$ we set $p_E(n) = \chi(E(n))/{\rm rk}\, E,$ where $\chi$ denotes the Euler characteristic. We say that $E$ is $\chi$-semi-stable (resp.~$\chi$-stable) if for all proper subsheaves $E' \subset E$ one has $p_{E'}(n) \leq p_E(n)$ (resp. $p_{E'}(n) < p_E(n)$) for $n$ sufficiently large (see \cite[\S 0]{Gie77}, \cite[p.~512]{Lan14}). Similarly, if $(E,\nabla)$ is a vector bundle with integrable connection, we define $\chi$-(semi-)stability in the same way, but we require $E'$ to be $\nabla$-stable. \end{para} \begin{para} Suppose that $k = {\mathbb F}_{p^a}$ is a finite field. For $i \geq 0,$ we denote by $X^{(i)}$ the pullback of $X$ by $F_k^i,$ where $F_k$ is the absolute Frobenius on $k.$ We have the relative Frobenius maps $F: X^{(i)}\to X^{(i+1)}.$ Let $M$ be an ${\mathcal O}_X$-coherent ${\mathcal D}_X$-module. Associated to $M$ we have a Frobenius divided sheaf $(E^{(i)}, \sigma^{(i)})_{i\ge 0}$, where $E^{(i)}$ is a vector bundle on $X^{(i)},$ with $E^{(0)} = M,$ and $\sigma^{(i)}$ is an isomorphism $E^{(i)} \overset\sim\rightarrow F^*E^{(i+1)}$ over $X^{(i)}$. In fact, the categories of ${\mathcal O}_X$-coherent ${\mathcal D}_X$-modules and of Frobenius divided sheaves are equivalent (\cite[Thm.~1.3]{Gie75}). The $E^{(i)}$ have trivial numerical Chern classes, as these classes are infinitely $p$-power divisible and by definition lie in ${\mathbb Z}$. Then $M$ generates a $k$-linear Tannakian subcategory $\langle M\rangle $ of the category of ${\mathcal O}_X$-coherent ${\mathcal D}_X$-modules. If $x\in X(k),$ taking the fibre at $x$ of $E^{(0)}$ defines a neutral fibre functor on $\langle M\rangle.$ Let $G(M, x)$ be the Tannaka group of $\langle M\rangle$. \end{para} \begin{thm} \label{thm:fin_tan} The group scheme $G(M, x)$ over $k$ is finite \'etale.
\end{thm} \begin{proof} Suppose first that the $E^{(i)}$ are $\chi$-stable for $i\geq 0.$ Let ${\mathcal M}$ be the coarse moduli space of $\chi$-stable vector bundles with vanishing numerical Chern classes and rank equal to ${\rm rk} \, M,$ which exists over $k$ (\cite[Thm.~1.1]{Lan14}). Then ${\mathcal M}$ is quasi-projective (\cite[Thm.~0.2]{Lan04}), and in particular has finitely many $k$-points. This implies that there exist $i \geq 0$ and $t> 0$ such that $E^{(ai)}$ and $E^{(ai+at)}$ correspond to the same point $[E^{(ai)}]=[E^{(ai+at)}]$ in ${\mathcal M}(k).$ Hence $F^{a*}$ induces a well-defined, surjective map on $S = \{[E^{(ai)}]; i \geq 0\} \subset {\mathcal M}(k).$ That is, $F^{a*}$ is an automorphism of the finite set $S.$ It follows that there is a natural number $t>0$ such that the points $[E^{(ait)}]$ for $i\ge 0$ are all equal. This means that the vector bundles $E^{(0)}$ and $E^{(ait)}$ are isomorphic over an algebraic closure $\bar k$ of $k.$ It follows from Lemma \ref{lem:isomvect} below that they are isomorphic over $k.$ We have an isomorphism $F^{*at}E^{(i)} \overset\sim\rightarrow E^{(i)}$ for $i \geq t$ divisible by $a,$ and hence for $i \geq 0.$ Therefore, there is a finite \'etale cover $h: Y\to X$ such that $h^* E^{(i)}$ is a trivial algebraic bundle for all $i\ge 0$ (\cite[Satz~1.4]{LS77}), and it follows that the ${\mathcal O}_Y$-coherent ${\mathcal D}_Y$-module $h^*M$ is trivial (\cite[Prop.~1.7]{Gie75}). Consequently, in the category of ${\mathcal O}_X$-coherent ${\mathcal D}_X$-modules, $\langle M\rangle \subset \langle h_*{\mathcal O}_Y\rangle$, and $G(\langle h_*{\mathcal O}_Y\rangle,x)$ is finite \'etale. Thus $G(\langle M\rangle,x)$ is finite \'etale.
Now consider the case of arbitrary $M.$ By \cite[Prop.~2.3]{EM10}, there is a natural number $i_0$ such that $ (E^{(i)}, \sigma^{(i)})_{i\ge i_0a}$ is a successive extension of Frobenius divided sheaves $U_n$ on $X^{(i_0a)}$ all of whose underlying vector bundles $U_n^{(i)}$ are stable with vanishing numerical Chern classes. It suffices to prove the theorem with $(E^{(i)}, \sigma^{(i)})_{i\ge 0}$ replaced by $(E^{(i+i_0)}, \sigma^{(i+i_0)})_{i\ge 0},$ so we may assume $i_0 =0.$ By what we have seen above there exists a finite \'etale cover $h: Y \to X$ such that $h^* (\oplus_n U_n)$ is a trivial ${\mathcal O}_Y$-coherent ${\mathcal D}_Y$-module. Then $h^*M$ is a successive extension of the trivial ${\mathcal O}_Y$-coherent ${\mathcal D}_Y$-module by itself. By induction on the number of factors $U_n,$ we may assume that $h^*M$ is an extension of $({\mathcal O}_Y)^{s_1}$ by $({\mathcal O}_Y)^{s_2}$ as ${\mathcal D}_Y$-modules for some $s_1,s_2 >0.$ Thus, $h^*M$ is given by a matrix of classes in ${\rm Ext}^1_{{\mathcal D}_Y}({\mathcal O}_Y,{\mathcal O}_Y).$ Arguing with $F$-divided sheaves as above, but replacing the finiteness of ${\mathcal M}(k)$ by the finiteness of the set ${\rm Ext}^1_{{\mathcal O}_Y}({\mathcal O}_Y,{\mathcal O}_Y),$ one finds that any class in ${\rm Ext}^1_{{\mathcal D}_Y}({\mathcal O}_Y,{\mathcal O}_Y)$ becomes trivial over a finite \'etale cover of $X.$ Thus $\langle M\rangle$ is finite, and $G(M, x)$ is \'etale as $M$ has a finite \'etale trivializing cover. \end{proof} \begin{lemma}\label{lem:isomvect} Let $V_1,V_2$ be vector bundles on $X,$ which are isomorphic over $\bar k.$ Then $V_1,V_2$ are isomorphic over $k.$ \end{lemma} \begin{proof} This is presumably well known. 
Consider the $k$-scheme $\underline{\rm Isom}(V_1,V_2),$ which assigns to any $k$-algebra $R,$ the set of invertible elements in ${\rm Hom}(V_1,V_2)\otimes_kR.$ Since $V_1,V_2$ are isomorphic over $\bar k,$ this is a torsor under the $k$-group scheme $\underline{\rm Aut}(V_1),$ whose $R$-points are given by the units in ${\rm Hom}(V_1,V_1)\otimes_kR.$ For $y \in \underline{\rm Isom}(V_1,V_2)(\bar k)$ and $\sigma \in {\rm Gal}(\bar k/k),$ write $\sigma(y) = y\circ c_{\sigma},$ with $c_{\sigma} \in \underline{\rm Aut}(V_1)(\bar k).$ Then $ (c_{\sigma})$ is a cocycle, defining a class in $ H^1({\rm Gal}(\bar k/k), \underline{\rm Aut}(V_1)(\bar k)).$ Since $\underline{\rm Aut}(V_1)$ is Zariski open in a $k$-vector space, it is smooth and connected, so this class is trivial by Lang's lemma (\cite[Thm.~2]{Lan56}). Thus the cocycle $(c_\sigma)$ is a coboundary, which means that the translate of the given point $y$ by some element of $\underline{\rm Aut}(V_1)(\bar k)$ is a $k$-point. \end{proof} \begin{para} Recall \cite[Section~3]{Nor82} that a vector bundle $V$ on $X$ is called Nori-finite if its class in the Grothendieck group associated with the monoid of vector bundles on $X$ (see \cite[Section~2.3]{Nor82}) is integral over ${\mathbb Z}$. Equivalently, there is a torsor under a finite group scheme $h: Y\to X \otimes k'$, such that $h^*V$ is trivial as a vector bundle. Here $k'\supset k$ is a finite field extension such that $X(k') \neq \emptyset$. Nori-finite bundles are, in particular, strongly semistable (that is, the bundle and all its Frobenius pullbacks are semi-stable) vector bundles with vanishing numerical Chern classes \cite[Cor.~3.5]{Nor82}. The category ${\mathcal N}(X)$ of Nori-finite bundles is Tannakian. For any $x \in X(k),$ taking the fibre at $x$ is a neutral fibre functor on ${\mathcal N}(X),$ and each object $E$ has a finite Tannakian group scheme $G(E,x)$.
\end{para} \begin{corollary} \label{cor:stab} \begin{itemize} \item[(1)] The vector bundles $E^{(i)}$ are Nori-finite. In particular, they are strongly $\chi$-semistable with vanishing numerical Chern classes. \item[(2)] If $E^{(i)}$ is $\chi$-stable for some $i\ge 0,$ then $E^{(i)}$ is $\chi$-stable for all $i \geq 0.$ \item[(3)] If $M$ is $\chi$-stable as a module with integrable connection, then $M = E^{(0)}$ is $\chi$-stable as a vector bundle. \end{itemize} \end{corollary} \begin{proof} (1) is an immediate consequence of Theorem~\ref{thm:fin_tan}, as the statement may be checked over the finite \'etale cover on which the $E^{(i)}$ become trivial. To see (2), one may use the periodicity of the sequence $\{E^{(i)}\},$ which we saw in the proof of Theorem \ref{thm:fin_tan}, together with the fact that $E^{(i)}$ $\chi$-stable implies $E^{(i+1)}$ $\chi$-stable. Finally, for (3), $\chi$-stability of $M$ as a module with integrable connection is equivalent to $\chi$-stability of $E^{(1)}$ as a vector bundle. So (3) follows from (2). \end{proof} \begin{remark} \label{rmk:BK} Corollary~\ref{cor:stab} (1) and the part concerning the isotriviality of the bundle $E^{(0)}$ in Theorem~\ref{thm:fin_tan} are proven in \cite[Prop.~2.5]{BK08}. There only boundedness is used, not the existence of a coarse moduli space defined over the finite field. The latter argument seems essential here, and the stronger statement in Theorem~\ref{thm:fin_tan} is used in Corollary~\ref{cor:stab} to conclude stability of $E^{(0)}$ in (2) and (3). \end{remark} \begin{para} We still assume $k={\mathbb F}_{p^a}$. Corollary~\ref{cor:stab} enables one to define the forgetful functor \ga{1}{ \frak{forg}: {\mathcal D}(X/k) \to {\mathcal N}(X); \ M\mapsto E^{(0)}, \notag} which is a tensor functor compatible with the Tannakian structures on both sides. Here ${\mathcal D}(X/k)$ is the category of ${\mathcal O}_X$-coherent ${\mathcal D}_X$-modules.
For $x \in X(k),$ we denote by \ga{2}{ \frak{forg}^*: \pi_1({\mathcal N}(X),x)\to \pi_1({\mathcal D}(X/k),x), \ \ \frak{forg}|_M^*: G(E^{(0)},x)\to G(M,x) \notag} the induced homomorphisms of Tannaka group schemes. \end{para} \begin{thm} \label{thm:surj_p} The functor $\frak{forg}$ is fully faithful, and for $M$ in ${\mathcal D}(X/k)$ it induces an equivalence $\langle M \rangle \overset\sim\rightarrow \langle E^{(0)} \rangle.$ In particular, for $x \in X(k),$ the homomorphism $\frak{forg}^*$ is faithfully flat, and for any $M$ in ${\mathcal D}(X/k),$ the homomorphism $\frak{forg}|_M^*$ is an isomorphism. \end{thm} \begin{proof} The full faithfulness of $\frak{forg}$ is equivalent to the surjectivity of the induced $k$-linear map ${\rm Hom}_{{\mathcal D}_X}({\mathcal O}_X,M)\hookrightarrow H^0(X, E^{(0)}).$ That is, we have to show that any global section in $H^0(X, E^{(0)})$ gives rise to a map ${\mathcal O}_X\hookrightarrow E^{(0)}$ of ${\mathcal D}_X$-modules. This can be checked \'etale locally, so the statement follows from Theorem \ref{thm:fin_tan}. To show that $\langle M \rangle \overset\sim\rightarrow \langle E^{(0)} \rangle,$ it suffices to show that for any $M$ in ${\mathcal D}(X/k),$ a subbundle $E' \hookrightarrow E^{(0)}$ in ${\mathcal N}(X)$ is a ${\mathcal D}_X$-submodule. Since this statement is \'etale local, we may assume that $M,$ and hence also $E',$ is trivial, in which case the result follows from the full faithfulness proved above. It follows that $\frak{forg}|_M^*$ is an isomorphism, and the faithful flatness of $\frak{forg}^*$ follows from \cite[Prop.~2.21]{DM82}. \end{proof} \section{Integrable connections in characteristic $0$ which satisfy $\frak{D}$.} \label{sec:0} \begin{para} In this section, we derive the consequences in characteristic $0$ of the previous section. Let $X$ be a smooth, geometrically connected scheme of finite type defined over a field $k$ of characteristic $0,$ and equipped with an ample line bundle ${\mathcal O}_X(1)$.
The category of ${\mathcal O}_X$-coherent ${\mathcal D}_{X/k}$-modules is equivalent to the category of vector bundles with integrable connections $MIC(X/k)$, which is a $k$-linear Tannakian category, neutralized by taking the fibre of the underlying vector bundle at any point $x \in X(k)$ (if one exists). \end{para} \begin{definition} \label{defn:model} Let $M=(E,\nabla)\in MIC(X/k)$. Let $R \hookrightarrow k$ be a ring of finite type over ${\mathbb Z}$. A model $(X_R, {\mathcal O}_{X_R}(1), M_R)$ of $(X, {\mathcal O}_X(1), M)$ over $R$ is a smooth, projective $R$-scheme $X_R$ with geometrically connected fibres, equipped with an ample line bundle ${\mathcal O}_{X_R}(1),$ together with a vector bundle with an integrable connection $M_R$ relative to $R,$ and an isomorphism of $(X_R,{\mathcal O}_{X_R}(1), M_R)\otimes_Rk$ with $(X,{\mathcal O}_X(1),M).$ \end{definition} Models always exist over some finitely generated ${\mathbb Z}$-algebra $R$ (see \cite[IV, \S 8]{EGA}). We fix a model $(X_R, {\mathcal O}_{X_R}(1), M_R)$ of $(X, {\mathcal O}_X(1), M).$ For $x \in X(k),$ we denote by $G(M,x)$ the Tannaka group of $\langle M\rangle$, the full subcategory of $MIC(X/k)$ spanned by $M.$ \begin{para} \label{defn:pd} Recall the conditions $\frak{P}$ and $\frak{D}$ from the introduction. We define the full Tannakian subcategories $$MIC^{\frak{f}}(X/k)\subset MIC^{\frak{D}}(X/k)\subset MIC^{\frak{P}}(X/k)\subset MIC(X/k)$$ of objects which are finite for $^{\frak{f}}$ (that is, they become trivial over a finite \'etale cover of $X$), which verify $\frak{D}$ for $^{\frak{D}}$, and which verify $\frak{P}$ for $^{\frak{P}}$. Clearly, the conditions $\frak{D}, \ \frak{P}$ do not depend on the $R$ chosen in Definition~\ref{defn:model}. All these categories are Tannakian subcategories of $MIC(X/k)$.
Grothendieck's $p$-curvature conjecture predicts that $$MIC^{\frak{f}}(X/k) = MIC^{\frak{D}}(X/k) = MIC^{\frak{P}}(X/k) \subset MIC(X/k).$$ For the remainder of this subsection we assume that $X$ is projective. \end{para} \begin{prop} \label{prop:ss} If $M=(E, \nabla)\in MIC^{\frak{D}}(X/k)$, then $E$ is $\chi$-semistable with vanishing numerical Chern classes. If $M$ is irreducible, then $E$ is $\chi$-stable. \end{prop} \begin{proof} A destabilizing subsheaf of $E$ would destabilize $E_s=E_s^{(0)}$ for all closed points of some non-empty open in ${\rm Spec \,} R,$ which would contradict Corollary~\ref{cor:stab} (1). This proves the first statement. As for the second one, by definition, $M$ is irreducible if and only if it is $\chi$-stable in $MIC(X/k).$ By openness of stability, $M_s$ is $\chi$-stable for all closed points of some non-empty open in ${\rm Spec \,} R$ (\cite[Thm.~1.1]{Lan14}). Thus $E_s^{(0)}$ is $\chi$-stable by Corollary~\ref{cor:stab} (3), and so $E$ is stable. \end{proof} \begin{para} Let ${\mathcal S}(X)$ denote the category of semistable vector bundles $E$ on $X$, with vanishing numerical Chern classes. This is a Tannakian category, and taking the fibre at $x \in X(k)$ yields a fibre functor. Proposition~\ref{prop:ss} enables us to define the forgetful functor \ga{1}{ {\rm forg}: MIC^{\frak{D}}(X/k) \to {\mathcal S}(X), \ M= (E,\nabla)\mapsto E , \notag} which is a tensor functor compatible with the Tannakian structures on both sides. For $x \in X(k),$ we denote by \ga{2}{{ \rm forg}^*: \pi_1({\mathcal S}(X),x)\to \pi_1(MIC(X/k),x), \ \ {\rm forg}|_M^*: G(E,x)\to G(M,x) \notag} the induced homomorphisms of Tannaka group schemes. \end{para} \begin{thm} \label{thm:surj_0} The functor ${\rm forg}$ is fully faithful. The homomorphism ${\rm forg}|_M^*$ is an isomorphism and the homomorphism ${\rm forg}^*$ is faithfully flat. \end{thm} \begin{proof} We argue as in the proof of Theorem~\ref{thm:surj_p}. 
The full faithfulness of ${\rm forg}$ is equivalent to the surjectivity of the map $H^0_{dR}(X, M)\hookrightarrow H^0(X, E)$ induced by ${\rm forg},$ which follows from the full faithfulness in Theorem~\ref{thm:surj_p}, by taking the fibres of sections at closed points $s \in {\rm Spec \,} R.$ Next let $M=(E,\nabla)$ be in $MIC^{\frak{D}}(X/k)$ and $E' \hookrightarrow E$ any subvector bundle in ${\mathcal S}(X).$ Then for $s$ in a non-empty open in $ {\rm Spec} \ R,$ $E'_s$ is semistable with vanishing numerical Chern classes and $E_s$ is trivialized by a finite \'etale cover. Thus $E'_s$ is trivialized by a finite \'etale cover as well, and so lies in ${\mathcal N}(X_s).$ It follows by Theorem~\ref{thm:surj_p} that $E'_s \hookrightarrow E_s$ is $\nabla$-stable, and hence $E'$ is $\nabla$-stable. Finally, this implies that ${\rm forg}|_M^*$ is an isomorphism and ${\rm forg}^*$ is faithfully flat (\cite[Prop.~2.21]{DM82}). \end{proof} \begin{corollary} If $M = (E,\nabla)$ is in $MIC^{\frak{D}}(X/k)$ and $E$ is in ${\mathcal N}(X),$ then $M\in MIC^{\frak{f}}(X/k)$. \end{corollary} \begin{proof} In this case $G(E,x)$ is a finite (\'etale) group scheme, so the corollary follows from the fact that ${\rm forg}|_M^*$ is an isomorphism. \end{proof} \begin{thm} \label{thm:katz} Let $X$ be a smooth projective connected variety over ${\mathbb C}$, and $M$ a polarizable ${\mathbb Z}$-variation of Hodge structure, such that $M\in MIC^{\frak{D}}(X/{\mathbb C})$. Then $M\in MIC^{\frak{f}}(X/{\mathbb C})$. \end{thm} \begin{proof} By the Lefschetz hyperplane theorem, and Bertini's theorem, we can choose $x \in X({\mathbb C})$ so that there exists a smooth projective curve $C \subset X,$ with $x \in C,$ and such that the map $\pi_1(C,x) \rightarrow \pi_1(X, x)$ is surjective. One checks immediately that $M|_C$ is in $MIC^{\frak{D}}(C/{\mathbb C}).$ Hence we may replace $X$ by $C,$ and assume that $X$ has dimension $1$.
Deligne's semi-simplicity theorem \cite[4.2]{Del71} over ${\mathbb Q}$, together with the fact that a summand of a ${\mathbb Q}$-variation of Hodge structure definable over ${\mathbb Z}$ is itself definable over ${\mathbb Z}$, implies that we may assume that $M$ is irreducible, that is, stable. It follows by Proposition~\ref{prop:ss} that $E$ is stable. We apply Hitchin-Simpson theory. The semistable Higgs bundle $(V, \theta)$ associated to $M=(E,\nabla)$ is $V=gr^F E=\oplus_{a=0}^n {\mathcal H}^{n-a,a}$ with $\theta: {\mathcal H}^{n-a,a}\to \omega_X\otimes {\mathcal H}^{n-a-1, a+1}$ the Kodaira-Spencer map of $\nabla$ (\cite[Thm.~8]{Sim90}). Here $\omega_X$ is the sheaf of differential $1$-forms on $X$. Choose $a$ as large as possible such that ${\mathcal H}^{n-a,a} \neq 0.$ Then $({\mathcal H}^{n-a,a},0)\subset (V, \theta)$ is a Higgs subbundle and therefore ${\rm deg} \ {\mathcal H}^{n-a,a} \le 0$. On the other hand, by definition, one has the surjection $E\twoheadrightarrow {\mathcal H}^{n-a,a}.$ Since $E$ is stable with ${\rm deg}(E)=0$, any non-zero proper quotient bundle of $E$ has positive degree; as ${\mathcal H}^{n-a,a} \neq 0$ has non-positive degree, it follows that $E={\mathcal H}^{n-a,a}.$ In particular, $\theta \equiv 0,$ and we may apply Katz's argument \cite[Prop.~4.2.1.3]{Kat72} to conclude that the monodromy of $(M,\nabla)$ is finite.
\end{proof} \section{Integrable connections in characteristic $0$ which satisfy $({\frak{D}}, p)$.} \begin{para} We keep the assumptions of the previous section, so in particular $X$ is smooth, projective and geometrically connected over $k.$ We assume that $X(k)$ is non-empty and we fix a point $x \in X(k).$ After increasing $R,$ we may assume that $x$ arises from a point $x_R \in X_R(R).$ For a point $s \in {\rm Spec \,} R,$ we denote by $x_s$ the image of $x_R$ in $X_R(k(s)).$ If $M\in MIC^{\frak{D}}(X/k),$ then for all closed points $s$ of some non-trivial open in ${\rm Spec \,} R$, the restriction $M_s$ of a model has a finite \'etale Tannaka group $G_s : = G(M_s, x_s)$ (see Theorem~\ref{thm:fin_tan}), which does not depend on the choice of ${\mathcal D}_{X_s}$-module structure on $M_s,$ by Theorem \ref{thm:surj_p}. We denote by $|G_s|$ the order of the group scheme $G_s.$ That is, $|G_s|$ is the order of $G_{\bar s} : = G_s(\overline{k(s)}),$ where $\overline{ k(s)}$ is an algebraic closure of the residue field $k(s).$ The order $|G_s|$ does not depend on the rational point chosen, as by Tannaka theory the isomorphism class of $G_{\bar s}$ does not depend on the choice of the fibre functor. \end{para} \begin{para} The group $G_{\bar s}$ may be viewed as a quotient of the geometric \'etale fundamental group $\pi_1(X_{\bar s},x_{\bar s}):$ Let ${\mathcal O}_{G_s}$ denote the Hopf algebra of $G_s.$ By \cite[\S 2]{Nor76}, the $k(s)$-representation ${\mathcal O}_{G_s}$ corresponds via Tannaka duality to a $G_s$-torsor $P.$ Since the only $G_s$-invariant elements of ${\mathcal O}_{G_s}$ are the constants, we have $H^0(P,{\mathcal O}_P) = k(s),$ and so $H^0(P_{\overline{ k(s)}}, {\mathcal O}_{P_{\overline{ k(s)}}}) = \overline{k(s)}.$ Hence $P$ is geometrically connected. 
In particular, the automorphism group of $P_{\overline{ k(s)}}/X_{\overline{ k(s)}}$ must be equal to $G_{\bar s},$ and we obtain a surjective map $\pi_1(X_{\bar s},x_{\bar s}) \rightarrow G_{\bar s}.$ \end{para} \begin{definition} \label{defn:p} Let $MIC^{\frak{D},p}(X/k)\subset MIC^{\frak{D}}(X/k)$ denote the full subcategory of objects $M$ such that $|G_s|$ is prime to the characteristic of $k(s)$ for a dense set of closed points of some non-trivial open subset of ${\rm Spec \,} R.$ This category does not depend on the choice of model $(X_R, {\mathcal O}_{X_R}(1), M_R).$ One has inclusions of Tannakian categories $$MIC^{\frak{f}}(X/k)\subset MIC^{\frak{D},p}(X/k) \subset MIC^{\frak{D}}(X/k)\subset MIC^{\frak{P}}(X/k)\subset MIC(X/k).$$ \end{definition} We finish the paper with a proof of the following \begin{thm} \label{thm:Dp} Let $X$ be a smooth projective geometrically connected variety over a field $k$ of characteristic $0$, with a rational point $x$. Then $$ MIC^{\frak{f}}(X/k)=MIC^{\frak{D},p}(X/k).$$ Moreover, if $G = G(M,x)$ denotes the (finite) monodromy group of $M,$ then we have $G_{\bar s} = G$ for all closed points $s$ in a non-empty Zariski open subset of ${\rm Spec \,} R.$ \end{thm} \begin{proof} Take $M\in MIC^{\frak{D},p}(X/k).$ Consider the dense set of closed points $s$ in ${\rm Spec \,} R$ such that $G_s$ is defined and $G_{\bar s}$ has order prime to the characteristic of $k(s)$. We denote by $x_{\bar k} \in X(\bar k)$ and $x_{\bar s} \in X(\overline{ k(s)})$ the geometric points induced by $x$ and $x_s$ respectively. By Jordan's theorem \cite{Jor78}, there is a constant $c(r)$ depending only on $r = {\rm rk} \, M$ such that $G_{\bar s}$ contains a normal abelian subgroup $N_s$ of index at most $c(r)$.
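In other words, for each such closed point $s$ one obtains a chain of surjections
\begin{displaymath}
\pi_1(X_{\bar k}, x_{\bar k}) \twoheadrightarrow \pi_1(X_{\bar s}, x_{\bar s}) \twoheadrightarrow G_{\bar s} \twoheadrightarrow G_{\bar s}/N_s, \qquad |G_{\bar s}/N_s| \le c(r),
\end{displaymath}
where the first map is the specialization homomorphism and the second one was constructed above.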
Thus, the surjective specialization homomorphism $\pi_1(X_{\bar k}, x_{\bar k})\twoheadrightarrow \pi_1(X_{\bar s}, x_{\bar s})$, composed with $\pi_1(X_{ \bar s}, x_{\bar s}) \to G_{\bar s}/N_s$ defines, for each closed point $s,$ a finite quotient of $\pi_1(X_{\bar k}, x_{\bar k})$ of order bounded above by $c(r)$. Since $\pi_1(X_{\bar k}, x_{\bar k})$ is topologically finitely generated, there are finitely many such quotients, and all such maps factor through some finite quotient of $\pi_1(X_{\bar k}, x_{\bar k}),$ which defines a Galois cover $h: Y\to X_{\bar k}.$ The map $h$ is defined over a finite extension $K$ of $k$, say $h_K: Y_K\to X_K,$ and we may assume that $x$ is the image of a point of $Y_K(K).$ Replacing $k$ by $K$, $X$ by $Y_K,$ and $M$ by its pullback to $Y_K,$ the new Tannaka groups $G_s$ are subgroups of the old ones, so that $M\in MIC^{\frak{D},p}(X/k).$ Thus we may assume that $G_s$ is abelian for $s$ in a dense set of closed points in ${\rm Spec} \ R$. If $M' = (E',\nabla)$ is an irreducible subquotient of $M$ in $MIC(X/k),$ then $M'$ is stable, and so $M'_s$ is stable for any closed point $s$ in a non-empty open in ${\rm Spec \,} R,$ and in particular for all $s$ in a dense set of closed points of ${\rm Spec \,} R$ on which $G_s$ is abelian. This implies that $M'_s$ has rank $1,$ and hence so does $M'$. It follows that $M$ is a successive extension of rank $1$ objects in $MIC(X/k).$ We now apply Andr\'e's solution to Grothendieck's conjecture for connections with solvable monodromy \cite[Cor.~4.3.2]{And04} to conclude that $M$ has finite monodromy.
We have $G_{\bar s} \subset G,$ and it remains to show that $G = G_{\bar s}$ for all $s$ in a non-empty Zariski open subset of ${\rm Spec \,} R.$ If not, there is a proper subgroup $H \subset G$ such that $G_{\bar s}$ is identified with $H$ for $s$ in a Zariski dense set $T.$ By the Tannakian formalism, there is an object $V$ in $\langle M \rangle$ corresponding to a non-trivial, irreducible representation $\rho$ of $G,$ such that $\rho$ admits non-trivial $H$-invariants. The latter condition implies that for $s$ in a Zariski dense subset of $T,$ $H^0_{{\rm dR}}(X_s, V_s) \neq 0,$ which implies that $H^0_{{\rm dR}}(X,V) \neq 0,$ either by base change for de Rham cohomology \cite[Thm.~8.0]{Kat70} or Theorem \ref{thm:surj_0}. This contradicts the irreducibility of $\rho.$ \end{proof} \begin{para} \label{ss:MvdP} Theorem \ref{thm:Dp} answers an analogue of a question of Matzat--van der Put \cite[p.~51]{MP03} in the projective case. More precisely, in our terminology their question amounts to whether the following assertion holds: Let $k$ be a number field, $X \subset \mathbb A^1_k$ a Zariski open subset, and $M$ in $MIC^{\frak{D}}(X/k).$ Suppose that for almost all $s \in {\rm Spec \,} {\mathcal O}_k,$ $M_s$ underlies a ${\mathcal D}_{X_s}$-module which becomes trivial over a finite \'etale Galois covering with group $G_{\bar s} = G,$ a fixed group independent of $s.$ Then $M$ has monodromy group $G.$ When $X$ is replaced by a projective $k$-scheme, this is a particular case of Theorem~\ref{thm:Dp}. We remark that if $X$ is {\it not} projective, then the assertion of \cite[p.~51]{MP03} does not hold. Indeed, suppose that $X \hookrightarrow \mathbb A^1_k$ is open with $k$ a number field, and let $M = (E,\nabla)$ be a regular connection in $MIC(X/k)$ having finite, non-trivial monodromy.
Then $M$ has vanishing $p$-curvatures, and so $E_s$ descends to a vector bundle $E_s^{(1)}$ on $X_s^{(1)}.$ As $X_s^{(1)}$ is open in $\mathbb A^1_{k(s)},$ $E_s^{(1)}$ is necessarily a trivial bundle, and so $M_s$ is trivial as an object of $MIC(X_s/k(s)).$ In particular $M_s$ is obtained from a trivial ${\mathcal D}_{X_s}$-module, and we may take $G_s = \{1\}$ for almost all $s.$ This example also shows that if one weakens the conclusion in \cite[p.~51]{MP03} to assert that $M$ has finite monodromy, then the question becomes equivalent to the original $p$-curvature conjecture, since for an open $X \subset \mathbb A^1_k$ and any object in $MIC(X/k)$ with vanishing $p$-curvatures, we may take $G_s = \{ 1 \}$ for almost all $s.$ Finally, we remark that in this whole discussion, we could have replaced $X \hookrightarrow \mathbb A^1_k$ by any smooth variety $X$ such that all vector bundles on $X_s$ are trivial.
\section{Introduction} Top-quark pair production is one of the key measurements performed at the LHC. The detailed analysis of the top quark properties will contribute to a better understanding of the origin of particle masses and electroweak symmetry breaking, and it might also give hints on the physics that lies beyond the Standard Model. With the large number of top quarks being produced at the LHC \cite{Silva:2012di}, the study of their properties is becoming precision physics. Some observables, such as the total cross section for $t\bar{t}$ production, are expected to be measured with an accuracy of the order of five percent. In order to match these very precise experimental measurements with equally accurate theoretical predictions, next-to-next-to-leading order (NNLO) corrections must be considered. Although a full NNLO calculation of the $t\bar{t}$ cross section including all required partonic channels is so far missing, a rapidly increasing number of pieces and intermediate results have become available recently \cite{Abelof:2011ap,Anastasiou:2008vd,Baernreuther:2012ws,Bierenbaum:2011gg,Bonciani:2008az,Bonciani:2009nb,Bonciani:2010mn,Czakon:2008zk,Czakon:2011ve,Czakon:2012zr,Kniehl:2008fd,Korner:2008bn}\footnote{An up-to-date review of these intermediate results can be found in \cite{Bonciani:2011zza}.}. Most notably, the inclusive total hadronic $t\bar{t}$ production cross section induced by the all-fermion partonic processes has been computed~\cite{Baernreuther:2012ws,Czakon:2012zr}. At NNLO, the calculation of any observable receives three classes of contributions: double real ${\rm d}\sigma^{RR}$, mixed real-virtual ${\rm d}\sigma^{RV}$, and double virtual ${\rm d}\sigma^{VV}$. For an $m$-jet observable, these contributions contain respectively $(m+2)$, $(m+1)$ and $m$ partons in the final state.
While the latter contribution is already in an $m$-jet final state configuration, the double real and real-virtual classes of partonic channels contribute to the $m$-jet observable at NNLO if the partons present in these channels are theoretically unresolved (soft/collinear) or are experimentally unresolved, i.e.\ clustered to form an $m$-jet final state by a given jet algorithm. In addition, for hadronic observables, mass factorization counterterms which enter at the $(m+1)$- and $m$-parton level have to be taken into account. The integration of the matrix elements with real radiated particles over the soft and/or collinear regions of phase space yields infrared divergences. Therefore, in order to evaluate hadronic observables at higher orders, a process-independent procedure which enables the extraction and cancellation of those infrared poles amongst the different partonic channels needs to be applied. Subtraction methods explicitly constructing analytically integrable infrared subtraction terms which reproduce the behaviour of the full matrix elements in their unresolved limits are well-known solutions to this problem \cite{Boughezal:2011jf,Catani:1996vz,Catani:2002hc,Catani:2007vq,Czakon:2010td,Frederix:2008hu,Frixione:1997np,Frixione:2004is,Kilgore:2004ty,Kunszt:1992tn,Nagy:1996bz,Phaf:2001gc,Somogyi:2006cz,Somogyi:2006da,Weinzierl:2003fx}, \cite{Boughezal:2010mc,Daleo:2006xa,Daleo:2009yj,Gehrmann:2011wi,GehrmannDeRidder:2005cm,GehrmannDeRidder:2007jk,GehrmannDeRidder:2011aa,GehrmannDeRidder:2012ja,Glover:2010im}, \cite{Abelof:2011ap,Abelof:2011jv,Abelof:2012rv,Bernreuther:2011jt,GehrmannDeRidder:2009fz}. For QCD observables involving massive fermions, fewer subtraction terms are needed, since the real radiation amplitudes are singular in fewer regions of phase space than their massless counterparts. Indeed, for those observables, QCD radiation emitted off a massive leg can only lead to soft singularities.
Strict collinear divergences cannot occur, since they are regulated by the mass of the fermion. In this context, collinear limits are to be replaced by their massive analogues: the quasi-collinear limits \cite{Catani:2002hc,Abelof:2011jv}. In these limits, the real radiation matrix element is not divergent, but, when integrated over the appropriate phase space, it develops terms of the form $\log(m_Q^2/Q^2)$, with $Q$ being the hard scattering scale in the problem under consideration. Since those logarithms are not enhanced in the context of $t \bar{t}$ production at the LHC, they have not been taken into account in the extension of the antenna subtraction method developed in \cite{Abelof:2011ap}. There, subtraction terms capturing only the strict infrared behaviour of the matrix element squared were derived. Even though the infrared structure of the matrix elements for processes involving massive fermions is simpler, the kinematics and the integration of the subtraction terms become more involved, given the fact that the finite parton masses introduce a new scale into the problem under consideration. At the NLO level, two different subtraction methods have been extended to deal with massive final-state fermions: the dipole formalism~\cite{Catani:2002hc,Phaf:2001gc} and the antenna subtraction method~\cite{Abelof:2011jv,GehrmannDeRidder:2009fz}. The latter has been further extended to NNLO and employed to construct subtraction terms for the double real corrections to heavy quark pair production in \cite{Abelof:2011ap,Abelof:2012rv}. We shall follow this second subtraction framework in this paper.
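For reference, the quasi-collinear limit of two partons $i$ and $j$, with momenta $p_i,p_j$ and masses $m_i,m_j$, mentioned above, can be characterised (following the standard definition of \cite{Catani:2002hc}) as the limit
\begin{displaymath}
p_i\cdot p_j \;\to\; 0 \qquad \mbox{with} \qquad \frac{m_i^2}{p_i\cdot p_j}\,,\;\; \frac{m_j^2}{p_i\cdot p_j} \qquad \mbox{fixed},
\end{displaymath}
i.e.\ the masses are taken to vanish at the same rate as the collinear invariant. In this way the mass terms are retained in the limiting behaviour of the matrix elements, and the logarithms $\log(m_Q^2/Q^2)$ are recovered upon phase space integration.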
Based on the universal factorisation properties of QCD colour-ordered amplitudes squared in their infrared limits, antenna subtraction \cite{Boughezal:2010mc,Daleo:2006xa,Daleo:2009yj,Gehrmann:2011wi,GehrmannDeRidder:2005cm,GehrmannDeRidder:2007jk,GehrmannDeRidder:2011aa,GehrmannDeRidder:2012ja,Glover:2010im, Abelof:2011ap,Abelof:2011jv,Abelof:2012rv,Bernreuther:2011jt,GehrmannDeRidder:2009fz} constitutes a framework to construct subtraction terms that approximate the double real and mixed real-virtual corrections in their infrared limits. Within this formalism, the subtraction terms are built as products of antenna functions and reduced matrix elements squared with remapped momenta. The antenna functions, which are derived from physical colour-ordered matrix elements, capture all unresolved radiation emitted between two hard radiators and reduce to the well-known universal infrared factors (collinear splitting functions or soft eikonal factors) in the appropriate limits. Depending on whether the two hard radiators are located in the initial or in the final state, three types of antennae are needed: final-final (f-f), initial-final (i-f) and initial-initial (i-i). The initial-final and initial-initial antennae are obtained from their final-final counterparts by crossing one or two final state partons respectively to the initial state. Furthermore, by the flavour of the hard radiators we distinguish between quark-antiquark, quark-gluon and gluon-gluon antennae. While at NLO the subtraction of the unresolved limits of the real corrections only requires tree-level three-parton antennae, at NNLO we need tree-level four-parton antenna functions as well as products of two tree-level three-parton antennae for the double real contributions, and one-loop three-parton antennae for the mixed real-virtual contributions.
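Schematically, suppressing couplings and overall colour factors, the role of a tree-level three-parton antenna function $X^0_{ijk}$ at NLO can be summarised by the factorisation formula (in the spirit of \cite{GehrmannDeRidder:2005cm})
\begin{displaymath}
|{\cal M}_{m+1}(\ldots,i,j,k,\ldots)|^2 \;\longrightarrow\; X^0_{ijk}\, |{\cal M}_{m}(\ldots,\widetilde{(ij)},\widetilde{(jk)},\ldots)|^2,
\end{displaymath}
valid when parton $j$ becomes unresolved between the hard radiators $i$ and $k$. The remapped momenta $\widetilde{(ij)},\widetilde{(jk)}$, on which the reduced matrix element depends, absorb the momentum of the unresolved parton while preserving overall momentum conservation and on-shellness.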
In addition to the antenna functions and the phase space mappings needed for the reduced matrix elements, the method uses a Lorentz invariant factorisation of the full phase space into an antenna phase space, and a reduced phase space with remapped momenta. This factorisation is needed in a different form in each configuration (f-f, i-f, i-i), and it enables the analytic integration of the antenna functions over the antenna phase space. The integrated subtraction terms therefore contain these integrated antennae, together with the reduced matrix elements and the reduced phase space, both defined in terms of remapped momenta, which are left unintegrated. This integrated subtraction term can then be combined with the virtual contributions and mass factorisation counterterms, thus achieving an analytic cancellation of the infrared poles. In the massless case, the phase space factorisations have been derived in~\cite{Daleo:2006xa,GehrmannDeRidder:2005cm}. All the massless final-final antennae were integrated in \cite{GehrmannDeRidder:2005cm}, the initial-final ones in \cite{Daleo:2006xa,Daleo:2009yj}, and the initial-initial antennae in~\cite{Boughezal:2010mc,Daleo:2006xa,Gehrmann:2011wi,GehrmannDeRidder:2012ja}. For the antennae involving massive partons, the three-parton tree-level antennae, which are the only ingredients needed at NLO, have been integrated in \cite{Abelof:2011jv,GehrmannDeRidder:2009fz}. At NNLO, so far only one antenna function with massive particles has been integrated \cite{Bernreuther:2011jt}: the final-final four-parton antenna involving a massive and a massless quark-antiquark pair and derived from the physical process $ \gamma^{*} \to Q \bar{Q} q \bar{q}$. It is the purpose of this paper to evaluate the integrated forms of the new four-parton antenna functions employed in the double real subtraction terms for $t\bar{t}$ production due to the partonic processes $q\bar{q}\to t\bar{t}q'\bar{q}'$ and $gg\to t\bar{t}q\bar{q}$ \cite{Abelof:2011ap,Abelof:2012rv}.
These partonic processes constitute the $N_F$ double real contributions to the $t\bar{t}$ hadronic cross section evaluated at NNLO. The three specific antennae have a massless initial-state parton and a massive final-state fermion as their hard radiators, and they are therefore initial-final antennae. The paper is organised as follows. In section \ref{sec.antsub}, we briefly recall the infrared structure of double real radiation corrections to hadronic jet observables involving massive final state fermions. We specify in particular how the initial-final four-parton massive antennae enter in the construction of the subtraction terms and present the phase space factorisation that enables the integration of these antennae. In section \ref{sec.antennae} we recall the precise definitions of these specific three initial-final four-parton antennae whose integration constitute the core result of this paper. Section \ref{sec.integration} describes how the integration over the massive initial-final antenna phase space of these four-parton antenna functions is performed. As the expressions of the integrated antennae are too lengthy to be presented here, we shall give the corresponding pole parts in section \ref{sec.integration}, and give the full result in a {\tt Mathematica} file attached separately to this paper. Finally, section 5 contains our conclusions. An appendix containing the unintegrated forms of the initial-final four-parton antennae is included too. 
\section{Initial-final antenna subtraction with massive fermions at NNLO}\label{sec.antsub} At the partonic level, the double real emission contributions to an $m$-jet cross section involving a pair of massive fermions $(Q\bar{Q})$ read \begin{eqnarray} \lefteqn{{\rm d}\hat\sigma^{RR}_{NNLO}(p_1,p_2)= {\cal N}_{NNLO}^{RR}\, \sum_{{m}}{\rm d}\Phi_{m+2}(p_{Q},p_{\bar{Q}},p_{5},\ldots,\,p_{m+4}; p_1,p_2) }\nonumber \\ && \times \frac{1}{S_{{m}}}\, |{\cal M}_{m+4}(p_{Q},p_{\bar{Q}},p_{5},\ldots,p_{m+4};p_1,p_2)|^{2}\; J_{m}^{(m+2)}(p_{Q},p_{\bar{Q}},p_{5},\ldots,p_{m+4})\hspace{3mm} \\ &\equiv& {\cal N}_{NNLO}^{RR}\, \sum_{{m}}{\rm d}\Phi_{m+2}(p_{3}, \ldots, p_{m+4}; p_1,p_2) \nonumber \\ && \times \frac{1}{S_{{m}}}\, |{\cal M}_{m+4}(p_{3},\ldots, p_{m+4};p_1,p_2)|^{2}\; J_{m}^{(m+2)}(p_{3},\ldots,p_{m+4}),\hspace{3mm}\label{eq.real} \end{eqnarray} where the last line is obtained by relabelling all final state partons. In eq.(\ref{eq.real}), $S_{m}$ is a symmetry factor for identical massless partons in the final state, $|{\cal M}_{m+4}|^2$ denotes a colour-ordered tree-level matrix element squared with $m+2$ final-state partons (two of which are massive) and two initial-state partons, and $\sum_{m}$ denotes the sum over all the possible colour orderings. The next-to-next-to-leading order normalisation factor ${\cal N}^{RR}_{NNLO}$ includes all QCD-independent factors as well as the dependence on the renormalised QCD coupling constant $\alpha_s$. ${\rm d}\Phi_{m+2}$ denotes the phase space for an $(m+2)$-parton final state containing $m$ massless and two massive partons with total four-momentum $p_1^{\mu}+p_2^{\mu}$. Finally, the jet function $J_{m}^{(m+2)}$ ensures that, out of the $m$ massless partons and the $Q\bar{Q}$ pair, an observable with a pair of heavy quark jets in addition to $(m-2)$ jets is built.
In principle, only the leading colour pieces of the double real corrections are accounted for in eq.(\ref{eq.real}), since the subleading colour contributions involve in general interferences between sub-amplitudes with different colour orderings. To keep the notation simpler, however, we denote these interference terms also as $|{\cal M}_{m+4}|^2$. The NNLO contributions given in eq. (\ref{eq.real}) contain infrared singularities which arise when one or two final state partons are unresolved (soft or collinear). The phase space integration can only be carried out numerically after those singularities have been extracted using subtraction terms that approach the real radiation matrix elements in all their unresolved limits. Depending on the colour connection between the unresolved partons in each sub-amplitude squared in eq.(\ref{eq.real}), the antenna subtraction method distinguishes between the following configurations~\cite{GehrmannDeRidder:2005cm,Glover:2010im} \begin{itemize} \item[(a)] One unresolved parton but the experimental observable selects only $m$ jets. \item[(b)] Two colour-connected unresolved partons (colour-connected). \item[(c)] Two unresolved partons that are not colour-connected but share a common radiator (almost colour-unconnected). \item[(d)] Two unresolved partons that are well separated from each other in the colour chain (colour-unconnected). \item[(e)] Compensation terms for the over-subtraction of large angle soft emission. \end{itemize} For each of the configurations listed above, the antenna subtraction terms have a distinct characteristic structure in terms of required antenna functions, which is valid for final-final, initial-final and initial-initial configurations.
These characteristic structures, first derived in the massless case in \cite{Daleo:2009yj,GehrmannDeRidder:2005cm,Glover:2010im}, are unaltered by the presence of non-vanishing parton masses \cite{Abelof:2011ap}, although some of the antenna functions needed in this latter case are massive. With the exception of subtraction terms of type $(b)$ and $(e)$, all the remaining configurations are constructed with products of the NLO three-parton tree-level antenna functions. Configuration $(e)$ only arises in partonic processes involving at least three gluons, and its treatment is left to be discussed elsewhere. We shall here focus only on configuration $(b)$ in the presence of massive final-state fermions. In particular, we shall concentrate on those cases where the two unresolved partons are colour-connected to one of the massive final-state particles and to one of the (massless) incoming partons. The form of this specific type of initial-final subtraction term with massive antennae has been derived in \cite{Abelof:2011ap,Abelof:2012rv} in the context of the evaluation of the double real radiation corrections to top quark pair production at the LHC for different partonic processes. We shall now recall the form of these subtraction terms. When two unresolved partons $j$ and $k$ are adjacent and colour-connected to two hard radiators labelled $i$ (massless) and $l$ (massive), the initial-final subtraction term ${\rm d}\sigma^{S,b(if)}_{NNLO}$ related to the double real contribution ${\rm d}\sigma^{RR}_{NNLO}$ given in eq.(\ref{eq.real}) reads \cite{Abelof:2011ap} \begin{eqnarray} {\rm d}\sigma_{NNLO}^{S,b (if)}&=& {\cal N}_{NNLO}^{RR}\,\sum_{m}{\rm d}\Phi_{m+2}(p_{3},\ldots,p_{m+4}; p_i,p_2)\frac{1}{S_{{m+2}}} \nonumber \\ &\times& \,\Bigg [ \sum_{jk}\;\left( X^0_{i,jkl}- X^0_{i,jk} X^0_{I,Kl} - X^0_{jkl} X^0_{i,JL} \right)\nonumber \\ &\times&|{\cal M}_{m+2}(p_{3},\ldots,{p}_{L},\ldots,p_{m+4};x_i p_i,p_2)|^2\,J_{m}^{(m)}(p_{3},\ldots,{p}_{L},\ldots,p_{m+4})\;\Bigg]\;.
\label{eq.sub2b} \end{eqnarray} This subtraction term involves: the phase space for the production of $(m+2)$ partons, ${\rm d}\Phi_{m+2}$, with two of them being massive, the colour-ordered reduced $(m+2)$-parton amplitude squared $|{\cal M}_{m+2}|^2$ (with two partons fewer than the original matrix element squared), the jet function $J^{(m)}_{m}$, the initial-final four-parton antennae $X_{i,jkl}$ and products of three-parton antennae in final-final and initial-final configurations. The initial-final antenna functions are defined by crossing one massless parton in the corresponding final-final antennae. Since the three and four-parton final-final antennae are denoted by $X_{ijk}$ and $X_{ijkl}$ respectively, the corresponding three and four-parton initial-final antennae are denoted by $X_{i,jk}^0$ and $X_{i,jkl}^0$. By construction, the subtraction term in eq.(\ref{eq.sub2b}) contains all colour-connected double unresolved limits of the $(m+4)$-parton colour-ordered matrix elements squared. Those double unresolved limits are captured by the four-parton antennae $X_{i,jkl}$, which are one of the new ingredients appearing in the NNLO double real subtraction terms. The hard radiators are the initial-state parton $i$ and the (possibly massive) final-state particle $l$, and the unresolved particles are $j$ and $k$. These four-parton antennae, however, are generally also singular in single unresolved limits of $j$ or $k$, where they do not reproduce any physical singularity of the matrix element. In order to ensure that these subtraction terms are only active in the double unresolved regions that they are aimed at, we remove the unphysical single unresolved limits of these four-particle tree-level antennae using appropriate products of two tree-level three-particle antennae, as was done in the massless case in \cite{Daleo:2009yj,Glover:2010im}.
In eq.(\ref{eq.sub2b}), the four-parton antenna functions $X_{i,jkl}$ depend on the original momenta $p_i$, $p_j$, $p_k$ and $p_l$ while the reduced $(m+2)$-parton matrix element depends only on the redefined on-shell final-state momenta $p_{3},\ldots,{p}_{L},\ldots,p_{m+4}$ and on the rescaled initial-state momentum $x_{i} p_{i}$ and on $p_2$. Thus, in order to obtain the integrated form of our subtraction terms, we factorise the full $(m+2)$-particle phase space (with parton $i$ in the initial state) in the following way \cite{Abelof:2011ap}: \begin{eqnarray} {\rm d}\Phi_{m+2}(p_3,\dots,p_{m+4};p_i,p_2)&=&{\rm d}\Phi_{m}(p_3,\dots,p_{L},\dots,p_{m+4};x_i p_i,p_2)\nonumber\\ &\times&\frac{Q^2+m_{jkl}^2}{2\pi}{\rm d}\Phi_{3}(p_j,p_k,p_l;p_i,p_2)\frac{{\rm d} x_i}{x_i}\,, \label{eq.psfact} \end{eqnarray} with $m_{jkl}^2=m_j^2+m_k^2+m_l^2$ and \begin{equation} x_i=\frac{Q^2+m_{jkl}^2}{2 p_i\cdot q}. \end{equation} Using the phase space factorisation in eq.(\ref{eq.psfact}), the integrated form of the parts of the subtraction terms that involve four-parton antennae is given by \begin{equation}\label{eq.intsterm} |{\cal M}_{m+2}|^2\, J_{m}^{(m)}\; {\rm d}\Phi_{m} \int \frac{Q^2+m_{jkl}^2}{2\pi} {\rm d}\Phi_{3}(p_j,p_k,p_l;p_i,q)\; X^0_{i,jkl}\frac{{\rm d}x_i}{x_i}.\\ \end{equation} The integrated form of the four-parton initial-final antennae, denoted by ${\cal X}^0_{i,jkl}$, is thus obtained by integrating the four-parton antennae $X^0_{i,jkl}$ over the $2 \to 3$ particle phase space ${\rm d}\Phi_{3}(p_j,p_k,p_l;p_i,q)$ (with parton $i$ in the initial state), analytically in $d=4-2 \epsilon$ dimensions. More precisely, the integrated four-parton initial-final antenna is defined as: \begin{equation}\label{eq.aint4} {\cal X}^0_{i,jkl}(x_i)=\frac{1}{\left[ C(\epsilon)\right]^2} \frac{Q^2+m_{jkl}^2}{2\pi} \int {\rm d}\Phi_3(p_j,p_k,p_l;p_i,q) X^0_{i,jkl}\,, \end{equation} where \begin{equation}\label{eq.ceps} C(\epsilon)=(4\pi)^{\epsilon}\frac{e^{-\epsilon \gamma}}{8\pi^2}.
\end{equation} The kinematics of the reduced matrix element appearing in eq.(\ref{eq.sub2b}) depends on the masses of the mapped momenta and on the momentum fraction carried by the remapped initial state parton, $x_i$. As will be shown in section \ref{sec.integration}, the integrated antennae also depend on $x_i$, in addition to their natural dependence on the mass of the hard radiator $l$. \section{Antenna functions for double real radiation}\label{sec.antennae} In this section we define the three initial-final four-parton massive antenna functions whose integrated form will be given in section \ref{sec.integration}. These antennae were first derived in \cite{Abelof:2011ap,Abelof:2012rv}, and their explicit expressions will be given in appendix \ref{sec.unintegratedantennae} for completeness. They are present in the subtraction terms needed for the computation of the partonic processes $q\bar{q} \to Q \bar{Q}q' \bar{q}'$ and $gg \to Q\bar{Q} q \bar{q}$, which are part of the double real radiation contributions to heavy quark pair production in hadronic collisions. In general, an antenna function is characterised by its parton content and its two radiators. These are the hard partons onto which the antenna collapses in the unresolved limits. Accordingly, antenna functions are grouped into quark-antiquark, quark-gluon and gluon-gluon antennae and they are all derived from physical matrix elements related, in the final-final case, to the decay of a colourless particle into partons \cite{GehrmannDeRidder:2005aw,GehrmannDeRidder:2005hi}. As stated before, while at NLO only tree-level three-parton antennae are needed to capture the single unresolved behaviour of the real radiation matrix elements, at NNLO four-parton tree-level antennae and one-loop three-parton antennae are also required.
The former are needed to capture the double unresolved behaviour of double real matrix elements while the latter are used to capture the single unresolved behaviour of one-loop matrix elements. Restricting ourselves to the integration of the subtraction terms needed to evaluate double real contributions, we shall not discuss the one-loop antennae in the context of this paper. In contrast with the massless antenna functions, massive antennae have explicit mass terms and can be of two different types: flavour-conserving and flavour-violating. The latter are derived from partonic processes with a flavour-violating vertex connecting radiators of different flavours, one of them being massless and the other massive. Massive three-parton flavour-violating antennae have been derived in~\cite{Abelof:2011jv,Abelof:2012rv}, and one specific quark-antiquark four-parton antenna of this kind has been derived in \cite{Abelof:2011ap}. Since its integrated form will be derived in section \ref{sec.integration}, we shall here recall how it is defined. When considering the partonic process $q\bar{q} \to Q\bar{Q}q' \bar{q}'$ presented in \cite{Abelof:2011jv}, to account for the singularities that arise when the massless final-state quark-antiquark pair is unresolved between a massive final state fermion and a massless incoming one, our subtraction terms need a massive initial-final flavour-violating quark-antiquark B-type antenna denoted by $B_4^0(\Q{1},\qb{4},\q{3},\qpi{2})$. In this antenna, the hard radiators are a massive final state quark ($\Q{1}$), and a massless initial state quark ($\qpi{2}$)~\footnote{In the explicit expressions of the antenna functions, we shall label initial state partons with a hat, massless fermions as $q$ and massive fermions as Q.}.
This antenna is derived from the matrix element of the process $V^* q' \to Qq\bar{q}$, with a flavour-violating vertex joining the colourless virtual particle $V^*$, an initial state massless quark $q'$ and a massive final state quark $Q$. The explicit expression of this antenna together with its infrared limits has been given in \cite{Abelof:2011ap}. It is given in appendix \ref{sec.unintegratedantennae} for completeness. The other four-parton antennae whose integrated form is derived in section \ref{sec.integration} are two kinds of quark-gluon antennae. Those are employed in our subtraction terms for the process $gg \to Q\bar{Q} q \bar{q}$ \cite{Abelof:2012rv}, in order to reproduce the double unresolved limits of the double real radiation matrix element in which the final state $q\bar{q}$ pair becomes unresolved between the massive (anti) quark and an initial state gluon. These two quark-gluon antennae are obtained in their final-final forms from the process $\tilde{\chi}\to \tilde{g}gq\bar{q}$, with the massive gluino $\tilde{g}$ playing the role of the massive (anti) quark of mass $m_Q$. The initial-final antennae required here are then obtained by crossing the gluon to the initial state in these final-final ones. The full amplitude for the process $\tilde{\chi}\to \tilde{g}gq\bar{q}$ contains leading and subleading colour pieces~\cite{GehrmannDeRidder:2005aw}. Squaring the leading colour piece, in which the $q\bar{q}$ pair is emitted between the gluino and the gluon in the colour chain, yields one kind of quark-gluon antenna, namely the $E_4^0$ antenna, while squaring the subleading colour piece, in which the gluon is emitted between the quark and the antiquark, yields the $\widetilde{E}_4^0$ antenna.
The antenna $E_4^0(\Q{1},\q{3},\qb{4},\gli{2})$ accounts for the infrared limits associated with the emission of an unresolved $q\bar{q}$ pair between a massive (anti) quark and an initial state gluon, while $\widetilde{E}_4^0(\Q{1},\q{3},\qb{4},\gli{2})$ is used to account for the triple collinear limits that involve a massless $q\bar{q}$ pair and an initial state gluon in those sub-leading colour amplitudes in which the gluon is placed between the quark and the antiquark in the colour chain. The explicit expressions of these two quark-gluon initial-final antenna functions denoted by $E_4^0(\Q{1},\q{3},\qb{4},\gli{2})$ and $\widetilde{E}_4^0(\Q{1},\q{3},\qb{4},\gli{2})$ have been derived together with their infrared limits in \cite{Abelof:2012rv} and are recalled in appendix \ref{sec.unintegratedantennae} for completeness. \section{Integration of initial-final antenna functions at NNLO}\label{sec.integration} In this section we describe our calculation of the integrated initial-final four-parton antennae denoted by ${\cal B}^0_{q',Qq\bar{q}}$, ${\cal E}^0_{g,Qq\bar{q}}$ and ${\cal\tilde{E}}^0_{g,Qq\bar{q}}$, whose unintegrated forms, defined in section \ref{sec.antennae}, are denoted by $B_4^0(\Q{1},\qb{4},\q{3},\qpi{2})$, $E_4^0(\Q{1},\q{3},\qb{4},\gli{2})$, and $\widetilde{E}_4^0(\Q{1},\q{3},\qb{4},\gli{2})$ respectively. \subsection{General structure of the integrated antennae} The initial-final antennae considered in this paper have a DIS-like $2\rightarrow 3$ kinematics \begin{equation} q+p_2\rightarrow p_1+p_3+p_4 \end{equation} with $p_1^2=m_Q^2$, $p_2^2=p_3^2=p_4^2=0$ and $q^2<0$. It turns out to be convenient to parametrise this kinematics in terms of the following variables \begin{equation} Q^2=-q^2\, ,\hspace{0.5in}y=1-\frac{Q^2+m_Q^2}{2p_2\cdot q}\, ,\hspace{0.5in} z=\frac{m_Q^2}{E_{cm}^2} \end{equation} where $E_{cm}^2=(p_1+p_3+p_4)^2=(q+p_2)^2$.
The more familiar variables $E_{cm}$ and $m_Q$ can be related to $Q^2$, $y$ and $z$ through \begin{equation}\label{eq.relations} E_{cm}^2=\frac{y}{1-y-z}Q^2\, ,\hspace{0.8in} m_Q^2=\frac{yz}{1-y-z}Q^2. \end{equation} The fact that $E_{cm}\geq m_Q$ implies $0\leq z\leq 1$ and $y \geq 0$; recalling that $Q^2>0$, it follows from eq.(\ref{eq.relations}) that $y\leq 1-z$. Thus, in terms of our variables $y$ and $z$ the physical region is given by \begin{equation} 0\leq z\leq 1\, , \hspace{0.8in} 0\leq y\leq 1-z. \end{equation} Following eq.(\ref{eq.intsterm}) we integrate the initial-final four-parton antennae over the three particle phase space and express our results in the variables $Q$, $y$ and $z$: ${\cal X}^0_{i,jkl}(Q^2,y,z)$. The integration is carried out following the standard technique of reduction to master integrals using integration-by-parts (IBP) identities~\cite{Chetyrkin:1981qh,Tkachov:1981wb}. We start by expressing our phase space integrals as cuts of two-loop four-point functions with two off-shell legs in forward scattering kinematics~\cite{Anastasiou:2002yz}, and then reduce these two-loop integrals using the Laporta algorithm~\cite{Laporta:2001dd} as implemented in {\tt FIRE}~\cite{Smirnov:2008iw}. In order to write the phase-space integrals of our four-parton antenna functions as two-loop integrals with cuts, we consider the following propagators: \begin{eqnarray} D_1&=&p_1^2-m_Q^2\nonumber\\ D_2&=&p_3^2\nonumber\\ D_3&=&p_4^2\nonumber\\ D_4&=&(p_3+p_4-p_2)^2=(q-p_1)^2\nonumber\\ D_5&=&(p_3+p_4)^2\nonumber\\ D_6&=&(p_3-p_2)^2\nonumber\\ D_7&=&(p_4-p_2)^2\nonumber\\ D_8&=&(p_1+p_3)^2-m_Q^2\nonumber\\ D_9&=&(p_3+p_4-q)^2-m_Q^2=(p_1-p_2)^2-m_Q^2, \end{eqnarray} where $D_1$, $D_2$ and $D_3$ are cut in the phase space integration.
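The kinematics introduced above can be sanity-checked numerically. The following sketch (our own check, using a hypothetical momentum configuration in GeV with $m_Q=100$, not a phase-space point of any specific process) verifies the physical region, the relations in eq.(\ref{eq.relations}) and the two equivalent forms of the propagators $D_4$ and $D_9$:

```python
import numpy as np

def mdot(a, b):
    """Minkowski product with metric (+,-,-,-)."""
    return a[0]*b[0] - np.dot(a[1:], b[1:])

# Hypothetical configuration: p2 massless incoming, p1 massive (m_Q = 100),
# p3 and p4 massless outgoing; q is fixed by momentum conservation
# q + p2 = p1 + p3 + p4 and is therefore off shell.
p2 = np.array([500.0, 0.0, 0.0, -500.0])
p1 = np.array([np.sqrt(11400.0), 20.0, 10.0, -30.0])   # p1^2 = 100^2
p3 = np.array([13.0, 3.0, 4.0, 12.0])                  # light-like
p4 = np.array([7.0, 2.0, 3.0, 6.0])                    # light-like
q = p1 + p3 + p4 - p2

mQ2 = mdot(p1, p1)
Q2 = -mdot(q, q)                          # q^2 = -Q^2 < 0 (space-like q)
Ecm2 = mdot(q + p2, q + p2)
y = 1.0 - (Q2 + mQ2)/(2.0*mdot(p2, q))
z = mQ2/Ecm2

# physical region and eq.(relations)
assert Q2 > 0 and 0 <= z <= 1 and 0 <= y <= 1 - z
assert np.isclose(y/(1 - y - z)*Q2, Ecm2)
assert np.isclose(y*z/(1 - y - z)*Q2, mQ2)

# the two equivalent forms of D4 and D9 listed above
D4a = mdot(p3 + p4 - p2, p3 + p4 - p2)
D4b = mdot(q - p1, q - p1)
D9a = mdot(p3 + p4 - q, p3 + p4 - q) - mQ2
D9b = mdot(p1 - p2, p1 - p2) - mQ2
assert np.isclose(D4a, D4b) and np.isclose(D9a, D9b)
print("kinematic checks passed")
```

The equality of the two forms of $D_4$ and $D_9$ follows from momentum conservation alone, which is why these alternative representations can be used interchangeably in the reduction.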
In the reduction procedure we impose momentum conservation $q+p_2=p_1+p_3+p_4$, we set $p_1^2=m_Q^2$, $p_2^2=p_3^2=p_4^2=0$ and $q^2=-Q^2$, and we discard all integrals in which the cut propagators $D_1$, $D_2$ or $D_3$ are not in the denominator. After carrying out the reduction, we find that the NNLO integrated four-parton antennae ${\cal B}^0_{q',Qq\bar{q}}$, ${\cal E}^0_{g,Qq\bar{q}}$ and ${\cal\tilde{E}}^0_{g,Qq\bar{q}}$ can be expressed in terms of four master integrals \begin{eqnarray} I_{[0]}&=&\int {\rm d}\Phi_3(p_1,p_3,p_4;p_2,q)\\ I_{[-8]}&=&\int {\rm d}\Phi_3(p_1,p_3,p_4;p_2,q)((p_1+p_3)^2-m_Q^2)\\ I_{[4]}&=&\int {\rm d}\Phi_3(p_1,p_3,p_4;p_2,q)\frac{1}{(q-p_1)^2}\\ I_{[4,9]}&=&\int {\rm d}\Phi_3(p_1,p_3,p_4;p_2,q)\frac{1}{(q-p_1)^2((p_1-p_2)^2-m_Q^2)}. \end{eqnarray} All these master integrals contain multiplicative factors of the form $y^{-2\epsilon}$ which regulate soft endpoint singularities in initial state convolution integrals. These factors ought to be left unexpanded in the master integrals themselves. All other terms in the master integrals can be expanded in $\epsilon$. Explicitly, the master integrals, which we denote collectively by $I_{\alpha}(y,z,\epsilon)$, take the form: \begin{equation} I_{\alpha}(y,z,\epsilon)=y^{m-2\epsilon} R_{\alpha}(y,z,\epsilon). \end{equation} The integer $m$ is specific to each master integral and the function $R_{\alpha}(y,z,\epsilon)$ is regular as $y \to 0$ and can be calculated as a Laurent series in $\epsilon$.
\begin{figure} \begin{center} \subfigure[ {$I_{[0]}$, $I_{[-8]}$} ]{ \resizebox{0.25\linewidth}{!}{ \begin{picture}(519,352) (157,-111) \SetWidth{7.0} \SetColor{Black} \Arc[clock](416,64)(128,-180,-360) \SetWidth{1.0} \Arc(416,64)(128,-180,0) \Line(288,64)(544,64) \Line(288,64)(160,-64) \Line[double,sep=6](288,64)(160,192) \Line[double,sep=6.5](544,64)(672,192) \Line(544,64)(672,-64) \Line[dash,dashsize=10](416,240)(416,-112) \end{picture} } } % \subfigure[ {$I_{[4]}$} ]{ \resizebox{0.25\linewidth}{!}{ \begin{picture}(517,352) (159,-111) \SetWidth{8.0} \SetColor{Black} \Arc[clock](424,72)(120.266,93.814,-3.814) \SetWidth{1.0} \Arc(416,64)(128,-180,0) \Line(288,64)(544,64) \Line(288,64)(160,64) \Line[double,sep=6](288,192)(160,192) \Line[double,sep=6.5](544,64)(672,192) \Line(544,64)(672,-64) \Line[dash,dashsize=10](416,240)(416,-112) \Line(288,192)(288,64) \SetWidth{7.0} \Line(288,192)(416,192) \end{picture} } } % \subfigure[ {$I_{[4,9]}$} ]{ \resizebox{0.25\linewidth}{!}{ \begin{picture}(514,352) (159,-111) \SetWidth{1.0} \SetColor{Black} \Arc(416,64)(128,-180,0) \Line(288,64)(544,64) \Line(288,64)(160,64) \Line[double,sep=6](288,192)(160,192) \Line[dash,dashsize=10](416,240)(416,-112) \Line(288,192)(288,64) \SetWidth{7.0} \Line(288,192)(544,192) \Line(544,192)(544,64) \SetWidth{1.0} \Line(672,192)(544,192) \Line[double,sep=6](672,64)(544,64) \end{picture} } } \caption{The topologies and mass distributions of the four master integrals encountered in the calculation of the integrated NNLO four-parton antennae ${\cal B}^0_{q',Qq\bar{q}}$, ${\cal E}^0_{g,Qq\bar{q}}$ and ${\cal\tilde{E}}^0_{g,Qq\bar{q}}$. Bold (thin) lines refer to massive (massless) scalar propagators. The double line in the external states represents the off-shell momentum $q$, with $q^2 = -Q^2$. 
The cut propagators are the ones intersected by the dashed line.} \label{topos} \end{center} \end{figure} The integrated antennae collectively denoted by ${\cal X}(y,z,\epsilon)$ are linear combinations of these master integrals with coefficients containing poles in $\epsilon$ as well as in $y$. After the masters have been inserted into the integrated antennae, these take the form \begin{equation} {\cal X}(y,z,\epsilon) =y^{-1-2\epsilon} {\cal R}_{{\cal X}}(y,z,\epsilon) \end{equation} where ${\cal R_{\cal X}}(y,z,\epsilon)$ is a regular function as $y \to 0$. The $\epsilon$ expansion of the singular factor $ y^{-1-2\epsilon}$ is done in the form of distributions: \begin{equation} y^{-1-n\epsilon}=-\frac{\delta(y)}{n\epsilon}+\sum_{m=0}^{\infty}\frac{(-n\epsilon)^m}{m!}{\cal D}_m(y) \end{equation} with \begin{equation} {\cal D}_m(y)=\left(\frac{\ln ^m(y)}{y} \right)_+\, . \end{equation} It is worth noting that in the functions $R_{\alpha}(y,z,\epsilon)$ and ${\cal R}_{\cal X}(y,z,\epsilon)$, which are regular as $y \to 0$, the limit $z\to 0$, corresponding to the massless limit, cannot be safely taken. In those functions, terms proportional to $\log(z)=\log(m_{Q}^2/E_{cm}^2)$ are present. These terms are expected and correspond to the quasi-collinear kinematical configuration \cite{Abelof:2011jv,Catani:2002hc,GehrmannDeRidder:2009fz}. To evaluate the integrated antennae, we distinguish two regions depending on the value of $y$: a hard region where $ y\neq 0$ and a soft region where $y=0$. The highest order in $\epsilon$ needed in the expansion of each master integral is determined by the $\epsilon$ and $y$-dependent coefficient that multiplies the integral in the integrated antenna. In the soft region, since the expansion in distributions generates an additional $1/\epsilon$ factor, the function $R_{\alpha}$ is required one order higher in $\epsilon$ than in the hard region.
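The distributional expansion of $y^{-1-n\epsilon}$ can be checked order by order on a smooth test function. A minimal {\tt sympy} sketch (our own illustration, with $n=2$ and the test function $f(y)=1+y$; the exact integral is defined by analytic continuation in $\epsilon$):

```python
import sympy as sp

y, eps = sp.symbols('y eps')
n = 2                      # expansion of y**(-1-n*eps); here n = 2
a = n*eps
f = 1 + y                  # smooth test function multiplying the distribution

# exact integral over [0,1], by analytic continuation in eps:
#   int y**(-1-a) dy = -1/a   and   int y**(-a) dy = 1/(1-a)
exact = -1/a + 1/(1 - a)

# delta(y) term plus the plus-distributions D_m(y), truncated at order M:
# each D_m contributes int_0^1 log(y)**m * (f(y)-f(0))/y dy
M = 4
expansion = -f.subs(y, 0)/a + sum(
    (-a)**m/sp.factorial(m)
    * sp.integrate(sp.log(y)**m*(f - f.subs(y, 0))/y, (y, 0, 1))
    for m in range(M + 1))

# the two sides agree up to the truncation order in eps
assert sp.series(sp.expand(exact - expansion), eps, 0, M).removeO() == 0
print("distributional expansion verified through order eps**%d" % (M - 1))
```

The $\delta(y)$ term reproduces the endpoint pole $-f(0)/(n\epsilon)$, while the plus-distributions carry the finite remainder; truncating the sum at order $M$ leaves a mismatch of order $\epsilon^{M+1}$ only.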
\subsection{Master integrals} In this subsection we present results for the four master integrals required for the evaluation of the integrated antennae defined above. While $I_{[0]}$ and $I_{[-8]}$ have already been calculated in~\cite{GehrmannDeRidder:2009fz}, $I_{[4]}$ and $I_{[4,9]}$ are presented here for the first time. \subsubsection{The master integrals $I_{[0]}$ and $I_{[-8]}$} The master integrals $I_{[0]}$ and $I_{[-8]}$ do not contain any propagator involving the incoming momenta $q$ and $p_2$ individually. Therefore, they are effectively $1\rightarrow 3$ phase space integrals with incoming momentum $q+p_2$ and a massive particle in the final state. These integrals were computed in~\cite{GehrmannDeRidder:2009fz} in the context of the evaluation of final-final three-parton antenna functions with massive final state fermions. They are given by the following all-order expressions \begin{eqnarray} &&\hspace{-0.2in}I_{[0]}=N_{\epsilon}[C(\epsilon)]^2\left(Q^2+m_Q^2\right)^{1-2\epsilon}\frac{\pi}{2}y^{1-2\epsilon}(1-y)^{-1+2\epsilon}(1-z)^{2-2\epsilon}\nonumber\\ &&\hspace{2.5in}\times\gaussf{1-\epsilon}{2-2\epsilon}{4-4\epsilon}{1-z}\label{eq.I0}\\ &&\hspace{-0.2in}I_{[-8]}=N_{\epsilon}[C(\epsilon)]^2\left(Q^2+m_Q^2\right)^{2-2\epsilon}\frac{\pi}{4}y^{2-2\epsilon}(1-y)^{-2+2\epsilon}(1-z)^{2-2\epsilon}\nonumber\\ &&\hspace{2.5in}\times\gaussf{1-\epsilon}{2-2\epsilon}{5-4\epsilon}{1-z}, \label{eq.I-8} \end{eqnarray} with \begin{equation} N_{\epsilon}=e^{2\epsilon\gamma_E}\frac{\Gamma(1-\epsilon)^2}{\Gamma(4-4\epsilon)}. \end{equation} The hypergeometric functions in eqs.(\ref{eq.I0}) and (\ref{eq.I-8}) can be expanded with the {\tt Mathematica} package {\tt HypExp}~\cite{Huber:2005yg}, yielding ordinary harmonic polylogarithms (HPLs) \cite{Remiddi:1999ew} in the variable $z$.
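As a simple illustration of this statement, the hypergeometric function $_2F_1(1-\epsilon,2-2\epsilon;4-4\epsilon;1-z)$ appearing in eq.(\ref{eq.I0}) reduces at $\epsilon=0$ to a closed form involving $H(0;z)=\ln z$. The closed form below is our own resummation of the Gauss series, given only as an illustration; it can be checked numerically with {\tt mpmath}:

```python
import mpmath as mp

mp.mp.dps = 30     # extra precision to absorb the z -> 1 cancellations

def closed_form(z):
    # our own resummation of 2F1(1,2;4;1-z), i.e. the eps^0 term of
    # 2F1(1-eps,2-2eps;4-4eps;1-z); note that H(0;z) = log(z)
    return 6*(z*mp.log(z) + (1 - z) - (1 - z)**2/2)/(1 - z)**3

for z in (mp.mpf('0.1'), mp.mpf('0.4'), mp.mpf('0.9')):
    assert mp.almosteq(closed_form(z), mp.hyp2f1(1, 2, 4, 1 - z),
                       rel_eps=mp.mpf('1e-20'))
print("eps^0 closed form agrees with mpmath.hyp2f1")
```

Higher orders of the $\epsilon$ expansion produce HPLs of increasing weight in $z$, which is what {\tt HypExp} generates automatically.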
In the computation of the integrated antennae of this paper, we find that these two integrals are needed in the hard region up to ${\cal O}(\epsilon^2)$ while in the soft region, they are needed up to ${\cal O}(\epsilon^3)$. At this order (${\cal O}(\epsilon^3)$), the harmonic polylogarithms (HPLs) in the variable $z$ are needed up to weight 3. \subsubsection{The master integrals $I_{[4]}$ and $I_{[4,9]}$} To evaluate the two remaining master integrals we use the method of differential equations~\cite{Caffo:1998du,Caffo:1998yd,Gehrmann:1999as,Kotikov:1990kg,Kotikov:1991hm,Kotikov:1991pm,Remiddi:1997ny}. In order to obtain these we proceed as follows: we start by noting that \begin{eqnarray} q_{\mu}\frac{\partial\,\,}{\partial q_{\mu}}&=&2Q^2\frac{\partial\,\,}{\partial Q^2}-\frac{1 - y - z - y z}{1-z}\frac{\partial\,}{\partial y}+\frac{z(1 - 2 y - z)}{y}\frac{\partial\,}{\partial z}\nonumber\\ p_{2\mu}\frac{\partial\,\,}{\partial p_{2\mu}}&=&(1-y)\frac{\partial\,}{\partial y}-\frac{z(1-z)}{y}\frac{\partial\,}{\partial z}\nonumber\\ m_Q^2\frac{\partial\,\,}{\partial m_Q^2}&=&-\frac{yz}{1-z}\frac{\partial\,\,}{\partial y}+z\frac{\partial\,}{\partial z}, \end{eqnarray} and invert this system to get \begin{eqnarray} Q^2\frac{\partial\,\,}{\partial Q^2}&=&\frac{1}{2}\,q_{\mu}\frac{\partial\,}{\partial q_{\mu}}+\frac{1}{2}\,p_{2\mu}\frac{\partial\,}{\partial p_{2\mu}}+\frac{Q^2 yz}{1-y-z}\,\frac{\partial\,}{\partial m_Q^2}\nonumber\\ \frac{\partial\,}{\partial y}&=&\frac{1}{1-y-z}\,p_{2\mu}\frac{\partial\,}{\partial p_{2\mu}}+\frac{Q^2 z(1-z)}{(1-y-z)^2}\,\frac{\partial\,}{\partial m_Q^2}\nonumber\\ \frac{\partial\,}{\partial z}&=&\frac{y}{(1-z)(1-y-z)}\,p_{2\mu}\frac{\partial\,}{\partial p_{2\mu}}+\frac{Q^2 y(1-y)}{(1-y-z)^2}\,\frac{\partial\,}{\partial m_Q^2}.\label{eq.diffeq} \end{eqnarray} We then apply the differential operators in eq.(\ref{eq.diffeq}) to the integrals $I_{[4]}$ and $I_{[4,9]}$ and reduce the resulting integrals on the right hand side to master
integrals. We thus obtain a set of first order partial differential equations for the master integrals which can be readily solved with standard techniques. The solution is found order by order in a Laurent expansion in $\epsilon$. This expansion involves harmonic polylogarithmic functions of one variable (HPL's), which can be functions of either $y$ or $z$, or generalised harmonic polylogarithmic functions (GHPL's) \cite{Gehrmann:2000xj,Gehrmann:2001jv} of variable $z$ and arguments which are taken from the list $(0,1,1-y)$. Products of these two types of functions are also found. The master integral $I_{[4]}$ is finite. For the integrated antennae required in this paper, it is needed in the hard region ($y \neq 0$) up to $\order{\epsilon^2}$ while in the soft region it is needed up to $\order{\epsilon^3}$. At this order ($\order{\epsilon^3}$) it will involve polylogarithmic functions, whose overall weight can go up to 4, as the $\order{\epsilon^0}$ term involves polylogarithmic functions of weight 1. This master integral can, however, be calculated in both regions up to order $\epsilon^3$.
In the hard region $(y\neq 0)$ it is given by \begin{eqnarray} I_{[4]}&=&[C(\epsilon)]^2\left(Q^2+m_Q^2\right)^{-2\epsilon}\frac{\pi}{2}y^{-2\epsilon}(1-y-z)^{-1}\bigg\{-(1-y) (1-z) H(1;y)-y z H(0;z)\nonumber\\ &&+\epsilon\bigg[(1-y)(1-z) \bigg(-2 H(1;y) G(1-y;z)-2 G(1-y,0;z)-2 H(1;y) H(1;z)\nonumber\\ && -2 H(0,1;y)+3 H(1,1;y)\bigg)+ y z \bigg(2 H(1;y)H(0;z)-5 H(0;z)+H(0,0;z)\nonumber\\ &&-5 H(1;y)-2 H(0,1;z)+\frac{2 \pi ^2}{3}\bigg)-2 (1-y-z) H(1,0;z)\bigg]\nonumber\\ &&+\epsilon^2\bigg[(1-y)(1-z) \bigg(-10 H(1;y) G(1-y;z)-4 H(0,1;y) G(1-y;z)\nonumber\\ &&+6 H(1,1;y) G(1-y;z)+4 H(1;y) G(1,1-y;z)+4 H(1;y) G(1-y,0;z)\nonumber\\ &&+4H(1;y) G(1-y,1;z)-4 H(1;y) G(1-y,1-y;z)+\frac{4}{3} \pi ^2 G(1-y;z)\nonumber\\ &&-10 G(1-y,0;z)+4 G(1,1-y,0;z)+2 G(1-y,0,0;z)+4G(1-y,0,1;z)\nonumber\\ &&-4 G(1-y,1-y,0;z)-10 H(1;y) H(1;z)-4 H(0,1;y) H(1;z)\nonumber\\ &&+6 H(1,1;y) H(1;z)-4 H(1;y) H(1,1;z)-19 H(1;y)-10H(0,1;y)\nonumber\\ &&+15 H(1,1;y)-4 H(0,0,1;y)+6 H(0,1,1;y)+2 H(1,0,1;y)-7 H(1,1,1;y)\bigg)\nonumber\\ &&+y z \bigg(10 H(1;y) H(0;z)-4H(1,1;y) H(0;z)-2 H(1;y) H(0,0;z)\nonumber\\ &&+4 H(1;y) H(0,1;z)-\frac{1}{2} \pi ^2 H(1;y)+\frac{1}{6} \left(\pi ^2-114\right) H(0;z)\nonumber\\ &&+5H(0,0;z)-10 H(0,1;z)-H(0,0,0;z)+2 H(0,0,1;z)-4 H(0,1,0;z)\nonumber\\ &&-4 H(0,1,1;z)-4 H(1,1,0;z)+\frac{2}{3} \left(12 \zeta_3+5 \pi^2\right)\bigg)\nonumber\\ &&+(1-y-z) \bigg(4 H(1;y) H(1,0;z)+\frac{4}{3} \pi ^2 H(1;z)-10 H(1,0;z)\nonumber\\ &&+2 H(1,0,0;z)-4 H(1,0,1;z)-\frac{5}{6}\pi ^2 z H(1;y)\bigg)\bigg]\nonumber\\ &&+\epsilon^3\bigg[(1-y)(1-z) \bigg(-\left(38+\pi ^2\right) H(1;y) G(1-y;z)\nonumber\\ &&-20 H(0,1;y) G(1-y;z)+30 H(1,1;y)G(1-y;z)-8 H(0,0,1;y) G(1-y;z)\nonumber\\ &&+12 H(0,1,1;y) G(1-y;z)+4 H(1,0,1;y) G(1-y;z)-14 H(1,1,1;y) G(1-y;z)\nonumber\\ &&+\frac{4}{3} \left(5 \pi^2+12 \zeta_3\right) G(1-y;z)\nonumber\\ &&-\frac{8}{3} \pi ^2 G(1,1-y;z)+\frac{1}{3} \left(-114+\pi ^2\right) G(1-y,0;z)\nonumber\\ &&+\frac{8}{3} \pi ^2G(1-y,1-y;z)+20 G(1,1-y,0;z)+10 G(1-y,0,0;z)\nonumber\\ 
&&+20 G(1-y,0,1;z)-20 G(1-y,1-y,0;z)-8 G(1,1,1-y,0;z)\nonumber\\ &&-4 G(1,1-y,0,0;z)-8G(1,1-y,0,1;z)+8 G(1,1-y,1-y,0;z)\nonumber\\ &&-2 G(1-y,0,0,0;z)-4 G(1-y,0,0,1;z)+8 G(1-y,0,1,0;z)\nonumber\\ &&-8 G(1-y,0,1,1;z)-8 G(1-y,1,1,0;z)+8G(1-y,1,1-y,0;z)\nonumber\\ &&+4 G(1-y,1-y,0,0;z)+8 G(1-y,1-y,0,1;z)-8 G(1-y,1-y,1-y,0;z)\nonumber\\ &&+20 G(1,1-y;z) H(1;y)+20 G(1-y,0;z) H(1;y)+20G(1-y,1;z) H(1;y)\nonumber\\ &&-20 G(1-y,1-y;z) H(1;y)-8 G(1,1,1-y;z) H(1;y)\nonumber\\ &&-8 G(1,1-y,0;z) H(1;y)-8 G(1,1-y,1;z) H(1;y)+8G(1,1-y,1-y;z) H(1;y)\nonumber\\ &&-4 G(1-y,0,0;z) H(1;y)-8 G(1-y,0,1;z) H(1;y)-8 G(1-y,1,1;z) H(1;y)\nonumber\\ &&+8 G(1-y,1,1-y;z) H(1;y)+8G(1-y,1-y,0;z) H(1;y)\nonumber\\ &&+8 G(1-y,1-y,1;z) H(1;y)-8 G(1-y,1-y,1-y;z) H(1;y)-65 H(1;y)\nonumber\\ &&-38 H(1;y) H(1;z)+8 G(1,1-y;z)H(0,1;y)+8 G(1-y,1;z) H(0,1;y)\nonumber\\ &&-8 G(1-y,1-y;z) H(0,1;y)-20 H(1;z) H(0,1;y)\nonumber\\ &&-\frac{1}{3} \left(114-5 \pi ^2\right) H(0,1;y)-12G(1,1-y;z) H(1,1;y)\nonumber\\ &&-8 G(1-y,0;z) H(1,1;y)-12 G(1-y,1;z) H(1,1;y)+12 G(1-y,1-y;z) H(1,1;y)\nonumber\\ &&+30 H(1;z) H(1,1;y)+57H(1,1;y)-20 H(1;y) H(1,1;z)-8 H(0,1;y) H(1,1;z)\nonumber\\ &&+12 H(1,1;y) H(1,1;z)-8 H(1;z) H(0,0,1;y)-20 H(0,0,1;y)\nonumber\\ &&+12 H(1;z)H(0,1,1;y)+30 H(0,1,1;y)+4 H(1;z) H(1,0,1;y)\nonumber\\ &&+10 H(1,0,1;y)-14 H(1;z) H(1,1,1;y)-35 H(1,1,1;y)-8 H(1;y) H(1,1,1;z)\nonumber\\ &&-8H(0,0,0,1;y)+12 H(0,0,1,1;y)+4 H(0,1,0,1;y)-14 H(0,1,1,1;y)\nonumber\\ &&+4 H(1,0,0,1;y)-6 H(1,0,1,1;y)-2 H(1,1,0,1;y)+15H(1,1,1,1;y)\bigg)\nonumber\\ &&+y z \bigg(H(0,0,0,0;z)+4H(0,1,1,0;z)+4 H(1,1,0,0;z)\nonumber\\ &&-\frac{1}{3} \left(-114+\pi ^2\right) H(0;z) H(1;y)-\frac{5}{2} \pi ^2 H(1;y)\nonumber\\ &&+\frac{5}{3} \pi ^2 H(1;y)H(1;z)-10 H(1;y) H(0,0;z)\nonumber\\ &&+\frac{1}{6} \left(114-\pi ^2\right) H(0,0;z)+20 H(1;y) H(0,1;z)\nonumber\\ &&-\left(38-\frac{13 \pi ^2}{3}\right)H(0,1;z)-20 H(0;z) H(1,1;y)\nonumber\\ &&+4 H(0,0;z) H(1,1;y)-8 H(0,1;z) H(1,1;y)+\frac{8}{3} \pi ^2 H(1,1;z)\nonumber\\ &&+2 H(1;y) 
H(0,0,0;z)-5 H(0,0,0;z)-4 H(1;y) H(0,0,1;z)\nonumber\\ &&+10 H(0,0,1;z)+8 H(1;y) H(0,1,0;z)-20 H(0,1,0;z)+8 H(1;y) H(0,1,1;z)\nonumber\\ &&-20 H(0,1,1;z)+8H(1;y) H(1,1,0;z)-20 H(1,1,0;z)+8 H(0;z) H(1,1,1;y)\nonumber\\ &&-2 H(0,0,0,1;z)+4 H(0,0,1,0;z)+4 H(0,0,1,1;z)+4 H(0,1,0,0;z)\nonumber\\ &&-8H(0,1,0,1;z)-8 H(0,1,1,1;z)-8 H(1,1,0,1;z)+\frac{1}{9} \left(114 \pi ^2-\pi ^4+360 \zeta_3\right)\nonumber\\ &&+\frac{1}{6} H(0;z) \left(-390+5 \pi ^2+16\zeta_3\right)\nonumber\\ &&-\frac{1}{6}\pi^2 H(1,1,y)-\frac{16}{3}\zeta_3 \bigg)\nonumber\\ &&+(1-y-z) \bigg(-8 H(1,0,1,0;z)+8 H(1,1,1,0;z)-\pi ^2 H(1;y) H(1;z)\nonumber\\ &&+20 H(1;y) H(1,0;z)-\frac{1}{3}\left(114-\pi ^2\right) H(1,0;z)\nonumber\\ &&-8 H(1,0;z) H(1,1;y)-4 H(1;y) H(1,0,0;z)+10 H(1,0,0;z)+8 H(1;y) H(1,0,1;z)\nonumber\\ &&-20 H(1,0,1;z)-2H(1,0,0,0;z)+4 H(1,0,0,1;z)-8 H(1,0,1,1;z)\nonumber\\ &&+\frac{4}{3} H(1;z) \left(5 \pi ^2+12 \zeta_3\right)-\frac{5}{2}\pi^2 H(1,1;y)\nonumber\\ &&+\left(\frac{25}{6}\pi^2+\frac{32}{3}\zeta_3 \right) H(1;y)\bigg)\bigg]\nonumber\\ &&+{\cal O}(\epsilon^4)\bigg\}.\label{eq.I4} \end{eqnarray} The expression of $I_{[4]}$ in the soft region is simply obtained by setting $y=0$ in eq.(\ref{eq.I4}). The master integral $I_{[4,9]}$ appears in the reduction of the integrated antennae into masters. It is finite, and a priori only its soft limit is needed up to order $\order{\epsilon^0}$. Up to this order, its expression in the hard region reads \begin{eqnarray} I_{[4,9]}&=&[C(\epsilon)]^2\left(Q^2+m_Q^2\right)^{-2\epsilon}\frac{\pi}{2}y^{-2\epsilon}(1-y)\bigg[H(1;y)H(0;z)\nonumber\\ &&-H(1;y) G(1-y;z)-G(1-y,0;z)+H(0,1;y)+H(1,1;y)-H(1,0;z)+{\cal O}(\epsilon)\bigg].\nonumber\\ \end{eqnarray} With the soft limit of this expression being zero, it turns out that this master integral does not contribute in the result obtained for the integrated antennae derived in this paper. 
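The operator inversion in eq.(\ref{eq.diffeq}) can be cross-checked symbolically, independently of the reduction tools. A minimal {\tt sympy} sketch (our own check), expressing $y$ and $z$ through the invariants $Q^2$, $s=2\,p_2\cdot q$ and $m_Q^2$ and letting each combination act on the coordinate functions:

```python
import sympy as sp

Q2, s, m2 = sp.symbols('Q2 s m2', positive=True)   # s = 2 p2.q
y = 1 - (Q2 + m2)/s                                # kinematic variables in
z = m2/(s - Q2)                                    # terms of the invariants

# Action of the momentum-space operators on the invariants (Q2, s, m2):
#   q.d/dq -> (2*Q2, s, 0),  p2.d/dp2 -> (0, s, 0),  d/dmQ2 -> (0, 0, 1)
def act(op, f):
    return op[0]*sp.diff(f, Q2) + op[1]*sp.diff(f, s) + op[2]*sp.diff(f, m2)

qdq, p2dp2, dm2 = (2*Q2, s, 0), (0, s, 0), (0, 0, 1)

def rhs_Q2dQ2(f):          # right-hand side of the first line of eq.(diffeq)
    return (act(qdq, f) + act(p2dp2, f))/2 + Q2*y*z/(1 - y - z)*act(dm2, f)

def rhs_dy(f):             # second line
    return act(p2dp2, f)/(1 - y - z) + Q2*z*(1 - z)/(1 - y - z)**2*act(dm2, f)

def rhs_dz(f):             # third line
    return y*act(p2dp2, f)/((1 - z)*(1 - y - z)) \
        + Q2*y*(1 - y)/(1 - y - z)**2*act(dm2, f)

# Applied to the coordinate functions (Q2, y, z), the three combinations must
# reproduce Q2*d/dQ2, d/dy and d/dz, i.e. give (Q2,0,0), (0,1,0) and (0,0,1).
results = [rhs_Q2dQ2(Q2) - Q2, rhs_Q2dQ2(y), rhs_Q2dQ2(z),
           rhs_dy(Q2), rhs_dy(y) - 1, rhs_dy(z),
           rhs_dz(Q2), rhs_dz(y), rhs_dz(z) - 1]
assert all(sp.simplify(r) == 0 for r in results)
print("operator inversion in eq.(diffeq) verified")
```

Since a first-order differential operator is fixed by its action on a complete set of coordinates, agreement on $(Q^2,y,z)$ establishes the operator identities everywhere in the physical region.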
\subsection{Integrated antennae} The three integrated antennae ${\cal B}^0_{q',Qq\bar{q}}$, ${\cal E}^0_{g,Qq\bar{q}}$ and ${\cal\tilde{E}}^0_{g,Qq\bar{q}}$ have a common multiplicative factor given by $(Q^2 +m_Q^2)^{-2\epsilon}$. The full expressions of these integrated antennae are too lengthy to be presented here. A separate {\tt Mathematica} file containing them is attached with the arXiv submission of this paper. In this section we shall present the complete pole part of these integrated initial-final four-parton antennae. The leading pole parts of ${\cal B}^0_{q',Qq\bar{q}}$ and ${\cal E}^0_{g,Qq\bar{q}}$ are obtained in the kinematical configuration where the final-state $q \bar{q}$ pair becomes soft and simultaneously collinear to an initial-state parton. These leading pole parts are proportional to $1/\epsilon^3\,\delta(y)$. Having no soft $q\bar{q}$ singularities, the integrated antenna ${\cal\tilde{E}}^0_{g,Qq\bar{q}}$ is regular in the limit $y\to 0$ and does not contain any distributions. Its leading pole piece is proportional to $1/\epsilon^2$. To make our expressions more concise we define the following variable \begin{equation} \lambda=\sqrt{\frac{y\,z}{1-y-z}} \end{equation} which is equal to $m_{Q}/Q$.
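The identity $\lambda=m_Q/Q$ follows directly from the definitions of $y$ and $z$; a short {\tt sympy} check (our own, with $s=2\,p_2\cdot q$ and $E_{cm}^2=s-Q^2$):

```python
import sympy as sp

Q2, s, m2 = sp.symbols('Q2 s m2', positive=True)   # s = 2 p2.q
y = 1 - (Q2 + m2)/s
z = m2/(s - Q2)                                    # E_cm^2 = s - Q2
lam = sp.sqrt(y*z/(1 - y - z))
# lambda**2 = y z/(1-y-z) must equal m_Q^2/Q^2
assert sp.simplify(lam**2 - m2/Q2) == 0
print("lambda = m_Q/Q confirmed")
```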
Using this notation, the pole parts of the integrated antennae are: \begin{eqnarray} \lefteqn{ {\cal B}^0_{q',Qq\bar{q}}=\left( Q^2+m_Q^2\right)^{-2\epsilon}}\nonumber\\ &&\times\bigg\{ -\frac{1}{\epsilon^3}\frac{1}{12}\delta(y) \nonumber\\ && +\frac{1}{\epsilon^2}\left[ \delta(y)\left( -\frac{19}{72}-\frac{z}{12}+\frac{z^2}{24} \right) + \frac{1}{6}D_0(y) + \frac{1}{6}\delta(y) G(1;z)-\frac{1}{6}+\frac{y}{12} \right] \nonumber\\ && +\frac{1}{\epsilon} \bigg[\delta(y)\left( -\frac{373}{432}+\frac{5\pi^2}{72}-\frac{19z}{72}+\frac{25z^2}{144} \right) +D_0(y)\left( \frac{19}{36}+\frac{z}{6}-\frac{z^2}{12} \right)\nonumber\\ &&\hspace{0.3in} -\frac{1}{3}D_1(y) -\frac{11}{18}+\frac{5y}{36}+\frac{z}{1-z}\delta(y)\left( \frac{1}{6}+\frac{z}{8}-\frac{z^2}{24} \right)H(0;z)\nonumber\\ &&\hspace{0.3in} -\delta(y)\left(\frac{19}{36}+\frac{z}{6}-\frac{z^2}{12}\right)H(1;z) -\frac{1}{6}\delta(y)H(1,0;z)-\frac{1}{3}\delta(y)H(1,1;z)\nonumber\\ &&\hspace{0.3in} +\frac{1}{3}D_0(y)H(1;z)+\left(\frac{1}{3}-\frac{y}{6} \right)H(0;y)-\left( \frac{1}{2y}-\frac{1}{2}+\frac{y}{4}\right)H(1;y)-\left( \frac{1}{3}-\frac{y}{6}\right) H(1;z) \bigg]\nonumber\\ && + {\cal O}(\epsilon^0)\bigg\}, \end{eqnarray} \begin{eqnarray} \lefteqn{ {\cal E}^0_{g,Qq\bar{q}}=\left( Q^2+m_Q^2\right)^{-2\epsilon}}\nonumber\\ &&\times\bigg\{ -\frac{1}{\epsilon^3}\frac{1}{12}\delta(y) \nonumber\\ && +\frac{1}{\epsilon^2}\bigg[ \delta(y)\left( -\frac{19}{72}-\frac{z}{12}+\frac{z^2}{24} \right)+\frac{1}{6}D_0(y)-\frac{1}{6}\delta(y)H(1;z)\nonumber\\ &&\hspace{0.3in} +\frac{1}{1-y}\left( -\frac{1}{6}+\frac{9y}{8}-\frac{23y^2}{24}+\frac{y^3}{3} \right)-\left(\frac{1}{2}-\frac{y}{4} \right)H(1;y)\bigg]\nonumber\\ &&+\frac{1}{\epsilon}\bigg[ \delta(y)\bigg( -\frac{373}{432}+\frac{5\pi^2}{72}-\frac{19z}{72}+\frac{25z^2}{144}\bigg)+D_0(y)\bigg( \frac{19}{36}+\frac{z}{6}-\frac{z^2}{12}\bigg)-\frac{1}{3}D_1(y)\nonumber\\ &&\hspace{0.3in}+\frac{1}{1-y}\bigg( 
\frac{1}{18}-\frac{\lambda}{3}+\frac{7y}{2}-\frac{29y^2}{9}-\frac{\lambda y^2}{2}+\frac{4y^3}{3}-\frac{z}{6}-\frac{z^2}{12}\bigg)+\frac{1}{(1-y)(1-z)}\bigg( \frac{y}{3}+\frac{\lambda y}{3}\nonumber\\ &&\hspace{0.3in}-\frac{y^2}{2}+\lambda y^2 -\frac{2y^3}{3}+\frac{\lambda y^3}{2} \bigg)+\frac{y^3}{(1-y)(1-z)^2}\bigg( \frac{2}{3}-\lambda\bigg)-\delta(y)\bigg( \frac{1}{4}-\frac{1}{4(1-z)}+\frac{z}{12}\nonumber\\ &&\hspace{0.3in}-\frac{z^2}{24}\bigg)H(0;z)-\delta(y)\bigg( \frac{19}{36}+\frac{z}{6}-\frac{z^2}{12}\bigg)H(1;z)-\frac{1}{6}\delta(y)H(1,0;z)-\frac{1}{3}\delta(y)H(1,1;z)\nonumber\\ &&\hspace{0.3in}+\frac{1}{3}D_0(y)H(1;z)+\bigg(1-\frac{2}{3(1-y)}-\frac{5y}{4}+\frac{2y^2}{3}\bigg)H(0;y)+\bigg( \frac{y(1-y)}{2(1-y-z)}\nonumber\\ &&\hspace{0.3in}+\frac{1}{6}-\frac{\lambda}{2}-\frac{2}{3(1-y)}-\frac{1}{3y}-\frac{11y}{24}+\frac{2y^2}{3} \bigg) H(1;y)\nonumber\\ &&\hspace{0.3in}+\bigg( -\frac{2}{3}-\frac{1}{2}+\frac{1-5\lambda}{6(1-y)}+\frac{3y}{2}-\frac{2y^2}{3}-\frac{y(1-y)}{2(1-y-z)}+\frac{1}{1-z}\bigg( -\frac{3y}{2}+\frac{11\lambda}{6}\nonumber\\ &&\hspace{0.3in}+\lambda y +\frac{2-13\lambda}{6(1-y)}\bigg)-\frac{1}{(1-z)^2}\bigg(-\frac{7}{6}-\frac{3y}{2}+\frac{7\lambda}{3} +2\lambda y -y^2+\lambda y^2 +\frac{7(1-2\lambda)}{6(1-y)} \bigg)\nonumber\\ &&\hspace{0.3in} +\frac{(2-3\lambda)y^3}{2(1-y)(1-z)^3} \bigg)H(0;z)+\bigg( -1+\frac{2}{3(1-y)}+\frac{5y}{4}-\frac{2y^2}{3} \bigg)H(1;z)\nonumber\\ &&\hspace{0.3in}-\bigg(1-\frac{y}{2}\bigg)H(1;y)H(1;z)-\bigg(1-\frac{y}{2}\bigg)H(1;y)G(1-y;z)+\bigg(1-\frac{y}{2}\bigg)H(1,0;y)\nonumber\\ &&\hspace{0.3in}+\bigg( \frac{3}{2}-\frac{3y}{4}\bigg)H(1,1;y)-\bigg(1-\frac{y}{2}\bigg)H(1,0;z)-\bigg(1-\frac{y}{2}\bigg)G(1-y,0;z) \bigg]\nonumber\\ &&+{\cal O}(\epsilon^0)\bigg\}, \end{eqnarray} \begin{eqnarray} \lefteqn{ {\cal \tilde{E}}^0_{g,Qq\bar{q}}=\left( Q^2+m_Q^2\right)^{-2\epsilon}}\nonumber\\ &&\times\bigg\{\frac{1}{\epsilon^2}\bigg[-\frac{1}{3}+\frac{1}{3(1-y)}+\frac{11y}{12}-\frac{y^2}{3}-\bigg( 
1-\frac{y}{2}\bigg)H(1;y) \bigg]\nonumber\\ &&+\frac{1}{\epsilon}\bigg[-\frac{20}{9}+\frac{16y}{9}+\lambda y-\frac{13y^2}{18}+\lambda+\frac{1}{1-y}\bigg(\frac{20}{9}-\lambda\bigg)\nonumber\\ &&\hspace{0.3in}+\frac{y^2}{(1-y)(1-z)}\bigg( -1+2\lambda-\frac{4y}{3}+\lambda y \bigg)+\frac{y^3}{(1-y)(1-z)^2}\bigg(\frac{4}{3}-2\lambda\bigg)\nonumber\\ &&\hspace{0.3in}+\bigg(\frac{2}{3}-\frac{11y}{6}+\frac{2y^2}{3}-\frac{2}{3(1-y)}\bigg)H(0;y)+\bigg(-\frac{1}{3}-\lambda-\frac{7y}{12}+\frac{2y^2}{3}-\frac{2}{3(1-y)}\nonumber\\ &&\hspace{0.3in} -\frac{y-y^2}{1-y-z} \bigg)H(1;y)+\bigg( \frac{y}{1-y}\bigg( 2+\lambda-3y+\frac{2y}{3} \bigg) -\frac{y(1-y)}{1-y-z}\nonumber\\ &&\hspace{0.3in}-\frac{y}{(1-y)(1-z)}\left( 1+\lambda-3y +2\lambda y \right) +\frac{y^2}{(1-y)(1-z)^2}\left( -1+2\lambda-2y+2\lambda y\right) \nonumber\\ &&\hspace{0.3in}+ \frac{y^3}{(1-y)(1-z)^3}\bigg( \frac{4}{3}-\lambda \bigg) \bigg)H(0;z)+\frac{y}{(1-y)}\bigg( \frac{15}{6}-\frac{15y}{6}+\frac{2y^2}{3}\bigg)H(1;z)\nonumber\\ &&\hspace{0.3in}+(2-y)H(1;y)H(1;y)-(2-y)H(1;y)G(1-y;z)+\bigg(3-\frac{3y}{2}\bigg)H(1,1;y)\nonumber\\ &&\hspace{0.3in}+(2-y)H(1,0;y)-(2-y)H(1,0;z)-(2-y)G(1-y,0;z) \bigg]\nonumber\\ &&+{\cal O}(\epsilon^0)\bigg\}. \end{eqnarray} \section{Conclusions} We have derived the integrated forms of the massive initial-final tree-level four-parton antennae appearing in the subtraction terms required to compute the partonic processes $q \bar{q} \to t \bar{t} q' \bar{q}'$ and $gg \to t \bar{t}q\bar{q}$ contributing to the double real corrections to $t \bar{t}$ hadronic production. We found that those integrated antennae can be written as combinations of three master integrals, of which one was previously unknown. This master integral is derived using differential equation methods and presented for the first time in this paper. These integrated antennae, related to the terms proportional to the colour factor $N_{F}$ in top quark pair production at NNLO, represent the core result of this paper. 
With the integrated forms of these antenna functions now available, the integrated subtraction terms containing these antennae can be easily computed and combined with the double-virtual, one-parton integrated real-virtual and mass factorisation counterterms participating in the $m$-parton final-state channel of an $m$-jet observable evaluated at NNLO. These integrated subtraction terms will contribute in a non-trivial way to the cancellation of the explicit poles present in this $m$-parton channel. As such, the results presented in this paper are essential for the application of the extended antenna formalism to compute hadronic observables with massive fermions at the NNLO level. These results furthermore enable the computation of the $N_{F}$ contributions to the hadronic $t \bar t$ cross section evaluated at NNLO. \section{Acknowledgments} We would like to thank Werner Bernreuther for comments on our manuscript. Oliver Dekkers would like to thank the Institute for Theoretical Physics at ETH Zurich, where most of this research project has been carried out, for its kind hospitality. This research was supported by the Swiss National Science Foundation (SNF) under contract PP00P2-139192 and in part by the European Commission through the 'LHCPhenoNet' Initial Training Network PITN-GA-2010-264564, which are hereby acknowledged. Furthermore, the work of Oliver Dekkers was supported by the Deutsche Forschungsgemeinschaft (DFG), SFB/TR9.
\section{Introduction} Let $(X, \tau)$ be a topological space and $\rho$ be a metric on $X$. Given $\epsilon > 0$, a nonempty subset $A$ of $X$ is said to be {\it fragmented by $\rho$ down to $\epsilon$} if each nonempty subset of $A$ contains a nonempty $\tau$--relatively open subset of $\rho$-diameter less than $\epsilon$. $A$ is called {\it fragmented by $\rho$} if $A$ is fragmented by $\rho$ down to $\epsilon$ for each $\epsilon > 0$. The set $A$ is said to be $\sigma$-{\it fragmented by $\rho$} if for every $\epsilon > 0$, $A$ can be expressed as $A = \cup_{n = 1}^{\infty} A_{n, \epsilon}$ with each $A_{n, \epsilon}$ fragmented by $\rho$ down to $\epsilon$. The notion of fragmentability was originally introduced in \cite{3} to investigate the existence of nice selections for upper semicontinuous compact-valued mappings. The notion of $\sigma$-fragmentability appeared in \cite{1} in order to study Banach spaces, the weak topology of which is $\sigma$-fragmented by the norm (such Banach spaces are said to be $\sigma$-fragmentable). Since then, these two concepts have played an important role in the study of the geometry of Banach spaces. Kenderov and Moors \cite{4} used the following topological game to characterize fragmentability of a topological space $X$: Two players $\Sigma$ and $\Omega$ alternately select subsets of $X$. $\Sigma$ starts the game by choosing some nonempty subset $A_{1}$ of $X$. Then $\Omega$ chooses some nonempty relatively open subset $B_{1}$ of $A_{1}$. In general, if the selection $B_{n} \neq \emptyset$ of the player $\Omega$ is already specified, the player $\Sigma$ makes the next move by selecting an arbitrary nonempty set $A_{n + 1}$ contained in $B_{n}$. Continuing the game, the two players generate a sequence of sets \begin{equation*} A_{1} \supset B_{1} \supset \cdots \supset A_{n} \supset B_{n} \supset \cdots \end{equation*} which is called a play and is denoted by $p = (A_{i}, B_{i})_{i = 1}^{\infty}$. 
If \begin{equation*} p_{1} = (A_{1}), \ldots, p_{n} = (A_{1}, B_{1}, \ldots, A_{n}) \end{equation*} are the first $n$ moves of some play (of the game), then $p_{n}$ is called the $n$th {\it partial play} of the game. The player $\Omega$ is said to have won the play $p$ if $\cap_{i = 1}^{\infty} A_{i} = \cap_{i = 1}^{\infty} B_{i}$ contains at most one point. Otherwise the player $\Sigma$ is said to be the winner in this play. Under the term {\it strategy $s$ for the $\Omega$-player}, we mean a rule by means of which the player $\Omega$ makes his/her choices. More precisely, the strategy $s$ is a sequence of mappings $s = \{s_{n}\}_{n \geq 1}$, which are defined inductively as follows: $s_{1}$ assigns to each possible first move $A_{1}$ of the $\Sigma$-player a nonempty relatively open subset $B_{1} = s_{1} (A_{1})$. Therefore, the domain of $s_{1}$ is the set of all nonempty subsets of $X$ and $s_{1}$ assigns to each such element a nonempty relatively open subset of it. The domain of $s_{2}$ consists of triples of the type $(A_{1}, B_{1}, A_{2})$, where $A_{1}$ is from the domain of $s_{1}, B_{1} = s_{1}(A_{1})$ and $A_{2}$ is an arbitrary nonempty subset of $B_{1}$. $s_{2}$ assigns to such a triple a nonempty relatively open subset $B_{2} = s_{2} (A_{1}, B_{1}, A_{2})$ of $A_{2}$. In general, the domain of $s_{n + 1}$ consists of partial plays of the type \begin{equation*} (A_{1}, \ldots, A_{i}, B_{i}, A_{i + 1}, \ldots, A_{n + 1}), \end{equation*} where, for every $i \leq n, (A_{1}, \ldots , A_{i})$ is from the domain of $s_{i}, B_{i} = s_{i} (A_{1}, \ldots, A_{i})$ and $A_{n + 1}$ is an arbitrary nonempty subset of $B_{n}$. To every element from its domain $s_{n + 1}$ assigns a nonempty relatively open subset $B_{n + 1}$ of $A_{n + 1}$. A play $p = (A_{i}, B_{i})_{i \geq 1}$ is called an $s$-play if $B_{i} = s_{i} (p_{i})$ for each $i \geq 1$. $s$ is called a {\it winning strategy} for the player $\Omega$ if he/she wins every $s$-play. 
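For intuition, the game can be simulated in a concrete setting. Take $X$ to be a finite subset of $[0,1]$ with its usual topology and the Euclidean metric; relatively open subsets of a set $A$ are then intersections of $A$ with open intervals, and $\Omega$ can win by always shrinking the diameter below $1/n$ (the strategy used below for spaces fragmented by a metric). Everything in this sketch, including the random $\Sigma$-player, is our own illustrative choice and not part of the paper:

```python
import random

def omega_move(A, n):
    """Omega's move: a nonempty relatively open subset of A -- here an
    intersection of A with an open interval -- of diameter < 1/n."""
    a = random.choice(sorted(A))
    r = 1.0 / (2 * n)
    return {x for x in A if abs(x - a) < r}   # contains a, hence nonempty

def sigma_move(B):
    """Sigma's move: an arbitrary nonempty subset of B."""
    pts = sorted(B)
    return set(random.sample(pts, random.randint(1, len(pts))))

random.seed(0)
X = {i / 200 for i in range(201)}   # a finite subset of [0, 1]
A = sigma_move(X)                   # Sigma's opening move A_1
for n in range(1, 201):
    B = omega_move(A, n)            # B_n: relatively open in A_n, diam < 1/n
    A = sigma_move(B)               # A_{n+1}: a nonempty subset of B_n
print(len(A))                       # -> 1: the nested sets shrink to a point
```

Once $1/n$ drops below the spacing of the grid, $\Omega$'s move is a singleton, so the intersection of the play contains exactly one point: $\Omega$ wins, matching the metric-diameter strategy described in the text.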
If the space $X$ is fragmentable by a metric $d(\cdot\,,\cdot)$, then $\Omega$ has an obvious winning strategy $s$. Indeed, to each partial play $p_{n}$ this strategy puts into correspondence some nonempty subset $B_{n} \subset A_{n}$ which is relatively open in $A_{n}$ and has $d$-diameter less than $1/n$. Clearly, the set $\cap_{i \geq 1} A_{i} = \cap_{i \geq 1} B_{i}$ has at most one point because it has zero $d$-diameter. Kenderov and Moors have shown that the existence of a winning strategy for the player $\Omega$ characterizes fragmentability, that is, \begin{theor}[\cite{4}] The topological space $X$ is fragmentable if and only if the player $\Omega$ has a winning strategy. \end{theor} Of special interest is the case when the topology generated by the fragmenting metric contains the original topology of the space (in this case it is said that $X$ {\it is fragmented by a metric which is stronger than its topology}). \begin{theor}[\cite{4}] The topological space $X$ is fragmentable by a metric stronger than its topology if and only if the player $\Omega$ has a strategy $s$ such that{\rm ,} for every $s$-play $p = (A_{i}, B_{i})_{i \geq 1}$ the intersection $\cap_{i = 1}^{\infty} A_{i} = \cap_{i = 1}^{\infty} B_{i}$ is either empty or contains just one point $x_{0}$, and for every neighborhood $U$ of $x_{0}$ there exists some $k$ such that $A_{i} \subset U$ for all $i > k$. \end{theor} This characterization of fragmentability has some applications (see e.g. \cite{4,5,6}). In \cite{5}, it is shown that fragmentability and $\sigma$-fragmentability of the weak topology in a Banach space are related to each other in the following way: \begin{theor}[(\cite{5}, Theorems~1.3, 1.4 and 2.1)] For a Banach space $X$ the following are equivalent{\rm :} \begin{enumerate} \renewcommand\labelenumi{\rm (\roman{enumi})} \leftskip .35pc \item $(X,\,\hbox{weak})$ is $\sigma$-fragmented by the norm {\rm (}i.e. 
$X$ is $\sigma$-fragmented{\rm );} \item $(X,\,\hbox{weak})$ is fragmented by a metric which is stronger than the weak topology{\rm ;} \item $(X,\,\hbox{weak})$ is fragmented by a metric which is stronger than the norm topology{\rm ;} \item There exists a strategy $s$ for the player $\Omega$ in $(X,\,\hbox{weak})$ such that{\rm ,} for every $s$-play $p = (A_{i}, B_{i})_{i \geq 1}$ either $\cap_{i \geq 1} B_{i} = \emptyset$ or $\lim_{i \rightarrow \infty}$ norm-diam $(B_{i}) = 0$. \item There exists a strategy $s$ for the player $\Omega$ in $(X,\,\hbox{weak})$ such that{\rm ,} for every $s$-play $p = (A_{i}, B_{i})_{i \geq 1}$ either $\cap_{i \geq 1} B_{i} = \emptyset$ or every sequence $\{x_{i}\}_{i \geq 1}$ with $x_{i} \in B_{i}, i \geq 1$ has a weak cluster point. \end{enumerate} \end{theor} Moreover, we have the following: The norm $\| \cdot\|$ of a Banach space $X$ is said to be {\it Kadec} if the norm topology and the weak topology coincide on the unit sphere $\{x \in X\hbox{:}\ \|x\| = 1\}$. In \cite{2}, it was shown that every Banach space with Kadec norm is $\sigma$-fragmented. It follows that there exists a strategy for the player $\Omega$ satisfying condition (iv)~from the theorem of Kenderov and Moors. In the next section, we will directly construct such a strategy (without using the theorem of Kenderov and Moors). The norm $\|\hbox{$\cdot$}\|$ of a Banach space $X$ is said to be {\it rotund {\rm (}or strictly convex{\rm )}} if the unit sphere $\{x \in X\,\hbox{:}\,\|x\| = 1\}$ does not contain nontrivial line segments. Ribarska has shown in \cite{7} that the weak topology of a rotund Banach space is fragmented by a metric. By the abovementioned characterization of fragmentability it follows that the player $\Omega$ has a winning strategy. In the next section we will directly define such a strategy (without using the result of Ribarska and the mentioned theorem of Kenderov and Moors). 
Moreover, if the norm of $X$ is weakly locally uniformly rotund, then the strategy we construct satisfies condition (v)~from the above theorem of Kenderov and Moors. Recall that the Banach space $X$ is called {\it locally uniformly rotund {\rm (}resp. weakly locally uniformly rotund{\rm )}} if $\lim_{n \rightarrow \infty} \|x_{n} - x\| = 0$ (resp. $\hbox{\it weak--}\lim(x_{n} - x) = 0$) whenever $\lim_{n \rightarrow \infty} \|(x_{n} + x)/2\| = \lim_{n \rightarrow \infty}\|x_{n}\| = \|x\|$. \section{Description of the strategies} \begin{lem} Let $X$ be a Banach space with Kadec norm. Then{\rm ,} for every $\epsilon > 0$ and $x \in X${\rm ,} there exists some positive number $\alpha_{\epsilon, x}$ and a weakly open set $W_{\epsilon, x} \ni x$ such that $\|y - x\| < \epsilon$ whenever $y \in W_{\epsilon, x}$ and $|\|y\| - \|x\|| \leq \alpha_{\epsilon, x}$. \end{lem} \begin{proof} If $x = 0$, it suffices to put $W_{\epsilon, x} = X$ and to take as $\alpha_{\epsilon, x}$ any positive number smaller than $\epsilon/2$. Suppose $x \neq 0$ and take a convex weakly open neighborhood $G$ of $x$ such that the norm diameter of $G \cap \{z\hbox{:}\ \|z\| = \|x\|\}$ is less than $\epsilon/2$. Define $\alpha_{\epsilon, x} > 0$ to be smaller than $\epsilon/2, \|x\|$ and such that $\alpha_{\epsilon, x} B \subset (G - x)/2$ (as usual $B$ stands for the closed unit ball of $X$). Put $W_{\epsilon, x} := x + (G - x)/2 = (x + G)/2$. Let $y \in W_{\epsilon,x}$ and $|\|y\| - \|x\|| < \alpha_{\epsilon,x}$. Then we have \begin{gather*} (\|x\|/\|y\|) y = ((\|x\|/\|y\|)y - y) + y = (\|x\| - \|y\|)y/\|y\| + y\\[.3pc] \in |\|y\| - \|x\|| B + W_{\epsilon, x} \subset \alpha_{\epsilon, x} B + W_{\epsilon, x} \subset (G - x)/2 + (G + x)/2 = G. \end{gather*} Hence $\|(\|x\|/\|y\|)y - x\| < \epsilon/2$. Finally we have \begin{equation*} \|y - x\| \leq \|y - (\|x\|/\|y\|)y\| + \|(\|x\|/\|y\|)y - x\| < \alpha_{\epsilon, x} + \epsilon/2 < \epsilon. 
\end{equation*} $\left.\right.$\vspace{-2pc} \hfill $\Box$ \end{proof} We also need the following result: \begin{lem}\hskip -.3pc {\rm (\cite{5}, Proposition~2.1).}\ \ If the closed unit ball $B$ of a Banach space $X$ admits a strategy $s$ with the property {\rm (iv)} of Theorem~{\rm 1.3,} then the whole space also admits such a strategy. \end{lem} \setcounter{theore}{0} \begin{theor}[\!] Let $X$ be a Banach space with Kadec norm. Then there exists a strategy $s$ for the player $\Omega$ in {\rm (}$B${\rm ,} weak{\rm )} such that{\rm ,} for every $s$-play $p = (A_{i}, B_{i})_{i \geq 1}$ either $\cap_{i \geq 1} B_{i} = \emptyset$ or $\lim_{i \rightarrow \infty}$ norm-diam $(B_{i}) = 0$. \end{theor} \begin{proof} Let $\|\hbox{$\cdot$}\|$ denote the Kadec norm on $X$ and $A_{1}$ be the first choice of $\Sigma$-player. By Lemma~2, we may assume that $A_{1} \subset B$, where $B$ denotes the closed unit ball of $X$. Put \begin{equation*} \rho_{1} = \sup \{\|x\|\,\hbox{:}\,x \in A_{1}\}\quad \hbox{and}\quad \epsilon_{1} = 1. \end{equation*} Two cases may happen. \begin{enumerate} \renewcommand\labelenumi{(\arabic{enumi})} \leftskip .15pc \item There is an element $x_{1} \in A_{1}$ such that $\alpha_{\epsilon_{1}, x_{1}} + \|x_{1}\| > \rho_{1}$. Then we take such a point $x_{1}$ and define $s_{1} (A_{1}) = B_{1} := W_{\epsilon_{1},x_{1}} \cap A_{1}\backslash(\|x_{1}\| - \alpha_{\epsilon_{1}, x_{1}})B$ and $\epsilon_{2} := \epsilon_{1}/2$. Then for each $y \in B_{1}, \|y\| \leq \rho_{1} < \alpha_{\epsilon_{1}, x_{1}} + \|x_{1}\|$ and $\|y\| \geq \|x_{1}\| - \alpha_{\epsilon_{1}, x_{1}}$. Therefore, by Lemma~1, $\|y - x_{1}\| < \epsilon_{1}$. Hence $\|\ \|-\hbox{diam}(B_{1}) < 2\epsilon_{1}$. \item For every $x \in A_{1}, \alpha_{\epsilon_{1}, x} + \|x\| \leq \rho_{1}$. Then, \begin{equation*} \hskip -1.2pc s_{1}(A_{1}) = B_{1} := A_{1}\backslash(1/2) \rho_{1} B \end{equation*} and set $\epsilon_{2} = \epsilon_{1}$. 
Suppose the mappings $(s_{i})_{i \leq n}$ participating in the definition of a strategy for player $\Omega$ have already been defined. Let $(A_{i}, B_{i})_{1 \leq i \leq n}$ be a partial play which is generated by the strategy mappings defined so far. This partial play is accompanied by the numbers $\{\epsilon_{i}\}_{1 \leq i \leq n}$ and the points $x_{1}, \ldots, x_{n}$. If $A_{n + 1}$ is the next move of the player $\Sigma$, we put \begin{equation*} \hskip -1.2pc \rho_{n + 1} = \sup \{\|x\|\,\hbox{:}\,x \in A_{n + 1}\} \end{equation*} and consider the following two possible cases: \setcounter{enumi}{0} \item There exists an element $x_{n + 1} \in A_{n + 1}$, such that $\alpha_{\epsilon_{n + 1}, x_{n + 1}} + \|x_{n + 1}\| > \rho_{n + 1}$. In this case, we take such a point $x_{n + 1}$, define \begin{align*} \hskip -1.2pc s_{n + 1} (A_{1}, \ldots , A_{n + 1}) = B_{n + 1} : = W_{\epsilon_{n + 1}, x_{n + 1}} \cap A_{n + 1} \backslash (\|x_{n + 1}\| - \alpha_{\epsilon_{n}, x_{n + 1}})B \end{align*} and set $\epsilon_{n + 2} = \epsilon_{n + 1}/2$. As above one shows that in this case $\|\ \|-\hbox{diam}(B_{n + 1}) < 2 \epsilon_{n + 1}$. \item For every point $x \in A_{n + 1}, \alpha_{\epsilon_{n + 1},x} + \|x\| \leq \rho_{n + 1}$. In this case, we define \begin{equation*} \hskip -1.2pc s_{n + 1}(A_{1},\ldots,A_{n + 1}) = B_{n + 1} := A_{n + 1} \bigg\backslash \left(1 - \frac{1}{(n + 2)}\right) \rho_{n + 1} B \end{equation*} and set $\epsilon_{n + 2} = \epsilon_{n + 1}$. In this way the strategy $s = (s_{i})_{i \geq 1}$ for the $\Omega$-player is already defined. \end{enumerate} Suppose $(A_{i}, B_{i})_{i \geq 1}$ is an $s$-play with $x \in \cap_{n \geq 1} A_{n}$ and $\lim_{n \rightarrow \infty} \|\hbox{$\cdot$}\|-\hbox{diam}(B_{n}) \neq 0$. Then there exists some $\delta > 0$, such that $\|\hbox{$\cdot$}\|-\hbox{diam}(B_{n}) > \delta$ for each $n \in N$. 
This means that for all but finitely many $n$, case (2) occurs and thus $\{\epsilon_{n}\}$ is eventually constant: $\epsilon_{n} = \epsilon > 0$ for all $n > k$. Since $x \in \cap_{n \geq 1} A_{n}$, \begin{equation*} \left(1 - \frac{1}{n}\right) \rho_{n} < \|x\| < \rho_{n},\quad \hbox{for all}\quad n. \end{equation*} Let $\rho_{n}\!\hbox{$\searrow$}\rho$. Then the above inequality shows that $\|x\| = \rho$. On the other hand, $\alpha_{\epsilon, x} + \|x\| = \alpha_{\epsilon_{n}, x} + \|x\| \leq \rho_{n}$ for $n > k$, which implies the contradiction $\alpha_{\epsilon, x} + \|x\| \leq \|x\|$. \hfill $\Box$ \end{proof} \begin{rem} {\rm Lemma~1 directly implies that Banach spaces with Kadec norm are $\sigma$-fragmentable. Actually, Theorem~2.3 of \cite{2} indirectly implies that every Kadec renormable Banach space $X$ has a countable cover by sets of small local norm diameter, i.e., for each $\varepsilon > 0$, it is possible to write $X = \cup_{n \in N} X_{n, \epsilon}$ such that for each $n \in N$ and $x \in X_{n,\epsilon}$, there exists an open neighborhood $V_{x}$ of $x$ such that the norm diameter of $V_{x} \cap X_{n, \epsilon}$ is less than $\epsilon$. Using Lemma~1, we can give another proof of this result.} \end{rem} \begin{propo}$\left.\right.$\vspace{.5pc} \noindent Let $X$ be a Banach space with Kadec norm. Then for every $\epsilon > 0$ there exists a countable cover of $X, X = \cup_{i \geq 0} X_{i}${\rm ,} such that{\rm ,} for every $x \in X_{i}${\rm ,} there exists a weakly open neighborhood $W$ of $x$ such that $W \cap X_{i}$ is contained in $x + \epsilon B${\rm ,} in particular the points of $X_{i}$ have weak neighborhoods with norm-diameter smaller than $2\epsilon$. \end{propo} \begin{proof} Given $\epsilon > 0$ consider, for $k = 1, 2, \ldots,$ and $n = 0, 1, 2, \ldots ,$ the sets $X_{kn} = \{ x \in X\,\hbox{:}\, \alpha_{\epsilon, x} > 2/k$, and $n/k \leq \|x\| \leq (n + 1)/k\}$. Clearly, $X$ is covered by $X_{kn}$. Put $W := W_{\epsilon, x}$. 
By Lemma~1 the set $W \cap X_{kn}$ is contained in $x + \epsilon B$.\hfill $\Box$ \end{proof} \begin{theor}[\!] Let $X$ be a Banach space. \begin{enumerate} \renewcommand\labelenumi{\rm (\alph{enumi})} \leftskip .15pc \item If the norm of $X$ is rotund{\rm ,} then {\rm (}$X${\rm ,}\,weak{\rm )} is fragmentable by a metric. \item If the norm of $X$ is weakly locally uniformly rotund{\rm ,} then {\rm (}$X${\rm ,}\,weak{\rm )} is fragmented by a metric which is stronger than the norm topology.\vspace{-.5pc} \end{enumerate} \end{theor} \begin{proof} According to Theorems~1.2 and 1.3 and Lemma~2, it is enough to show that in $(B,\,\hbox{\it weak})$ the player $\Omega$ has a winning strategy $s$ such that, for every $s$-play $p = (A_{i}, B_{i})_{i \geq 1}, \cap_{i \geq 1} B_{i}$ has at most one point and in case (b) either $\cap_{i \geq 1} B_{i} = \emptyset$ or every sequence $\{y_{n}\}, y_{n} \in B_{n}, n \geq 1$ is weakly convergent to the element of $\cap_{i \geq 1} B_{i}$. Let $\|\ \|$ be the equivalent norm on $X$ and $\Sigma$ start a game by choosing a nonempty subset $A_{1}$ of $B$. Define \begin{equation*} \rho_{1} = \sup \{\|x\|\,:\,x \in A_{1}\}. \end{equation*} Choose an element $x_{1} \in A_{1}$ such that $\|x_{1}\| > \rho_{1} - 1/2$ and find some $\mu_{1} \in X^{*}$ such that $\|\mu_{1}\| = 1$ and $\mu_{1}(x_{1}) = \|x_{1}\|$. Define \begin{equation*} s_{1} (A_{1}) = B_{1} := \{x \in A_{1}\hbox{:}\ \mu_{1}(x) > \rho_{1} - 1/2\} \end{equation*} as the first choice of $\Omega$-player. Then for each $x \in B_{1}$, we have \begin{equation*} \rho_{1} - 1/2 < \mu_{1} (x) \leq \|x\| \leq \rho_{1}. 
\end{equation*} Suppose that the finite sequence $\{x_{k}\}_{k \leq n}$ of points of $X, \{\mu_{k}\}_{k \leq n}$ of elements of $X^{*}$, and the partial play $p_{n} = (A_{1}, \ldots, B_{n})$ have already been specified so that for each $x \in B_{k}, k \leq n$ the inequality \begin{equation*} \rho_{k} - \frac{1}{k + 1} < \mu_{k} (x) < \|x\| \leq \rho_{k} \end{equation*} holds. Let $A_{n + 1}$ be the answer of $\Sigma$-player to $p_{n}$. Put \begin{equation*} \rho_{n + 1} = \sup \{\|x\|\,\hbox{:}\,x \in A_{n + 1}\} \end{equation*} and find some $x_{n + 1} \in A_{n + 1}, \|x_{n + 1}\| > \rho_{n + 1} - \frac{1}{n + 2}$. Take some $\mu_{n + 1} \in X^{*}, \|\mu_{n + 1}\| = 1$ with $\mu_{n + 1} (x_{n + 1}) = \|x_{n + 1}\|$ and define \begin{align*} s_{n + 1} (A_{1}, \ldots , A_{n + 1}) = B_{n + 1} = \left\lbrace x \in A_{n + 1}\hbox{:}\ \mu_{n + 1}(x) > \rho_{n + 1} - \frac{1}{n + 2}\right\rbrace, \end{align*} as the next choice of the player $\Omega$. Clearly, for each $x \in B_{n + 1}$, the inequality \begin{equation*} \rho_{n + 1} - \frac{1}{n + 2} < \mu_{n + 1} (x) < \|x\| \leq \rho_{n + 1} \end{equation*} holds. Thus, by induction on $n$, we have shown that the $\Omega$-player can choose sets of the form \begin{equation*} B_{n} = \left\{x \in A_{n}\,\hbox{:}\,\mu_{n} (x) > \rho_{n} - \frac{1}{n + 1}\right\}, \end{equation*} where $\|\mu_{n}\| = 1$ and $\rho_{n} = \sup \{\|x\|\,\hbox{:}\,x \in A_{n}\}$ for each $n \in N$. Let $\cap_{n \geq 1} B_{n} \neq \emptyset$ and $\mu$ be a weak$^{*}$ cluster point of $\{\mu_{n}\}$. Then for each $x \in \cap_{n \geq 1} B_{n}$, the inequality \begin{equation*} \rho_{n} - \frac{1}{n + 1} < \mu_{n} (x) < \|x\| \leq \rho_{n} \end{equation*} for each $n \in N$ implies that $\mu(x) = \|x\| = \rho$, where $\rho$ is the limit of the decreasing sequence $\{\rho_{n}\}$. It follows that for each $x, y \in \cap_{n \geq 1} B_{n}$, we have $\mu(x) = \|x\| = \|y\| = \mu(y)$. 
Rotundity of $X$ implies that $x = y$, thus, in this case, $\cap_{n \geq 1} B_{n}$ has at most one point. In case (b), suppose that $x \in \cap_{n \geq 1} B_{n}$. If $y_{n} \in B_{n}$, the inequality \begin{equation*} \rho_{n} - \frac{1}{n + 1} < \frac{1}{2} \mu_{n} (x + y_{n}) \leq \frac{1}{2} \|x + y_{n}\| \leq \frac{1}{2} (\|x\| + \|y_{n}\|) \leq \rho_{n} \end{equation*} shows that $\lim_{n \rightarrow \infty} \|(x + y_{n})/2\| = \lim_{n \rightarrow \infty} \|y_{n}\| = \|x\| = \rho$. Since $(X, \|\ \|)$ is weakly locally uniformly rotund, it follows that $\lim_{n \rightarrow \infty} (x - y_{n}) = 0$. By Theorem~1.2, the space is fragmented by a metric stronger than the weak topology. This completes the proof.\hfill $\Box$ \end{proof} \begin{rem} {\rm It is well-known that locally uniformly rotund norms are Kadec. Therefore statement (b) from the above theorem follows from Theorem~2.1 as well. } \end{rem} \section*{Acknowledgement} The author is indebted to P~Kenderov for the useful remarks while this work was in progress.
\section{Introduction} A large and rapidly growing number of applications of the heavy quark theory requires predictions in the Minkowski domain. The inclusive width of the heavy flavour hadrons is one of the best-known examples. All predictions of this type are based on the so-called QCD duality (for a recent discussion see, e.g. Ref. \cite{1}). The issue of duality is as old as QCD itself. Because of its complexity it was virtually neglected for a long time, essentially since the classical paper of Poggio {\em et al.} \cite{PQW}. Recently the interest in this question was revived by the necessity of having QCD predictions valid up to per cent accuracy in several problems of practical importance. The first attempts at quantifying possible deviations from duality were presented in Refs. \cite{2,3}. It is not easy to estimate the scale of the duality violations from fundamental QCD. Therefore, it is of paramount importance to gain experience in various particular problems. The present work investigates duality violations in an applied aspect. We will be mostly concerned with the spectral density in the two-point function induced by the heavy-light currents $\bar Q \Gamma q$, where $Q$ and $q$ are the heavy and light quarks, respectively. It turns out that in the limit $m_Q\rightarrow\infty$, $m_q\rightarrow 0$ the spectral density in the scalar (axial) channel is very peculiar: it starts {\em below} the position of the ground-state resonance, and is enhanced by the Goldstone meson contribution. Qualitatively the emerging picture is quite different from that taking place in the ``conventional'' vector (pseudoscalar) channel, where the ground state resonance is physically the lowest-lying state. A key role in this phenomenon belongs to the spontaneous breaking of the chiral symmetry for the light quarks. The formulation of the problem to be discussed below is, in a sense, complementary to Ref. \cite{2}. While in the latter work one approaches from the high-energy side (i.e. 
one considers the oscillation and asymptotic zones, in the nomenclature of Ref.~\cite{2}, see Sect.~5.2), where duality is defined point-by-point, here we mostly deal with the resonance zone. Correspondingly, by duality we mean here that certain finite-range integrals over the hadronic spectral density are equal to the same integrals over the quark spectral density. The equality (or, rather, deviations from it) is checked by saturating the hadronic contribution by exclusive modes (cf. Ref. \cite{3}). The size of an integration interval large enough for duality to take place characterizes the scale of structures and disturbances due to low-lying resonances. In the second part of the paper, as a practical application we consider a particular physical process, namely $b\rightarrow c\bar c s$. Due to a relatively small energy release in this transition, it is a natural suspect \cite{buv,4,5} for duality violations, which might be relevant in the so-called ``semileptonic branching ratio versus $n_c$ problem'' \cite{6}. The model of Ref. \cite{2} estimates possible violations of duality in the $b\rightarrow \bar c c s$ transition only at the per cent level. If so, they can be neglected, and the duality-based predictions \cite{BBSL,volos} should be valid. The above model is admittedly crude, however, and independent estimates are badly needed. Since we observe a strong enhancement of the spectral density in the axial $\bar c s$ channel, associated with the soft kaon contribution, a corresponding enhancement in the $\bar B\rightarrow D^{*}\bar D^{*}\bar K$ channel could be naturally expected. {\em A priori}, it could then be duality violating. We suggest using factorization, and the so-called generalized small velocity (GSV) limit in the $b\rightarrow c\bar c s$ transition. 
Both approximations are described at length below; here we just note that using them opens all decays of the type $\bar B \rightarrow D \bar D +\,\,\, S$-wave kaons (kaons and pions) for applications of the soft pion technique \cite{VZ}. The most economical way to implement the idea is by using the effective Lagrangian approach, which combines the chiral and heavy-quark symmetries \cite{16,A,B}. Surprisingly, our estimates of the decay modes $\bar B \rightarrow D^{(*)} \bar D^{(*)} \bar K$, where the kaon is treated as a soft Goldstone boson, give no indication of significant duality violations. We hasten to add, though, that the accuracy of the above estimates is not high. The mechanism we discuss does not differentiate noticeably between the $B$ meson and $\Lambda_b$ baryon decays. Thus, it adds nothing new to the problem of the $\Lambda_b$ lifetime. However, the $1/N_c$-motivated factorization used in the $B$ decays may no longer be applicable when a heavy baryon is produced in the final state of $\Lambda_b$ decays. This may provide a mechanism for the observed lifetime difference, but we will not go into this issue here. An interesting lesson follows for the lattice analysis of the heavy-light systems in the scalar and axial channels, where the quenched approximation is expected to give results different from those with light dynamical quarks. The paper consists of two distinct parts. In the first part we discuss general features of the spectral densities corresponding to the two-point functions induced by $\bar Q q$ in the limit $m_Q\rightarrow\infty,\,\,\, m_q\rightarrow 0$ (Sect.~2). Implications for the transition $b\rightarrow \bar c c s$ are considered in the second part (Sects.~3 and 4). Our conclusions are summarized in Sect.~5. 
\section{Spectral densities and duality in the scalar and pseudoscalar $\bar Q q$ channels} As a preparatory step let us consider the spectral densities for the two-point functions induced by the current $\bar Q\gamma_\mu (1-\gamma_5)q$. Since our consideration at this stage focuses on qualitative aspects, it is convenient to work in the limit where the mass of the heavy quark $m_Q\rightarrow \infty$ while that of the light quark $m_q\rightarrow 0$. In the chiral limit all mesons ($\pi, K, \eta )$ belonging to the Goldstone octet are massless. The effects due to the finite quark masses will be incorporated later, when we pass to $b\rightarrow \bar c c s$. The vector and pseudoscalar $\bar Q q$ mesons are degenerate. The same is valid for the scalar and axial ones. The splitting between the first and the second multiplets is of order $\Lambda_{\rm QCD}$ (practically, about 500 MeV). The first analysis of the two-point functions $$ \Pi_{\mu\nu}^{(V)}= i\int d^4x \, {\rm e}^{iqx}\, \langle 0\vert V_\nu (x) V^\dagger_\mu (0)\vert 0\rangle = \Pi\, g_{\mu\nu}+...\, , $$ $$ \Pi_{\mu\nu}^{(A)}= i\int d^4x\, {\rm e}^{iqx}\, \langle 0\vert A_\mu (x) A^\dagger_\nu (0)\vert 0\rangle = \tilde\Pi \, g_{\mu\nu}+...\, , $$ $$ \Pi^{(S)}=i\int d^4x \, {\rm e}^{iqx}\, \langle 0 \vert S(x) S^\dagger (0)\vert 0\rangle \, , $$ \begin{equation} \Pi^{(P)}= i\int d^4x \, {\rm e}^{iqx}\, \langle 0 \vert P(x) P^\dagger (0)\vert 0\rangle\, , \label{1} \end{equation} dates back to the early days of QCD \cite{8}. The currents $V_\mu ,\,\, A_\mu ,\,\, S$ and $P$ are defined as \begin{equation} V_\mu=\bar Q\gamma_\mu q,\,\,\, A_\mu=\bar Q\gamma_\mu\gamma_5q,\,\,\, S=\bar Qq,\,\,\, P=\bar Qi\gamma_5q \, . \label{2} \end{equation} Although the weak currents are $V-A$, the scalar and pseudoscalar two-point functions appear as their longitudinal parts, $$ S=\frac{-i}{m_Q-m_q}\, \partial _\mu V_\mu\;\; , \;\;\;\;\; P=\frac{1}{m_Q+m_q}\, \partial _\mu A_\mu\;\; . $$ Already in Ref.
\cite{8} it was noted that the quark condensate correction is enhanced in the heavy-light quark case compared to, say, the classical $\rho$-meson sum rule \cite{9}, producing stronger disturbances of the spectral density at small energies and, thus, leading to stronger deviations from duality \footnote{We stress again that we focus now on the resonance zone, where a typical scale is set by low-dimension condensates, unlike Ref. \cite{2} where the emphasis was on the oscillation and asymptotic zones, and on the modelling of the impact of high-dimension condensates.}. First of all, its dimension in the former case is 3, while in the latter case it is 6. Second, the numerical coefficient is larger. Further studies \cite{10,11,Dai} confirmed this observation in a more quantitative way. Actually, Ref. \cite{10} was the first to introduce the heavy quark limit. It was noted there that in this limit the spectral densities in the (transverse) vector and pseudoscalar channels are degenerate. The same is valid for the scalar and (transverse) axial channels. For this reason we will limit ourselves in this section only to $S$ and $P$ channels. Even more remarkable is the fact \cite{10} that using the standard ``first resonance plus parton-like continuum" model of the spectral density produces an abnormally large residue of the ground-state resonance in the scalar (axial) channels, an order of magnitude larger than that in the pseudoscalar (vector) channels. This observation, which went unnoticed, must be viewed as a hint that something unusual takes place in the scalar channel. Now we are ready to explain this anomaly. As a matter of fact, the standard model of the spectral density mentioned above does not work in the scalar (axial) channels because of an enhancement at low energies due to the contribution of the $D\pi$ intermediate state. 
(The mesons of the type $D$ and $D^*$ built from $Q\bar q$ and degenerate in the limit $m_Q\rightarrow \infty, m_q\rightarrow 0$ will be generically referred to as $D$'s. The massless pseudoscalar mesons will be generically referred to as $\pi$'s). In the free-quark approximation the four spectral densities (for $\Pi^{(V,P)}$ and $\Pi^{(S,A)}$) are all identical and equal to \begin{equation} {\rm Im}\, \Pi=\frac{N_c}{2\pi}\, \varepsilon^2 \; ; \label{3} \end{equation} only chirally odd condensate corrections distinguish between the first and the second pairs of above currents. Here $\varepsilon$ is the excitation energy measured from the quark threshold (i.e. from the heavy quark mass $m_Q$). Logarithmic factors due to anomalous dimensions and $\alpha_s$ corrections are neglected in Eq. (\ref{3}). Although they play a role numerically, see e.g. Ref. \cite{11}, qualitatively they are not important. Equation (\ref{3}) will set a natural scale for the spectral density in the heavy-light channels. Its important feature is a strong suppression at low energies, as $\varepsilon^2$. Now, let us turn to the physical (hadron-saturated) spectral densities. In the pseudoscalar channel we encounter a more or less familiar picture. The physical spectral density starts from the ground-state pseudoscalar $D$, then there is a gap, and then multiparticle intermediate states (continuum) add up into a curve that is expected to be relatively close to the quark one. The continuum actually is a sum over broad resonances, the radial excitations of $D$. The first resonance in the pseudoscalar (vector) channel is situated at $\varepsilon =\bar \Lambda$ where $\bar\Lambda = M_D-m_c$; the continuum threshold is at $\varepsilon = \varepsilon_c\sim 2 \bar \Lambda$, see Fig.~1. The onset of ``continuum" approximately coincides with the position of the first radial excitation in the given channel. The parameter $\bar\Lambda$ above is a basic parameter of the heavy quark theory \cite{Luke}. 
Numerically $\bar\Lambda \sim 500$ MeV \cite{NS,Baga,MV}. (We recall that although we keep using the name ``$D$ mesons" so far we work in the limit $m_Q\rightarrow\infty$.) The scalar (axial) resonances we are interested in are $P$-wave in the language of the naive quark model and $1/2^+$ in the language of HQET. Due to the chiral symmetry breaking they lie higher.\footnote{The scalar resonance will be referred to as $D_0^*$. The axial $1/2^+$ resonances are sometimes denoted as $D_1'$, to distinguish them from the axial $3/2^+$ resonances denoted as $D_1$. Since in the present paper we will discuss only the axial $1/2^+$ resonances, the prime will be omitted. The primes are reserved for radial excitations.} (For a concise review of the higher $\bar Q q$ states see Ref. \cite{Adam}.) The heavy quark symmetry implies that the scalar $D^*_0$ and the axial $D_1$ are degenerate; one expects both $D^*_0$ and $D_1$ at $\varepsilon\sim$ 2$\bar\Lambda$. What is remarkable is that the scalar (transverse axial) spectral density is non-vanishing and large {\em below } $D^*_0$ ($D_1$), in the interval from $\bar\Lambda$ to $\sim 2 \bar\Lambda$, since it receives an enhanced contribution from the states $D\pi$ (in the scalar channel) and $D^*\pi$ (in the axial channel). The pions are strongly coupled to the $\bar Q q$ current, in the $S$ wave. \begin{figure} \vspace{5.0cm} \special{psfile=test1.ps hscale=70 vscale=60 hoffset=-15 voffset=-140 } \caption{A sketch of the spectral density versus $\varepsilon /\bar \Lambda$ in the pseudoscalar channel. To set the scale we show also the free quark spectral density represented by the parabola. } \end{figure} \begin{figure} \vspace{4.0cm} \special{psfile=test3.ps hscale=70 vscale=60 hoffset=-15 voffset=-150 } \caption{A sketch of the spectral density versus $\varepsilon /\bar \Lambda$ in the scalar channel. } \end{figure} Consider for definiteness $\Pi^{(S)}$. 
In the low-energy limit the pion is soft and can be reduced by using the soft-pion technique. Then, \begin{equation} \langle 0\vert \bar Qq\vert D \pi \rangle_{\vec k\rightarrow 0} = \frac{1}{f_\pi}\langle 0\vert\bar Qi\gamma_5q\vert D\rangle = -\frac{f_Q}{f_\pi}\, m_Q\, . \label{4} \end{equation} Here $f_Q$ is defined as $$ \langle 0\vert \bar Q\gamma_\mu\gamma_5q\vert D\rangle =if_Qp_\mu\, , $$ and we do not differentiate between $M_D$ and $m_Q$ in the limit $m_Q\rightarrow\infty$; $\vec k$ is the pion momentum. The spectral density takes the form \begin{equation} {\rm Im}\, \Pi^{(S)}=\frac{1}{2}\vert\langle 0\vert\bar Qq\vert D\pi\rangle \vert^2\times \mbox{phase space} = \frac{f^2_Qm_Q}{f^2_\pi}\frac{1}{8\pi}\left( N_f- \frac{1}{N_f}\right)\, (\varepsilon-\overline{\Lambda})\; , \label{5} \end{equation} where $N_f=3$ is the number of massless flavours ($N^2_f-1$ is the number of Goldstone mesons) and $\varepsilon-\overline{\Lambda}$ is the energy measured from the position of the ground state $D$. Comparing Eqs. (\ref{3}) and (\ref{5}) reveals the remarkable enhancement of the spectral density at low energies that was mentioned above. First, the dependence on $\varepsilon$ is parametrically different. Due to the $S$-wave nature of the matrix element $\langle 0\vert S\vert D\pi\rangle $ the $D\pi$ contribution to the spectral density vanishes as $O(\varepsilon )$ while the free quark expression (\ref{3}) is O($\varepsilon^2)$. Second, as is well known, the static coupling $f_Q$ is rather large numerically \cite{11,12}. The combination $f^2_Qm_Q$ stays constant in the limit $m_Q\rightarrow\infty$ \cite{10}, modulo the hybrid logarithms \cite{13}, and this constant is approximately 0.2 to 0.25 GeV$^3$ \cite{11,12}. Treating Eq. (\ref{5}) literally we get that the $D\pi$ contribution to the spectral density at $\varepsilon = 2\bar\Lambda$ exceeds the free quark expression by a factor of $\sim 1.5$ (Fig. 2).
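The factor of $\sim 1.5$ just quoted is easily traced. Dividing Eq. (\ref{5}) by Eq. (\ref{3}) at $\varepsilon =2\bar\Lambda$ (so that $\varepsilon -\bar\Lambda =\bar\Lambda$), and inserting, for orientation, $f_Q^2m_Q\approx 0.23$ GeV$^3$, $f_\pi\approx 130$ MeV and $\bar\Lambda\approx 500$ MeV (these particular input values are our choice within the ranges quoted above), one finds

```latex
$$
\left.\frac{{\rm Im}\,\Pi^{(S)}_{D\pi}}{{\rm Im}\,\Pi_{\rm quark}}\right|_{\varepsilon =2\bar\Lambda}
=\frac{f_Q^2m_Q}{f_\pi^2}\,\frac{1}{8\pi}\left( N_f-\frac{1}{N_f}\right)\bar\Lambda
\times\frac{2\pi}{N_c\,(2\bar\Lambda )^2}
=\frac{f_Q^2m_Q}{18\, f_\pi^2\,\bar\Lambda}\approx 1.5\, .
$$
```

Integrating instead from $\bar\Lambda$ to $2\bar\Lambda$ in the numerator and from 0 to $2\bar\Lambda$ in the denominator gives $f_Q^2m_Q/(24f_\pi^2\bar\Lambda )\approx 1.2$, the ratio of the smeared contributions quoted below.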
The integral over the $D\pi$ spectral density up to $\varepsilon = 2\bar\Lambda$ is approximately $1.2$ times the integral over the free quark spectral density from 0 to $\varepsilon=2\bar\Lambda$. Of course, at a certain energy, the soft-pion result (\ref{4}) becomes invalid -- the momentum-dependent part of the interaction will cut off the amplitude. The cutoff presumably occurs near the position of the resonance $D^*_0$. The excess of the spectral density below $D^*_0$ and at $D^*_0$ has to be compensated by an extended gap immediately after $D^*_0$. This gap stretches, presumably, up to $\varepsilon \sim 3\bar\Lambda$. In general, the situation is very similar to what occurs with the two-pion contribution in the spectral density induced by the gluonic current $\alpha_s G^2$ \cite{Shif}, where a strong low-energy enhancement is supplemented by an extended gap, and the scale of the duality violation is quite large. Note that the $D\pi$ contribution we have calculated has the proper dependence on the parameters $N_c$ and $N_f$. In the limit $N_c\rightarrow\infty$ it decouples compared to Eq. (\ref{3}), as it should. There is no decoupling, however, if $N_f/N_c$ stays fixed. The pattern we observe provides another example, similar to that of Ref. \cite{Shif}, showing that in the low-energy domain the $1/N_c$ counting rules should be taken with caution. There exist certain mechanisms that can totally upset the $1/N_c$ estimates at $N_c=3$ (although at academically large $N_c$ the $1/N_c$ counting will work, of course). Thus, in the pseudoscalar channel we expect the duality interval to be $\sim 2\bar\Lambda$, while in the scalar channel its size is expected to be larger, $\sim 3\bar\Lambda$. By the duality interval we mean the following: the minimal interval of energy $(0, \varepsilon_0)$ needed to make the smeared resonance contribution (approximately) equal to the quark-gluon one.
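Schematically, this definition amounts to the requirement (with ${\rm Im}\,\Pi_{\rm quark}$ taken from Eq. (\ref{3}))

```latex
$$
\int_0^{\varepsilon_0}d\varepsilon \,{\rm Im}\,\Pi_{\rm hadr}(\varepsilon )
\;\approx\;\int_0^{\varepsilon_0}d\varepsilon \,{\rm Im}\,\Pi_{\rm quark}(\varepsilon )
=\frac{N_c}{6\pi}\,\varepsilon_0^3\, ,
$$
```

with the minimal such $\varepsilon_0$ at $\sim 2\bar\Lambda$ in the pseudoscalar channel and $\sim 3\bar\Lambda$ in the scalar one.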
In other words, in the smearing integrals over the spectral densities from 0 to $\varepsilon_0$ the upper limit $\varepsilon_0$ should be chosen at $\sim 2\bar\Lambda$ in the pseudoscalar channel and $\sim 3\bar\Lambda$ in the scalar one. We pause here to make a few comments. First, if we consider the sum of the scalar plus pseudoscalar heavy-light channels (or vector plus axial) the quark condensate correction $\langle\bar qq\rangle $ cancels, leaving us with no hint that the duality violation scale is larger in this case than in the classical $\rho$ meson sum rule, at least at the level of analyzing the first terms in the operator product expansion. This fact is rather alarming, since in the weak inclusive decays we deal with the $V-A$ currents, and the above cancellation is quite typical. The model of Ref. \cite{2} gives no clues in this case either, since it admittedly omits all effects specifically related to the spontaneous breaking of the chiral symmetry. At an observational level we detect here a possible signal that the scale of the duality violations for $V-A$ currents may be larger than can be inferred from the analysis of the lowest-dimension condensates. Does this mean that some other condensates, of higher dimension, must be important? And how can they be identified? These questions still remain open. It is clear that the sum rules for the heavy-light currents have to be reanalyzed with emphasis on this aspect. In particular, an updated analysis of the scalar (axial) channel is more than in order. One has to replace the standard ``lowest resonance plus continuum" pattern by the more complicated picture described above. The impact seems to be obvious. The residue of the $D^*_0$ ($D_1$) state will go down with respect to the prediction of Ref. \cite{10}. A part of the spectral density will be pumped out from the resonance into the non-resonant $D\pi$ background. 
The second remark refers to the lattice calculations of the heavy-light systems (spectra and coupling constants). The pseudoscalar and scalar channels should be drastically different with respect to the inclusion of light dynamical quarks. While the pseudoscalar channel seems to be relatively stable, unquenching the quarks and making the light quarks really light must produce a dramatic effect in the scalar one. Of course, this is not the only case where dynamical light quarks lower the threshold. For instance, in the $\rho$ meson case there is the two-pion cut as well. This state, however, is only relatively weakly coupled to the current $\bar q\gamma_\mu q$, since the pions are in the $P$ wave; therefore, speaking in practical terms, this contribution is rather unimportant (although at asymptotically large separations it will dominate anyway). This is not the case for the current $\bar Q q$, where the $D\pi$ intermediate state is not only lower in mass than $D^*_0$ ($D_1$), but also strongly coupled to the current. Therefore, its impact on the two-point function should be substantial, and the correlator must change drastically once the light quarks are unquenched and made light. Studying this problem on the lattice seems to provide a nice testing ground for various approximations routinely made within this approach. \section{Nonleptonic $B$ decays ($b\rightarrow \bar ccs$): outlining the problem} Having established the enhancement of the $D\pi$ intermediate state in ${\rm Im}\Pi^{(S)}$ (or $D^*\pi$ in ${\rm Im}\Pi^{(A)}$) near the threshold, it is natural to turn to nonleptonic decays of $B$ mesons, the $b\rightarrow \bar ccs$ transition. Indeed, in this case the $\bar c \Gamma_\mu s $ current produces charmed/strange hadronic states with the quantum numbers $1^-$, $0^+$ (the vector current) and $1^+$, $0^-$ (the axial current). The $1^+$ and $0^+$ channels were shown above to be responsible for an enhanced production of Goldstone mesons in the $S$ wave.
Of course, in the actual $B$ decays the situation is not quite the same as in the limit $m_Q\rightarrow\infty\, ,\,\,\, m_q\rightarrow 0$ considered in Sect. 2. Suffice it to mention that the actual values of the $c$ and $s$ quark masses are such that the axial ground-state meson $D_{s1}$, shown in Fig. 2 at the end of the shoulder (at $\varepsilon \sim 2\bar\Lambda$), turns out to be almost exactly at its beginning, presumably barely above the threshold of $D^*K$. The $1/2^+$ charmed strange mesons have not been detected experimentally so far. Moreover, the actual value of $f_B^2$ is also noticeably lower than its static value. Also, the energy release in the light particles in the case at hand is $\le 1.5$ GeV. It is quite clear that the kaon mass is not negligible in the phase space. For this reason in the nonleptonic $B$ decays a dedicated analysis is needed. First, however, a few remarks regarding the general situation with the $b\rightarrow \bar ccs$ transition are in order. This transition came under renewed scrutiny recently \cite{5} on purely phenomenological grounds, in an attempt to find a solution of the ``branching ratio versus $n_c$ problem". In Ref. \cite{5} it was assumed that the theoretical understanding of $b\rightarrow c\bar u d$ is solid. Then, from the measured semileptonic branching ratio of approximately $10.5\%$, the branching ratio of $b\rightarrow \bar ccs$ was predicted to be close to 30\%, with the corresponding charm yield $n_c\approx 1.3$. Moreover, using the most naive duality estimates in conjunction with the parton model for $b\rightarrow c\bar cs$, it was suggested \cite{5} that approximately 1/2 of $b\rightarrow c\bar cs$ hadronizes in the channels of the type $\bar B\rightarrow D^{(*)} \bar D^{(*)} \bar K X$, and only the remaining 1/2 goes to $D^{(*)}\bar D^{(*)}_sX$, the channels on which the attention was focused previously. 
If so, about 15\% of the $\bar B$ decays have to end up with the ``wrong" sign $D$'s and $K$'s in the final state. This prediction was confirmed by recent results from the CLEO, ALEPH and DELPHI experiments \cite{7}, reporting the yield of the ``wrong" sign $D$'s at the 10\% level. The corresponding value of $n_c$ is close to 1.24 \cite{7}. Additionally, ALEPH has recently reported a value for $n_c$ in $Z\rightarrow b\bar b$ \cite{aleph} of $1.23\pm 0.07$. Although the situation can be looked at quite optimistically, serious questions are still to be answered. The first question is, of course, purely experimental. As was emphasized in Ref. \cite{7}, the charm yield $n_c$, as computed in the usual way from the measurements at $\Upsilon (4S)$, remains unchanged: $n_c = 1.10\pm 0.06$ \cite{17}. The contradiction is obvious, suggesting that the experimental situation is still not settled. We clearly cannot comment on this aspect, and quickly pass to what is known theoretically. Since the leading nonperturbative corrections are expected to play no significant role in the issue \cite{6}, the focus of theoretical analysis is shifted towards perturbative calculations in $b\rightarrow \bar cc s$. The first dedicated calculation of the gluon corrections was carried out in Ref. \cite{AP}. The most advanced analysis existing today is presented in Refs. \cite{BBSL,volos} (see also references therein). To quote representative values of the predicted parameters, let us assume that $\alpha_s (M_Z) = 0.11$ and $\mu =m_b/2$, in $\overline{MS}$; then \cite{BBSL} $n_c = 1.28$ (the corresponding BR$_{\rm sl} (B)$ is slightly lower than 11\%). Within a somewhat different procedure of treating the ratio of the quark masses $m_c/m_b$ (but the same values of the parameters as above), the theoretical numbers become \cite{Pat} $n_c =1.23$ and BR$_{\rm sl}(B)= 11.5\%$. Thus, it is fair to say that at the current level of understanding the theoretical prediction for $n_c$ is close to 1.25. 
Without the ${\cal O}(\alpha_s )$ correction the charm yield is 1.15. Thus, the inclusion of the ${\cal O}(\alpha_s )$ gluon correction enhances BR$(b\rightarrow\bar ccs)$ by a factor $\sim 1.6$. In the approximation of Ref. \cite{BBSL} factorization is explicit. This statement will be explained in more detail below (Sect.~4), where we will introduce the factorization hypothesis, one of the key elements of our consideration. The impatient reader may turn to Ref. \cite{MBV} for very clear explanations. Here we just note that, in any perturbative calculation respecting factorization, the transitions corresponding to the vector $\bar c \gamma_{\mu} s$ and the axial $\bar c \gamma_{\mu}\gamma_5 s$ currents necessarily have equal probabilities, provided very small effects $\propto m_s^2$ are neglected. If so, the result \cite{BBSL} implies that the branching ratios of $b\rightarrow \bar ccs$, with $\bar c s$ from the vector and axial currents, respectively, are 12 to 13\% each.\footnote {In Ref. \cite{MBV} some of the ${\cal O}(\alpha_s^2)$ corrections were estimated, and at this level nonfactorizable terms appear. For the purpose of our analysis one can safely use factorization, at least as a starting point.} It is clear that the theoretical predictions \cite{BBSL} discussed above are quite compatible with the newest experimental trend. The basic assumption underlying the theoretical analysis is the validity of the quark-hadron duality. Strong violations of duality in $b\rightarrow \bar ccs$ would destroy the predictive power of the existing theory. Although at the moment no sources for such large violations have been identified \cite{2}, in the absence of a complete theory it is obviously desirable to have as many independent confirmations as possible. We will analyze below the transition $b\rightarrow \bar ccs$ from this point of view. The transition $b\rightarrow \bar ccs$ is singled out in this aspect for the following reason.
The relative smallness of the energy release, which alerts one regarding duality violations, can be turned into an advantage. Indeed, in this case the number of the hadronic channels saturating the physical decays is not large, and one can try to estimate these channels one by one to see whether they add up to the quark-gluon result. As a by-product one could hope to get a more direct estimate for the $\bar B\rightarrow D^{(*)} \bar D^{(*)}\bar KX$ rate. The goal here is to check whether the 10\% yield reported experimentally is well understood theoretically, invoking as few unsubstantiated assumptions as possible. \section{Nonleptonic $B$ decays ($b\rightarrow \bar ccs$): analysis of exclusive modes} The relevant part of the weak Lagrangian contains two structures, with the ``direct" and ``twisted" colour flow, \begin{equation} L=\frac{G_F}{\sqrt{2}} V_{cb}V^*_{cs} \left[ a_1(\bar b\Gamma_\mu c)(\bar c \Gamma_\mu s)+a_2(\bar b \Gamma_\mu s) (\bar c\Gamma_\mu c)\right] \, . \label{7} \end{equation} We will disregard the second term, with the twisted colour structure, for the following reasons. First, the value of $\vert a_2/a_1\vert $ is rather small numerically (see e.g. the review paper \cite{17}), approximately 0.1. Therefore, the square of the second term contributes at the level of 0.01. The interference with the first term is suppressed by $1/N_c$ and is thus expected to show up at the level of corrections $\sim 0.2/3$ in the probability of the decay modes we are interested in. The estimates to be presented below have intrinsic theoretical uncertainties of this order of magnitude or larger. Furthermore, in considering the first term it is reasonable to accept, at least at this stage of the analysis, the factorization approximation. By this we mean that in treating the hadronic matrix elements, the $\bar b\Gamma_\mu c$ bracket of the effective Lagrangian will be factored out from the $\bar c \Gamma_\mu s$ bracket.
The corresponding hadronic subsystems are assumed not to communicate with each other through the soft gluon exchanges, although inside the subsystems all these exchanges are taken into account in full. Note that we also automatically include all hard gluons (with off-shellness larger than $m_b$), through the factor $a_1$. The standard theoretical justification for the factorization hypothesis is the $1/N_c$ counting. All non-factorizable contributions are suppressed by $1/N_c$. Note that the modes with the hidden charm (e.g. $J/\psi K$) will not concern us here. Certainly, we are well aware that the non-factorizable contributions must be present (see e.g. \cite{Blok}); their effect is noticeable in the fine structure of the nonleptonic decays and in some special modes, but otherwise it is quite modest. For instance, in $ B\rightarrow D \bar D_s$ the non-factorizable contributions were estimated to be less than 10$\%$ of the factorizable part \cite{bs}. We will ignore the non-factorizable terms in the present work. Another theoretical tool which will help us is the generalized small velocity limit, i.e. the assumption that the two charmed mesons in the final state of the $b\rightarrow \bar ccs$ transition are slow. Kinematically this means that \begin{equation} M_B- 2M_D \ll M_D \, . \label{gsv} \end{equation} This is not the first time the GSV limit is exploited in the context of $B$ decays, see e.g. \cite{MBV,Shif2}. Using the GSV limit, in combination with factorization, will allow us to disregard excitations in the $\bar b \Gamma_\mu c$ bracket, and exploit the well developed formalism of the soft Goldstone mesons for the modes of the type $D\bar D\bar K$. Although from the purely theoretical standpoint the GSV limit is an excellent tool, in actual $B$ decays Eq. (\ref{gsv}) is satisfied only marginally. We are neither too close to the GSV point nor too far from it (cf. e.g. \cite{MBV}).
Under the circumstances, in kinematical factors we will keep the terms containing the $D$ meson velocities, while omitting ${\vec v}^2$ terms when they are additionally suppressed. In the future, with more phenomenological input, this approximation can be improved, including all terms quadratic in the (spatial) velocities. To warm up let us consider two-particle decays. This exercise is not new (see Ref. \cite{Mannel}), and we repeat it merely to introduce our notation and explain the choice of numerical values of the relevant parameters. Thus, factorization and the GSV limit are starting elements of our analysis. If so, then the bracket $\bar b\Gamma_\mu c$ is responsible for the $\bar B\rightarrow D$ ($B\rightarrow D^*$) transition. Production of excitations by this bracket -- either resonances (say, radial excitations of $D$ and $D^*$), or nonresonant states of the type $D\pi$ -- is suppressed by the velocity squared of the charmed meson. The $D\pi$ production by $\bar b\Gamma_\mu c$ can be studied within the chiral perturbation theory \cite{USV}. One has to consider pole graphs which, in addition to the velocity suppression, are proportional to the $D^*D\pi$ coupling constant $g$. The latter was calculated within the QCD sum rules \cite{BBKR}, and turns out to be rather small, $g\sim 0.3$. Therefore, here and below we will consistently disregard all contributions to the decay rate that are proportional to $\vec v^2$ and $g^2$. Limiting ourselves to the $D$ and $D^*$ states in the $\bar b\Gamma_\mu c$ bracket gives us, by itself, a predictive power since the form factors of the $B\rightarrow D^{(*)}$ transitions at zero recoil are normalized to unity \cite{NW,Shif3}, and near zero recoil are well approximated by the first derivative of the Isgur-Wise function $\xi$ \cite{IW}. 
The second bracket, $\bar c\Gamma_\mu s$, sometimes creates $\bar D_s$ or $\bar D^*_s$ states, sometimes radial excitations of $\bar D_s$ and $\bar D^*_s$, and sometimes nonresonant $\bar D^{(*)} \bar K$ pairs. The axial current, in its transverse part, can produce $\bar D_{s1}$ and excitations, while the longitudinal part of the vector current can produce $\bar D^*_{s0}$ and excitations. Let us first concentrate on $\bar D_s$ and $\bar D^*_s$. \vspace{0.2cm} {\em (i) $B\rightarrow D^{(*)}\bar D^{(*)}_s$. ``Wrong" spin correlations} \vspace{0.2cm} The amplitudes for the two-body transitions are given by \begin{equation} {\cal A}(\bar B\rightarrow D\bar D_s)=\frac{G_F}{\sqrt{2}} V_{cb}V^*_{cs} \, a_1\,\left( 2\sqrt{M_BM_D}f_{D_s}M_{D_s}\right) \,\left[\frac{M_B-M_D}{M_{D_s}} \frac{1+vv'}{2}\,\xi (vv')\right]\, , \label{A1} \end{equation} \vspace*{0.2cm} $$ {\cal A}(\bar B\rightarrow D^*\bar D^*_s)=\frac{G_F}{\sqrt{ 2}} V_{cb}V^*_{cs}\, a_1\, \left( 2\sqrt{M_BM_{D^*}}f_{D_s^*}M_{D_s^*}\right)\times $$ \begin{equation} \left\{ (\epsilon '\epsilon '')\frac{1+vv'}{2} +\left[ -\frac{1}{2}(\epsilon ' v)(\epsilon '' v') + \frac{i}{2}\epsilon_{\mu\nu\rho\lambda} \epsilon '_\mu\epsilon ''_\nu v_\rho v'_\lambda\right] \right\}\xi (vv')\, , \label{A2} \end{equation} \vspace*{0.2cm} $$ {\cal A}(\bar B\rightarrow D\bar D_s^*)=\frac{G_F}{\sqrt{2}} V_{cb}V^*_{cs} \, a_1\times $$ \begin{equation} \left(2 \sqrt{M_BM_D}f_{D_s^*} M_{D_s^*}\right) \,\left[ \frac{M_B+M_D}{2M_{D}}\, (\epsilon '' v)\, \xi (vv') \right]\, , \label{A4} \end{equation} \vspace*{0.2cm} $$ {\cal A}(\bar B\rightarrow D^*\bar D_s)=\frac{G_F}{\sqrt{2}} V_{cb}V^*_{cs} \, a_1\times $$ \begin{equation} \left( 2\sqrt{M_BM_{D^*}}f_{D_s} M_{D_s}\right) \,\left[ \frac{M_B+M_{D^*}}{2M_{D_s}}\, (\epsilon ' v)\, \xi (vv') \right]\, . \label{A3} \end{equation} \vspace*{0.2cm} Here we omitted inessential overall phase factors in front of some amplitudes.
The constants $f_{D_s}$ and $f_{D_s^*}$ are defined as $$ \langle \bar D_s |\bar c\gamma_\mu\gamma_5 s|0\rangle = -i f_{D_s}M_{D_s}v''_\mu\, , \,\,\, \langle \bar D_s^* |\bar c\gamma_\mu s|0\rangle = f_{D_s^*}M_{D_s^*}\epsilon''_\mu\, . $$ The pseudoscalar and vector constants are identical in the limit $m_c\rightarrow\infty$, but may differ a little, due to preasymptotic terms, for the actual $c$ quarks. For simplicity the difference between the pseudoscalar and vector constants is neglected within any hyperfine multiplet. Anyway, we do not know them to that accuracy. Here $v$ is the four-velocity of the decaying $B$ meson, $v'$ is the four-velocity of the $D^{(*)}$ meson and $v''$ is the four-velocity of $\bar D_s^{(*)}$, $$ vv'=\frac{M_B^2+M_D^2-M_{D_s}^2}{2M_BM_D}\;,\;\;\; vv''= \frac{M_B^2+M_{D_s}^2-M_D^2}{2M_BM_{D_s}}\;,\;\;\; v'v''=\frac{M_B^2-M_D^2-M_{D_s}^2}{2M_D M_{D_s}}\, , $$ and likewise for the other decays; $\epsilon '$ and $\epsilon ''$ are the polarization vectors of $D^{*}$ and $\bar D_s^{*}$, respectively. In the limit of slow $D$'s, when $vv',\,vv'',\;v'v'' \rightarrow 1$, these expressions simplify considerably. Thus, in Eq. (\ref{A1}) the square bracket becomes unity, and in Eqs. (\ref{A2}), (\ref{A4}) and (\ref{A3}) the square brackets tend to zero -- as $\sqrt{vv''-1}$ in Eq. (\ref{A4}) and as $\sqrt{vv'-1}$ in Eq. (\ref{A3}). Thus, in this limit we have a rigid spin correlation. If $D$ is produced by the $\bar bc$ bracket, the $\bar c s$ bracket will produce a pseudoscalar $\bar D_s$; if $D^*$ is produced by the $\bar bc$ bracket we get, in association, a vector $\bar D^*_s$. This is because in this limit the $B\rightarrow D$ transition is caused by the zeroth (time) component of the current, whereas $B\rightarrow D^*$ is due to the spatial component. The ``wrong"-spin transitions, Eqs. (\ref{A4}) and (\ref{A3}), switch off in this limit.
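For orientation, inserting the approximate masses $M_B\approx 5.28$ GeV, $M_D\approx 1.87$ GeV and $M_{D_s}\approx 1.97$ GeV (our rounded input values) into the first of these relations gives, for $\bar B\rightarrow D\bar D_s$,

```latex
$$
vv'=\frac{M_B^2+M_D^2-M_{D_s}^2}{2M_BM_D}
\approx\frac{27.88+3.50-3.88}{19.75}\approx 1.39\, ,
$$
```

and similarly $vv''\approx 1.36$ and $v'v''\approx 2.78$, in agreement, up to rounding, with the entries of Table 1.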
Table 1 gives the values of the parameters $y=vv'$, $v'v''$ and $vv''$ for the various transitions into the ground-state and excited resonances. In the GSV limit all these kinematical parameters reduce to unity. \begin{table} \begin{center} \caption{Kinematical parameters and block factors for the two-body transitions $\bar B\rightarrow D^{(*)}\bar D_s^{(*)}$ and their radial excitations.} \vspace{0.1in} \begin{tabular} {|c|c|c|c|c|}\hline Decay &$y$&$v'v{''}$&$vv{''}$&${\rm block}\,\,{\rm factor}$\\ \hline $B\rightarrow D\bar D_s$&1.39 &2.79&1.36&1.50\\ \hline $B\rightarrow D\bar D^*_s$&1.36&2.53&1.29&1.25\\ \hline $B\rightarrow D^*\bar D_s$&1.32&2.52&1.33&0.93\\ \hline $B\rightarrow D^*\bar D^*_s$&1.29&2.28&1.27&$\sqrt{3}\cdot 0.94$\\ \hline $B\rightarrow D\bar D_s'$ &1.25 & 1.82 & 1.13&1.22\\ \hline $B\rightarrow D\bar D_s^{*\prime} $ & 1.21 & 1.64& 1.10& 0.79\\ \hline $B\rightarrow D^*\bar D'_s$ & 1.19 &1.64 &1.11&0.58 \\ \hline $B\rightarrow D^*\bar D^{*\prime}_s$ &1.15&1.47&1.08&$\sqrt{3}\cdot 1.04$\\ \hline $B\rightarrow D\bar D_s''$ & 1.10 &1.28 &1.04 &1.07 \\ \hline $B\rightarrow D\bar D_s^{*\prime\prime}$ & 1.05 & 1.14& 1.02& 0.37\\ \hline $B\rightarrow D^*\bar D_s''$ & 1.05 &1.14 &1.02&0.28 \\ \hline $B\rightarrow D^*\bar D_s^{*\prime\prime}$ & 1.006 &1.02 &1.002 &$\sqrt{3}\cdot 1.09$\\ \hline \end{tabular} \end{center} \end{table} This table also gives the values of ``block factors". They are defined as $$ \mbox{block factor} = \left( \frac{\vert{\cal A}\vert^2}{|\frac{G_F}{\sqrt{2}} V_{cb}V^*_{cs} \, a_1\, \left(2 \sqrt{M_BM_D}f_{D_s}M_{D_s}\right) |^2}\right)^{1/2} $$ where the summation over the polarizations of $D_{(s)}^*$ is implied, where applicable; for the excited $D_s$ states considered below $M_{D_s}$ and $f_{D_s}$ in the denominator will be understood as the mass and the coupling of the respective pseudoscalar. We included here the IW function $\xi(vv')$, see below. In the first decay the block factor is just the square bracket in Eq. (\ref{A1}).
We see that in the $\bar B \rightarrow D\bar D_s$ transition the block factor, which is equal to unity in the GSV limit, is actually close to $1.5$, while in the ``wrong" spin correlation transition $\bar B \rightarrow D\bar D_s^*$ the block factor, which vanishes in the GSV limit, is actually close to $1.25$. This is a manifestation of the fact that in the two-body decays into the lowest-lying $D$'s the GSV approximation is not too good, as was expected, of course. Table 1 shows that the GSV approximation becomes much better for excited $\bar D_s$'s, where the wrong spin transitions are indeed suppressed. The GSV approximation, quite naturally, significantly improves in the three-particle decays as well, where a part of the overall energy release goes to create, additionally, the kaon mass, and the remainder is shared by three particles, not two. If instead of $\bar D_s$ and $\bar D_s^*$ we have their radial excitations, Eqs. (\ref{A1})--(\ref{A3}) are modified in a minimal way. Apart from the masses and the kinematical factors, one must replace $f_{D_s}$ by the corresponding coupling. For numerical estimates of the decay rates we need to fix various parameters. First, we take $f_{D_s}\approx 200 \,\mbox{MeV}$ \cite{12,BE}. Next, the Isgur-Wise function $\xi$ must be evaluated at the proper recoil values, which are given in Table 1. We use the linear approximation for the Isgur-Wise function \begin{equation} \xi (vv') = 1-\rho^2 (vv'-1) \, , \label{iwlin} \end{equation} where $\rho^2$ is the slope parameter. Its numerical value is more or less known, and we put $$ \rho^2 = 0.7\, . $$ This value is compatible with the experimental data \cite{BR} and with the QCD sum rule calculations \cite{20}. Finally, $a_1\approx 1.1$.
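As an illustration, the block factor for $\bar B\rightarrow D\bar D_s$ can be reproduced numerically. In the sketch below the square bracket of Eq. (\ref{A1}) is taken in the form suggested by the rate formula Eq. (\ref{G1}); the meson masses are assumed (approximate) inputs, while $\rho^2=0.7$ is taken from the text.

```python
# Block factor for B -> D Ds-bar: square bracket of the amplitude times the
# linear Isgur-Wise function xi(y) = 1 - rho2*(y - 1), rho2 = 0.7.
# Meson masses (GeV) are assumed, approximate inputs.
M_B, M_D, M_Ds = 5.279, 1.870, 1.968
rho2 = 0.7

y = (M_B**2 + M_D**2 - M_Ds**2) / (2 * M_B * M_D)   # y = v.v' ~ 1.39
xi = 1 - rho2 * (y - 1)                              # ~ 0.73
bracket = (M_B - M_D) / M_Ds * (1 + y) / 2           # square bracket, ~ 2.07
block_factor = bracket * xi                          # ~ 1.50 (Table 1)
```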
The relevant decay rates are \begin{equation} \Gamma (B\rightarrow D\bar D_s)\;=\;\Gamma_0\, \left( \frac{M_B-M_D}{M_{D_s}}\, \frac{1+vv'}{2} \right)^2 \, \xi^2 \, , \label{G1} \end{equation} \vspace*{0.2cm} \begin{equation} \Gamma (B\rightarrow D\bar D^*_s) \;=\; \Gamma_0^*\, \frac{(M_B+M_D)^2}{4M_D^2}\, ((vv'')^2-1) \, \xi^2 \, , \label{cf} \end{equation} \vspace*{0.2cm} \begin{equation} \Gamma (B\rightarrow D^*\bar D_s)\;=\;\Gamma '_0\, \frac{(M_B+M_{D^*})^2}{4M_{D_s}^2}\, ((vv')^2-1)\, \xi^2\, , \label{df} \end{equation} \vspace*{0.2cm} \begin{equation} \Gamma (B\rightarrow D^*\bar D^*_s)=3\Gamma_{0}'^*\, \left[1+\frac{vv'-1}{3} +\frac{(vv')^2-1}{4} - \frac{(v'v'')^2-1}{12} +\frac{(vv')(vv'')-1}{3} \right]\,\xi^2 . \label{G2} \end{equation} Here $$ \Gamma_0 = \frac{G^2_F}{4\pi} \vert V_{cb}V_{cs}\vert^2a_1^2 \, f^2_{D_s}\, M_DM_{D_s}^2 \frac{|\vec p |}{M_B}\, , $$ while $\Gamma_0^*$, $\Gamma_0'$ and $\Gamma_0'^*$ are obtained from $\Gamma_0$ by replacing $D$ ($D_s$) with $D^*$ ($D_s^*$) wherever an asterisk appears, and $\vec p$ is the c.o.m. momentum of the produced mesons. Numerically, the relevant branching ratios are given in Table 2. \begin{table} \begin{center} \caption{Branching ratios of the two-body $B\rightarrow D^{(*)}\bar D_s^{(*)}$ modes. } \vspace{0.1in} \begin{tabular} {|c|c|}\hline Decay &${\rm Branching}\,,\;\%$\\ \hline $B\rightarrow D\bar D_s$&1.1\\ \hline $B\rightarrow D\bar D^*_s$&0.87\\ \hline $B\rightarrow D^*\bar D_s$&0.45\\ \hline $B\rightarrow D^*\bar D^*_s$&1.5\\ \hline ${\rm Total}\;\; {\rm Br}\,, \;\%$ & $4.0$ \\ \hline \end{tabular} \end{center} \end{table} So far we have basically repeated the calculations of Mannel {\it et al.} \cite{Mannel}; our estimate of the Isgur-Wise function is different, and our value of $f_{D_s}$ is essentially lower than that accepted in Ref. \cite{Mannel}. We note that the total sum of the above four modes amounts to $\sim 4\%$ in the branching ratio. None of these modes produces ``wrong" sign $D$'s.
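As a rough check of the first entry of Table 2, Eq. (\ref{G1}) can be evaluated numerically. In the sketch below, $f_{D_s}$, $\rho^2$ and $a_1$ are taken from the text, while $G_F$, the CKM factors, the meson masses and the $B$ lifetime are assumed (mid-1990s) values that are not quoted here.

```python
import math

# Numerical estimate of Br(B -> D Ds-bar) from Eq. (G1).
# f_Ds, rho2 and a1 are from the text; G_F, CKM elements, masses (GeV)
# and the B lifetime are assumed inputs, not taken from the text.
G_F = 1.166e-5                    # GeV^-2
Vcb, Vcs = 0.040, 0.975           # assumed CKM magnitudes
a1, f_Ds, rho2 = 1.1, 0.200, 0.7
M_B, M_D, M_Ds = 5.279, 1.870, 1.968
tau_B, hbar = 1.6e-12, 6.582e-25  # s, GeV*s

# c.o.m. momentum from the Kallen triangle function
lam = (M_B**2 - (M_D + M_Ds)**2) * (M_B**2 - (M_D - M_Ds)**2)
p = math.sqrt(lam) / (2 * M_B)    # ~ 1.81 GeV

y = (M_B**2 + M_D**2 - M_Ds**2) / (2 * M_B * M_D)
xi = 1 - rho2 * (y - 1)

Gamma0 = (G_F**2 / (4 * math.pi)) * (Vcb * Vcs * a1 * f_Ds)**2 \
         * M_D * M_Ds**2 * p / M_B
Gamma = Gamma0 * ((M_B - M_D) / M_Ds * (1 + y) / 2)**2 * xi**2
Br = Gamma * tau_B / hbar         # ~ 0.011, i.e. ~1.1% as in Table 2
```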
The above numerical results are in agreement with the experimental data \cite{21}, \begin{equation} {\rm Br} (B^0\rightarrow D\bar D_s)= 0.7\pm 0.4\%\; ,\; \;\;\; {\rm Br} (B^0\rightarrow D^*\bar D^*_s) = 1.9\pm 1.2\%\; , \label{11a} \end{equation} \begin{equation} {\rm Br} (B^0\rightarrow D^*\bar D_s)=1.2\pm 0.6\%\; ,\; {\rm Br} (B^0\rightarrow D\bar D^*_s)=2\pm 1.5\%\, , \end{equation} \begin{equation} {\rm Br} (B^+\rightarrow D\bar D_s)=1.7\pm 0.6\%\; , \;\;\;\; {\rm Br} (B^+\rightarrow D^*\bar D^*_s) = 2.3\pm 1.4\%\; , \label{11l} \end{equation} \begin{equation} {\rm Br} (B^+\rightarrow D^*\bar D_s)=1\pm 0.7\%\; , \; {\rm Br}(B^+\rightarrow D\bar D^*_s) =1.2\pm 1\%\, . \end{equation} Encouraged by this success we can proceed to other exclusive modes which have not been discussed in the literature so far. \vspace*{0.2cm} {\em (ii) Modes with the radial excitations of $\bar D_s^{(*)}$} \vspace*{0.2cm} It is a familiar fact that radial excitations couple to the corresponding currents significantly more weakly than the ground-state mesons with the given quantum numbers. This is, for instance, the case with the $J/\psi$ and $\Upsilon$ mesons, whose couplings to the current $\bar Q\gamma_\mu Q$ decrease with the excitation number. In the decays $ B\rightarrow D^{(*)} \bar D_s^{(*)\prime}$ (where the prime denotes the first radial excitation and the double prime will denote the second excitation) we expect the opposite trend. This is a specific feature of the $\bar Q q$ current. The corresponding parton spectral density grows quadratically with the energy $\varepsilon$, see Sect. 2. If duality takes place, the residues should be roughly proportional to the corresponding areas under the parabola in Fig.~1. To get an idea of the excited state contribution we will assume that $M_{D_s'} \approx 2.6 \,\mbox{GeV}$, $M_{D_s''} \approx 3.1 \,\mbox{GeV}$, $M_{D_s^{*\prime}} \approx 2.75 \,\mbox{GeV}$ and $M_{D_s^{*\prime\prime}} \approx 3.25 \,\mbox{GeV}$, i.e., the masses are equidistant.
Imposing the duality condition in the form $$ \int_{M_{k-1}}^{M_k}\; dM\;\Im \Pi_{\rm pert}(M)\;=\; \int\; dM\;\Im \Pi_k(M)\;,\;\;\; \Im \Pi_k(M)\;\propto \; M_k f_k^2 \,\delta(M- M_k) $$ we then obtain \begin{equation} \frac{f_{D_s'}^2 M_{D_s'}}{f_{D_s}^2 M_{D_s}} \approx \frac{27-8}{8} \approx 2.5\;,\;\;\;\; \frac{f_{D_s''}^2 M_{D_s''}}{f_{D_s}^2 M_{D_s}} \approx \frac{64-27}{8} \approx 4.5\;. \label{dual} \end{equation} Note that we use here an ``extended" definition of the excited resonances, which need not coincide with the standard Breit-Wigner peaks, with the backgrounds subtracted. Rather, we include backgrounds, collecting, say, in $D_s'$ everything with the invariant mass lower than $\approx 2.3 \,\mbox{GeV}$ (except the ground state $D_s$, of course). This is a natural operational definition from the point of view of the QCD practitioner. The estimates in Eq. (\ref{dual}) are crude and intended only for the purpose of orientation. However, it is a model for the spectral density that exactly respects duality to the given approximation, and, thus, allows simultaneous estimates of its violations in the actual decay width, where only a limited energy release is available. The contribution of the last open multiplet is actually a reasonable estimate of the scale of duality violations that might occur. On the practical side, to allow a measure of theoretical uncertainty in the excited state residues, we will allow the ratios to float, \begin{equation} (f_{D_s'}^2 M_{D_s'})/(f_{D_s}^2 M_{D_s})\; =\; 2.5 \, x \; , \; \;\;\; (f_{D_s''}^2 M_{D_s''})/(f_{D_s}^2M_{D_s}) \;= \;4.5 \, x \end{equation} where $x$ will be varied between 0.6 and 1. The values of $x<1$ reflect a relative enhancement of the lower end of the spectral density by the perturbative and, to some extent, condensate effects.
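The origin of the numbers in Eq. (\ref{dual}) can be made explicit: for a spectral density growing as $\varepsilon^2$, the duality integrals over equidistant mass intervals scale as differences of cubes. The sketch below assumes that the ground state collects the area up to the first boundary (hence the $8=2^3$ in the denominators); this reading of Eq. (\ref{dual}) is ours.

```python
# Duality estimate of excited-state residues for an eps^2 spectral density:
# the ground state collects the cumulative area ~ 2^3 = 8 up to the first
# duality boundary; each radial excitation adds the next slice of the parabola.
area_ground = 2**3
r_prime       = (3**3 - 2**3) / area_ground  # f'^2 M'/(f^2 M)   ~ 2.4 (quoted ~2.5)
r_doubleprime = (4**3 - 3**3) / area_ground  # f''^2 M''/(f^2 M) ~ 4.6 (quoted ~4.5)
```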
The higher end of the spectral density is also somewhat suppressed, since at these energies relativistic effects become important and temper the $\varepsilon^2$ growth of the spectral density characteristic of the non-relativistic limit. As we will see later on, the data are more consistent with $x\sim 0.6$. In Table 3 the branching ratios for the two-particle decays into excited states are quoted for three different values of $x$. \begin{table} \begin{center} \caption{Branching ratios of the two-body decays into radially excited $\bar D_s^{(*)}$ states for different values of $x$.} \vspace{0.1in} \begin{tabular} {|c|c|c|c|}\hline Decay & ${\rm Br}, \%\,\, (x=0.6)$ & ${\rm Br}, \%\,\, (x=0.8)$ & ${\rm Br}, \%\,\, (x=1)$\\ \hline $B\rightarrow D\bar D_s'$ & $1.1$ & $1.5$ & $1.9$\\ \hline $B\rightarrow D\bar D_s^{*\prime}$ & $0.5$ & $0.65$ & $0.8$\\ \hline $B\rightarrow D^*\bar D_s'$ & $0.25$ & $0.35$ & $0.4$\\ \hline $B\rightarrow D^*\bar D_s^{*\prime}$ & $2.4$ & $3.2$ & $4.0$\\ \hline ${\rm Total}\;\;\bar D_s^{(*)\prime}\; {\rm Br}, \%$ & $4.3$ & $5.7$ & $7.1$ \\ \hline $B\rightarrow D\bar D_s''$ &$1.2$ &$1.6$ & $2.0$ \\ \hline $B\rightarrow D\bar D_s^{*\prime\prime}$ &$0.1$ &$0.15$ & $0.2$\\ \hline $B\rightarrow D^*\bar D_s''$ &$0.06$ &$0.08$ & $0.1$\\ \hline $B\rightarrow D^*\bar D_s^{*\prime\prime}$ &$1.1$ &$1.5$ & $1.9$\\ \hline ${\rm Total}\;\;\bar D_s^{(*)\prime\prime}\; {\rm Br}, \%$ & $2.5$ &$3.3$ & $4.1$ \\ \hline ${\rm Total}\;\;\bar D_s^{(*)\,\rm excit}\;{\rm Br},\%$&$6.8$&$9.0$&$ 11.3$\\ \hline \end{tabular} \end{center} \end{table} Altogether we get that $$ {\rm Br} (B\rightarrow D^{(*)} + \mbox{excited}\,\, D_s^{(*)}) \sim 7\; {\rm to }\, 12\%\; . $$ Let us discuss the decay pattern of these excitations. This issue is interesting not only in connection with the ``wrong" sign $D$'s in the $B$ decays, but by itself as well. $\bar D_s'$ can decay neither into $\bar D\bar K$ nor into $\bar D^*\bar K$. The second mode is presumably below the threshold.
Even if it is slightly above the threshold, the small energy release, in conjunction with the $P$ wave nature of the decay, strongly suppresses it. Thus, the dominant mode is expected to be $\bar D_s \pi\pi $. Therefore, the $\bar D_s'$ production does not generate the ``wrong" sign $D$'s. On the other hand, $\bar D_s''$ will presumably decay predominantly into $\bar D^{*}\bar K$. One third of the above goes into $\bar D_s^{*} \eta$. Some $\bar D\bar K^{*}$ are also possible. The latter mode also leads to a ``wrong" sign $D$. Its presence may somewhat distort, however, our estimate of $\bar D_s^{*} \eta$. A competition from the $\bar D_s \pi\pi $ mode is generally possible, but unlikely to be essential: the decay $\bar D_s'' \rightarrow \bar D_s \pi\pi $ is three-body, with the additional suppression due to the pion momenta. (It can proceed via an $S$-wave $\pi\pi$ resonance; however, the $\sigma$ meson is too broad and leads only to a moderate enhancement, whereas higher resonances do not have much of a phase space.) As for $\bar D_s^{{*\prime}}$ and $\bar D_s^{{*\prime\prime}}$, their dominant decay modes are likely to be $\bar D^{(\prime)}\bar K$ and $\bar D_s^{(\prime)} \eta$. Thus, in the three latter cases we end up with the ``wrong" sign $D$'s. Due to $SU(3)_{\rm fl}$ symmetry the yield of $\eta$ is three times smaller than that of $\bar K$'s. The total branching of the corresponding modes is $5$ to $ 9\%$. The $D\bar D_s\eta$ yield from this mechanism is $1.2$ to $2.3\%$. \vspace{0.2cm} {\em (iii) Nonresonant $D^{(*)}\bar D^{(*)} \bar K$ modes} \vspace{0.2cm} So far we have discussed the $1^-$ and $0^-$ channels of the $\bar c s$ current. Now let us proceed to the positive-parity channels. As is clear from Sect. 2, the nonresonant $S$ wave kaons are produced by $\bar c\gamma_\mu\gamma_5 s$ in association with $\bar D^{(*)}$. The longitudinal part of the current produces $\bar D \bar K$, while the transverse component produces $\bar D^* \bar K$.
We will assume that the kaons can be treated as soft Goldstone mesons. Needless to say, factorization is implied too. Then, we have to write out the currents $\bar b\Gamma_\mu c$ and $\bar c\Gamma_\mu s$ in the chiral/heavy quark theory. The second current is given in Ref. \cite{16}. The amplitude of the kaon ``bremsstrahlung" in the transition $B\rightarrow D^{(*)}\bar D^{(*)}\bar K$ consists of two parts -- contact and pole (see e.g. graphs of Figs.~1 and 2, respectively, in Ref. \cite{16}; see also \cite{USV}). It is not difficult to check that all pole graphs vanish in the GSV limit, when the spatial velocities of the $D$'s are set equal to zero. Additionally, they are numerically suppressed by $g$, the $D^*D\pi$ coupling constant, which, according to Ref. \cite{BBKR}, is about 0.3. Therefore, at the current level of accuracy, we prefer to omit the pole graphs altogether. In the future, as confidence in the numerical value of $g$ grows and estimates of the excited meson couplings acquire better accuracy, it will be necessary to return to the issue and include the pole graphs in the theoretical predictions. With the pole graphs discarded, calculating the decay rates of the above transitions becomes trivial. Indeed, the ratio of the amplitude $D^{(*)}\bar D^{(*)} \bar K$ in the limit $p_K\rightarrow 0$ to the corresponding two-body amplitude $D^{(*)}\bar D^{(*)}_s $ is given by $i/f_\pi$. (Note that we will consistently neglect all SU(3)$_{\rm fl}$ breaking effects everywhere, except the phase spaces. Accounting for SU(3)$_{\rm fl}$ breaking effects in the amplitudes to first order in $m_s$ is possible; we will defer this exercise as well.)
Since the kaons are produced in the $S$ wave, and the contact amplitudes depend neither on the kaon energy nor on the angles, the ratio of probabilities is given merely by the ratio of the three-body to two-body phase spaces $V_{2,3}$, \begin{equation} \frac{\Gamma (\bar B\rightarrow D^{(*)}\bar D^{(*)} \bar K)}{\Gamma (D^{(*)}\bar D^{(*)}_s)} = \frac{1}{f_\pi^2}\, \frac{V_3}{V_2} \, . \label{ddkrat} \end{equation} The three-particle phase space is conveniently written out e.g. in Ref. \cite{22}. Note that the above estimate is valid only for the ``right" spin correlation modes, $D\bar D\bar K$ and $D^{*}\bar D^{*} \bar K$. The ``wrong" spin correlation modes vanish in the GSV limit, for the same reason as was explained above, and cannot be computed without inclusion of the pole graphs. Their role, however, is significantly reduced in the three-particle decays compared to the two-particle case, since in the three-particle decays we are closer to the GSV limit. It is convenient to present the ratio $V_3/V_2$ as follows: \begin{equation} \frac{V_3}{V_2} = \frac{M_B^2}{32\pi^2} \, \frac{I_3}{I_2}\, , \label{vv} \end{equation} where the first factor corresponds to the ratio of the phase spaces with all {\em massless} final particles. The dimensionless factors $I_3$ and $I_2$ reflect the finite masses: $$V_3\,=\,M^2_BI_3/(256\pi^3)\;,\;\;\; V_2\,=\,\frac{1}{4\pi}\,|\vec p\,|/M_B=\frac{I_2}{8\pi}\;. $$ Numerical values of $I_3$ are summarized in Table 4, while the branching ratios of the different nonresonant three-particle decay modes are given in Table 5.
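Eqs. (\ref{ddkrat}) and (\ref{vv}) are straightforward to evaluate; the sketch below does so for $\bar B\rightarrow D\bar D\bar K$ versus $\bar B\rightarrow D\bar D_s$. The value $f_\pi\approx 132\,\mbox{MeV}$ and the meson masses are assumed inputs, and $I_3$ is taken from Table 4. Note that the quoted ratio $\approx 0.5$ additionally includes the kaon-charge multiplicity and the block factors, so this raw estimate is only indicative.

```python
import math

# Raw three-body/two-body rate ratio of Eqs. (ddkrat) and (vv) for
# B -> D Dbar K versus B -> D Ds-bar.  f_pi ~ 0.132 GeV and the meson
# masses are assumed inputs; I3 = 0.073 is taken from Table 4, and
# I2 = 2|p|/M_B follows from the definition of V_2.
f_pi = 0.132
M_B, M_D, M_Ds = 5.279, 1.870, 1.968
I3 = 0.073

lam = (M_B**2 - (M_D + M_Ds)**2) * (M_B**2 - (M_D - M_Ds)**2)
p = math.sqrt(lam) / (2 * M_B)    # c.o.m. momentum of the two-body mode
I2 = 2 * p / M_B

ratio = (1 / f_pi**2) * (M_B**2 / (32 * math.pi**2)) * (I3 / I2)  # O(0.5)
```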
\begin{table} \begin{center} \caption{The dimensionless three-body phase-space factors $I_3$.} \vspace{0.1in} \begin{tabular} {|c|c|}\hline Decay &$I_3$\\ \hline $B\rightarrow D\bar D\bar K\;(\eta)$ &$0.073\;(0.068)$\\ \hline $B\rightarrow D^*\bar D^*\bar K\;(\eta)$& $0.039\;(0.035)$\\ \hline $B\rightarrow D\bar D'\bar K\;(\eta)$&$0.014\;(0.002)$\\ \hline $B\rightarrow D^*\bar D^{*\prime}\bar K\;(\eta)$&$0.002\;(0.001)$\\ \hline \end{tabular} \end{center} \end{table} Using the numerical values for $I_3$ one can easily estimate the corresponding rate ratios: \begin{equation} \frac{\Gamma (\bar B\rightarrow D\bar D \bar K)}{\Gamma (D\bar D_s)} \approx 0.5 \, , \,\,\, \frac{\Gamma (\bar B\rightarrow D^{*}\bar D^{*} \bar K)}{\Gamma (D^{*}\bar D^{*}_s)} \approx 0.7 \, , \end{equation} and \begin{equation} \frac{\Gamma (\bar B\rightarrow D\bar D' \bar K)}{\Gamma (D\bar D_s')} \approx 0.2\, , \,\,\, \frac{\Gamma (\bar B\rightarrow D^{*}\bar D^{{*\prime}} \bar K)}{\Gamma (D^{*}\bar D^{{*\prime}}_s)} \approx 0.04\, . \end{equation} Here we accounted for the fact that one can have two $\bar K$'s, of different charge, in each process at hand, and incorporated the block factors for the two-body modes. The relevant branching ratios, computed directly, are collected in Table 5. \begin{table} \begin{center} \caption{Branching ratios of the nonresonant three-body modes.} \vspace{0.1in} \begin{tabular} {|c|c|}\hline Decay & ${\rm Br}\,, \%\;\, (x=1)$\\ \hline $B\rightarrow D\bar D\bar K$ & $0.5$\\ \hline $B\rightarrow D^*\bar D^{*}\bar K$ & $1$\\ \hline $B\rightarrow D\bar D'\bar K$ & $0.3$\\ \hline $B\rightarrow D^*\bar D^{{*\prime}}\bar K$ & $0.15$\\ \hline \end{tabular} \end{center} \end{table} Altogether we see that the total branching for the four possible nonresonant channels is $\sim 2\%$. In the same approximation it is easy to estimate the yield of nonresonant $\bar D^{(*)}_s\eta$ instead of $\bar D^{(*)}\bar K$. In the SU(3)$_{\rm fl}$ limit it is 1/3 of the kaon yield.
Taking account of the differences in the phase spaces we get ${\rm Br}(B\rightarrow D\bar D_s\eta) \sim 0.3$ to $0.5\%$. Altogether the nonresonant channels give $\sim 2-3\%$ of the total branching ratio. \vspace{0.2cm} {\em (iv) Other modes with nonresonant Goldstone mesons} \vspace{0.2cm} Invoking the chiral/heavy quark technique and factorization in the same vein as above, it is possible to calculate amplitudes with two soft Goldstone mesons (nonresonant), e.g. $D \bar D \bar K\pi$ or $D\bar D_s\pi\pi$. For instance, the $\bar D^* \bar K\pi$ state will be produced by the vector $\bar c\gamma_\mu s$ current, and all relevant matrix elements are already known in the literature \cite{B}. A number of pole graphs are to be taken into account. We did not attempt this calculation, because the four-particle phase space in the processes at hand is prohibitively small. The decays with two nonresonant Goldstone mesons are rare. \vspace{0.2cm} {\em (v) Resonances with the quantum numbers $0^+$ and $1^+$ in the $\bar c s$ channel } \vspace{0.2cm} Since the non-resonant contributions in the above channels turn out to be rather small, well below the 12\% level following from the quark-gluon calculation (plus duality), it is natural to conclude that $D_{s1}$ and $D_{s1}'$ (and $D_{s0}^*$ and $D_{s0}^{{*\prime}}$) play a significant role. The mass of $D_{s1}$ is expected to be $\sim 2.5$ GeV, and it is natural to expect that the mass of $D_{s1}'$ is close to 3 GeV. The masses of $D_{s0}^*$ and $D_{s0}^{{*\prime}}$ are close to those above. The estimates of the corresponding yield can be approached in a way similar to what has been done for the vector (pseudoscalar) channel. (The effect of the $s$-quark mass may be harder to take into account properly, since, as we have seen, the significant chiral kaon contribution gets suppressed, which may affect the residues of the resonances.) We use the following simplified estimate.
The rate of $D_{s1}$ is expected to be approximately the same as that of $D_s^{*\prime}$. The first excitation of $D_{s1}$ is produced roughly with the same rate as the second excitation of $D_s^*$. The same is valid with respect to $D_{s0}^*$ and $D_{s0}^{{*\prime}}$ relative to the excitations of $D_s$. One may expect that all these resonances decay into the $\bar D^{(*)}\bar K$ mode; in some cases a $\bar K^{*}$ may appear instead of a $\bar K$. This will still lead to the ``wrong" sign $D$. Then one gets $\sim 7$ to $11\%$ depending on the value of $x$. \vspace{0.2cm} {\em (vi) Excited D states produced by $\bar b\Gamma_\mu c$ bracket} \vspace{0.2cm} Above we considered only decays of $B$ into $D^{(*)}$ and some radially excited $\bar D_s$ states. There may also be decays into the excited states of $D$. The total yield for such decays with the radial excitations is quite small. There exist two possibilities: one can produce from the $\bar b\Gamma_\mu c$ current either radial excitations of $D^{(*)}$, or $P$-wave mesons. In the first case the corresponding partial widths are proportional to the square of the Isgur-Wise function of the transition between $B$ and the radial excitations of $D$. However, the corresponding IW function vanishes as $vv'-1$. Thus the corresponding partial widths are suppressed by a factor $(vv'-1)^2\sim \vec v^{\prime \,4}$ relative to the transitions into $D^{(*)}$. A direct calculation shows that if one takes the masses of the excited $D$ states to lie uniformly below the masses of the excited $\bar D_s$ states by 150 MeV, the corresponding recoil values of $vv'-1$ spread from 0 to 0.2. Thus the corresponding partial widths are suppressed by a factor of order 0.04 relative to the decays into the non-excited states. Our estimate shows that the corresponding contribution is less than $1\%$; although larger effects, up to 10 to $20\%$ of the elastic transition, are expected from the power corrections \cite{USV,optical}, the corresponding rates still seem to be small.
Higher rates can be due to the $P$-wave mesons from the $\bar b\Gamma_\mu c$ current; in this case the rates are only linear in $vv'-1$. An upper bound (and a reasonable estimate) of them can be obtained by merely calculating the sum over all decay modes with $D$ and $D^*$, with a ``faked" $\rho^2=0.25$ (or with the excitation-free IW function $\xi_0(vv')=((1+vv')/(2vv'))^{1/2}$), through the sum rules for the transition amplitudes \cite{optical}. We then compare the result obtained to that with the ``actual'' $\xi$ of Eq.~(\ref{iwlin}): $$ {\rm Br}(B\rightarrow D^{\rm excit}+X_{\bar{c}s})\; \approx \; {\rm Br}(B\rightarrow D^{(*)}+X_{\bar{c}s})\vert_{\xi=\xi_0}- {\rm Br}(B\rightarrow D^{(*)}+X_{\bar{c}s})\vert_{\xi=1-\rho^2(vv'-1)}\;\;. $$ Since for higher excitations of $D_s^{(*)}$ the phase space is limited, this relation only overestimates somewhat the associated production of excited $D^{(*)}$. In this way we arrive at the conclusion that the contribution of the excited $D^{(*)}$ states from the $\bar b\Gamma_\mu c$ bracket is $\mathrel{\rlap{\lower3pt\hbox{\hskip0pt$\sim$}}\raise1pt\hbox{$<$}}\, 3\%$. \section{Conclusions} The existence of the Goldstone mesons results in a very peculiar pattern of the heavy-light spectral densities, which turn out to be drastically different, say, in the vector and axial channels. By treating kaons as soft, we are able to calculate the nonresonant $D^{(*)}\bar D^{(*)}\bar K$ ($D^{(*)}\bar D_s^{(*)} \eta$) yield in $B$ decays. Altogether, summing all two-body and three-body decay modes, we get $23$ to $31$\% for the total $b\rightarrow \bar ccs$ yield. The first number corresponds to $x=0.6$ and the second to $x=1$. Moreover, if our assumptions about the decay modes of $D_s''$, $D_s^{*\prime}$, $D_s^{*\prime\prime}$, $D_{s1}$, $D_{s1}'$, $D_{s0}^*$ and $D_{s0}^{*\prime}$ are justified, they lead predominantly to the ``wrong" sign $D$'s.
The yield of the ``wrong" sign $D$'s then comes out to be 13 to $ 19\%$.\footnote{We assume here, rather arbitrarily, that the excited $D^{(*)}$ mesons from the $\bar b c$ bracket are produced at the level of 2\%, and this yield is split equally between the ``wrong" sign $D$'s and $D_s$.} The yield of the ``right sign" $D$'s is estimated to be 10 to $ 12\%$; about 4\% comes from the modes $D^{(*)}\bar D_s^{(*)}$, and about the same from the modes with $\eta$ in the final state. Thus, not only do we confirm, exploiting a different line of reasoning, the conclusion of Ref. \cite{5} regarding the abundance of the ``wrong" sign $D$'s, but we encounter a menace of ``overshooting": the yield of the ``wrong" sign $D$'s naturally tends to be too high. Under the circumstances it is tempting to calibrate theoretical predictions by using the CLEO and ALEPH data on the ``wrong" sign $D$'s, ${\rm Br}\,(B\rightarrow D^{(*)}\bar D^{(*)}\bar K )=10\pm 4\%$. Combining this result with our estimates we see that the value of $x\approx 0.6$ is somewhat preferred. If so, the theoretically expected branching ratio of the ``wrong" sign $D$'s is about 13\%, that of the ``right sign" $D$'s is about 10\%, and the total branching ratio of the $\bar c c s$ channel is about 23\%. Thus, we see that our model of saturation of inclusive data with two- and three-particle exclusive modes is perfectly consistent with the duality-based prediction of Ref. \cite{BBSL}, $\sim 23\%$ or slightly higher. It is quite remarkable that the exclusive estimates presented above do not contradict the quark-hadron duality. This is not too surprising, of course, since in modelling the resonance yields we respected the constraints imposed by QCD plus factorization. The accuracy of this conclusion is not high, though, at the level of $30\%$. As an experimental confirmation of our approach, the $D\bar D_s\eta +...$ yield will have to be seen at the $4\%$ level.
The predictions for the absolute rates made above do not pretend to have too high an accuracy; suffice it to mention that the rates are proportional to $f^2_{D_{(s)}}$, and this quantity carries a noticeable uncertainty. However, it seems certain that, contrary to naive expectations, the modes with $\bar D_s$ (the ``right" sign $D$'s) constitute a smaller fraction of the $b\rightarrow c\bar c s$ channel. This seems to be a rather general feature not depending on the details of the approximations we rely on. The spectral density emphasizes larger invariant masses of the $\bar c s$ system, which then predominantly generates decay chains with strangeness and charm separated. In our calculations we employed factorization, which must be violated at some level. However, if the cross-talk between the two currents $\bar c s$ and $b\rightarrow c$ appears to be strong, then one must expect the $D_s$ to dissolve into separate charmed and strange particles from the very beginning. It is worth noting that the relative yield of $D_s^{(*)}$ is more sensitive to the strange quark mass than the overall $\bar c c s$ yield. We point out that, although our calculations do not seem to suggest a prominent mechanism for differentiating the hadronic widths of $B$ and $\Lambda_b$ in the $b\rightarrow c\bar c s$ channel, where the calculations naively go in closest analogy to the one presented above, there is still the potential for locating the origin of such differences. The standard arguments for factorization based on $1/N_c$ counting rules are not necessarily applicable for the decays involving heavy baryons, in particular when the velocities of the final particles are small. This theoretical possibility must be carefully explored. On the other hand, detailed experimental information about the $\Lambda_b$ hadronic width and, first of all, the value of $n_c$, can provide us with an invaluable direct input. \vspace*{.3cm} {\bf Acknowledgements}: \hspace{0.2em} The authors would like to thank G.
Altarelli, P. Ball, I.~Bigi and J.~Rosner for useful discussions, and T. Browder for comments on the experimental situation. One of the authors (B.B.) thanks I. Dunietz for explaining his results. This work was supported in part by DOE under the grant number DE-FG02-94ER40823, by NSF under the grant number PHY 92-13313 and by the Israel Academy of Sciences and the VPR Technion fund.
\section{Introduction}\label{S1} Internet access has become almost ubiquitously supported by the global terrestrial mobile networks relying on the fourth-generation (4G) and fifth-generation (5G) wireless systems. Hence, having Internet access has become virtually indispensable. The provision of Internet-above-the-clouds \cite{zhang2019aeronautical} is also of ever-increasing interest to both the civil aviation airlines and to the passengers. In-flight WiFi relying on satellites and cellular systems has been available on some flights of global airlines, such as British Airways, American Airlines, United Airlines, Emirates and Delta Airlines, just to name a few. However, aeronautical communications directly relying on satellites and/or cellular systems suffer from high cost, limited coverage, limited capacity, and/or high end-to-end latency. Furthermore, the cellular systems that can support ground-to-air (G2A) communications are limited to a line-of-sight range and require specially designed ground stations (GSs), which necessitate the roll-out of an extensive ground infrastructure to cover a wide area. Intuitively, it is quite a challenge to provide ubiquitous coverage for every flight at a low cost by directly relying on satellites and/or cellular systems. As an alternative architecture, aeronautical {\it{ad-hoc}} networks (AANETs) \cite{medina2011airborne,vey2014aeronautical,sakhaee2006aeronautical} are capable of extending the coverage, whilst reducing the communication cost by hopping messages from plane to plane. Each aircraft, acting as a network node, is capable of sending, receiving and relaying messages until the messages are delivered to or fetched from a GS, so as to enable Internet access. Hence, routing, which finds an `\emph{optimal}' path consisting of a sequence of relay nodes, is one of the most important challenges to be solved in support of this Internet-above-the-clouds application.
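As a minimal illustration of the relaying idea, a minimum-hop path from a source aircraft to a GS can be found by breadth-first search over the connectivity graph. The toy topology below is a made-up placeholder, not real flight data.

```python
from collections import deque

# Minimum-hop multi-hop routing sketch for an AANET: breadth-first search
# from a source aircraft to a ground station (GS).  The link table is a
# hypothetical placeholder topology.
links = {
    "A1": ["A2", "A3"],
    "A2": ["A1", "A4"],
    "A3": ["A1", "A4"],
    "A4": ["A2", "A3", "GS"],
    "GS": ["A4"],
}

def min_hop_path(src, dst):
    """Return a minimum-hop relay path from src to dst, or None."""
    prev, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:                 # reconstruct the path backwards
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in links[node]:
            if nxt not in prev:         # first visit = fewest hops
                prev[nxt] = node
                queue.append(nxt)
    return None

route = min_hop_path("A1", "GS")        # a three-hop relay path to the GS
```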
Routing protocols have been intensively investigated in mobile {\it{ad-hoc}} networks (MANETs) \cite{mauve2001survey} and vehicular {\it{ad-hoc}} networks (VANETs) \cite{li2007routing} as well as in the flying {\it{ad-hoc}} networks (FANETs) \cite{lakew2020routing}. However, as our analysis in \cite{zhang2019aeronautical} has revealed, AANETs have their unique features in terms of flying speed, altitude, propagation characteristics and network coverage as well as node mobility, which are different from those of MANETs, VANETs and FANETs. Therefore, the routing protocols specially developed for MANETs, VANETs and FANETs cannot be directly applied to AANETs, although their philosophies may be appropriately adopted. Hence, we will focus our attention on routing protocols specially designed for aeronautical networks. Sakhaee and Jamalipour \cite{sakhaee2006global} showed that the probability of finding at least two but potentially up to dozens of aircraft capable of establishing an AANET above-the-clouds is close to $100\%$. This was inferred by investigating a snapshot of flight data over the United States (US). They also proposed a quality-of-service (QoS) based so-called multipath Doppler routing protocol by jointly considering both the QoS and the relative velocity of nodes in order to find stable routing paths. Luo {\it et al.} \cite{luo2019AeroMRP} proposed a reliable multipath transport protocol for AANETs by exploiting the path diversity provided by heterogeneous aeronautical networks. By exploiting the geographical information, Iordanakis {\it et al.} \cite{iordanakis2006ad} proposed a routing protocol for aeronautical mobile ad hoc networks, which may be viewed as an evolved version of the classical {\it{ad-hoc}} on-demand distance vector based routing (AODV) originally developed for MANETs.
Furthermore, geographical information was intensively exploited in designing the routing protocols of \cite{jabbar2009aerorp,peters2011geographical,rohrer2011aerorp,medina2011geographic,gu2011delay,wang2013gr,mahmoud2013ads,swidan2015secure,zheng2016load,pang2017secure,luo2017multiple} for AANETs by considering that the locations of civil passenger aircraft are always available with the aid of the on-board radar and the automatic dependent surveillance-broadcast (ADS-B) system \cite{ADS-B1,ADS-B2}. Explicitly, the authors of \cite{jabbar2009aerorp,peters2011geographical,rohrer2011aerorp} developed a routing protocol termed AeroRP, which is a highly adaptive location-aware routing algorithm exploiting the broadcast nature of the wireless medium along with the physical node location and trajectory knowledge for improving the data delivery in Mach-speed mobile scenarios. However, AeroRP ignores the delay imposed by relaying and it is prone to network congestion. Medina {\it et al.} \cite{medina2011geographic} proposed a geographic load sharing strategy for fully exploiting the total air-to-ground capacity available at any given instant in time. In their work, the network congestion was avoided by a congestion-aware handover strategy capable of efficient load balancing among Internet Gateways. Gu {\it et al.} \cite{gu2011delay} proposed a delay-aware routing scheme using a joint metric relying on both the relative velocity and the expected queueing delay of nodes for selecting the next node. Wang {\it et al.} \cite{wang2020delay} also designed a delay-aware routing protocol for aeronautical networks, which explored the effect of dual connectivity on delay-aware resource control in heterogeneous aeronautical networks. Du {\it et al.} \cite{du2021dynamic} aimed at minimizing the end-to-end transmission delay by jointly exploiting the direct transmissions, relayed transmissions and opportunistic transmissions.
By contrast, we have also minimized the end-to-end delay of AANETs by exploiting a weighted digraph and the shortest-path algorithm in \cite{cui2021minimum} and by invoking deep reinforcement learning in \cite{liu2021deep}, respectively. Both Wang {\it et al.} \cite{wang2013gr} as well as Mahmoud and Larrieu \cite{mahmoud2013ads} exploited the geographical information provided by the ADS-B system. Specifically, Wang {\it et al.} \cite{wang2013gr} eliminated the need for traditional routing beaconing and improved the next-hop selection, whilst Mahmoud and Larrieu \cite{mahmoud2013ads} concentrated on improving the information security of routing protocols. The security issues of routing were further considered by Swidan {\it et al.} \cite{swidan2015secure} and Pang {\it et al.} \cite{pang2017secure}. Specifically, Swidan {\it et al.} \cite{swidan2015secure} proposed a secure geographical routing protocol by using the GS as a trusted third party for authentication and key transport. However, their solution required an additional transceiver at each aircraft having a communication range of 150\,km and a wide downlink bandwidth for point-to-point communication with the GS. By contrast, Pang {\it et al.} \cite{pang2017secure} advocated an identity-based public key cryptosystem, which relies on the authentication of neighbor nodes and establishes a shared secret during the neighbor discovery phase, followed by the encryption of the data during the data forwarding phase. Luo and Wang \cite{luo2017multiple} proposed a multiple-QoS-parameter based routing protocol in order to improve the overall network performance for communication between aircraft and the ground.
Explicitly, they jointly optimized the maximum path life-time, the maximum residual path load capacity and the minimum path latency over all the available paths between an aircraft node and the Internet gateways with the aid of carefully selected weighting factors for the path life-time, residual path load capacity and path latency. Since the overall network performance depends on multiple factors, it is insufficient to optimize the overall network performance by relying on a single factor, such as the error probability, latency or capacity. As argued in \cite{zhang2019aeronautical}, in contrast to conventional single-objective optimization, multi-objective optimization is capable of finding all the global Pareto-optimal solutions, potentially allowing the system to be reconfigured in any of its most desired optimal operating modes. In \cite{cui2021twin,cui2021multiobjective}, we developed a twin-component near-Pareto routing scheme by invoking the classic Non-dominated Sorting Genetic Algorithm II (NSGA-II), which is capable of finding a set of trade-off solutions in terms of the total delay and the throughput. Furthermore, in \cite{liu2021deeplearning}, we extended our single-objective optimization efforts of \cite{liu2021deep} to multi-objective packet routing optimization in AANETs in the north-Atlantic region. In the existing AANET literature, there is a paucity of contributions on multiple-objective routing optimization jointly considering the end-to-end throughput, the end-to-end latency and the corresponding path expiration time (PET). However, the nodes in AANETs are airplanes, which typically fly at a speed of 880 to 926\,km/h \cite{zhang2019aeronautical}, hence a path may break quite soon. Therefore, the PET becomes a much more critical metric in AANETs compared to MANETs, VANETs and FANETs.
Furthermore, most of the existing routing protocols were investigated mainly based on randomly generated flight data, which cannot reveal the network performance of real-world AANETs constituted by the real flights in the air. Against this background, we propose a multiple-objective routing optimization scheme for the AANET, and we evaluate its overall network performance using large-scale real historical flight data over the Australian airspace. Explicitly, our main contributions are summarized as follows. \begin{itemize} \item [1)] We propose multi-objective routing optimization for jointly optimizing the end-to-end spectral efficiency (SE), the end-to-end latency and the PET. More specifically, the latency is addressed as one of the objectives by constraining the maximum number of affordable hops, whilst the congestion is addressed by imposing a certain queueing delay at each node. Furthermore, the distance-based adaptive coding and modulation (ACM) of \cite{zhang2017adaptive,zhang2018regularized}, which was specifically designed for aeronautical communications, is adopted for quantifying each link's SE so as to determine the final end-to-end SE. Naturally, the lowest link throughput limits the entire path's throughput. \item [2)] At the time of writing, there is no discrete $\epsilon$-MOGA version in the open literature and there is no application example of $\epsilon$-MOGA in the context of routing problems. Based on the philosophy of the $\epsilon$ multi-objective genetic algorithm ($\epsilon$-MOGA) \cite{herrerowell}, which operates on a continuous parameter space for finding the Pareto-front of optimal solutions, we develop a discrete version of $\epsilon$-MOGA by considering the specific features of the routing paths consisting of discrete aircraft IDs, which we refer to as the discrete $\epsilon$ multi-objective genetic algorithm ($\epsilon$-DMOGA).
Explicitly, in order to accommodate the unique features of the routing problem in AANETs, we have adapted the existing $\epsilon$-MOGA to create a discrete $\epsilon$-MOGA by considering the discrete search space of routing problems in AANETs relying on discrete aircraft IDs. This adaptation is not straightforward at all, because it involves new operations for the encoding/decoding of chromosomes, as well as new crossover and mutation operations tailored to the discrete variables that constitute a routing path from a source aircraft node to a destination ground station. We use this $\epsilon$-DMOGA for efficiently solving the proposed multi-objective routing optimization problem. \item [3)] The overall network performance of our multiple-objective routing optimization, quantified in terms of the end-to-end latency, the end-to-end SE and the PET, is investigated based on large-scale real historical flight data recorded over the Australian airspace. More specifically, real historical flight data of two representative dates of the top-five airlines of Australia's domestic flights are exploited for our investigations. \end{itemize} The remainder of this paper is organized as follows. The network architecture is presented in Section~\ref{S2}, which includes the mobility model and the multiple objective functions to be investigated. In Section~\ref{S3}, we develop a discrete version of $\epsilon$-MOGA, termed $\epsilon$-DMOGA, by exploiting the discrete nature of the routing paths specified by the aircraft IDs, which provides an effective tool for solving our proposed multi-objective routing optimization. Our simulation results based on real historical flight data recorded over the Australian airspace are presented in Section~\ref{S4}, while our conclusions are offered in Section~\ref{S5}. \begin{figure*}[tbp!]
\vspace{-2mm} \begin{center} \includegraphics[width=0.75\textwidth]{figures/delay_figure} \end{center} \vspace{-2mm} \caption{Illustration of the sources of packet delay in routing.} \label{fig1A} \vspace{-2mm} \end{figure*} \section{Network Architecture}\label{S2} The avionic network considered takes into account the peculiarities of aeronautical communications and exploits them for optimizing our multiple objectives. Satellites are also included in the AANET considered; they are used as a last resort for an aircraft that is outside the communication range of all its neighbouring aircraft. In contrast to MANETs and VANETs, the nodes of an AANET are distributed in a 3D space, and they move at extremely high speed over long distances. The geographic information of each aircraft is available for routing optimization and network design, which can be obtained with the assistance of the global positioning system (GPS) and the airborne radar carried by each aircraft \cite{batabyal2015mobility}. Moreover, ADS-B systems have been widely deployed on commercial passenger aircraft, which can also provide an information vector including the aircraft ID, position, ground speed and heading direction \cite{luo2017multiple}. \subsection{Mobility model}\label{S2-1} In our network optimization, we consider all the aircraft during the 24 hours of both the busiest and the quietest day of a year. Based on the historical flight observations on Flightradar24\footnote{Flightradar24 is a global flight tracking service that provides real-time information about aircraft around the world, which is accessible on \url{https://www.flightradar24.com}}, the busiest day was June 29th in 2018, whilst the quietest day was December 25th in 2018.
The movement of a node representing an aircraft was recorded as real historical flight data, which typically includes the flight phases of holding, takeoff/landing, taxiing and parking, which always take place at or near an airport, as well as the longest phase, namely the en-route phase. In contrast to the traditional nodes in MANETs and VANETs, and even to the nodes in FANETs, the nodes in AANETs move at a relatively high speed during the en-route phase, typically at velocities of 800 to 1000\,km/h. As our evaluation is based on real historical flight data over the Australian airspace, the nodes tend to be sparser compared to Europe and North America. The nodes typically fly between coastal cities, such as Sydney, Melbourne, Perth and Gold Coast. The eastern coastal airspace is much busier than the northern, western and southern coastal airspace, since most people reside in eastern coastal cities, and most international flights depart from/arrive at eastern coastal cities. The central Australian airspace is quite sparse, since only a few people live in the central area of Australia. \subsection{Objective functions}\label{S2-2} In the Australian aeronautical network, there are $N_{G}$ GSs at five Australian airports. An aircraft can access the Internet by relying on an optimal routing path to a GS. Hence the flights do not have to rely on costly satellites as relay nodes to access the Internet. We will consider the achievable end-to-end latency and end-to-end throughput as well as the stability of the routing path. Explicitly, the end-to-end latency is the sum of the signal propagation delays, signal processing delays and queueing delays. The end-to-end throughput is determined by the specific link having the minimum throughput. Again, the stability of the routing path is quantified in terms of the PET, which is in turn determined by the link having the minimum expiration time given a specific ACM mode.
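The path-level aggregation of these three metrics may be sketched as follows, where the function name and the per-link values are purely illustrative assumptions rather than part of our protocol:

```python
# A minimal sketch of the path-level metric aggregation described above:
# the end-to-end latency accumulates over the hops, whereas the end-to-end
# throughput and the PET are limited by the bottleneck link.  All names and
# numbers are illustrative assumptions.

def path_objectives(link_delays_ms, link_throughputs, link_lets_s):
    latency = sum(link_delays_ms)        # end-to-end latency: sum over hops
    throughput = min(link_throughputs)   # bottleneck link limits the path
    pet = min(link_lets_s)               # most vulnerable link limits the PET
    return latency, throughput, pet

# A hypothetical three-hop path:
latency, throughput, pet = path_objectives(
    link_delays_ms=[1.2, 6.3, 5.8],
    link_throughputs=[12.0, 3.5, 7.1],
    link_lets_s=[540.0, 95.0, 1300.0],
)
```

Note that the three aggregations differ in kind: the latency is additive, whereas the throughput and the PET are both min-type bottleneck metrics, which is why a single weak link suffices to degrade the whole path.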
\subsubsection{The end-to-end latency} Let us now quantify the propagation, signal processing and queueing delays. Although there may be a ground-reflection component in the received signal of aeronautical communications, the received signal is dominated by the line-of-sight (LoS) path for air-to-air (A2A) communications in AANETs \cite{zhang2017adaptive}. Hence, the ground-reflected component may be neglected in A2A transmission and the propagation delay can be modelled by that of the LoS path limited by the speed of light. Explicitly, let the distance between node $r_{n}$ and node $r_{n + 1}$ be denoted by $d_{r_{n},r_{n + 1}}$. The propagation delay $D_{p}(r_{n} \rightarrow r_{n + 1})$, as illustrated in Fig.~\ref{fig1A}, between node $r_{n}$ and node $r_{n + 1}$ is given by \begin{align}\label{eq1} D_{p}(r_{n} \rightarrow r_{n + 1}) = \frac{d_{r_{n},r_{n + 1}}}{c} , \end{align} where $c$ is the speed of light, $r_{n}\! \rightarrow\! r_{n + 1}$ is the link spanning from node $r_{n}$ to node $r_{n + 1}$ in the routing path $\bm{r}\! =\! \{r_{n}\! \rightarrow\! r_{n + 1}\! \rightarrow\! \cdots\! \rightarrow\! r_{N + n - 1}\}$, which consists of $N\! -\! 1$ hops from the source node $r_n$ to the destination node $r_{n + N - 1}$, with $N$ being the number of nodes in the routing path $\bm{r}$. As shown in Fig.~\ref{fig1A}, the signal processing delay is the time that a relay node takes to process a packet before it can forward it to its output queue, which includes the decoding and forwarding operations, destination lookup, packet-header updates, etc. When node $r_n$ is the source node, there are obviously no decoding and forwarding operations, destination lookups or packet-header updates, hence $D_s(r_n \to r_{n+1}) = 0$ in this case. However, when node $r_n$ is a relay node rather than the source, it has to carry out these operations, and a constant signal processing delay is imposed.
Depending on the digital signal processing capability of the hardware and the detailed operations needed, the signal processing latency ranges from 0.5 to 10 milliseconds, with some complex designs imposing as much as 30 milliseconds \cite{McNell2021networked}. In our investigations, we set it to 5\,ms as a compromise. Without loss of generality, we formulate the signal processing delay at node $r_{n}$ as \begin{align}\label{eq2} D_{s}(r_{n} \rightarrow r_{n + 1}) = \left\{ \begin{array}{ll} 0\,\text{ms}, & \text{if node $r_{n}$ is the source node} , \\ 5\,\text{ms}, & \text{if node $r_{n}$ is not a source node} . \end{array} \right. \end{align} The queueing delay is the time that a packet waits in a relay node after arriving at the node's queue until it can be processed, plus the waiting time before the processed packet can be transmitted on the output link, as illustrated in Fig.~\ref{fig1A}. The input queueing delay is proportional to the number of packets that have already been waiting in the queue at a given time, while the output transmission delay is upper bounded by the time needed for transmitting a packet. Extensive research has been dedicated to queueing theory, queue scheduling and/or minimizing the queueing delay at a node in networks \cite{das2015spatial,baz2014analysis,al2019queuing}. We can reasonably model the input queueing delay at node $r_n$ as follows \begin{align}\label{eq3} D_{q_{1}} = O_{r_{n}}D_{q_{0}} , \end{align} where the indicator $O_{r_n}\! \in\! \{0,1,\cdots,N_{B}\}$ indicates how many packets are already waiting ahead in the queue, and $N_{B}$ is the maximum number of packets that the node can have in its queue, while $D_{q_{0}}$ is a fixed processing delay related to transmitting a whole packet through the node's output link. The output buffering delay, denoted as $D_{q_2}$, is clearly upper bounded by $D_{q_0}$, i.e., we have $D_{q_2}\! \le\! D_{q_0}$.
The total queueing delay of forwarding a packet from $r_n$ to $r_{n+1}$, which is the sum of the input queueing delay $D_{q_1}$ and the output buffering delay $D_{q_2}$, can be expressed as \begin{align}\label{eq4} D_{q}(r_{n} \rightarrow r_{n + 1}) = D_{q_{1}} + D_{q_{2}} \approx \left(O_{r_n} + 1\right)D_{q_{0}} . \end{align} Note that a packet can only be routed through a node if the node's queue is not full, i.e. the number of packets waiting in the node's queue is less than $N_B$. This imposes a constraint on the routing decisions. Again, the delay imposed on a packet during its passage from node $r_{n}$ to node $r_{n + 1}$ is the sum of the signal propagation delay, signal processing delay and queueing delay, which is given by \begin{align}\label{eq5} D(r_{n} \rightarrow r_{n + 1}) =& D_{p}(r_{n} \rightarrow r_{n + 1})+ D_{s}(r_{n} \rightarrow r_{n + 1}) \nonumber \\ & + D_{q}(r_{n} \rightarrow r_{n + 1}) . \end{align} Therefore, the end-to-end latency of the routing path $\bm{r}$ is given by \begin{align}\label{eq6} D(\bm{r}) = \sum\limits_{n = 1}^{N - 1}D(r_{n} \rightarrow r_{n + 1}) . \end{align} \subsubsection{The end-to-end throughput} The end-to-end throughput is determined by the link in the routing path $\bm{r}$ that has the minimum link throughput. Let $C(r_{n} \rightarrow r_{n + 1})$ denote the link throughput between node $r_{n}$ and node $r_{n + 1}$. Then the end-to-end throughput is given by \begin{align}\label{eq7} C(\bm{r}) = \min\limits_{1\le n\le N-1} C(r_{n} \rightarrow r_{n + 1}). \end{align} The achievable link throughput $C(r_{n} \rightarrow r_{n + 1})$ is affected by the channel conditions and other factors, such as the co-channel interference. The link throughput is a function of the instantaneous signal-to-interference-plus-noise ratio (SINR), where the instantaneous SINR may be estimated using pilot signals in traditional terrestrial mobile communications.
However, the problem in aeronautical communication applications is that the high speed of aircraft may result in uncorrelated small-scale fading and consequently in unreliable estimates of the instantaneous SNR or SINR, which is further aggravated by frequent switching among the ACM modes. Using erroneous instantaneous SNR or SINR estimates for frequently switching ACM modes may cause frequent unsuccessful transmissions, because the SINR estimates quickly become obsolete. Moreover, the instantaneous SINR does fluctuate around its average, but it has a limited range in the typical LoS scenarios. Hence, in the routing problem of AANETs, the best available distance-based ACM \cite{zhang2017adaptive,zhang2018regularized} is invoked for quantifying the link throughput of air-to-air aeronautical communications. Hence, the achievable link throughput $C(r_{n} \rightarrow r_{n + 1})$ is a function of the communication distance between node $r_{n}$ and node $r_{n + 1}$, which is also affected by the co-channel interference imposed by the neighbouring aircraft. Furthermore, given a set of $K$ ACM modes, there are $K\! +\! 1$ distance thresholds $\{d_{k}\}_{k=0}^{K}$. The data-transmitting aircraft selects an ACM mode to transmit/relay the data according to \begin{align}\label{eq8} \text{if}\quad d_{k} \le d < d_{k-1} \quad \text{choose $k$-th ACM mode} , \end{align} where $d_0\! =\! D_{\max}\! =\! D_{\text{A2A}}\! >\! d_1\! >\! \cdots\! >\! d_{K-1}\! >\! d_K\! =\! D_{\min}$. Clearly, if the distance $d$ is outside the range of $[D_{\text{min}},~D_{\text{max}}]$, there will be no adequate communication link. More specifically, $D_{\text{min}}$ is the minimum flight-safety separation that must be obeyed according to the aviation safety regulations, whilst $D_{\text{max}}$ is the maximum communication range of two aircraft, which is given by the radio horizon distance of A2A communication \cite{zhang2019aeronautical}.
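The distance-threshold based mode selection of Eq.~(\ref{eq8}) may be sketched as follows; the threshold values used here are illustrative placeholders, not the ones of the cited ACM design:

```python
# Sketch of the distance-based ACM mode selection of Eq. (8), assuming K ACM
# modes with K+1 distance thresholds d_0 > d_1 > ... > d_K, where d_0 = D_max
# and d_K = D_min.  The k-th mode is chosen when d_k <= d < d_{k-1}; outside
# [D_min, D_max] no adequate communication link exists.

def select_acm_mode(d, thresholds):
    """Return the mode index k with thresholds[k] <= d < thresholds[k-1],
    or None if no link can be formed at distance d."""
    if d < thresholds[-1] or d >= thresholds[0]:
        return None                          # outside the feasible range
    for k in range(1, len(thresholds)):
        if thresholds[k] <= d < thresholds[k - 1]:
            return k
    return None

# Hypothetical thresholds in km (K = 4 modes): D_max = 740, D_min = 10.
thresholds_km = [740.0, 500.0, 300.0, 100.0, 10.0]
```

Under these placeholder thresholds, a distance of 350\,km falls into the band $[300, 500)$ and hence selects mode $k=2$; smaller mode indices correspond to the longer-range, lower-rate modes.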
\subsubsection{The path expiration time} The PET is determined by the most vulnerable link of the routing path $\bm{r}$, namely the one having the shortest link expiration time (LET). Let $T_{\text{LET}}(r_{n} \rightarrow r_{n + 1})$ be the LET of the link between node $r_{n}$ and node $r_{n + 1}$ for offering a given ACM mode. Then the PET of $\bm{r}$ is given by \begin{align}\label{eq9} T_{\text{PET}}(\bm{r}) = \min\limits_{1\le n\le N-1} T_{\text{LET}}(r_{n} \rightarrow r_{n + 1}) . \end{align} Since ACM is adopted, we have to modify the formulation of calculating the LET given in \cite{sakhaee2006global,luo2017multiple}. Specifically, given the speeds $v_{r_{n}}$ and $v_{r_{n + 1}}$, the heading directions $\theta_{r_{n}}$ and $\theta_{r_{n + 1}}$ as well as the coordinates $(x_{r_{n}}, y_{r_{n}})$ and $(x_{r_{n + 1}}, y_{r_{n + 1}})$ of node $r_{n}$ and node $r_{n + 1}$, respectively, assume that the distance between $r_{n}$ and $r_{n + 1}$ satisfies $d_k \le d_{r_{n},r_{n + 1}} < d_{k-1}$, where $d_{k-1}$ is the distance threshold or maximum distance that may be bridged by the ACM mode~$k$, as defined in Eq.\,(\ref{eq8}). Then the LET $T_{\text{LET}}(r_{n} \rightarrow r_{n + 1})$ is calculated according to: \begin{align}\label{eq10} T_{\text{LET}}(r_{n} \!\rightarrow \!r_{n + 1})\! = \!\frac{-(ab \!+\! ef)\! +\! \sqrt{(a^2 \!+\! e^2)d_{k-1}^2 \!-\! (af\! -\!be)^{2}}}{a^{2} \!+ \!e^{2}} , \end{align} where $a, b, e$ and $f$ are given by \cite{sakhaee2006global,luo2017multiple} \begin{align} a &= v_{r_{n}}\cos \theta_{r_{n}} - v_{r_{n + 1}}\cos \theta_{r_{n + 1}} , \\ b &= x_{r_{n}} - x_{r_{n + 1}} ,\\ e &= v_{r_{n}}\sin \theta_{r_{n}} - v_{r_{n + 1}}\sin \theta_{r_{n + 1}} ,\\ f &= y_{r_{n}} - y_{r_{n + 1}} . \end{align} Intuitively, when aircraft $r_{n}$ and aircraft $r_{n + 1}$ have the same speed and heading direction, the LET between them is theoretically infinite.
However, the associated LET is always upper-bounded by their flight time in practice. When aircraft $r_{n}$ and aircraft $r_{n + 1}$ have exactly opposite heading directions, they will have the minimum LET. \subsection{Multi-objective routing optimization}\label{S2-3} The specific multi-objective optimization advocated here aims for maximizing the end-to-end achievable throughput, minimizing the end-to-end latency and maximizing the PET, which is formulated as \begin{align}\label{eq15} \left\{ \begin{array}{l} \mathcal{J}_{1}(\bm{r}) = \max \,C(\bm{r}) , \\ \mathcal{J}_{2}(\bm{r}) = \min \,D(\bm{r}) , \\ \mathcal{J}_{3}(\bm{r}) = \max \,T_{\text{PET}}(\bm{r}) , \end{array} \right. \end{align} \begin{align}\label{eq16} \text{s.t.} \left\{ \begin{array}{l} D(\bm{r}) \le 250\,\text{ms} , \\ N - 1 \le 5 . \end{array} \right. \end{align} The round-trip latency of geostationary satellite links is about 250\,ms \cite{medina2011geographic}, which is imposed by the propagation delay up to and down from the satellite. Hence, intuitively, the end-to-end latency should be less than 250\,ms, which results in the first constraint $D(\bm{r}) \le 250\,\text{ms}$. Naturally, when low Earth-orbit (LEO) satellites are considered at, say, 600\,km altitude, their round-trip delay is as low as 4\,ms. The second constraint of $N - 1 \le 5$ is based on a practical consideration of the Australian scenario. Explicitly, the aircraft tend to fly over the Australian landmass, where the GSs are at airports on the ground. Hence, an aircraft is typically capable of accessing the Internet within a small number of hops. Furthermore, the second constraint of $N - 1 \le 5$ also limits the number of nodes involved, which is helpful for controlling the AANET size. Since a routing path having more hops is more vulnerable to cyber attacks, limiting the number of nodes in a routing path also helps to secure the information transmission.
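To make the PET objective concrete, the per-link LET can be obtained by solving, for the relative planar motion of two aircraft, the time at which their distance reaches the upper threshold $d_{k-1}$ of the currently selected ACM mode. The following sketch uses our own variable names and illustrative units (km for positions, km per time unit for speeds), not those of an actual implementation:

```python
import math

# Sketch of the link expiration time (LET) computation: each aircraft state is
# (x, y, v, theta) with planar position, speed and heading, and d_upper is the
# upper distance threshold d_{k-1} of the currently selected ACM mode.  The
# LET is the root of the quadratic distance(t)^2 = d_upper^2 for the relative
# motion.  Names and units are illustrative assumptions.

def link_expiration_time(x1, y1, v1, th1, x2, y2, v2, th2, d_upper):
    a = v1 * math.cos(th1) - v2 * math.cos(th2)   # relative velocity, x
    b = x1 - x2                                   # relative position, x
    e = v1 * math.sin(th1) - v2 * math.sin(th2)   # relative velocity, y
    f = y1 - y2                                   # relative position, y
    if a == 0.0 and e == 0.0:
        return math.inf       # identical velocity vectors: link never expires
    denom = a * a + e * e
    disc = denom * d_upper**2 - (a * f - b * e) ** 2
    return (-(a * b + e * f) + math.sqrt(disc)) / denom

# An aircraft closing on a node 100 km ahead at unit relative speed, with a
# 200 km ACM range: the link expires after 300 time units.
t = link_expiration_time(0.0, 0.0, 1.0, 0.0, 100.0, 0.0, 0.0, 0.0, 200.0)
```

The same-velocity branch reflects the observation above that two aircraft with identical speed and heading theoretically never lose their link, although in practice the LET is capped by their remaining flight time.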
Clearly, for other geographic areas, such as the AANET over the Atlantic Ocean, we will have to set a higher value for the maximum number of hops. \section{Discrete $\epsilon$-MOGA based Pareto-optimization of the AANET routing problem}\label{S3} No closed-form solution can be derived for the multi-objective optimization problem (\ref{eq15}) under the constraint (\ref{eq16}). There are diverse methods of solving multi-objective optimization problems, such as the lexicographic method \cite{Isermann1982linear}, the weighted sum method \cite{Zadeh1963Optimality}, the elitist non-dominated sorting genetic algorithm (NSGA-II) \cite{Deb2002afast}, the Strength Pareto Evolutionary Algorithm (SPEA) \cite{Zitzler1999Multiobjective}, SPEA-II \cite{Zitzler2002SPEA2}, the Pareto Envelope based Selection Algorithm (PESA) \cite{Corne2000the} and PESA-II \cite{Corne2001PESA-II}, as well as numerous other variations with their pros and cons. For example, the lexicographic method is sensitive to the iteration order of the objectives, while the weighted sum method is sensitive to the weightings, and both of them suffer from a high computational burden. The NSGA-II does not perform very well for a larger number of objectives, while the SPEA and SPEA-II also exhibit a high computational complexity, and the PESA as well as PESA-II are sensitive to the hyperbox size. By contrast, as a member of the elitist multi-objective evolutionary algorithm family based on the concept of $\epsilon$-dominance, $\epsilon$-MOGA has the compelling characteristics of efficient parallel computing along with the efficient control of the elitist archive, where the problem solutions are stored. Hence, $\epsilon$-MOGA outperforms the above-mentioned multi-objective optimization algorithms in terms of its convergence, diversity of solutions and computational efficiency. Therefore, we apply the $\epsilon$-DMOGA for determining the optimal Pareto-front at a moderate computational burden.
Our $\epsilon$-DMOGA is developed from the $\epsilon$-MOGA \cite{herrerowell,Martinez2009Applied}, which is an elitist multi-objective evolutionary algorithm based on the concept of $\epsilon$-dominance \cite{reynoso2014controller}, by taking into consideration the discrete nature of the routing path constituted by discrete aircraft IDs. \subsection{$\epsilon$-DMOGA} In the Pareto-optimal set of the multi-objective optimization problem (\ref{eq15}), no solution dominates any other. Explicitly, a solution routing path $\bm{r}_{1}$ by definition dominates another routing path $\bm{r}_{2}$ in the routing path space, if and only if all the objectives of $\bm{r}_{1}$ are no worse than the objectives of $\bm{r}_{2}$ and at least one objective of $\bm{r}_{1}$ is better than that of $\bm{r}_{2}$, which is formulated as \begin{align}\label{eq17} \forall i \!\!= \!\!1,2,3, \mathcal{J}_{i}(\bm{r}_{1}) \!\!\preceq \!\!\mathcal{J}_{i}(\bm{r}_{2}) \,\, \text{and}\,\, \exists k \!\!= \!\!1,2,3, \mathcal{J}_k(\bm{r}_{1}) \!\!\prec \!\! \mathcal{J}_k(\bm{r}_{2}) . \end{align} The operator $\preceq$ denotes that the left-hand objective is no worse than the right-hand one. For example, $\mathcal{J}_{1}(\bm{r}_{1}) \preceq \mathcal{J}_{1}(\bm{r}_{2})$ is equivalent to $C(\bm{r}_{1}) \ge C(\bm{r}_{2})$, whilst $\mathcal{J}_{2}(\bm{r}_{1}) \preceq \mathcal{J}_{2}(\bm{r}_{2})$ is equivalent to $D(\bm{r}_{1}) \le D(\bm{r}_{2})$. Similarly, $\prec$ denotes that the left-hand objective is better than the right-hand one. Then, the Pareto-front solution set $\mathbf{R}$ can be formulated as \begin{align}\label{eq18} \mathbf{R} = \left\{\bm{r} \in \mathbf{R}|\nexists \widetilde{\bm{r}} \in \mathbf{R}\,: \,\widetilde{\bm{r}} \preceq \bm{r}\right\} , \end{align} where $\nexists$ represents `does not exist'.
Hence, for $\bm{r}\in \mathbf{R}$, $\nexists \widetilde{\bm{r}} \in \mathbf{R}\,: \,\widetilde{\bm{r}} \preceq \bm{r}$ means that no $\widetilde{\bm{r}}$ exists in $\mathbf{R}$ that dominates $\bm{r}$. Again, without loss of generality, we discuss the AANET over the Australian airspace. For other AANETs, similar discussions apply subject to minor modifications. Intuitively, the end-to-end throughput is a more dominant criterion for the Internet-above-the-clouds than the end-to-end latency and the PET. Thus, our $\epsilon$-DMOGA based multi-objective routing optimization may start from a direct connection to any of the $N_G$ GSs by finding the Pareto-front of optimal solutions with respect to the throughput, latency and PET. Since $N_G$ is small, we can easily find a `Pareto-optimal' GS that dominates the other GSs by enumerating the multiple objectives for each GS. This is effectively the single-hop solution. Then, the $\epsilon$-DMOGA based multi-objective routing optimization can proceed to find all the Pareto-front solutions within an affordable number of hops, say $N-1=2, 3, 4, 5$, with respect to the multi-component objective function (\ref{eq15}). The $\epsilon$-DMOGA based multi-objective routing optimization is characterized by its initialization, individual mutation, crossover and selection operations used for exploring the search space in a generation-based progression, until the termination criterion is met. We now detail this $\epsilon$-DMOGA. \begin{itemize} \item [1)] \textbf{Initialization}. At the first generation of $g=1$, where $g$ denotes the generation index, the $\epsilon$-DMOGA commences its search by randomly generating an initial population of $P_s$ $N$-element routing path vectors, denoted as $\bm{P}^{(g)}$.
Explicitly, permutation encoding is invoked for generating a chromosome representing a routing path, which consists of a string of aircraft IDs from the source aircraft to the destination ground station. For each individual of $\bm{P}^{(g)}$, the first element is the source node $r_1$, and its second element is randomly selected from the node space \begin{align}\label{eq19} \mathbb{A}\backslash\left\{r_1\right\} = \left\{r \in \mathbb{A}, \sim \left(r \in \{r_1\}\right)\right\} , \end{align} where $\mathbb{A}$ is the node space consisting of the aircraft in the air, and $\mathbb{A}\backslash\{r_1\}$ represents the node space with $r_1$ removed. In general, the $n$-th element of an individual, where $2\le n\le N-1$, is randomly selected from the node space \begin{align}\label{eq20} &\mathbb{A}\backslash\left\{r_{1},r_{2},\cdots,r_{n - 1}\right\} \nonumber\\ &\qquad= \left\{r \in \mathbb{A}, \sim \left(r \in \{r_{1},r_{2},\cdots,r_{n - 1}\right\}\right)\} . \end{align} The last node in a routing path, i.e. the $N$-th node, is a GS, randomly selected from the GS node space $\mathbb{B}$, which consists of the $N_G$ GSs at airports. The archive $\bm{A}^{(g)}$ that contains the elite population is initialized as the empty set at the first generation $g=1$. The $\epsilon$-DMOGA scheme solves the multi-objective routing optimization by evolving the main population $\bm{P}^{(g)}$ of $P_s$ $N$-element routing path vectors from one generation to the next. \begin{figure}[tbp!] \vspace{-2mm} \begin{center} \includegraphics[width=0.75\columnwidth]{figures/epsilon_box3} \end{center} \vspace{-2mm} \caption{Illustration of $\epsilon$-dominance and the $\epsilon$-Pareto-front solutions.} \label{fig1} \vspace{-2mm} \end{figure} \item [2)] \textbf{Archive}. By calculating and comparing the multi-objective functions for the individuals of $\bm{P}^{(g)}$, the $\epsilon$-Pareto-front solution set $\widetilde{\mathbf{R}}$ is selected.
Explicitly, the individuals in the $\epsilon$-Pareto-front solution set $\widetilde{\mathbf{R}}$ $\epsilon$-dominate all the other individuals that are not selected into $\widetilde{\mathbf{R}}$. The concept of $\epsilon$-dominance is illustrated in Fig.~\ref{fig1}\footnote{In Fig.~\ref{fig1}, we only plot two dimensions in order to have a clear illustration. For our three objective dimensions as formulated in (\ref{eq15}), the light blue areas will be cuboids.}, where the shaded areas are $\epsilon$-dominated by $\bm{r}^{*}$, whilst the other blue points on the $\epsilon$-Pareto-front are $\epsilon$-Pareto-front solutions. Furthermore, $\epsilon_i$, $i = 1,2,3$, is the width of a box, which is defined as \begin{align}\label{eq21} \epsilon_i = \frac{\mathcal{J}_{i}^{max} - \mathcal{J}_{i}^{min}}{N_{\text{box},i}} , \end{align} where $N_{\text{box},i}$ is the number of partitions in the dimension of the $i$-th objective, which preserves the diversity of the $\epsilon$-dominance solutions in the $i$-th objective dimension. The Pareto-front limits $\mathcal{J}_{i}^{min}$ and $\mathcal{J}_{i}^{max}$ for $i = 1,2,3$ are calculated as follows \begin{align}\label{eq22} \mathcal{J}_{i}^{max} = & \max\limits_{\bm{r} \in \widetilde{\mathbf{R}}} \mathcal{J}_{i}\left(\bm{r}\right), i = 1,2,3 ,\\ \label{eq23} \mathcal{J}_{i}^{min} = & \min\limits_{\bm{r} \in \widetilde{\mathbf{R}}} \mathcal{J}_{i}\left(\bm{r}\right), i = 1,2,3. \end{align} The individuals in $\widetilde{\mathbf{R}}$ that are not $\epsilon$-dominated by the individuals in the elite population archive $\bm{A}^{(g)}$ are stored in $\bm{A}^{(g)}$. Hence the size of the archive $\bm{A}^{(g)}$ may vary at different generations. Furthermore, $\widetilde{\mathbf{R}}$ is an intermediate Pareto-front set at the current generation that converges towards the Pareto-optimal set $\mathbf{R}$ as the population evolves over the generations. \item [3)] \textbf{Variant}.
A new variant is generated by the amalgamation of the `\emph{crossover}' and `\emph{mutation}' operations, which are typically two separate operations in single-objective GA optimization. However, we use the terminology `variant' for the operations of crossover and mutation in our $\epsilon$-DMOGA, since both operations are controlled by the crossover/mutation probability $p_{c/m}$. Explicitly, two individuals, $\bm{r}^{(g,P)}$ and $\bm{r}^{(g,A)}$, are randomly selected from the main population $\bm{P}^{(g)}$ and the elite population $\bm{A}^{(g)}$, respectively. Then, a randomly generated value $\alpha\in [0, ~ 1]$ decides which operation should be applied to $\bm{r}^{(g,P)}$ and $\bm{r}^{(g,A)}$. \begin{itemize} \item [\circled{1}] {\bf{Crossover}}. If $\alpha > p_{c/m}$, $\bm{r}^{(g,P)} = \left\{r_{1}^{(g,P)},r_{2}^{(g,P)},\cdots,r_{N}^{(g,P)}\right\}$ and $\bm{r}^{(g,A)} = \Big\{r_{1}^{(g,A)},r_{2}^{(g,A)},$ $ \cdots,r_{N}^{(g,A)}\Big\}$ will cross over part of their elements. There exist numerous crossover mechanisms, and we opt for employing single-point crossover due to its simplicity. Explicitly, a point on both $\bm{r}^{(g,P)}$ and $\bm{r}^{(g,A)}$ is picked randomly, which is designated as the crossover point. Then the elements to the right of the crossover point are swapped between $\bm{r}^{(g,P)}$ and $\bm{r}^{(g,A)}$, which results in two new offspring, each carrying some genetic information from both parents. Given the crossover point $n\ge 2$, the two new offspring can be expressed as \begin{align}\label{eq24} \left\{ \begin{array}{lll} \!\!\!\!\widehat{\bm{r}}_1^{(g,G)} \!\!\!\!\!\!&=&\!\!\!\!\!\! \left\{\!\!r_{1}^{(g,P)},r_{2}^{(g,P)},\!\!\cdots,\!\!r_{n}^{(g,P)},r_{n + 1}^{(g,A)},\!\!\cdots,\!\!r_{N}^{(g,A)}\!\!\right\} ,\\ \!\!\!\!\widehat{\bm{r}}_2^{(g,G)} \!\!\!\!\!\!&=&\!\!\!\! \!\!\left\{\!\!r_{1}^{(g,A)},r_{2}^{(g,A)},\!\!\cdots,\!\!r_{n}^{(g,A)},r_{n + 1}^{(g,P)},\!\!\cdots,\!\!r_{N}^{(g,P)}\!\!\right\} , \end{array} \right.
\end{align} where the superscript $G$ indicates that both $\widehat{\bm{r}}_1^{(g,G)}$ and $\widehat{\bm{r}}_2^{(g,G)}$ are stored into an auxiliary population $\bm{G}^{(g)}$. Note that repeated aircraft IDs must be avoided within both $\widehat{\bm{r}}_1^{(g,G)}$ and $\widehat{\bm{r}}_2^{(g,G)}$, for example by checking for duplicates and mutating an element $r_{n + j}^{(g,A)}$ to a different aircraft ID whenever it coincides with an element $r_{j}^{(g,P)}$. \begin{figure}[tbp!] \vspace{-4mm} \begin{center} \includegraphics[width=0.75\columnwidth]{figures/3-objective_opt} \end{center} \vspace{-2mm} \caption{The objective space can be divided into the four areas, namely, $S_1$: $\mathcal{J}_i^{min}\le \mathcal{J}_i\le\mathcal{J}_i^{max}$, $\forall i = 1,2,3$; $S_2$: $\mathcal{J}_i > \mathcal{J}_i^{max}$, $\forall i = 1,2,3$; $S_4$: $\mathcal{J}_i< \mathcal{J}_i^{min}$, $\forall i = 1,2,3$; and $S_3$: the rest of the objective space.} \label{fig2} \vspace{-2mm} \end{figure} \begin{figure*}[bp!]\setcounter{equation}{24} \hrulefill \vspace*{-1mm} \begin{align}\label{eq25} \left\{\begin{array}{lll} \widehat{\bm{r}}_1^{(g,G)} &=& \left\{r_{1}^{(g,P)},\cdots,\widehat{r}_{l_1}^{(g,P)},\cdots,\widehat{r}_{l_2}^{(g,P)},\cdots,\widehat{r}_{l_{N_m}}^{(g,P)},\cdots,r_{N}^{(g,P)}\right\} ,\\ \widehat{\bm{r}}_2^{(g,G)} &=& \left\{r_{1}^{(g,A)},\cdots,\widehat{r}_{l_1}^{(g,A)},\cdots,\widehat{r}_{l_2}^{(g,A)},\cdots,\widehat{r}_{l_{N_m}}^{(g,A)},\cdots,r_{N}^{(g,A)}\right\} , \end{array}\right. \end{align} \vspace*{-1mm} \end{figure*} \item [\circled{2}] {\bf{Mutation}}. If $\alpha \le p_{c/m}$, $\bm{r}^{(g,P)} = \left\{r_{1}^{(g,P)},r_{2}^{(g,P)},\cdots,r_{N}^{(g,P)}\right\}$ and $\bm{r}^{(g,A)} = \Big\{r_{1}^{(g,A)},r_{2}^{(g,A)},$ $ \cdots,r_{N}^{(g,A)}\Big\}$ will mutate some of their elements. Intuitively, the mutation may occur in a single element or in multiple elements. We opt for the latter. Explicitly, an integer $N_m$ is randomly generated in the range of $[1, ~ N - 1]$.
Then an $N_m$-length vector $\bm{l} = \left\{l_1,l_2,\cdots,l_{N_m}\right\}$ is generated and each of its elements $l_i$, $i = 1,2,\cdots, N_m$, is selected from the integer set of $\left\{2,3,\cdots,N\right\}$ without repetition. Specifically, $N_m$ determines the number of mutated elements, with $l_i$, $1\le i\le N_m$, specifying the positions of these elements. The pair of new offspring generated by mutation are expressed as Eq.~(\ref{eq25}), where $\widehat{r}_{l_i}^{(g,P)}$ and $\widehat{r}_{l_i}^{(g,A)}$, $i = 1,2,\cdots,N_m$, are the new genes generated by mutation from parent $\bm{r}^{(g,P)}$ and $\bm{r}^{(g,A)}$, respectively. These mutated elements are randomly drawn from the aircraft node set. However, a mutated element must not duplicate any other element within the same individual, in order to avoid loops in the routing path. Therefore, if a mutated element is the same as another element in the same individual, it must be mutated again until it becomes different. Similarly, both $\widehat{\bm{r}}_1^{(g,G)}$ and $\widehat{\bm{r}}_2^{(g,G)}$ are stored into the auxiliary population $\bm{G}^{(g)}$. \end{itemize} The crossover or mutation operations are performed $N_O / 2$ times, which results in a total of $N_O$ new offspring in the auxiliary population $\bm{G}^{(g)}$. \item [4)] \textbf{Selection}. The selection operation of multiple-objective optimization is much more complex than that of single-objective optimization. Explicitly, the $\epsilon$-DMOGA calculates the multiple objective functions of the individuals in the auxiliary population $\bm{G}^{(g)}$ and decides which specific individual will be selected into the elite population $\bm{A}^{(g)}$ on the basis of its location in the objective space, as illustrated in Fig.~\ref{fig2}. More specifically, there are four scenarios depending on the particular location of the individual in the objective space. \begin{itemize} \item [\circled{1}] {\emph{Located in $S_1$}}.
If an individual $\widehat{\bm{r}}_i^{(g,G)}$, $i\in\{1,2,\cdots, N_O\}$ is located in the objective function space area $S_1$ and it is not $\epsilon$-dominated by any individual of $\bm{A}^{(g)}$, it will be stored into the elite population $\bm{A}^{(g)}$, and the individuals in $\bm{A}^{(g)}$ that are $\epsilon$-dominated by $\widehat{\bm{r}}_i^{(g,G)}$ will be removed from the elite population. \item [\circled{2}] {\emph{Located in $S_2$}}. If an individual $\widehat{\bm{r}}_{i}^{(g,G)}$, $i\in\{1,2,\cdots, N_O\}$ is located in the objective function space area $S_2$, it will not be stored into the elite population $\bm{A}^{(g)}$, since it is $\epsilon$-dominated by all the individuals in $\bm{A}^{(g)}$. \item [\circled{3}] {\emph{Located in $S_3$}}. If an individual $\widehat{\bm{r}}_i^{(g,G)}$, $i\in\{1,2,\cdots, N_O\}$ is located in the objective function space area $S_3$, the $\epsilon$-DMOGA calculates and compares the objective functions of the individuals in $\widetilde{\bm{P}}^{(g)} = \bm{A}^{(g)} \cup \widehat{\bm{r}}_i^{(g,G)}$. Then, the $\epsilon$-Pareto-front set $\widetilde{\mathbf{R}}^{(\widetilde{\mathbf{P}})}$ is selected and the elite population $\bm{A}^{(g)}$ is updated as $\widetilde{\mathbf{R}}^{(\widetilde{\mathbf{P}})}$. Additionally, both the Pareto-front limits $\mathcal{J}_i^{min}$ and $\mathcal{J}_i^{max}$ as well as the box width $\epsilon_i$ are updated for all the three dimensions $i = 1,2,3$ according to (\ref{eq23}), (\ref{eq22}) and (\ref{eq21}). \item [\circled{4}] {\emph{Located in $S_4$}}. If an individual $\widehat{\bm{r}}_{i}^{(g,G)}$, $i\in\{1,2,\cdots, N_O\}$ is located in the objective function space area $S_4$, all the individuals in the elite population $\bm{A}^{(g)}$ are deleted, since all of them are $\epsilon$-dominated by $\widehat{\bm{r}}_i^{(g,G)}$, and $\widehat{\bm{r}}_i^{(g,G)}$ is stored into $\bm{A}^{(g)}$.
The limit of each objective function $\mathcal{J}_i^{min}$, $i = 1,2,3$, is updated as $\mathcal{J}_i\left(\widehat{\bm{r}}_i^{(g,G)}\right)$, $i = 1,2,3$. \end{itemize} \item [5)] \textbf{Update}. Update the main population $\bm{P}^{(g)}$ by comparing its individuals and the individuals selected from the auxiliary population $\bm{G}^{(g)}$. Explicitly, an individual $\widehat{\bm{r}}_i^{(g,G)}$, $i = 1,2,\cdots,N_{O}$, is compared to an individual $\bm{r}_j^{(g,P)}$ that is randomly selected from $\bm{P}^{(g)}$: if $\widehat{\bm{r}}_i^{(g,G)}$ dominates $\bm{r}_j^{(g,P)}$ as defined by (\ref{eq17}), $\bm{r}_j^{(g,P)}$ is replaced by $\widehat{\bm{r}}_i^{(g,G)}$ in the main population $\bm{P}^{(g)}$. The updating operations continue until all the individuals in the auxiliary population $\bm{G}^{(g)}$ have been compared to an individual selected from the main population $\bm{P}^{(g)}$. \item [6)] \textbf{Termination}. The ultimate stopping criterion would be that the Pareto-front solutions of the multiple-objective routing optimization problem have been found. However, we cannot offer any proof that the Pareto-optimal routing paths have indeed been found. In order to have limited and predictable computational complexity, we opt for halting the optimization procedure when the pre-defined maximum affordable number of generations $g_{\max}$ has been exhausted, namely, $g=g_{\max}$, and the individuals from $\bm{A}^{(g_{\max})}$ comprise the near-Pareto solutions. Otherwise, we set $g = g + 1$ and go to 2)~\textbf{Archive}. \end{itemize} As a population-based nature-inspired multiple-objective optimization algorithm, the computational complexity of $\epsilon$-DMOGA is bounded by the number of generations and the population size, with additional complexity imposed by the crossover, mutation and selection operations.
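For concreteness, the core genetic operators and the $\epsilon$-dominance test described in steps 2)--4) can be sketched in Python as follows. This is a minimal illustration with our own (hypothetical) helper names, not the authors' implementation; we assume all three objectives have been recast as minimization, and that the aircraft-node pool is larger than an individual so the duplicate-avoiding re-draws terminate.

```python
import math
import random

def single_point_crossover(p, a, n):
    """Single-point crossover, mirroring (24): keep the first n genes of each
    parent and swap the tails, producing two offspring."""
    return p[:n] + a[n:], a[:n] + p[n:]

def mutate(parent, node_pool, rng):
    """Mutate N_m randomly chosen positions (never the first element), re-drawing
    any gene that would duplicate another element of the same individual, so the
    routing path stays loop-free.  Assumes node_pool is larger than the individual."""
    child = list(parent)
    n_m = rng.randint(1, len(parent) - 1)            # N_m drawn from [1, N-1]
    for pos in rng.sample(range(1, len(parent)), n_m):
        new_gene = rng.choice(node_pool)
        while new_gene in child:                     # re-mutate until distinct
            new_gene = rng.choice(node_pool)
        child[pos] = new_gene
    return child

def box_index(J, J_min, eps):
    """Box indices of an objective vector, cf. the box widths eps_i in (21)."""
    return tuple(math.floor((j - jm) / e) for j, jm, e in zip(J, J_min, eps))

def dominates(a, b):
    """Plain Pareto dominance (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def eps_dominates(a, b, J_min, eps):
    """Box-based epsilon-dominance: a epsilon-dominates b if a's box is no worse
    in every dimension; within the same box, plain dominance decides."""
    ba, bb = box_index(a, J_min, eps), box_index(b, J_min, eps)
    return dominates(a, b) if ba == bb else all(x <= y for x, y in zip(ba, bb))
```

The archive step then amounts to inserting an offspring only if no archived individual `eps_dominates` it, and evicting archived individuals that it `eps_dominates`.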
Hence, the computational complexity can be roughly quantified by the number of cost function (CF) evaluations, which is given by $(P_{s} + N_{O})g_{\max}$ CF evaluations. \subsection{Convergence of $\epsilon$-DMOGA} As a nature-inspired multiple-objective optimization algorithm, $\epsilon$-DMOGA involves randomness in its search procedure, hence it is quite challenging to say definitively whether a Pareto-optimal solution has been achieved. Nevertheless, $\epsilon$-DMOGA tries to ensure that the elite population archive $\bm{A}^{(g)}$ converges toward an $\epsilon$-Pareto set $\widetilde{\mathbf{R}}^{(\widetilde{\mathbf{P}})}$ in a well-distributed manner along the Pareto front. The convergence of $\epsilon$-DMOGA can be studied in a similar manner to \cite{Hanne1999onthe} by the probability of convergence, which is defined as \begin{align} \lim_{g \to \infty} Pr\left(d\left(\bm{A}^{(g)},\mathbf{R}\right) \to 0 \right) = 1, \end{align} where $d\left(\bm{A}^{(g)},\mathbf{R}\right)$ is a distance function between the $g$-th generation's elite population archive and the Pareto-optimal set $\mathbf{R}$. Additionally, the convergence of $\epsilon$-DMOGA may also be studied in a manner similar to {\it{Theorem 1: Almost sure convergence}} in \cite{Hanne1999onthe}. Hence motivated readers are referred to \cite{Hanne1999onthe} for a detailed study of convergence in multiple-objective evolutionary algorithms. \begin{table*}[tp!]
\vspace*{-2mm} \caption{Distance-based adaptive coding and modulation scheme for aeronautical communications.} \vspace*{-2mm} \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular*}{18cm}{@{\extracolsep{\fill}}C{1.2cm}|C{2.0cm}|C{2.0cm}|C{2.0cm}|C{4.0cm}|C{4.0cm}} \toprule Mode $k$ & Mode color & Modulation & Code rate & Spectral efficiency\,(bps/Hz) & Switching threshold $d_k$\,(km) \\ \toprule 0 & None & None & None & $< 0.459$ & $> 740.8$ \\ \midrule 1 & Black & BPSK & 0.488 & 0.459 & 500 \\ \midrule 2 & Magenta & QPSK & 0.533 & 1.000 & 350 \\ \midrule 3 & Green & QPSK & 0.706 & 1.322 & 200 \\ \midrule 4 & Yellow & 8-QAM & 0.642 & 1.809 & 110 \\ \midrule 5 & Blue & 8-QAM & 0.780 & 2.194 & 40 \\ \midrule 6 & Cyan & 16-QAM & 0.731 & 2.747 & 25 \\ \midrule 7 & Red & 16-QAM & 0.853 & 3.197 & 5.56 \\ \bottomrule \end{tabular*} } \end{center} \label{Tab1} \vspace*{-2mm} \end{table*} \section{Simulation results}\label{S4} In this section, we investigate the achievable network performance of our proposed $\epsilon$-DMOGA based multi-objective routing optimization scheme. \subsection{Simulated AANET}\label{S4.1} The mobility characteristics of the nodes are critical for designing and analysing AANETs. In \cite{zhang2021semi}, we developed a semi-stochastic aircraft mobility model, which is capable of generating an arbitrary number of flights. However, ``it would be ideal to use actual node position", as stated by Kingsbury \cite{Kingsbury2009Mobile} from the Massachusetts Institute of Technology. Hence, in contrast to relying on a mobility model, which generates artificial flights and their trajectories for approximating aircraft movement, we simulate a realistic AANET in the Australian airspace based on large-scale real historical flight data on both the busiest and the quietest day of 2018. Specifically, June 29, 2018 represents the busiest day, while December 25, 2018 represents the quietest day.
The busiest/quietest day is determined by the number of flights in the air on that day, which indicates the traffic in the airspace. We assume that there are $N_G=5$ GSs, namely those at Perth airport (PER), Melbourne airport (MEL), Sydney airport (SYD), Brisbane airport (BNE), and Darwin International airport (DRW). The selection of these five representative airports was based jointly on their geographical distribution and flight handling capacity. The flights considered for our investigation are real historical flights of the top-5 domestic airlines scheduled on June 29, 2018 and December 25, 2018. The top-5 domestic airlines in Australia were Qantas, Jetstar, Tigerair, Virgin Australia and Rex (Regional Express) in 2018. The AANET employs the time division duplexing (TDD) protocol, which has already been standardized by existing aeronautical communication systems, such as the Automatic Dependent Surveillance-Broadcast (ADS-B) \cite{Minimum2002}, L-band digital aeronautical communications system (L-DACS) \cite{Schnell2014LDACS} and the aeronautical mobile airport communication system (AeroMACS) \cite{Budinger2011Aeronautical}. Following the physical layer design in [31], orthogonal frequency-division multiplexing (OFDM) is adopted as the transmission technique of broadband aeronautical communications. Each aircraft has 32 transmit antennas and 4 receive antennas. The network is allocated $B_{\text{total}}=6$\,MHz bandwidth at the carrier frequency of 5\,GHz. This bandwidth is divided into 512 subcarriers. The number of cyclic prefix (CP) samples is $N_{\text{cp}} = 32$. The transmit power per antenna is $P_t= 1$\,Watt. Furthermore, the distance-based ACM scheme of \cite{zhang2017adaptive,zhang2018regularized} designed for aeronautical communications is employed for quantifying the link quality between a pair of communicating aircraft, in which the transmit aircraft activates a specific ACM mode based on its distance from the receiver aircraft.
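Such a distance-based mode lookup can be illustrated with a short sketch, using the switching thresholds and spectral efficiencies of Table~\ref{Tab1}. This is our own illustrative code, not the authors' implementation, and it assumes one particular reading of the thresholds: mode $k$ is activated when the distance is at least $d_k$ but below the next longer-range mode's threshold, with mode 7 also covering distances below 5.56\,km.

```python
# Switching thresholds d_k (km) and spectral efficiencies (bps/Hz) from Table I,
# listed from the shortest-range (highest-rate) mode to the longest-range one.
ACM_MODES = [(7, 5.56, 3.197), (6, 25.0, 2.747), (5, 40.0, 2.194),
             (4, 110.0, 1.809), (3, 200.0, 1.322), (2, 350.0, 1.000),
             (1, 500.0, 0.459)]
MAX_RANGE_KM = 740.8  # beyond this, no adequate link exists (mode 0)

def acm_mode(distance_km):
    """Return (mode, SE) of the link between two aircraft at the given distance."""
    if distance_km > MAX_RANGE_KM:
        return 0, 0.0                 # no communication link
    selected = (7, 3.197)             # very short distances use mode 7
    for mode, d_k, se in ACM_MODES:
        if distance_km >= d_k:
            selected = (mode, se)     # largest threshold not exceeding d wins
    return selected
```

Under this reading, a long hop of, say, 600\,km falls back to mode 1 (BPSK, 0.459\,bps/Hz), which is consistent with the `black' links seen later in the TT589 example.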
Explicitly, our distance-based ACM scheme using $K = 8$ modes is given in Table~I, where an ACM mode is represented by a color. If the distance between two aircraft is longer than 740.8\,km, there exists no adequate communication link between these two aircraft, which is marked as `None' in Table~\ref{Tab1}. The default parameters of the AANET used in our simulations are summarised in Table~\ref{Tab2_add}. \begin{table*}[btp!] \vspace{-5mm} \caption{Parameters used in simulating the AANET} \begin{center} \begin{tabular}{L{4.5cm}|L{6.0cm}|L{4cm}} \hline\hline \multirow{13}{4.5cm}{AANET environment} & Airspace & Australian airspace \\ &\multirow{3}{6.0cm}{Airlines considered} & Top-5 domestic airlines of Qantas, Jetstar, Tigerair, \\ && Virgin Australia and Rex \\ &Location of GSs & PER, MEL, SYD, BNE, and DRW \\ & \multirow{2}{6.0cm}{Representative dates investigated} & December 25th, 2018 \\ & & June 29th, 2018\\ & Time period observed & 00:00 $\sim$ 24:00 \\ & Total number of flights on December 25th, 2018 & 802 \\ & Total number of flights on June 29th, 2018 & 1007\\ & Latitude & Determined by each aircraft\\ & Longitude & Determined by each aircraft\\ & Altitude & Determined by each aircraft\\ \hline \hline \multirow{8}{4.5cm}{Communication parameters} & Carrier frequency & 5 GHz\\ & Bandwidth $B_{\text{total}}$ & 6 MHz \\ & Number of CP samples $N_{\text{cp}}$ & 32 \\ & Number of subcarriers $N_{\text{c}}$ & 512 \\ & Rice factor $K_{\text{Rice}}$ & 5 dB \\ & ACM & As detailed in Table~I \\ & Maximum A2A communication distance & 740.8 km \\ & Maximum A2G communication distance & 370.4 km \\ \hline \end{tabular} \end{center} \label{Tab2_add} \vspace{-5mm} \end{table*} \begin{table*}[tp!]
\vspace*{-1mm} \caption{Comparing the multi-objectives of the Pareto-optimal routing paths for flight TT589 on June 29, 2018.} \vspace*{-1mm} \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular*}{16.0cm}{@{\extracolsep{\fill}}C{3.0cm}|C{2.5cm}C{2.5cm}|C{2.5cm}C{2.5cm}} \toprule & 2-hop solution 1 & 2-hop solution 2 & 3-hop solution 1 & 3-hop solution 2 \\ \midrule $\mathcal{J}_1$ SE\,(bps/Hz) & 0.459 & 0.459 & 1.000 & 1.000 \\ $\mathcal{J}_2$ delay\,(s) & 0.01500361 & 0.01500365 & 0.03000395 & 0.03000379 \\ $\mathcal{J}_3$ PET\,(s) & 1902.86767 & 2586.56434 & 1900.42115 & 1132.96596 \\ \bottomrule \end{tabular*} } \end{center} \label{Tab2} \vspace*{-1mm} \end{table*} \begin{figure*}[htbp] \vspace{-4mm} \begin{center} \vspace{-2mm} \subfigure[No 1 hop to a GS]{ \includegraphics[width=0.45\textwidth]{figures/1_hops_routing_AirID_058_UTC06} \label{fig3a} }% \\ \vspace{-2mm} \subfigure[2 hops to a GS -- Solution-1]{ \includegraphics[width=0.45\textwidth]{figures/2_hops_routing_solution_1_AirID_058_UTC06} \label{fig3b} }\hspace*{5mm} \vspace{-2mm} \subfigure[2 hops to a GS -- Solution-2]{ \includegraphics[width=0.45\textwidth]{figures/2_hops_routing_solution_2_AirID_058_UTC06} \label{fig3c} }% \\ \vspace{-1mm} \subfigure[3 hops to a GS -- Solution-1]{ \includegraphics[width=0.45\textwidth]{figures/3_hops_routing_solution_1_AirID_058_UTC06} \label{fig3d} }\hspace*{5mm} \vspace{-1mm} \subfigure[3 hops to a GS -- Solution-2]{ \includegraphics[width=0.46\textwidth]{figures/3_hops_routing_solution_2_AirID_058_UTC06} \label{fig3e} } \end{center} \vspace{-4mm} \caption{Specific examples of flight TT589's routing paths to a GS, all of which are Pareto-optimal with respect to any other routing path.} \label{fig3} \vspace{-4mm} \end{figure*} \subsection{A specific example of the flight TT589}\label{S4.2} First, we investigate the achievable multiple objectives of the network layer performance via the specific example of flight TT589 on June 29, 2018, which may be
extrapolated to other flights and other dates. As shown in Fig.~\ref{fig3a}, there is no available direct link to any GS at the five airports considered. However, if we increase the number of affordable relay nodes and tolerate a higher delay, there are available routing paths to the GS at Brisbane airport. As shown in Fig.~\ref{fig3b} and Fig.~\ref{fig3c}, there are two Pareto-optimal 2-hop routing paths to the GS deployed at Brisbane airport relying on the relay nodes JQ935 and QF2366, respectively. Both routing paths have a spectral efficiency (SE) of 0.459\,bps/Hz according to the SE of the `black' link of Table~\ref{Tab1}. The multi-objective functions of these two solutions are compared in the left half of Table~\ref{Tab2}. It is clear that the two routing paths shown in Fig.~\ref{fig3b} and Fig.~\ref{fig3c} do not dominate each other, and hence both are Pareto-optimal routing paths, provided that the affordable number of relay nodes is one. If the affordable number of relay nodes is two, we can find routing paths having a higher SE. As shown in Fig.~\ref{fig3d} and Fig.~\ref{fig3e}, there are two Pareto-optimal routing paths to the GS at Brisbane airport via three hops, namely, the routing path TT589$\rightarrow$QF2407$\rightarrow$QF974$\rightarrow$BNE and the routing path TT589$\rightarrow$QF2407$\rightarrow$QF935$\rightarrow$BNE, respectively. Both routing paths have an SE of 1.000\,bps/Hz according to the SE of the `magenta' link, which determines the end-to-end SE. As confirmed in the right half of Table~\ref{Tab2}, these two 3-hop routing paths are also Pareto-optimal, as they do not dominate each other. \begin{figure*}[tp!]
\vspace{-2mm} \begin{center} \subfigure[June 29, 2018]{ \includegraphics[width=1.0\columnwidth]{figures/AU_Jun29_in_air_hops_all} \label{fig4a} }% \subfigure[December 25, 2018]{ \includegraphics[width=1.0\columnwidth]{figures/AU_Dec25_in_air_hops_all} \label{fig4b} } \end{center} \vspace{-2mm} \caption{The number of flights in the air over 24 hours capable of accessing the Internet using the $\epsilon$-DMOGA multi-objective routing optimization.} \label{fig4} \vspace{-2mm} \end{figure*} \subsection{Overall multi-objective network-layer performance}\label{S4.3} We now investigate the overall network-layer performance, including how many flights in the air over a period of 24 hours can access the Internet through our $\epsilon$-DMOGA based multi-objective routing as well as the average end-to-end SE, the average end-to-end latency and the routing paths' average PET, on June 29, 2018 and December 25, 2018, respectively. \subsubsection{The number of flights that can access the Internet} Clearly, the number of flights in the air changes over the 24 hours of the day, as shown in Fig.~\ref{fig4a} and Fig.~\ref{fig4b} for June 29, 2018 and December 25, 2018, respectively. The peak number of flights in the air occurs at UTC time 06:00, namely 127 flights, whilst the lowest number of flights in the air occurs at UTC time 17:00, namely 16 flights. We start by investigating the number of flights that can access a GS at any one of the five airports considered. Explicitly, in Fig.~\ref{fig4}, we investigate how many flights can access a GS relying on 1 hop, 2 hops, 3 hops and 4 hops, respectively, using our $\epsilon$-DMOGA based multi-objective routing optimization scheme. Additionally, we also provide the numbers of flights that can access a GS with up to 2 hops, up to 3 hops and up to 4 hops, respectively, obtained by the $\epsilon$-DMOGA based multi-objective routing optimization.
As shown by the solid lines of Fig.~\ref{fig4a} and Fig.~\ref{fig4b}, most of the flights in the air can access a GS relying on 1 hop, i.e. via a direct link. Furthermore, by analyzing the results obtained using the $\epsilon$-DMOGA scheme, it can be found that a significant number of flights can achieve higher end-to-end SE than the single-hop paths by relying on 2 hops. By contrast, few flights can achieve higher end-to-end SE relying on 3 hops and 4 hops than those relying on 2 hops. Although it is not explicitly shown by the figures, we found that almost all flights can access a GS with up to 2 hops. Hence, increasing the affordable number of hops to 3 and 4 contributes little to the total number of flights that can access a GS, regardless of the hour of the day. \begin{table*}[tbp!] \vspace*{-1mm} \caption{Comparing the end-to-end spectral efficiency of one-hop Pareto-optimal routing solutions and Pareto-optimal routing solutions with up to two hops at UTC time 18:00 on June 29, 2018.} \vspace*{-2mm} \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular*}{17.0cm}{@{\extracolsep{\fill}}c|rrrrrrrr|c} \toprule & \multicolumn{8}{c|}{Individual flights' SE\,(bps/Hz)} & Average SE\,(bps/Hz) \\ \midrule 8 flights with 1 hop & 1.8090 & 1.3220 & 2.1940 & 2.1940 & 1.8090 & 1.0000 & 1.0000 & 1.3220 & 1.58125 \\ \midrule 10 flights with up & 1.8090 & 1.3220 & 2.1940 & 2.1940 & 2.1940 & 1.3220 & 1.3220 & 1.3220 & 1.52282 \\ to 2 hops & 0.7295 & 0.8197 & & & & & & & \\ \bottomrule \end{tabular*} } \end{center} \label{Tab3} \vspace*{-1mm} \end{table*} \begin{figure*}[tp!]
\vspace{-2mm} \begin{center} \subfigure[June 29, 2018]{ \includegraphics[width=1.0\columnwidth]{figures/AU_Jun29_avg_throughput_all_hops} \label{fig5a} }% \subfigure[December 25, 2018]{ \includegraphics[width=1.0\columnwidth]{figures/AU_Dec25_avg_throughput_all_hops} \label{fig5b} } \end{center} \vspace{-2mm} \caption{The average end-to-end spectral efficiency achieved by the $\epsilon$-DMOGA based multi-objective routing optimization.} \label{fig5} \vspace{-2mm} \end{figure*} \subsubsection{Average end-to-end SE} Since routes having the same number of hops exhibit different end-to-end SEs for different flights, Fig.~\ref{fig5a} and Fig.~\ref{fig5b} portray the achievable average end-to-end SE over 24 hours on June 29, 2018 and December 25, 2018, respectively. As shown in Fig.~\ref{fig5a} and Fig.~\ref{fig5b}, the routing paths relying on up to 2, 3 and 4 hops are capable of achieving somewhat higher average end-to-end SE than those relying on 1 hop at most times of the day, except for UTC times 17:00, 18:00, 19:00 and 20:00 on June 29 as well as UTC time 15:00 on December 25, mainly in low-flight-time scenarios. But naturally, this SE improvement is achieved at the cost of a higher delay. To elaborate a little further, it is unexpected to encounter a lower SE for up to 2 hops than for 1 hop, because a single hop tends to be longer, which results in a lower SE. The explanation is that there are more Pareto-optimal routing paths with up to 2 hops than Pareto-optimal 1-hop routes. We illustrate this point using the example at UTC time 18:00 on June 29, 2018. There are 8 flights having direct links to a GS, while there are 10 flights having routing paths to a GS relying on up to 2 hops -- some relying on a direct link and some relying on 2 hops to access a GS. Table~\ref{Tab3} compares the individual flights' SEs of these two groups as well as their average SEs.
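The group averages in Table~\ref{Tab3} follow by direct arithmetic from the listed per-flight SEs; a quick consistency check (illustrative Python, variable names ours):

```python
# Per-flight end-to-end SEs (bps/Hz) at UTC time 18:00 on June 29, 2018 (Table III).
se_one_hop = [1.8090, 1.3220, 2.1940, 2.1940, 1.8090, 1.0000, 1.0000, 1.3220]
se_up_to_two_hops = [1.8090, 1.3220, 2.1940, 2.1940, 2.1940,
                     1.3220, 1.3220, 1.3220, 0.7295, 0.8197]

avg_one_hop = sum(se_one_hop) / len(se_one_hop)                    # 1.58125
avg_up_to_two = sum(se_up_to_two_hops) / len(se_up_to_two_hops)    # 1.52282
```

The second group's average is pulled below the first by the two extra reachable flights, which contribute the low SEs of 0.7295 and 0.8197\,bps/Hz.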
Although the average SE of the second group is lower than that of the first group, observe that eight of the ten Pareto-optimal routing paths with up to 2 hops have the same or higher SEs than those of the eight 1-hop routing paths. Extrapolating from the results of the busiest day and the quietest day depicted in Fig.~\ref{fig5a} and Fig.~\ref{fig5b}, we may draw the conclusion that the AANET over the Australian airspace can achieve an average SE of about 1.2\,bps/Hz at low-flight times during the period of UTC 12:00--20:00, while its average SE increases to around 1.7\,bps/Hz at high-flight times. \begin{figure*}[tbp!] \vspace{-2mm} \begin{center} \subfigure[June 29, 2018]{ \includegraphics[width=1\columnwidth]{figures/AU_Jun29_avg_latency_all_hops} \label{fig6a} }% \subfigure[December 25, 2018]{ \includegraphics[width=1\columnwidth]{figures/AU_Dec25_avg_latency_all_hops} \label{fig6b} } \end{center} \vspace{-2mm} \caption{The average end-to-end latency achieved by the $\epsilon$-DMOGA based multi-objective routing optimization.} \label{fig6} \vspace{-2mm} \end{figure*} \subsubsection{Average end-to-end latency} The average end-to-end latency imposed by the Pareto-optimal routing paths over 24 hours on June 29, 2018 and December 25, 2018 is depicted in Fig.~\ref{fig6a} and Fig.~\ref{fig6b}, respectively. As expected, the direct links have the lowest latency, since they only suffer a single hop's propagation delay upon accessing a GS. As seen in Fig.~\ref{fig6a} and Fig.~\ref{fig6b}, the average end-to-end latency relying on up to 2 hops is significantly higher than that relying on direct links. This is because the end-to-end latency of a 2-hop routing path also includes the signal processing delay and queuing delay, which are significantly higher than the propagation delay. Furthermore, the average end-to-end latency relying on up to 3 hops and up to 4 hops is higher than that relying on up to 2 hops.
But the difference between the average end-to-end latency relying on up to 3 hops and up to 4 hops is small, because the number of routing paths relying on 4 hops is relatively small, as shown in Fig.~\ref{fig4a} and Fig.~\ref{fig4b}. As observed from Fig.~\ref{fig6a} and Fig.~\ref{fig6b}, the variation of the average end-to-end latency over the 24 hours of a day is small. Furthermore, by extrapolating from the results of Fig.~\ref{fig6a} and Fig.~\ref{fig6b}, we may conclude that the average end-to-end latency may be as low as 0.01\,s in the AANET over the Australian airspace, provided that a link is available. \begin{figure*}[tp!] \vspace{-2mm} \begin{center} \subfigure[June 29, 2018]{ \includegraphics[width=1\columnwidth]{figures/AU_Jun29_avg_PET_all_hops} \label{fig7a} }% \subfigure[December 25, 2018]{ \includegraphics[width=1\columnwidth]{figures/AU_Dec25_avg_PET_all_hops} \label{fig7b} } \end{center} \vspace{-2mm} \caption{The average path expiration time achieved by the $\epsilon$-DMOGA based multi-objective routing optimization.} \label{fig7} \vspace{-2mm} \end{figure*} \subsubsection{Average path expiration time} The average PETs over 24 hours on June 29, 2018 and December 25, 2018 are portrayed in Fig.~\ref{fig7a} and Fig.~\ref{fig7b}, respectively. Intuitively, the routing path may become vulnerable to potential breakage upon increasing the number of hops. Our investigations based on real historical flight data both on June 29, 2018 and on December 25, 2018 confirm this intuition. Explicitly, as shown in Fig.~\ref{fig7a} and Fig.~\ref{fig7b}, the routing paths relying on a direct link to a GS have considerably longer average PET than those relying on 2, 3 and 4 hops, except around UTC time 14:00 on June 29, 2018.
Furthermore, the routing paths relying on 2 hops typically have a longer average PET than those relying on 3 and 4 hops, but the difference between the average PETs of 3 and 4 hops is hardly noticeable, since the number of routing paths relying on 4 hops is very low. It can be seen that the average PET varies over the 24 hours of a day. On average, we may extrapolate that the PET is around 800\,s, i.e. about 13 minutes, in the AANET over the Australian airspace. \begin{table*}[tp!] \vspace*{-2mm} \caption{The achievable network performance of the AANET over the Australian airspace} \vspace*{-2mm} \begin{center} \begin{tabular}{C{4.0cm}C{4.0cm}C{4.0cm}} \toprule Average spectrum efficiency & Average end-to-end latency & Average path expiration time \\ \midrule {1.2\,bps/Hz at low-flight times} {1.7\,bps/Hz at high-flight times} & 0.01\,s & 800\,s $\approx$ 13.3 min \\ \bottomrule \end{tabular} \end{center} \label{Tab4} \vspace*{-2mm} \end{table*} \subsubsection{Summary} Based on the results obtained using real historical flight data on two representative dates in 2018, namely on the busiest day of June 29 and on the quietest day of December 25, we may extrapolate the achievable network layer performance for the AANET over the Australian airspace using our $\epsilon$-DMOGA based multi-objective routing optimization. The overall network performance is summarized in Table~\ref{Tab4} at a glance. \section{Conclusions}\label{S5} In order to provide Internet service above the clouds, an $\epsilon$-DMOGA based multiple-objective routing optimization has been developed by taking into account the unique features of the routing problem in AANETs. Explicitly, the end-to-end SE, the end-to-end latency and the PET have been jointly optimized by the proposed $\epsilon$-DMOGA for determining the Pareto-optimal routing paths.
The achievable end-to-end SE, end-to-end latency and PET performance using our $\epsilon$-DMOGA based multiple-objective routing optimization have been investigated based on the top-5 Australian domestic airlines' real historical flight data on two representative dates in 2018, namely the busiest day of June 29 and the quietest day of December 25, in terms of the number of flights in the air. Our simulation results have quantified the networking capability of the AANET over the Australian airspace. Furthermore, our investigations have also offered useful design considerations for AANETs in other parts of the world. \bibliographystyle{IEEEtran}
\section{Introduction} The Selberg integral is a multi-dimensional generalization of the Euler beta integral \cite{Se44}. As surveyed in \cite{FW08}, after a dormant period of over thirty years since its discovery in the early 1940's, it shot into prominence upon the realization of its relevance to random matrix theory, the combinatorics of root systems, and orthogonal polynomials in many variables, amongst other topics of contemporary interest. In 1991 Anderson \cite{An91} gave a new proof of the Selberg integral by deriving a certain recurrence in the number of integral variables $n$. At the heart of this proof was a further multi-dimensional integral \begin{eqnarray} \label{eq:DA} &&\int_{z_n=x_{n-1}}^{x_{n}}\cdots\int_{z_2=x_1}^{x_2}\int_{z_1=x_0}^{x_1} \prod_{i=1}^n\prod_{j=0}^{n}|z_i-x_j|^{s_j-1}\prod_{1\le k<l\le n}(z_l-z_k)\,dz_1dz_2\cdots dz_n \nonumber\\ &&\quad =\frac{\Gamma(s_0)\Gamma(s_1)\cdots\Gamma(s_n)}{\Gamma(s_0+s_1+\cdots+s_n)} \prod_{0\le i<j\le n}(x_j-x_i)^{s_i+s_j-1}. \end{eqnarray} For many years it was thought that (\ref{eq:DA}) was itself a new gamma function evaluation of a multi-dimensional integral. However it was to transpire that (\ref{eq:DA}) in fact was first derived in a paper of Dixon \cite{Di05} written over eighty-five years earlier (see \cite{FW08} for the history of how Dixon's paper was rediscovered in the modern era). One line of research generated by the Selberg integral was the study of $q$-generalizations. Askey \cite{As80} conjectured the evaluation, in terms of $q$-gamma functions, of several multi-dimensional Jackson integrals which reduce to the Selberg integral in the limit $q \to 1$. Evans \cite{Ev92,Ev94} showed how two of these could be proved by adopting the strategy of Anderson. This of course required a $q$-generalization of (\ref{eq:DA}). 
Evans \cite[Theorem 1, (2.5), p.759]{Ev92} derived the sought $q$-generalization as \begin{eqnarray} \label{eq:Evans1} &&\int_{z_n=x_{n-1}}^{x_{n}}\cdots\int_{z_2=x_1}^{x_2}\int_{z_1=x_0}^{x_1} \prod_{i=1}^n\prod_{j=0}^{n}(qz_i/x_j;q)_{s_j-1}\prod_{1\le k<l\le n}(z_l-z_k)\, \,d_qz_1d_qz_2\cdots d_qz_n \nonumber\\ &&\quad =\frac{\Gamma_q(s_0)\Gamma_q(s_1)\cdots \Gamma_q(s_{n})} {\Gamma_q(s_0+s_1+\cdots+s_{n})} \prod_{0\le i<j\le n}x_j(x_i/x_j;q)_{s_j}(qx_j/x_i;q)_{s_i-1}. \end{eqnarray} We will refer to (\ref{eq:Evans1}) as the $q$-Dixon--Anderson integral. Independent of the work of Anderson, Gustafson \cite{Gu90} invented the same strategy of using an auxiliary multi-dimensional integral (nowadays referred to as a type I $q$-hypergeometric integral) to prove a product of $q$-gamma functions evaluation of a different generalization of the Selberg integral (type II $q$-hypergeometric integral relating to the $BC_n$ root system; the original Selberg integral relates to the $A_{n-1}$ root system). To implement this strategy Gustafson had to formulate and prove an appropriate $BC_n$ version of the $q$-Dixon--Anderson integral \cite{Gu92}. But this was prior to the work of Evans, so (\ref{eq:Evans1}) was unknown. Rather the knowledge base of Gustafson included his earlier work \cite{Gu87}, generalizing a result of Milne \cite{Mi85}, to give the product form evaluation of the bilateral sum \begin{equation}\label{MG} \sum_{y_1,\dots,y_n = - \infty}^\infty \prod_{1 \le i < j \le n} \Big ( \frac{z_i q^{y_i} - z_j q^{y_j} }{ z_i - z_j} \Big ) \prod_{i,j = 1}^n \frac{ (a_i z_j/z_i)_{y_j} }{(b_i z_j/z_i)_{y_j} } t^{y_1 + \cdots + y_n}. \end{equation} As noted in \cite{Gu87}, the case $n=1$ of (\ref{MG}) is the definition of the ${}_1 \psi_1$ series, and thus the product form evaluation of (\ref{MG}) corresponds to a multi-dimensional generalization of the Ramanujan ${}_1 \psi_1$ summation theorem. We will refer to (\ref{MG}) as the Milne--Gustafson summation. 
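For orientation, note that for $n=1$ the product over pairs in (\ref{MG}) is empty and the sum collapses to the ${}_1\psi_1$ series, whose product form evaluation is Ramanujan's summation; in standard notation (with the base $q$ displayed explicitly) this reads
\begin{equation*}
{}_1\psi_1(a;b;q,t) = \sum_{y=-\infty}^{\infty} \frac{(a;q)_y}{(b;q)_y}\, t^{y}
= \frac{(q;q)_\infty\,(b/a;q)_\infty\,(at;q)_\infty\,(q/at;q)_\infty}
{(b;q)_\infty\,(q/a;q)_\infty\,(t;q)_\infty\,(b/at;q)_\infty},
\qquad |b/a| < |t| < 1.
\end{equation*}
The product form evaluation of (\ref{MG}) reduces to precisely this identity when $n=1$.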
In this paper we will show that (\ref{eq:Evans1}) and the product form evaluation of (\ref{MG}) are intimately related. The relationship is seen by seeking an explanation for the product expressions from the viewpoint of $q$-difference systems. One hint of a common underpinning comes from (\ref{eq:Evans1}) and (\ref{MG}) both permitting generalizations involving Macdonald polynomials. Thus it was pointed out in \cite{FR05} that the case $s_0 = \cdots = s_n = s$ of (\ref{eq:Evans1}) corresponds to Okounkov's \cite{Ok98} integral formula for the Macdonald polynomials $P_\kappa(x_0,\dots,x_n;q,q^s)$ in the case $\kappa = \emptyset$. On the other hand one viewpoint of (\ref{MG}) is as a multi-dimensional ${}_1\psi_1$ summation associated to the root system $A_{n-1}$ (see e.g.~\cite{MS02} and references therein), and such generalized hypergeometric function identities allow natural generalizations to include Macdonald polynomials \cite{Kan96,BF99}. \par We remark that an elliptic analogue of the Milne--Gustafson summation is due independently to Kajihara--Noumi \cite{KN03} and Rosengren \cite{Ro04,Ro06}. In these references a specialization of (\ref{MG}) is generalized to involve elliptic analogues of the $q$-products, and the resulting summation is then further extended to an elliptic analogue of Kajihara's $q$-transformation identity \cite{Kaj04} between a summation over $n$ variables and a summation over $m$ variables. This gives another hint of a relationship between (\ref{eq:Evans1}) and the product form evaluation of (\ref{MG}). Thus in the original paper of Dixon \cite{Di05} the Dixon--Anderson integral (\ref{eq:DA}) is generalized to a transformation identity between an $n$-dimensional and an $m$-dimensional integral.
And in proving a conjectured elliptic generalization of the Selberg integral due to van Diejen and Spiridonov \cite{DS01}, Rains \cite{Ra10} has proved a transformation identity between multi-dimensional elliptic integrals, which he has subsequently \cite{Ra09} shown permits, under certain limiting operations, a reduction to the original transformation identity of Dixon. \par We will discuss three multi-dimensional bilateral series, \begin{enumerate} \item[(1)] the Milne--Gustafson summation formula (Theorem \ref{thm:MG}), \item[(2)] a multi-dimensional bilateral extension of Evans's $q$-Dixon--Anderson integral (Theorem \ref{thm:02}), \item[(3)] the product expression for Gustafson's $A_n$ sum contained in \cite{Gu87}, itself generalizing the Milne--Gustafson summation (Theorem \ref{thm:03main}). \end{enumerate} \noindent The sum in case (1) acts as a hub connecting the sums of (2) and (3). As already mentioned, the aim of this paper is to explain the product formulae for these sums from the viewpoint of $q$-difference equations. The method for proving these results is consistent with the concepts introduced by Aomoto and Aomoto--Kato in the early 1990's in the series of papers \cite{Ao90,Ao91,Ao94,Ao95-1,AK91,AK93,AK94-1,AK94-2}. Aomoto showed an isomorphism between a class of the Jackson integrals of hypergeometric type, which he called the {\it $q$-analog de Rham cohomology} \cite{Ao90,Ao91}, and a class of theta functions, i.e., holomorphic functions possessing a quasi-periodicity \cite[Theorem 1]{Ao95-1}. This isomorphism indicates that it is essential to analyze both classes in order to understand the structure of $q$-hypergeometric functions, and in particular the meaning of known special formulae. In this paper the process of obtaining the holomorphic functions through this isomorphism is called {\it regularization}.
When we fix a basis of the class of holomorphic functions as a linear space, an arbitrary function of the space can be expressed as a linear combination of the elements of that basis; such an expression is what Aomoto called the {\it connection formula} \cite[Theorem 3]{Ao95-1}. (As its simplest examples, Ramanujan's $_1\psi_1$ summation formula and the $q$-Selberg integral \cite{As80,Ha88,Kad88,Ev92} have been explained. See \cite[Examples 1, 2]{Ao95-1}.) One way to choose a good basis is through its asymptotic behavior upon a limiting process with respect to parameters included in the definition of the Jackson integral of hypergeometric type. Moreover the asymptotic behavior can be calculated from Jackson integrals possessing appropriate cycles which include their critical points. We call the process of fixing these cycles {\it truncation}. (These cycles are called the {\it characteristic cycles} \cite{AK94-2} or the {\it $\alpha$-stable or $\alpha$-unstable cycles} \cite{Ao94} by Aomoto. The meaning of ``$\alpha$" is mentioned in Section \ref{section:01}. The word {\it truncation} itself was first used by van Diejen in another context \cite{vD97, Ito06-2}.) There is another way to characterize the connection formula. This is by showing that a multi-dimensional bilateral series, originating as a general solution of the $q$-difference equation of the Jackson integrals with respect to parameters, is expressed as a linear combination of multi-dimensional unilateral series as special solutions, each fixed by its asymptotic behavior \cite[Theorem (4.2)]{Ao94}. (Examples of $q$-difference equations and connection formulae different from those of this paper can be found in \cite{Ito08,Ito09,IS08}, and \cite{IS08} explains the Sears--Slater transformation for the very-well-poised $q$-hypergeometric series from the viewpoint of this paper in the setting of $BC$-type symmetry.
In the recent survey article \cite{IF13} we have detailed this program as it applies to Ramanujan's ${}_1 \psi_1$ summation formula and Bailey's very-well-poised ${}_6 \psi_6$ summation, and shown how these logically lead to the consideration of higher-dimensional extensions such as (2)). We discuss three product formulae for the Jackson integrals corresponding to the sums of (1), (2) and (3) as simple examples of this concept. This paper is organized as follows. After defining basic terminology in Section \ref{section:00}, we first show the product expression of the Milne--Gustafson sum using the concepts of {\it truncation}, {\it regularization} and {\it connection formulae} in Section \ref{section:01}. Though the Milne--Gustafson sum can be obtained from our other two examples, we explain it individually, because it has a simpler structure than the other two and is instructive in outlining the concepts of this paper. The subsequent two sections are devoted to explaining Evans's $q$-Dixon--Anderson integral and Gustafson's $A_n$ sum, respectively. Although technically more involved than the Milne--Gustafson sum, the overall strategy will be seen to be the same. Specifically, in Section \ref{section:02} we introduce a bilateral extension of Evans's $q$-Dixon--Anderson integral and use the method of $q$-difference equations to deduce its evaluation. In Section \ref{section:03} we likewise use the method of $q$-difference equations to deduce the evaluation of Gustafson's $A_n$ sum. In reading Sections 3 to 5, a repetition of the main steps needed to implement the $q$-difference equation method will become apparent. \\ \section{Definition of the Jackson integral} \label{section:00} Throughout this paper, we fix $q$ with $0<q<1$ and use the symbols $(a)_\infty:=\prod_{i=0}^\infty(1-q^i a)$ and $(a)_N:=(a)_\infty/(q^Na)_\infty$.
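In floating-point arithmetic these symbols can be realized by truncating the infinite product. The following sketch is our own illustration (the truncation order $K$ is an assumption, adequate for $0<q<1$ once $q^K$ is negligible); it checks the defining relation $(a)_N=(a)_\infty/(q^Na)_\infty$ against the finite product.

```python
import math

def qpoch(a, q, n=None, K=300):
    """(a)_n = prod_{k=0}^{n-1} (1 - a q^k); n=None returns a K-term
    truncation of (a)_infty, accurate once |a| q^K is negligible."""
    m = K if n is None else n
    return math.prod(1.0 - a * q**k for k in range(m))

q, a, N = 0.3, 0.7, 5
finite = qpoch(a, q, N)                      # (a)_N as a finite product
via_inf = qpoch(a, q) / qpoch(a * q**N, q)   # (a)_N = (a)_infty / (q^N a)_infty
```

Both evaluations of $(a)_N$ coincide up to rounding, so this helper can serve as a numerical companion to the identities that follow.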
We define $\theta(a)$ by $\theta(a):=(a)_\infty(q/a)_\infty$, which satisfies \begin{equation} \label{eq:00quasi-period} \theta(qa)=-\theta(a)/a. \end{equation} Let $S_n$ be the symmetric group on $\{1,2,\ldots, n\}$. For a function $f(z)=f(z_1,z_2,\ldots,z_n)$ on $(\mathbb{C}^*)^n$, we define the action of the symmetric group $S_n$ on $f(z)$ by $$ (\sigma f)(z):=f(\sigma^{-1}(z))=f(z_{\sigma(1)},z_{\sigma(2)},\ldots,z_{\sigma(n)}) \quad\mbox{for}\quad \sigma\in S_n. $$ We say that a function $f(z)$ on $(\mathbb{C}^*)^n$ is {\it symmetric} or {\it skew-symmetric} if $\sigma f(z)=f(z)$ or $\sigma f(z)=(\mbox{{\rm sgn}}\,\sigma )\,f(z)$ for all $\sigma \in S_n$, respectively. We denote by ${\cal A} f(z)$ the alternating sum over $S_n$ defined by \begin{equation} \label{eq:00Af} ({\cal A} f)(z):=\sum_{\sigma\in S_n}(\mbox{{\rm sgn}}\, \sigma)\,\sigma f(z), \end{equation} which is skew-symmetric. \par For $a,b\in \mathbb{C}$, we define \begin{equation} \label{eq:00jac1} \int_a^b f(z)d_qz:=\int_0^b f(z)d_qz-\int_0^a f(z)d_qz, \end{equation} where $$ \int_0^a f(z)d_qz:=(1-q)\sum_{\nu=0}^\infty f(aq^\nu)aq^\nu, $$ which is called the {\it Jackson integral}. As $q\to 1$, $\int_a^b f(z)d_qz\to \int_a^b f(z)dz$ \cite{AAR99}. In this paper we mainly use the Jackson integral with the multiplicative measure $$ \int_0^a f(z)\frac{d_qz}{z}=(1-q)\sum_{\nu=0}^\infty f(aq^\nu). $$ Let $\mathbb{N}$ be the set of non-negative integers. For a function $f(z)=f(z_1,\ldots,z_n)$ on $(\mathbb{C}^*)^n$ and an arbitrary point $x=(x_1,\ldots,x_n)\in (\mathbb{C}^*)^n$, we define the multiple Jackson integral as \begin{equation} \label{eq:00jac2} \int_0^{\mbox{\small $x$}}f(z)\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n} :=(1-q)^n\sum_{(\nu_1,\ldots,\nu_n)\in {\mathbb{N}}^n}f(x_1 q^{\nu_1},\ldots,x_n q^{\nu_n}).
\end{equation} In this paper we use the multiple bilateral sum extending the above Jackson integral \begin{equation} \label{eq:00jac3} \int_0^{\mbox{\small $x$}\infty}f(z)\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n} :=(1-q)^n\sum_{(\nu_1,\ldots,\nu_n)\in {\mathbb{Z}}^n}f(x_1 q^{\nu_1},\ldots,x_n q^{\nu_n}), \end{equation} which we also call the {\it Jackson integral}. By definition the Jackson integral (\ref{eq:00jac3}) is invariant under the shift $x_i\to qx_i$, $1\le i\le n$. While we can consider the limit $q\to1$ for the Jackson integral (\ref{eq:00jac2}) defined over $\mathbb{N}^n$, the Jackson integral (\ref{eq:00jac3}) defined over $\mathbb{Z}^n$ generally diverges as $q\to1$. However, as we will see later, the {\it truncation} of the Jackson integral (\ref{eq:00jac3}) corresponds to the sum (\ref{eq:00jac2}) over $\mathbb{N}^n$, so that when we need to consider the limit $q\to1$ we switch from (\ref{eq:00jac3}) to (\ref{eq:00jac2}) by the process of truncation. We now state one of the key lemmas of this paper for deriving $q$-difference equations. For this let $\Phi(z)$ be a symmetric function on $(\mathbb{C}^*)^n$ and for a function $\varphi(z)$, define the function $\nabla_{\!i}\varphi(z)$ ($1\le i\le n$) by \begin{equation} \label{eq:00nabla} (\nabla_{\!i}\varphi)(z):=\varphi(z)-\frac{T_{z_i}\Phi(z)}{\Phi(z)}T_{z_i}\varphi(z), \end{equation} where $T_{z_i}$ denotes the shift operator $z_i\to qz_i$, i.e., $T_{z_i}f(\ldots,z_i,\ldots)=f(\ldots,qz_i,\ldots)$. We then have \begin{lem} \label{lem:00nabla=0} For a meromorphic function $\varphi(z)$ on $(\mathbb{C}^*)^n$, if the integral $$ \int_0^{\mbox{\small $x$}\infty}\varphi(z)\Phi(z)\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n} $$ converges, then \begin{equation} \label{eq:00nabla=0} \int_0^{\mbox{\small $x$}\infty}\Phi(z)\nabla_{\!i}\varphi(z)\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}=0.
\end{equation} Moreover, \begin{equation} \label{eq:00A} \int_0^{\mbox{\small $x$}\infty}\Phi(z){\cal A}\nabla_{\!i}\varphi(z)\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}=0, \end{equation} where ${\cal A}$ indicates the skew-symmetrization defined in {\rm (\ref{eq:00Af})}. \end{lem} {\bf Proof.} From the definition (\ref{eq:00nabla}) of $\nabla_{\!i}$, (\ref{eq:00nabla=0}) is equivalent to the statement $$ \int_0^{\mbox{\small $x$}\infty}\varphi(z)\Phi(z)\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n} =\int_0^{\mbox{\small $x$}\infty}T_{z_i}\varphi(z)\, T_{z_i}\Phi(z) \frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}, $$ if the left-hand side converges. This equation follows at once from the fact that the Jackson integral is invariant under the $q$-shift $z_i\to qz_i$ ($1\le i\le n$). Next we will confirm (\ref{eq:00A}). Using $ \sigma\Phi(z)=\Phi(z) $, we have \begin{eqnarray*} \Phi(z){\cal A}\nabla_{\!i}\varphi(z) &=&\Phi(z)\sum_{\sigma\in S_n}(\mbox{{\rm sgn}}\, \sigma)\,\sigma(\nabla_{\!i}\varphi)(z) =\sum_{\sigma\in S_n}(\mbox{{\rm sgn}}\, \sigma)\,\sigma\Phi(z)\sigma(\nabla_{\!i}\varphi)(z)\\ &=&\sum_{\sigma\in S_n}(\mbox{{\rm sgn}}\, \sigma)\,\sigma\Big(\Phi(z)\nabla_{\!i}\varphi(z)\Big), \end{eqnarray*} so that we obtain $$ \int_0^{\mbox{\small $x$}\infty}\Phi(z){\cal A}\nabla_{\!i}\varphi(z)\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n} =\sum_{\sigma\in S_n}(\mbox{{\rm sgn}}\, \sigma) \int_0^{\mbox{\small $x$}\infty}\sigma\Big(\Phi(z)\nabla_{\!i}\varphi(z)\Big)\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n} $$ $$ =\sum_{\sigma\in S_n}(\mbox{{\rm sgn}}\, \sigma) \int_0^{\sigma^{-1}\mbox{\small $x$}\infty}\!\!\!\!\! \Phi(z)\nabla_{\!i}\varphi(z)\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n} =\sum_{\sigma\in S_n}(\mbox{{\rm sgn}}\, \sigma)\,\sigma\!\! \int_0^{\mbox{\small $x$}\infty}\Phi(z)\nabla_{\!i}\varphi(z)\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}, $$ which vanishes from (\ref{eq:00nabla=0}).
$\square$ \section{Jackson integral of Milne--Gustafson type} \label{section:01} In this section, we will explain the methods of this paper using the Milne--Gustafson summation formula. \subsection{Definitions and the result} Let $a_1,\ldots,a_n$, $b_1,\ldots,b_n$ and $\alpha$ be complex numbers satisfying \begin{equation} \label{eq:01condition01} |qa_1^{-1}\cdots a_n^{-1}b_1^{-1}\cdots b_n^{-1}|<|q^\alpha|<1. \end{equation} Let $\Phi(z)$ and $\Delta(z)$ be the functions defined by \begin{equation} \label{eq:01Phi1} \Phi(z):=(z_1z_2\cdots z_n)^\alpha\prod_{i=1}^n\prod_{j=1}^{n}\frac{(qa_j^{-1}z_i)_\infty}{(b_jz_i)_\infty} \end{equation} and \begin{equation} \label{eq:01Delta1} \Delta(z):=\prod_{1\le i<j\le n}(z_j-z_i). \end{equation} For $x=(x_1,x_2,\ldots,x_n)\in (\mathbb{C}^*)^n$, we define the sum $I(x)$ by \begin{equation} \label{eq:01I(x)1} I(x):=\int_0^{\mbox{\small $x$}\infty}\Phi(z)\Delta(z)\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}, \end{equation} which converges absolutely under the condition (\ref{eq:01condition01}) (see \cite[Lemma 3.19]{Gu87}, \cite[Lemma 2.5]{Mi86}). We call $I(x)$ the {\it Jackson integral of Milne--Gustafson type}. By definition $I(x)$ is skew-symmetric. \begin{thm}[Milne--Gustafson \cite{Gu87,Mi85}] \label{thm:MG} For an arbitrary $x\in (\mathbb{C}^*)^n$, $I(x)$ is evaluated as \begin{eqnarray} \label{eq:01I(x)} I(x)&=& (1-q)^n \frac{(q)_\infty^n \prod_{i=1}^{n}\prod_{j=1}^{n}(qa_i^{-1}b_j^{-1})_\infty } {(q^\alpha)_\infty(q^{1-\alpha} a_1^{-1}a_2^{-1}\cdots a_{n}^{-1}b_1^{-1}b_2^{-1}\cdots b_{n}^{-1})_\infty }\nonumber\\ &&\times (x_1x_2\cdots x_n)^{\alpha} \frac{\theta(q^{\alpha}x_1x_2\cdots x_n b_1b_2\cdots b_n) }{\prod_{i=1}^n\prod_{j=1}^n\theta(x_ib_j)}\prod_{1\le i<j\le n}x_j\theta(x_i/x_j). \end{eqnarray} \end{thm} {\bf Remark.} If $n=1$, (\ref{eq:01I(x)}) coincides with Ramanujan's $_1\psi_1$ summation theorem. In this sense, (\ref{eq:01I(x)}) is a natural multi-dimensional extension of the $_1\psi_1$ sum. 
On the other hand, if $q\to 1$, the limiting formula of a special case of (\ref{eq:01I(x)}) coincides with the Dixon--Anderson integral (\ref{eq:DA}) with $x_0=0$, as we will see in the Remark after Corollary \ref{cor:01I(a)}. In this sense, the Milne--Gustafson sum is a natural extension of both quantities. \\ The rest of this section is devoted to explaining the Milne--Gustafson summation formula (\ref{eq:01I(x)}) as a connection formula between solutions of a $q$-difference equation, which consequently leads us to a simple proof. \subsection{$q$-difference equation with respect to $\alpha$} In this subsection we derive the $q$-difference equation with respect to $\alpha$ satisfied by $I(x)$. We use $I(\alpha;x)$ instead of $I(x)$ to indicate the $\alpha$ dependence. \begin{lem} \label{lem:rec} The recurrence relation for $I(\alpha;x)$ is given by \begin{equation} \label{eq:01rec1} I(\alpha;x)=\frac{1-q^{\alpha}a_1a_2\cdots a_nb_1b_2\cdots b_n}{a_1a_2\cdots a_n(1-q^\alpha)}I(\alpha+1;x). \end{equation} \end{lem} {\bf Proof.} Since the ratio ${T_{z_1}\Phi(z)}/{\Phi(z)}$ is written as $$ \frac{T_{z_1}\Phi(z)}{\Phi(z)}=q^\alpha\prod_{j=1}^n\frac{1-b_jz_1}{1-q a_j^{-1}z_1}, $$ if we put $ \varphi(z)=z_2^{n-1}z_3^{n-2}\cdots z_n\prod_{j=1}^n(1- a_j^{-1}z_1), $ then, from (\ref{eq:00nabla}) we have $$ \nabla_{\!1}\varphi(z)=z_2^{n-1}z_3^{n-2}\cdots z_n\Big( \prod_{j=1}^n(1- a_j^{-1}z_1) -q^\alpha\prod_{j=1}^n(1- b_jz_1)\Big), $$ so that the skew-symmetrization of the above equation is given by \begin{equation} \label{eq:01Anabla} {\cal A}\nabla_{\!1}\varphi(z)=\Big((-1)^{n-1}(1-q^\alpha)+(-1)^n \frac{1-q^{\alpha}a_1a_2\cdots a_nb_1b_2\cdots b_n}{a_1a_2\cdots a_n}z_1z_2\cdots z_n\Big)\Delta(z). \end{equation} Since $$ I(\alpha+1;x)= \int_0^{\mbox{\small $x$}\infty}z_1z_2\cdots z_n \Phi(z)\Delta(z)\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}, $$ using (\ref{eq:00A}) of Lemma \ref{lem:00nabla=0} for (\ref{eq:01Anabla}), we obtain the relation (\ref{eq:01rec1}).
$\square$ \subsection{Truncation} For the special point $x=a=(a_1,a_2,\ldots,a_n)$, we call $I(a)$ the {\it truncated Jackson integral of Milne--Gustafson type}. By definition $I(a)$ is a sum over $\mathbb{N}^n$, while $I(x)$ is generally a sum over the lattice $\mathbb{Z}^n$. It has the advantage of simplifying the computation of the $\alpha\to +\infty$ asymptotic behavior, as will be seen below. (The lattice $\{(x_1q^{\nu_1},\ldots,x_nq^{\nu_n})\in (\mathbb{C}^*)^n\,;\,(\nu_1,\ldots,\nu_n)\in \mathbb{Z}^n\}$ is called the {\it $q$-cycle} \cite{Ao91} of $I(x)$, while the set $\{(a_1q^{\nu_1},\ldots,a_nq^{\nu_n})\in (\mathbb{C}^*)^n\,;\,(\nu_1,\ldots,\nu_n)\in \mathbb{N}^n\}$, being the support of $I(a)$, is called the {\it $\alpha$-stable cycle} in \cite{Ao94,AK94-2}.) \begin{lem} \label{lem:I(a+N;a)} The asymptotic behavior of $I(\alpha+N;a)$ as $N\to +\infty$ is given by \begin{equation} \label{eq:01asym1} I(\alpha+N;a)\sim (1-q)^n(a_1a_2\cdots a_n)^{\alpha+N}\Delta(a) \prod_{i=1}^n\prod_{j=1}^n\frac{(qa_i/a_j)_\infty}{(a_ib_j)_\infty} \quad(N\to +\infty). \end{equation} \end{lem} {\bf Proof.} Since $(q^{1+\nu_i})_\infty=0$ if $\nu_i<0$, by definition $I(\alpha+N;a)$ is written as \begin{eqnarray} \label{eq:01asym1.5} I(\alpha+N;a)&=&(1-q)^n\!\!\!\!\! \sum_{(\nu_1,\ldots,\nu_n)\in \mathbb{N}^n} (a_1a_2\cdots a_nq^{\nu_1+\cdots+\nu_n})^{\alpha+N}\nonumber\\ &&\times\prod_{i=1}^n\prod_{j=1}^{n}\frac{(a_j^{-1}a_iq^{1+\nu_i})_\infty}{(b_ja_iq^{\nu_i})_\infty} \prod_{1\le i<j\le n}(a_jq^{\nu_j}-a_iq^{\nu_i}). \end{eqnarray} Since $(p)_\infty\to 1$ as $p\to 0$, the term $|\prod_{i=1}^n\prod_{j=1}^{n}(a_j^{-1}a_iq^{1+\nu_i})_\infty/(b_ja_iq^{\nu_i})_\infty|$ is bounded for $(\nu_1,\ldots,\nu_n)\in \mathbb{N}^n$ and $N>0$.
Then the leading term of the asymptotic behavior of $I(\alpha+N;a)$ as $N\to +\infty$ depends on the maximum value of $|(a_1a_2\cdots a_nq^{\nu_1+\cdots+\nu_n})^{\alpha+N} \prod_{1\le i<j\le n}(a_jq^{\nu_j}-a_iq^{\nu_i})|$ over $(\nu_1,\ldots,\nu_n)\in \mathbb{N}^n$. Since $|\prod_{i=1}^n\prod_{j=1}^{n}(a_j^{-1}a_iq)_\infty/(b_ja_i)_\infty|\ne 0$, the summand on the right-hand side of (\ref{eq:01asym1.5}) corresponding to $(\nu_1,\ldots,\nu_n)=(0,\ldots,0)$ gives the leading term (\ref{eq:01asym1}). $\square$ \\[2pt] \indent From this lemma, with repeated use of Lemma \ref{lem:rec}, we immediately have \begin{cor} \label{cor:01I(a)} The truncated Jackson integral $I(a)$ is expressed as \begin{equation} \label{eq:01I(a)} I(a)= (1-q)^n(a_1a_2\cdots a_n)^{\alpha} \frac{(q)_\infty^n (q^\alpha a_1a_2\cdots a_{n}b_1b_2\cdots b_{n})_\infty } {(q^\alpha)_\infty\prod_{i=1}^{n}\prod_{j=1}^{n}(a_ib_j)_\infty} \prod_{1\le i<j\le n}a_j\theta(a_i/a_j). \end{equation} \end{cor} {\bf Proof.} By repeated use of the recurrence relation (\ref{eq:01rec1}), we have $$I(\alpha;x) =\frac{(q^{\alpha}a_1a_2\cdots a_nb_1b_2\cdots b_n)_N}{(a_1a_2\cdots a_n)^N(q^\alpha)_N}I(\alpha+N;x).$$ If we put $x=a$ and take $N\to +\infty$, we obtain $$ I(\alpha;a) =\frac{(q^{\alpha}a_1a_2\cdots a_nb_1b_2\cdots b_n)_\infty}{(q^\alpha)_\infty}\lim_{N\to \infty}\frac{I(\alpha+N;a)}{(a_1a_2\cdots a_n)^N}, $$ which coincides with the right-hand side of (\ref{eq:01I(a)}) if we use (\ref{eq:01asym1}). $\square$\\ \noindent {\bf Remark.} If $n=1$, (\ref{eq:01I(a)}) coincides with the $q$-binomial theorem, which is a special case of Ramanujan's $_1\psi_1$ summation theorem. The truncated Jackson integral $I(a)$ is by definition a sum over $\mathbb{N}^n$, which in this case coincides with the ordinary Jackson integral (\ref{eq:00jac2}). Therefore we can consider the limit of (\ref{eq:01I(a)}) as $q\to 1$.
If we substitute $\alpha$, $a_j$ and $b_j$ as $\alpha\to s_0$, $a_j\to x_{j-1}$ and $b_j\to q^{s_{j-1}}/x_{j-1}$, respectively, then (\ref{eq:01I(a)}) in the limit $q\to 1$ coincides with the Dixon--Anderson integral (\ref{eq:DA}) with $x_0=0$. \subsection{Regularization and connection formulae} Let ${\cal I}(x)$ and $h(x)$ be the functions defined by \begin{equation} \label{eq:01cal I(x)1} {\cal I}(x)=\frac{I(x)}{h(x)}, \quad h(x)= (x_1x_2\cdots x_n)^{\alpha} \frac{\prod_{1\le i<j\le n}x_j\theta(x_i/x_j)} {\prod_{i=1}^{n}\prod_{j=1}^{n}\theta(b_jx_i)}. \end{equation} We call ${\cal I}(x)$ the {\it regularized Jackson integral} of $I(x)$. Since the trivial poles and zeros of $I(x)$ are canceled upon multiplying $I(x)$ by $1/h(x)$, we have the following. \begin{lem} \label{lem:h-and-s} The regularization ${\cal I}(x)$ is holomorphic on $(\mathbb{C}^*)^n$ and symmetric. \end{lem} \noindent {\bf Proof.} From the expression (\ref{eq:01Phi1}) of $\Phi(z)$ as the integrand of (\ref{eq:01I(x)1}), the function $I(x)$ has poles only in the set $\{x=(x_1,x_2,\ldots,x_n)\in (\mathbb{C}^*)^n\,;\,\prod_{i=1}^n\prod_{j=1}^n\theta(b_jx_i)=0 \}$. Moreover $I(x)$ is divisible by $x_j\theta(x_i/x_j)$ because $I(x)$ is skew-symmetric and invariant under the $q$-shift $x_i\to qx_i$. We therefore obtain $$ I(x)={\cal I}(x)h(x), $$ where ${\cal I}(x)$ is some holomorphic function on $(\mathbb{C}^*)^n$. Since $I(x)$ and $h(x)$ are both skew-symmetric, ${\cal I}(x)$ is symmetric. $\square$\\ From (\ref{eq:01cal I(x)1}) and the quasi-periodicity (\ref{eq:00quasi-period}) of the theta function, we have $$\frac{T_{x_i}h(x)}{h(x)}=-q^\alpha x_1x_2\cdots x_nb_1b_2\cdots b_n.$$ Since $I(x)$ is invariant under the $q$-shift $x_i\to qx_i$, the holomorphic function ${\cal I}(x)$ on $(\mathbb{C}^*)^n$ satisfies the $q$-difference equations \begin{equation} \label{eq:01quasi-period} T_{x_i}{\cal I}(x)=-\frac{{\cal I}(x)}{q^\alpha x_1x_2\cdots x_nb_1b_2\cdots b_n},\quad i=1,2,\ldots,n.
\end{equation} Since the set of holomorphic functions on $(\mathbb{C}^*)^n$ satisfying (\ref{eq:01quasi-period}) has dimension $1$ as a $\mathbb{C}$-linear space, we can take $\theta(q^\alpha x_1x_2\cdots x_nb_1b_2\cdots b_n)$ as a basis of this linear space. Thus ${\cal I}(x)$ is uniquely expressed as \begin{equation} \label{eq:01cal I(x)2} {\cal I}(x)=C\,\theta(q^\alpha x_1x_2\cdots x_nb_1b_2\cdots b_n), \end{equation} where $C$ is some constant independent of $x$. \begin{lem}[connection formula] For arbitrary $x,y\in (\mathbb{C}^*)^n$, the {\it connection formula} between ${\cal I}(x)$ and ${\cal I}(y)$ is written as \begin{equation} \label{eq:01cal I(x)I(y)} {\cal I}(x)=\frac{\theta(q^\alpha x_1x_2\cdots x_nb_1b_2\cdots b_n)} {\theta(q^\alpha y_1y_2\cdots y_nb_1b_2\cdots b_n)} {\cal I}(y). \end{equation} In particular, if we set $y=a\in (\mathbb{C}^*)^n$, then \begin{equation} \label{eq:01cal I(x)I(a)} {\cal I}(x)=\frac{\theta(q^\alpha x_1x_2\cdots x_nb_1b_2\cdots b_n)} {\theta(q^\alpha a_1a_2\cdots a_nb_1b_2\cdots b_n)} {\cal I}(a). \end{equation} \end{lem} {\bf Proof.} From (\ref{eq:01cal I(x)2}), we immediately have (\ref{eq:01cal I(x)I(y)}). $\square$\\[6pt] {\bf Remark.} If we switch the symbols from ${\cal I}(x)$ and ${\cal I}(y)$ to $I(x)$ and $I(y)$, we have $$ {I}(x)=\frac{h(x)\theta(q^\alpha x_1x_2\cdots x_nb_1b_2\cdots b_n)} {h(y)\theta(q^\alpha y_1y_2\cdots y_nb_1b_2\cdots b_n)} {I}(y). $$ In particular, if we set $y=a$ in the above equation, we obtain \begin{equation} \label{eq:01I(x)I(a)} {I}(x)=\frac{h(x)\theta(q^\alpha x_1x_2\cdots x_nb_1b_2\cdots b_n)} {h(a)\theta(q^\alpha a_1a_2\cdots a_nb_1b_2\cdots b_n)} {I}(a), \end{equation} which is also the connection formula between a solution $I(x)$ of the $q$-difference equation (\ref{eq:01rec1}) and the special solution $I(a)$ fixed by its asymptotic behavior (\ref{eq:01asym1}) as $\alpha\to +\infty$.
In addition, its connection coefficient is written as a ratio of theta functions (i.e., of $q$-gamma functions), and is of course invariant under the shift $\alpha\to \alpha+1$. Using the evaluation (\ref{eq:01I(a)}) of $I(a)$, the connection formula (\ref{eq:01I(x)I(a)}) exactly coincides with (\ref{eq:01I(x)}) for the Milne--Gustafson sum.\\ Using (\ref{eq:01I(a)}), the constant $C$ in (\ref{eq:01cal I(x)2}) is also calculated explicitly as \begin{equation} \label{eq:01C} C=\frac{{I}(a)} {h(a)\theta(q^\alpha a_1a_2\cdots a_nb_1b_2\cdots b_n)} = \frac{(1-q)^n(q)_\infty^n \prod_{i=1}^{n}\prod_{j=1}^{n}(qa_i^{-1}b_j^{-1})_\infty } {(q^\alpha)_\infty(q^{1-\alpha} a_1^{-1}a_2^{-1}\cdots a_{n}^{-1}b_1^{-1}b_2^{-1}\cdots b_{n}^{-1})_\infty }. \end{equation} From (\ref{eq:01cal I(x)2}) we therefore obtain \begin{prop} For an arbitrary $x\in (\mathbb{C}^*)^n$, ${\cal I}(x)$ is expressed as \begin{equation} \label{eq:01cal I(x)3} {\cal I}(x)= (1-q)^n \frac{(q)_\infty^n \prod_{i=1}^{n}\prod_{j=1}^{n}(qa_i^{-1}b_j^{-1})_\infty } {(q^\alpha)_\infty(q^{1-\alpha} a_1^{-1}\cdots a_{n}^{-1}b_1^{-1}\cdots b_{n}^{-1})_\infty }\, \theta(q^{\alpha}x_1\cdots x_n b_1\cdots b_n). \end{equation} \end{prop} Since (\ref{eq:01I(x)}) is equivalent to (\ref{eq:01cal I(x)3}), the Milne--Gustafson summation formula, in the form (\ref{eq:01cal I(x)3}), gives the isomorphism between the set of Jackson integrals $I(x)$ and the set of theta functions defined by (\ref{eq:01quasi-period}). \\ If we set \begin{equation} \label{eq:01beta} \beta:=1-\alpha_1-\cdots-\alpha_n-\beta_1-\cdots-\beta_n-\alpha, \end{equation} where $\alpha_i$ and $\beta_i$ are given by $a_i=q^{\alpha_i}$, $b_i=q^{\beta_i}$, then, after rearrangement, the formula (\ref{eq:01cal I(x)3}) is also expressed as the following Macdonald-type sum, whose value is given by an $x$-independent constant \cite{Ma03,vD97,Ito06-1}.
\begin{prop} Under the condition $a_1\cdots a_n b_1\cdots b_nq^{\alpha+\beta}=q$, \begin{eqnarray} \label{eq:01cal I(x)4} &&\int_0^{\mbox{\small $x$}\infty}\frac{\prod_{i=1}^n\prod_{j=1}^n (qa_j^{-1}z_i)_\infty(qb_j^{-1}z_i^{-1})_\infty} {(q^{\beta}a_1\cdots a_n z_1^{-1}\cdots z_n^{-1})_\infty(q^{\alpha}b_1\cdots b_n z_1\cdots z_n)_\infty \prod_{1\le i<j\le n}(qz_i/z_j)_\infty(qz_j/z_i)_\infty}\nonumber\\ && \qquad\qquad \times\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n} =\frac{(1-q)^n(q)_\infty^n \prod_{i=1}^{n}\prod_{j=1}^{n}(qa_i^{-1}b_j^{-1})_\infty } {(q^\alpha)_\infty(q^\beta)_\infty }. \end{eqnarray} \end{prop} {\bf Proof.} Since $h(x)\theta(q^{\alpha}x_1\cdots x_n b_1\cdots b_n)$ is invariant under the $q$-shift $x_i\to qx_i$, from (\ref{eq:01cal I(x)2}), we have \begin{equation*} \int_0^{\mbox{\small $x$}\infty}\frac{\Phi(z)\Delta(z)} {h(z)\theta(q^{\alpha}z_1\cdots z_n b_1\cdots b_n)}\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n} =C, \end{equation*} so that \begin{equation*} \int_0^{\mbox{\small $x$}\infty}\frac{\prod_{i=1}^n\prod_{j=1}^n (qa_j^{-1}z_i)_\infty(qb_j^{-1}z_i^{-1})_\infty} {\theta(q^{\alpha}z_1\cdots z_n b_1\cdots b_n) \prod_{1\le i<j\le n}(qz_i/z_j)_\infty(qz_j/z_i)_\infty}\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n} =C, \end{equation*} which is rewritten as (\ref{eq:01cal I(x)4}) using (\ref{eq:01C}) under the condition (\ref{eq:01beta}). $\square$\\ As a corollary, the following contour integral identity is equivalent to the formula (\ref{eq:01cal I(x)4}) in the special case $x=a\in (\mathbb{C}^*)^n$. \begin{cor} Let $\mathbb{T}^n$ be the $n$-fold direct product of the unit circle, i.e., $ \mathbb{T}^n:=\{(z_1,\ldots,z_n)\in\mathbb{C}^n \,;\,|z_i|=1\} $. Suppose that $|a_i|<1$, $|b_i|<1$ $(i=1,\ldots, n)$ and $a_1\cdots a_n b_1\cdots b_nq^{\alpha+\beta}=q$. Then \begin{eqnarray*} &&\!\!\!\!\!\!
\Big(\frac{1}{2\pi\sqrt{-1}}\Big)^{\!\!n}\frac{1}{n!} \int_{\mathbb{T}^n} \frac{(q^\alpha a_1\cdots a_n z_1^{-1}\cdots z_n^{-1})_\infty(q^\beta b_1\cdots b_n z_1\cdots z_n)_\infty} {\prod_{i=1}^n \prod_{j=1}^n (a_iz_j^{-1})_\infty(b_i z_j)_\infty}\nonumber\\ &&\hskip 80pt \times\prod_{1\le i<j\le n}(z_i/z_j)_\infty(z_j/z_i)_\infty \frac{dz_1}{z_1}\cdots\frac{dz_n}{z_n} =\frac{(q^{1-\alpha})_\infty(q^{1-\beta})_\infty}{(q)_\infty^n \prod_{i=1}^{n}\prod_{j=1}^{n}(a_ib_j)_\infty}. \end{eqnarray*} \end{cor} {\bf Proof.} This follows by residue calculation using (\ref{eq:01cal I(x)4}) in the case $x=a\in (\mathbb{C}^*)^n$. $\square$ \subsection{Dual expression of the Jackson integral $I(x)$} For an arbitrary $x=(x_1,x_2,\ldots,x_n)\in (\mathbb{C}^*)^n$ we specify $x^{-1}$ as \begin{equation} \label{eq:01x-1} x^{-1}:=(x_1^{-1},x_2^{-1},\ldots,x_n^{-1})\in (\mathbb{C}^*)^n. \end{equation} For the point $b=(b_{1},b_{2},\ldots,b_{n})\in (\mathbb{C}^*)^{n}$, if we set $y=b^{-1}$ in the connection formula (\ref{eq:01cal I(x)I(y)}), then we obtain the expression \begin{equation} \label{eq:01cal I(x)I(b)} {\cal I}(x)=\frac{\theta(q^\alpha x_1x_2\cdots x_nb_1b_2\cdots b_n)}{\theta(q^\alpha)}{\cal I}(b^{-1}). \end{equation} Since $x=b^{-1}$ is a pole of the function $I(x)$ by definition, $I(b^{-1})$ no longer makes sense. However, the regularization ${\cal I}(b^{-1})$ appearing on the right-hand side of (\ref{eq:01cal I(x)I(b)}) still has meaning as a special value of a holomorphic function. We will show a way to realize the regularization ${\cal I}(b^{-1})$ as a computable object via another Jackson integral.
For this purpose, let $\bar\Phi(z)$ and $\bar \Delta(z)$ be the functions specified by \begin{equation} \label{eq:01barPhi} \bar\Phi(z):=(z_1z_2\cdots z_n)^{1-\alpha_1-\cdots-\alpha_n-\beta_1-\cdots-\beta_n-\alpha} \prod_{i=1}^n\prod_{j=1}^n\frac{(qb_j^{-1}z_i)_\infty}{(a_jz_i)_\infty}, \end{equation} where $\alpha_i$ and $\beta_i$ are given by $a_i=q^{\alpha_i}, b_i=q^{\beta_i}$, and \begin{equation} \label{eq:01barDelta} \bar\Delta(z):=\prod_{1\le i<j\le n}(z_i-z_j). \end{equation} For $x=(x_1,x_2,\ldots,x_n)\in (\mathbb{C}^*)^n$, we define the sum $\bar I(x)$ by \begin{equation} \label{eq:01bar I(x)} \bar I(x):=\int_0^{\mbox{\small $x$}\infty} \bar\Phi(z)\bar\Delta(z)\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}, \end{equation} which converges under the condition (\ref{eq:01condition01}). We call $\bar I(x)$ the {\it dual Jackson integral of $I(x)$}, and call $\bar I(b)$ its {\it truncation}. (The sum $I(x)$ transforms to its dual $\bar I(x)$ up to sign if we interchange the parameters as \begin{equation} \label{eq:01a<-->b} \alpha\leftrightarrow\beta \quad\mbox{and}\quad a_i\leftrightarrow b_i\ (i=1,\ldots,n), \end{equation} where $\beta$ is specified by (\ref{eq:01beta}).) We also define the {\it regularization} $\bar {\cal I}(x)$ of $\bar I(x)$ as \begin{equation} \label{eq:01barh(x)} \bar {\cal I}(x):= \frac{\bar I(x)}{\bar h(x)}, \ \mbox{where}\ \bar h(x)=(x_1x_2\cdots x_n)^{1-\alpha_1-\cdots-\alpha_{n}-\beta_1-\cdots-\beta_{n}-\alpha} \frac{\prod_{1\le i<j\le n}x_i\theta(x_j/x_i)}{\prod_{i=1}^n\prod_{j=1}^n\theta(a_jx_i)}. \end{equation} In the same manner as Lemma \ref{lem:h-and-s}, we can confirm that the function $\bar {\cal I}(x)$ is also holomorphic and symmetric. 
\begin{lem}[reflection equation] \label{lem:01ref} The connection between $I(x)$ and $\bar I(x)$ is \begin{equation} \label{eq:01ref1} I(x)=\frac{h(x)}{\bar h(x^{-1})}\bar I(x^{-1}),\quad \mbox{where}\quad \frac{h(x)}{\bar h(x^{-1})}= \prod_{i=1}^n\prod_{j=1}^n x_i^{1-\alpha_j-\beta_j} \frac{\theta(qa_j^{-1}x_i)}{\theta(b_jx_i)}, \end{equation} where $x^{-1}$ is specified as in {\rm (\ref{eq:01x-1})}. In other words, the relation between ${\cal I}(x)$ and $\bar {\cal I}(x)$ is \begin{equation} \label{eq:01ref2} {\cal I}(x)=\bar{\cal I}(x^{-1}). \end{equation} \end{lem} {\bf Proof.} From the definitions (\ref{eq:01cal I(x)1}) and (\ref{eq:01barh(x)}) the ratio $h(x)/\bar h(x^{-1})$ is written as in (\ref{eq:01ref1}). Since $ \Delta(z)=(z_1z_2\cdots z_n)^{n-1}\bar\Delta(z^{-1}) $, from (\ref{eq:01Phi1}), (\ref{eq:01Delta1}), (\ref{eq:01barPhi}) and (\ref{eq:01barDelta}), we have \begin{equation} \label{eq:01PD=hhPD} \Phi(z)\Delta(z)= \frac{h(z)}{\bar h(z^{-1})} \bar\Phi(z^{-1})\bar\Delta(z^{-1}). \end{equation} Also since $h(z)/\bar h(z^{-1})$ is invariant under the shift $z_i\to qz_i$, by the definitions (\ref{eq:01I(x)1}) and (\ref{eq:01bar I(x)}) of $I(x)$ and $\bar I(x)$, the connection (\ref{eq:01ref1}) between $I(x)$ and its dual $\bar I(x)$ is derived from (\ref{eq:01PD=hhPD}). $\square$\\ We use $\bar I(\alpha;x)$ instead of $\bar I(x)$ to indicate the $\alpha$ dependence. From (\ref{eq:01ref1}), the recurrence relation for $\bar I(\alpha;x)$ is exactly the same as (\ref{eq:01rec1}) for $I(\alpha;x)$. \begin{lem} \label{lem:rec2} The function $\bar I(\alpha;x)$ also satisfies the recurrence relation {\rm (\ref{eq:01rec1})} for $I(\alpha;x)$, which may be rewritten as \begin{equation} \label{eq:01rec2} {\bar I}(\alpha;x)=\frac{1-q^{1-\alpha}} {b_1b_2\cdots b_n(1-q^{1-\alpha}a_1^{-1}a_2^{-1}\cdots a_n^{-1}b_1^{-1}b_2^{-1}\cdots b_n^{-1})} {\bar I}(\alpha-1;x).
\end{equation} \end{lem} We saw above that although $I(b^{-1})$ no longer makes sense, its regularization ${\cal I}(b^{-1})$ still has meaning as a special value of a holomorphic function, and ${\cal I}(b^{-1})$ is evaluated by the dual integral $\bar{\cal I}(b)$ via the reflection equation (\ref{eq:01ref2}). Moreover, by definition the regularization $\bar{\cal I}(b)$ itself is calculated using $\bar I(b)$, which is then a truncated Jackson integral. Though we already know the value of ${\cal I}(b^{-1})$ through the connection formula (\ref{eq:01cal I(x)I(a)}) in the case $x=b^{-1}$, the point is that we can calculate ${\cal I}(b^{-1})$ directly from $\bar I(b)$, the leading term of whose asymptotic behavior as $\alpha\to -\infty$ is simply computed as follows. \begin{cor} \label{cor:bar I(a-N;b)} The asymptotic behavior of $\bar I(\alpha-N;b)$ as $N\to +\infty$ is written as \begin{eqnarray} \label{eq:01asym2} \bar I(\alpha-N;b)&\sim& (1-q)^n(b_1\cdots b_n)^{1-\alpha_1-\cdots-\alpha_{n}-\beta_1-\cdots-\beta_{n}-\alpha+N} \nonumber\\ &&\times \bar\Delta(b) \prod_{i=1}^n\prod_{j=1}^n\frac{(qb_i/b_j)_\infty}{(a_ib_j)_\infty} \quad(N\to +\infty). \end{eqnarray} Moreover, by repeated use of {\rm (\ref{eq:01rec2})}, the truncated Jackson integral $\bar I(b)$ is written as \begin{equation} \label{eq:01I(b)} \bar I(b)= \frac{ (1-q)^n(b_1\cdots b_n)^ {1-\alpha_1-\cdots-\alpha_n-\beta_1-\cdots-\beta_n-\alpha} (q)_\infty^n (q^{1-\alpha})_\infty } {(q^{1-\alpha}a_1^{-1}\cdots a_n^{-1}b_1^{-1}\cdots b_n^{-1} )_\infty\prod_{i=1}^{n}\prod_{j=1}^{n}(a_ib_j)_\infty} \prod_{1\le i<j\le n}b_i\theta(b_j/b_i). \end{equation} \end{cor} {\bf Proof.} Using Lemma \ref{lem:rec2}, the arguments are completely parallel to Lemma \ref{lem:I(a+N;a)} and Corollary \ref{cor:01I(a)}.
Actually, if we substitute $a_j$, $b_j$ and $\alpha$ in $\Phi(z)$ of (\ref{eq:01Phi1}) by $a_j\to b_j$, $b_j\to a_j$ and $\alpha\to \beta=1-\alpha_1-\cdots-\alpha_n-\beta_1-\cdots-\beta_n-\alpha$, respectively, then $\Phi(z)$ transforms to $\bar\Phi(z)$ in (\ref{eq:01barPhi}), so that we obtain the same result as Corollary \ref{cor:01I(a)} with these substitutions. $\square$\\ From (\ref{eq:01cal I(x)I(y)}) and (\ref{eq:01ref2}), for $x, y\in (\mathbb{C}^*)^{n}$ we have the connection formula between ${\cal I}(x)$ and $\bar{\cal I}(y)$ as $$ {\cal I}(x)=\frac{\theta(q^\alpha x_1x_2\cdots x_nb_1b_2\cdots b_n)} {\theta(q^\alpha y_1^{-1}y_2^{-1}\cdots y_n^{-1}b_1b_2\cdots b_n)} \bar{\cal I}(y). $$ In particular, if $y=b$, then we have $$ {\cal I}(x)=\frac{\theta(q^\alpha x_1x_2\cdots x_nb_1b_2\cdots b_n)}{\theta(q^\alpha)} \bar{\cal I}(b). $$ If we switch the symbols from ${\cal I}(x)$ and $\bar{\cal I}(b)$ to $I(x)$ and $\bar I(b)$, respectively, then we obtain \begin{equation} \label{eq:01I(x)I(b)} I(x)=\frac{h(x)\theta(q^\alpha a_1a_2\cdots a_nb_1b_2\cdots b_n)}{{\bar h}(b)\theta(q^\alpha)} {\bar I}(b). \end{equation} We once again obtain the connection formula between a solution $I(x)$ of the $q$-difference equation (\ref{eq:01rec1}) and the special solution $\bar I(b)$ fixed by its asymptotic behavior (\ref{eq:01asym2}) as $\alpha\to -\infty$, as a counterpart of the formula (\ref{eq:01I(x)I(a)}) of the case $\alpha\to +\infty$. The connection formula (\ref{eq:01I(x)I(b)}) with (\ref{eq:01I(b)}) is also another expression for the Milne--Gustafson sum, like the formula (\ref{eq:01I(x)I(a)}). 
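For orientation, in the simplest case $n=1$ the product formula (\ref{eq:01I(b)}) reduces to the $q$-binomial theorem and can be tested numerically. The following sketch is not part of the argument; it uses illustrative real parameter values chosen inside the convergence region, computes $\bar I(b)$ directly from the truncated sum defining (\ref{eq:01bar I(x)}), and compares it with the product side (the theta products in (\ref{eq:01I(b)}) are empty for $n=1$):

```python
import math

def qpoch(z, q):
    """Infinite q-Pochhammer symbol (z; q)_infty, truncated once q**k is negligible."""
    p, k = 1.0, 0
    while q**k > 1e-18:
        p *= 1.0 - z * q**k
        k += 1
    return p

# illustrative parameters with |q**(1-alpha)/(a1*b1)| < 1, so the truncated sum converges
q, a1, b1, alpha = 0.3, 0.6, 0.7, -0.5
# exponent s = 1 - alpha_1 - beta_1 - alpha, where a1 = q**alpha_1, b1 = q**beta_1
s = 1.0 - alpha - math.log(a1 * b1) / math.log(q)

# truncated Jackson integral bar I(b1): (1-q) * sum over nu >= 0 of barPhi(b1 * q**nu)
lhs = (1 - q) * sum(
    (b1 * q**nu) ** s * qpoch(q ** (1 + nu), q) / qpoch(a1 * b1 * q**nu, q)
    for nu in range(200)
)

# product side of the n = 1 case of the formula for bar I(b)
rhs = (1 - q) * b1**s * qpoch(q, q) * qpoch(q ** (1 - alpha), q) / (
    qpoch(q ** (1 - alpha) / (a1 * b1), q) * qpoch(a1 * b1, q)
)

assert abs(lhs / rhs - 1) < 1e-10
```

The agreement is limited only by the truncation of the sum and of the infinite products.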
\\ \noindent {\bf Remark.} As we have seen above, we used the integrand $\bar\Phi(z)$ instead of $\Phi(z)$, which coincides with $\bar\Phi(z)$ up to the $q$-periodic factor $h(z)/\bar h(z^{-1})$, and used the set $\{(b_1q^{\nu_1},\ldots,b_nq^{\nu_n})\in (\mathbb{C}^*)^n\,;\,(\nu_1,\ldots,\nu_n)\in \mathbb{N}^n\}$ as the ``$(-\alpha)$-stable cycle'' for the dual integral $\bar I(x)$ when we constructed a special solution $\bar I(b)$, expressed by a (Jackson) integral representation, for the $q$-difference equation (\ref{eq:01rec1}) as $\alpha\to -\infty$. In the classical setting, this process is usually done by taking an imaginary cycle without changing the integrand $\Phi(z)$ under the ordinary integral representation. In the $q$-analog setting, Aomoto and Aomoto--Kato used the integral representation without changing the integrand $\Phi(z)$; instead, they adopted the residue sum on the set $\{(b_1^{-1}q^{-\nu_1},\ldots,b_n^{-1}q^{-\nu_n})\in (\mathbb{C}^*)^n\,;\,(\nu_1,\ldots,\nu_n)\in \mathbb{N}^n\}$ of poles of $I(x)$. They call this cycle the {\it $\alpha$-unstable cycle} \cite{Ao94,AK94-2} of $I(x)$ for the parameter $\alpha$. Carrying out this process is called the {\it regularization} in their original paper \cite{Ao90}. We hope our slight changes of terminology do not bring confusion to the reader. \\ \section{Jackson integral of Dixon--Anderson type} \label{section:02} In this section we use $q$-difference equations to give a proof of Evans's summation formula for the $q$-Dixon--Anderson integral, introducing a multi-dimensional bilateral extension of his sum, which we call the Jackson integral of Dixon--Anderson type. In addition, as a limiting case, the Milne--Gustafson summation formula is deduced from it. In this sense, the Jackson integral of Dixon--Anderson type as a $q$-analog of the integral (\ref{eq:DA}) can also be regarded as a natural multi-variable extension of Ramanujan's $_1\psi_1$ sum.
\subsection{Definitions and the results} Let $a_1,a_2,\ldots,a_{n+1}$, $b_1,b_2,\ldots,b_{n+1}$ be complex numbers satisfying \begin{equation} \label{eq:02condition01} q<|a_1a_2\cdots a_{n+1}b_1b_2\cdots b_{n+1}|. \end{equation} Let $\Phi(z)$ be specified by \begin{equation} \label{eq:02Phi1} \Phi(z):=z_1z_2\cdots z_n\prod_{i=1}^n\prod_{j=1}^{n+1}\frac{(qa_j^{-1}z_i)_\infty}{(b_jz_i)_\infty}, \end{equation} (cf. (\ref{eq:01Phi1})) and let $\Delta(z)$ be specified by (\ref{eq:01Delta1}). For $x=(x_1,x_2,\ldots,x_n)\in (\mathbb{C}^*)^n$, we define the sum $J(x)$ by \begin{equation} \label{eq:02J(x)1} J(x):=\int_0^{\mbox{\small $x$}\infty}\Phi(z)\Delta(z)\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}, \end{equation} which converges absolutely under the condition (\ref{eq:02condition01}) (cf.~\cite[Lemma 3.19]{Gu87}, \cite[Lemma 2.5]{Mi86}). We call $J(x)$ the {\it Jackson integral of Dixon--Anderson type}. By definition $J(x)$ is skew-symmetric. \vskip 4mm \par For an arbitrary point $(x_1,x_2,\ldots,x_{n+1})\in (\mathbb{C}^*)^{n+1}$, we set the point $(\widehat{x}_i)$ of $(\mathbb{C}^*)^{n}$ by $$(\widehat{x}_i):=(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_{n+1})\in (\mathbb{C}^*)^{n}\quad\mbox{for}\quad i=1,2,\ldots,n+1.$$ The main result of this section is the following: \begin{thm}\label{thm:02} For the function $J(x)$ on $(\mathbb{C}^*)^{n}$ as a sum over $\mathbb{Z}^n$, \begin{equation} \label{eq:02J(x)} \sum_{i=1}^{n+1}(-1)^{i-1}J(\widehat{x}_i)= C_0\,\frac{\theta(x_1x_2\cdots x_{n+1}b_1b_2\cdots b_{n+1})} {\prod_{i=1}^{n+1}\prod_{j=1}^{n+1}\theta(x_ib_j)} \prod_{1\le i<j\le n+1}x_j\theta(x_i/x_j), \end{equation} where $C_0$ is a constant independent of $(x_1,x_2,\ldots,x_{n+1})\in (\mathbb{C}^*)^{n+1}$, which is explicitly written as \begin{equation} \label{eq:02C} C_0=(1-q)^n \frac{(q)_\infty^n \prod_{i=1}^{n+1}\prod_{j=1}^{n+1}(qa_i^{-1}b_j^{-1})_\infty} {(qa_1^{-1}\cdots a_{n+1}^{-1}b_1^{-1}\cdots b_{n+1}^{-1})_\infty}. 
\end{equation} \end{thm} {\bf Proof.} From the denominator of the integrand (\ref{eq:02Phi1}), we see the poles of $J(x)$ are contained in the zero set of $\prod_{i=1}^{n+1}\prod_{j=1}^{n+1}\theta(x_ib_j)$. Since the sum $\sum_{i=1}^{n+1}(-1)^{i-1}J(\widehat{x}_i)$ is skew-symmetric with respect to the permutation of $\{x_1,x_2,\ldots, x_{n+1}\}$, it is divisible by $\prod_{1\le i<j\le n+1}x_j\theta(x_i/x_j)$. We therefore obtain \begin{equation} \label{eq:02J(x)02} \sum_{i=1}^{n+1}(-1)^{i-1}J(\widehat{x}_i)=f(x) \frac{\prod_{1\le i<j\le n+1}x_j\theta(x_i/x_j)} {\prod_{i=1}^{n+1}\prod_{j=1}^{n+1}\theta(x_ib_j)}, \end{equation} where $f(x)=f(x_1,x_2,\ldots,x_{n+1})$ is some holomorphic function on $(\mathbb{C}^*)^{n+1}$. Since the left-hand side of (\ref{eq:02J(x)02}) is invariant under the $q$-shift $x_i\to qx_i$, taking account of the quasi-periodicity (\ref{eq:00quasi-period}) of the function on the right-hand side of (\ref{eq:02J(x)02}), we have that the holomorphic function $f(x)$ must satisfy $$T_{x_i}f(x)=-\frac{f(x)}{x_1x_2\cdots x_{n+1}b_1b_2\cdots b_{n+1}} \quad\mbox{for}\quad i=1,2,\ldots,n+1.$$ This equation has a unique holomorphic solution up to a constant multiple, which is written as $$f(x)=C_0\,\theta(x_1x_2\cdots x_{n+1} b_1b_2\cdots b_{n+1}),$$ where $C_0$ is some constant independent of $x_1,x_2,\ldots,x_{n+1}$. Therefore we obtain the expression (\ref{eq:02J(x)}). The explicit evaluation of the constant $C_0$ in (\ref{eq:02C}) will be given in Subsection \ref{subsection:02-5}. $\square$ \vskip 4mm \par We call $J(\widehat{a}_i)$ ($i=1,2,\ldots,n+1$) the {\it truncated} Jackson integral, which is defined as a sum over $\mathbb{N}^n$.
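The uniqueness step in the proof above hinges on the quasi-periodicity (\ref{eq:00quasi-period}). Assuming the standard convention $\theta(z)=(z)_\infty(q/z)_\infty$, the one-variable relation $\theta(qz)=-\theta(z)/z$, which is exactly what forces $f(x)=C_0\,\theta(x_1x_2\cdots x_{n+1}b_1b_2\cdots b_{n+1})$, can be checked numerically as in the following sketch (sample points chosen arbitrarily):

```python
def qpoch(z, q):
    """Infinite q-Pochhammer symbol (z; q)_infty for |q| < 1."""
    p, k = 1.0, 0
    while q**k > 1e-18:
        p *= 1.0 - z * q**k
        k += 1
    return p

def theta(z, q):
    """theta(z) = (z; q)_infty (q/z; q)_infty (assumed convention)."""
    return qpoch(z, q) * qpoch(q / z, q)

q = 0.3
for z in (0.45, 0.8, 1.7):
    # quasi-periodicity: theta(q*z) = -theta(z)/z
    assert abs(theta(q * z, q) + theta(z, q) / z) < 1e-12
```

For $f(x)=\theta(x_1\cdots x_{n+1}b_1\cdots b_{n+1})$, applying this relation in each variable $x_i$ reproduces the functional equation displayed in the proof.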
As a special case of Theorem \ref{thm:02}, we immediately have the following: \begin{cor}[Evans \cite{Ev92}] \label{cor:02} The {\it truncated} Jackson integral $J(\widehat{a}_i)$ satisfies \begin{equation} \label{eq:02J(a)} \sum_{i=1}^{n+1}(-1)^{i-1}J(\widehat{a}_i)= (1-q)^n\frac{(q)_\infty^n (a_1\cdots a_{n+1}b_1\cdots b_{n+1})_\infty } {\prod_{i=1}^{n+1}\prod_{j=1}^{n+1}(a_ib_j)_\infty} \prod_{1\le i<j\le n+1}a_j\theta(a_i/a_j). \end{equation} \end{cor} {\bf Remark.} If we substitute $a_j$ and $b_j$ as $a_j\to x_{j-1}$ and $b_j\to q^{s_{j-1}}/x_{j-1}$, respectively, then (\ref{eq:02J(a)}) is directly rewritten in terms of the ordinary iterated Jackson integral (\ref{eq:00jac1}) as \begin{eqnarray} \label{eq:02evans2} &&\int_{z_n=x_{n-1}}^{x_{n}}\cdots\int_{z_2=x_1}^{x_2}\int_{z_1=x_0}^{x_1} \prod_{i=1}^n\prod_{j=0}^{n}\frac{(qz_i/x_j)_\infty}{(q^{s_j}z_i/x_j)_\infty}\Delta(z) \,d_qz_1d_qz_2\cdots d_qz_n\nonumber\\ &&=(1-q)^n\frac{(q)_\infty^n(q^{s_0+s_1+\cdots+s_{n}})_\infty} {(q^{s_0})_\infty(q^{s_1})_\infty\cdots (q^{s_{n}})_\infty} \prod_{0\le i<j\le n}\frac{x_j\theta(x_i/x_j)}{(x_iq^{s_j}/x_j)_\infty(x_jq^{s_i}/x_i)_\infty}, \end{eqnarray} which exactly coincides with the formula (\ref{eq:Evans1}) established by Evans. Since (\ref{eq:02evans2}) is already proved in \cite{Ev92}, logically speaking, the constant $C_0$ of (\ref{eq:02J(x)}) in Theorem \ref{thm:02} can conversely be evaluated as (\ref{eq:02C}) if we use (\ref{eq:02J(a)}), which is equivalent to (\ref{eq:02evans2}), after putting $(x_1,\ldots,x_{n+1})=(a_1,\ldots,a_{n+1})$ in the equation (\ref{eq:02J(x)}). The argument in \cite{Ev92} proving (\ref{eq:02evans2}) was carried out under the restriction that the $s_i$ be positive integers. As is already pointed out in \cite{Ev92}, once (\ref{eq:02evans2}) is proved for integer $s_i$, the $s_i$ can be taken to be complex numbers by analytic continuation.
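As a quick numerical sanity check, the $n=1$ case of (\ref{eq:02evans2}) can be verified directly, even for non-integer $s_i$. The sketch below assumes the conventions $\theta(z)=(z)_\infty(q/z)_\infty$ and $\int_{x_0}^{x_1}=\int_0^{x_1}-\int_0^{x_0}$ for the Jackson integral, with illustrative parameter values (for $n=1$ the Vandermonde factor $\Delta(z)$ is empty):

```python
def qpoch(z, q):
    """Infinite q-Pochhammer symbol (z; q)_infty for |q| < 1."""
    p, k = 1.0, 0
    while q**k > 1e-18:
        p *= 1.0 - z * q**k
        k += 1
    return p

def theta(z, q):
    """theta(z) = (z; q)_infty (q/z; q)_infty (assumed convention)."""
    return qpoch(z, q) * qpoch(q / z, q)

def jackson(f, c, q, terms=300):
    """int_0^c f(z) d_q z = (1-q) * sum_{nu >= 0} f(c q^nu) c q^nu."""
    return (1 - q) * sum(f(c * q**nu) * c * q**nu for nu in range(terms))

q, x0, x1, s0, s1 = 0.3, 0.4, 0.9, 1.3, 0.8   # non-integer exponents s0, s1

def integrand(z):
    # n = 1 integrand of the q-Dixon-Anderson evaluation
    return (qpoch(q * z / x0, q) * qpoch(q * z / x1, q)
            / (qpoch(q**s0 * z / x0, q) * qpoch(q**s1 * z / x1, q)))

lhs = jackson(integrand, x1, q) - jackson(integrand, x0, q)
rhs = ((1 - q) * qpoch(q, q) * qpoch(q ** (s0 + s1), q)
       / (qpoch(q**s0, q) * qpoch(q**s1, q))
       * x1 * theta(x0 / x1, q)
       / (qpoch(x0 * q**s1 / x1, q) * qpoch(x1 * q**s0 / x0, q)))

assert abs(lhs / rhs - 1) < 1e-10
```

For $s_0=s_1=1$ the integrand collapses to $1$ and both sides reduce to $x_1-x_0$, which is a convenient first consistency check.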
Here, however, we prefer to start without the restriction that the parameters be integers, and we choose another way to evaluate $C_0$, in keeping with our viewpoint. Our method is based on regarding $J(x)$ as a solution of $q$-difference equations fixed by its asymptotic behavior. Thus, in this paper, (\ref{eq:02J(a)}) is obtained as a corollary via Theorem \ref{thm:02}. \\[4pt] The remaining subsections are mainly devoted to the evaluation of the constant $C_0$ as (\ref{eq:02C}) in Theorem \ref{thm:02}, which we will see in Subsection \ref{subsection:02-5}. In addition, considering the dual expression $\bar J(x)$ of the integral $J(x)$, we can regard the Jackson integral of Milne--Gustafson type as a limiting case of that of Dixon--Anderson type. (Consequently, we have a proof of the Milne--Gustafson sum again using the Jackson integral of Dixon--Anderson type, see Corollary \ref{cor:04}.) In this sense, the Jackson integral of Dixon--Anderson type as a $q$-analog of the integral (\ref{eq:DA}) can also be regarded as a natural multi-dimensional extension of Ramanujan's $_1\psi_1$ sum. \\ For the above purposes we first state the $q$-difference equations for the regularization and the dual expression of $J(x)$. \subsection{$q$-difference equations} Let $\bar\Phi(z)$ be specified by \begin{equation} \label{eq:02Phi2} \bar\Phi(z):=(z_1z_2\cdots z_n)^{1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}}\prod_{i=1}^n\prod_{j=1}^{n+1}\frac{(qb_j^{-1}z_i)_\infty}{(a_jz_i)_\infty}, \end{equation} (cf.~(\ref{eq:01barPhi})) where $\alpha_i$ and $\beta_i$ are given by $a_i=q^{\alpha_i}, b_i=q^{\beta_i}$, and let $\bar\Delta(z)$ be specified by (\ref{eq:01barDelta}).
For $x=(x_1,x_2,\ldots,x_n)\in (\mathbb{C}^*)^n$, we define the sum $\bar J(x)$ by \begin{equation} \label{eq:02bar J(x)} \bar J(x):=\int_0^{\mbox{\small $x$}\infty} \bar\Phi(z)\bar\Delta(z)\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}, \end{equation} which converges under the condition (\ref{eq:02condition01}). We call $\bar J(x)$ the {\it dual Jackson integral of} $J(x)$. By definition $\bar J(x)$ is skew-symmetric. For the specific points $$x=(\,\widehat{b}_i)=(b_1,\ldots,b_{i-1},b_{i+1},\ldots,b_{n+1})\in (\mathbb{C}^*)^{n}, \quad i=1,2,\ldots, n+1,$$ we call $\bar J(\,\widehat{b}_i)$ the {\it truncated} Jackson integral, which is defined as a sum over $\mathbb{N}^n$. Let ${\cal J}(x)$ and $h(x)$ be the functions defined by \begin{equation} \label{eq:02h(z)} {\cal J}(x):=\frac{J(x)}{h(x)},\quad\mbox{where}\quad h(x)=x_1x_2\cdots x_n\frac {\prod_{1\le i<j\le n}x_j\theta(x_i/x_j)} {\prod_{i=1}^n\prod_{j=1}^{n+1}\theta(b_jx_i)}, \end{equation} which we call the {\it regularization of $J(x)$}. We also define the regularization $\bar {\cal J}(x)$ of $\bar J(x)$ according to \begin{equation} \label{eq:02barh(z)} \bar {\cal J}(x):= \frac{\bar J(x)}{\bar h(x)}, \ \mbox{where}\ \bar h(x)=(x_1x_2\cdots x_n)^{1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}} \frac{\prod_{1\le i<j\le n}x_i\theta(x_j/x_i)}{\prod_{i=1}^n\prod_{j=1}^{n+1}\theta(a_jx_i)}. \end{equation} Since the trivial poles and zeros of $J(x)$ are canceled out by multiplying together $1/h(x)$ and $J(x)$, the function ${\cal J}(x)$ is holomorphic on $(\mathbb{C}^*)^n$, and ${\cal J}(x)$ is symmetric. In the same manner, the function $\bar {\cal J}(x)$ is also holomorphic and symmetric. 
\begin{lem}[reflective equation] The connection between $J(x)$ and $\bar J(x)$ is \begin{equation} \label{eq:02ref1} J(x)=\frac{h(x)}{\bar h(x^{-1})}\bar J(x^{-1}),\quad \mbox{where}\quad \frac{h(x)}{\bar h(x^{-1})}= \prod_{i=1}^n\prod_{j=1}^{n+1} x_i^{1-\alpha_j-\beta_j} \frac{\theta(qa_j^{-1}x_i)}{\theta(b_jx_i)}, \end{equation} where $x^{-1}$ is specified as in {\rm (\ref{eq:01x-1})}. In other words, the relation between ${\cal J}(x)$ and $\bar {\cal J}(x)$ is \begin{equation} \label{eq:02ref2} {\cal J}(x)=\bar{\cal J}(x^{-1}). \end{equation} \end{lem} {\bf Proof.} The proof is the same as that of Lemma \ref{lem:01ref} in Section \ref{section:01}. $\square$\\ We now state the $q$-difference equations for $\bar J(x)$ or $\bar{\cal J}(x)$ under the setting $x=(\,\widehat{b}_i)$, $i=1,2,\ldots, n+1$. \begin{prop} \label{lem:02q-diff01} Fix $x=(\,\widehat{b}_i), i=1,2,\ldots, n+1$ and assume $1<|a_1a_2\cdots a_{n+1}b_1b_2\cdots b_{n+1}|$. Then the recurrence relations for $\bar J(x)$ are given by \begin{equation} \label{eq:02rec2} T_{a_j}\bar J(x)=(-a_j)^n\frac{\prod_{i=1}^{n+1}(1-b_i^{-1}a_j^{-1})}{1-\prod_{i=1}^{n+1}b_i^{-1}a_i^{-1}}\bar J(x) \ \ \mbox{and}\ \ T_{b_j}\bar J(x)=(-b_j^{-1})^{n}\frac{\prod_{i=1}^{n+1}(1-a_ib_j)}{1-\prod_{i=1}^{n+1}a_ib_i}\bar J(x) \end{equation} for $j=1,2,\ldots, n+1.$ The recurrence relations for $\bar {\cal J}(x)$ are given by \begin{equation} T_{a_j}\bar {\cal J}(x)=\frac{\prod_{i=1}^{n+1}(1-b_i^{-1}a_j^{-1})}{1-\prod_{i=1}^{n+1}a_i^{-1}b_i^{-1}}\bar {\cal J}(x) \quad\mbox{and}\quad T_{b_j}\bar {\cal J}(x)=\frac{\prod_{i=1}^{n+1}(1-a_i^{-1}b_j^{-1})}{1-\prod_{i=1}^{n+1}a_i^{-1}b_i^{-1}}\bar {\cal J}(x) \label{eq:02rec-bJ(b)} \end{equation} for $j=1,2,\ldots, n+1.$ \end{prop} The remainder of this subsection is devoted to the proof of the above proposition.
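Before turning to the proof, the $T_{b_j}$ relation in (\ref{eq:02rec2}) can be illustrated numerically in the smallest case $n=1$, $x=(\,\widehat{b}_2)=(b_1)$, where the truncated sum is one-dimensional. The sketch below uses illustrative parameter values satisfying the assumption $1<|a_1a_2b_1b_2|$ of the proposition; the shift $T_{b_1}$ replaces $b_1$ by $qb_1$ both in the integrand and in the cycle point:

```python
import math

def qpoch(z, q):
    """Infinite q-Pochhammer symbol (z; q)_infty for |q| < 1."""
    p, k = 1.0, 0
    while q**k > 1e-18:
        p *= 1.0 - z * q**k
        k += 1
    return p

def barJ(a1, a2, b1, b2, q, terms=400):
    """n = 1 truncated sum bar J(b1): (1-q) * sum_{nu >= 0} barPhi(b1 q^nu)."""
    # exponent 1 - alpha_1 - alpha_2 - beta_1 - beta_2 with a_i = q^alpha_i, b_i = q^beta_i
    s = 1.0 - math.log(a1 * a2 * b1 * b2) / math.log(q)
    return (1 - q) * sum(
        (b1 * q**nu) ** s
        * qpoch(q ** (1 + nu), q) * qpoch(q * b1 * q**nu / b2, q)
        / (qpoch(a1 * b1 * q**nu, q) * qpoch(a2 * b1 * q**nu, q))
        for nu in range(terms)
    )

q, a1, a2, b1, b2 = 0.3, 1.2, 1.3, 0.95, 0.9       # 1 < a1*a2*b1*b2, as assumed
lhs = barJ(a1, a2, q * b1, b2, q)                   # T_{b_1}: shift b1 -> q*b1
coeff = (-1 / b1) * (1 - a1 * b1) * (1 - a2 * b1) / (1 - a1 * a2 * b1 * b2)
rhs = coeff * barJ(a1, a2, b1, b2, q)

assert abs(lhs / rhs - 1) < 1e-8
```

The assumption $1<|a_1a_2b_1b_2|$ is what keeps the shifted sum convergent, since the shift multiplies the product $a_1a_2b_1b_2$ by $q$.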
Let $e(c;z)$ be the symmetric polynomial of degree $n$ defined by \begin{equation} \label{eq:02e(c;z)} e(c;z):=\prod_{i=1}^{n}(1-c^{-1}z_i), \end{equation} which has the property that $e(c;z)$ vanishes for $z_i=c$. \begin{lem} \label{lem:q-diff01} Suppose that $\nabla_{\!i}$ in {\rm(\ref{eq:00nabla})} is defined in terms of $\bar\Phi$ instead of $\Phi$. If we put $\varphi(z)$ as \begin{equation} \label{eq:02varphi0} \varphi(z)= z_1^{-1}\prod_{i=1}^{n+1}(b_i-z_1)\times\prod_{1\le j<k\le n}(z_k-b_j), \end{equation} then ${\cal A}\nabla_{\!1}\varphi(z)$ is expanded as \begin{equation} \label{eq:02A-nabla-phi0} {\cal A}\nabla_{\!1}\varphi(z)=\Big(c_0+c_1\frac{e(b_1;z)}{z_1z_2\cdots z_n}\Big)\Delta(z), \end{equation} where the constants $c_0$ and $c_1$ are given by \begin{equation} \label{eq:02c0c1} c_0=-b_1^{-1} a_1^{-1}\cdots a_{n+1}^{-1}\prod_{i=1}^{n+1}(1- a_ib_1), \quad c_1=(-1)^nb_1^{n-1}a_1^{-1}\cdots a_{n+1}^{-1}(1-\prod_{i=1}^{n+1}a_ib_i). \end{equation} \end{lem} {\bf Proof.} From (\ref{eq:02Phi2}), the ratio ${T_{z_1}\bar\Phi(z)}/\bar\Phi(z)$ is written as $$ \frac{T_{z_1}\bar\Phi(z)}{\bar\Phi(z)}=q\prod_{j=1}^{n+1}\frac{a_j^{-1}-z_1}{b_j-q z_1}. $$ From (\ref{eq:00nabla}) modified as specified above, for the function $\varphi(z)$ defined in (\ref{eq:02varphi0}) we have $$ \nabla_{\!1}\varphi(z)= z_1^{-1}\Big( \prod_{i=1}^{n+1}(b_i- z_1) -\prod_{i=1}^{n+1}(a_i^{-1}-z_1)\Big) \prod_{1\le j<k\le n}(z_k-b_j) =\frac{\varphi'(z)}{z_1z_2\cdots z_n} $$ where \begin{eqnarray} \label{eq:02nabla-phi0} \varphi'(z)=\Big( \prod_{i=1}^{n+1}(b_i- z_1) -\prod_{i=1}^{n+1}(a_i^{-1}-z_1)\Big) z_2z_3\cdots z_n\prod_{1\le j<k\le n}(z_k-b_j). \end{eqnarray} Since $ {\cal A}\nabla_{\!1}\varphi(z)={{\cal A}\varphi'(z)}/{z_1z_2\cdots z_n} $, for the purpose of proving (\ref{eq:02A-nabla-phi0}) it suffices to show that \begin{equation} \label{eq:02A-nabla-phi0.5} {\cal A}\varphi'(z)=\big(c_0z_1z_2\cdots z_n+c_1e(b_1;z)\big)\Delta(z).
\end{equation} Taking account of the degree of $\varphi'(z)$ as a polynomial of $z$, we can expand the skew-symmetrization ${\cal A}\varphi'(z)$ as \begin{equation} \label{eq:02A-nabla-phi1} {\cal A}\varphi'(z)=\Big(c_0z_1z_2\cdots z_n+\sum_{i=1}^nc_ie(b_i;z)\Big)\Delta(z), \end{equation} where the coefficients $c_i$ $(i=0,1,\ldots,n)$ are some constants. Here we will confirm that $c_i$ vanishes if $i\ge 2$, and $c_0$ and $c_1$ are evaluated as (\ref{eq:02c0c1}). First we take $z_1=b_1, z_2=b_2,\ldots, z_n=b_n$. Then, from (\ref{eq:02nabla-phi0}) and (\ref{eq:02A-nabla-phi1}) with the vanishing property of $e(b_i;z)$, we have two expressions for $ {\cal A}\varphi'(z)$, $$ {\cal A}\varphi'(z)=-\prod_{i=1}^{n+1}(a_i^{-1}- b_1)\times b_2b_3\cdots b_n\Delta(z)=c_0b_1b_2\cdots b_n\Delta(z). $$ Thus we obtain $c_0=-b_1^{-1}\prod_{i=1}^{n+1}(a_i^{-1}-b_1) =-b_1^{-1} a_1^{-1}\cdots a_{n+1}^{-1}\prod_{i=1}^{n+1}(1- a_ib_1)$. Next we take $z_1=b_1$, $z_2=b_2,\ldots, z_{n-1}=b_{n-1}$ and $z_n\not\in\{b_1,b_2,\ldots,b_{n+1}\}$. Then, from (\ref{eq:02A-nabla-phi1}) and the vanishing property of $e(b_i;z)$, we have \begin{equation} \label{eq:02A-nabla-phi2} {\cal A}\varphi'(z)=\big(c_0b_1b_2\cdots b_{n-1}z_n+c_n e(b_n;z)\big)\Delta(z). \end{equation} On the other hand, from (\ref{eq:02nabla-phi0}), we have \begin{equation} \label{eq:02A-nabla-phi3} {\cal A}\varphi'(z) =-\prod_{i=1}^{n+1}(a_i^{-1}- b_1)\times b_2b_3\cdots b_{n-1}z_n\Delta(z)=c_0b_1b_2\cdots b_{n-1}z_n\Delta(z). \end{equation} Comparing (\ref{eq:02A-nabla-phi2}) and (\ref{eq:02A-nabla-phi3}), we obtain $c_n=0$. In the same manner, by the symmetry of ${\cal A}\varphi'(z)$ corresponding to $\varphi'(z)$ in (\ref{eq:02nabla-phi0}), we also obtain $c_i=0$ if $i\ge 2$. Thus we obtain the expansion (\ref{eq:02A-nabla-phi0.5}). Lastly, we suppose $z_1=0$ and $z_2=b_2,\ldots, z_{n}=b_{n}$ to determine $c_1$ in (\ref{eq:02A-nabla-phi0.5}).
Then we have \begin{equation} \label{eq:02A-nabla-phi4} {\cal A}\varphi'(z)=c_1(-b_1^{-1})^{n-1}b_2b_3\cdots b_n\prod_{1\le j<k\le n}(b_k-b_j). \end{equation} On the other hand, from (\ref{eq:02nabla-phi0}), we have \begin{equation} \label{eq:02A-nabla-phi5} {\cal A}\varphi'(z)=\Big( \prod_{i=1}^{n+1}b_i -\prod_{i=1}^{n+1}a_i^{-1}\Big) b_2b_3\cdots b_n\prod_{1\le j<k\le n}(b_k-b_j). \end{equation} Comparing (\ref{eq:02A-nabla-phi4}) with (\ref{eq:02A-nabla-phi5}), we obtain $c_1$ as is expressed in (\ref{eq:02c0c1}). $\square$\\ \noindent {\bf Proof of Proposition \ref{lem:02q-diff01}.} We will prove (\ref{eq:02rec2}) for $T_{b_j}$ first. Without loss of generality, it suffices to show that \begin{equation} \label{eq:02rec0} T_{b_1}{\bar J}(x)= (-b_1^{-1})^{n}\frac{\prod_{i=1}^{n+1}(1-a_ib_1)}{1-\prod_{i=1}^{n+1}a_ib_i}{\bar J}(x). \end{equation} Since we have $ T_{b_1}{\bar\Phi}(z)/{\bar\Phi}(z)= \prod_{i=1}^{n}(1-b_1^{-1}z_i)/z_i=e(b_1;z)/z_1\cdots z_n $ by definition, $T_{b_1}{\bar J}(x)$ is expressed by $$ T_{b_1}{\bar J}(x)=\int_0^{\mbox{\small $x$}\infty}\frac{e(b_1;z)}{z_1z_2\cdots z_n}{\bar\Phi}(z){\bar\Delta}(z)\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}. $$ Under the condition $x=(\,\widehat{b}_i)$, $i=1,2,\ldots,n+1$, the Jackson integral is truncated, i.e., the support of the Jackson integral in $\mathbb{Z}^n$ is restricted to the fan region $\mathbb{N}^n$. Now we assume the condition $1<|a_1a_2\cdots a_{n+1}b_1b_2\cdots b_{n+1}|$ on parameters. Then the truncated Jackson integral $$ \int_0^{\mbox{\small $x$}\infty}\varphi(z){\bar\Phi}(z)\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}, $$ where $\varphi(z)$ is defined by (\ref{eq:02varphi0}), converges absolutely. Therefore, applying (\ref{eq:00A}) in Lemma \ref{lem:00nabla=0} to the fact (\ref{eq:02A-nabla-phi0}) in Lemma \ref{lem:q-diff01}, we obtain the relation $$c_0{\bar J}(x)+c_1T_{b_1}{\bar J}(x)=0,$$ where $c_0$ and $c_1$ are given in (\ref{eq:02c0c1}).
This relation coincides with (\ref{eq:02rec0}). Next we will show the $q$-difference equation (\ref{eq:02rec2}) for $T_{a_j}$ in the case $j=1$ in the same manner as above. Since we have $ T_{a_1}{\bar\Phi}(z)/{\bar\Phi}(z)= \prod_{i=1}^{n}(1-a_1z_i)/z_i=e(a_1^{-1};z)/z_1z_2\cdots z_n$, $T_{a_1}{\bar J}(x)$ is expressed by $$ T_{a_1}{\bar J}(x)=\int_0^{\mbox{\small $x$}\infty}\frac{e(a_1^{-1};z)}{z_1z_2\cdots z_n}{\bar \Phi}(z){\bar\Delta}(z)\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}. $$ Here if we exchange $b_i$ with $a_i^{-1}$ $(i=1,2,\ldots,n+1)$ in the above proof of (\ref{eq:02rec0}), including that of Lemma \ref{lem:q-diff01}, the argument is completely symmetric under this exchange. Therefore (\ref{eq:02rec2}) for $T_{a_1}$ is obtained by exchanging $b_i$ with $a_i^{-1}$ in the coefficient of (\ref{eq:02rec0}). From the expression (\ref{eq:02barh(z)}) of $\bar h(x)$, under the condition $x=(\,\widehat{b}_i)$, $i=1,2,\ldots,n+1$, $\bar h(x)$ satisfies \begin{equation*} T_{a_j}\bar h(x)=(-a_j)^n\bar h(x)\quad\mbox{and}\quad T_{b_j}\bar h(x)=\frac{b_j}{b_1b_2\cdots b_{n+1}}\bar h(x) \quad (j=1,2,\ldots, n+1). \end{equation*} Since $\bar {\cal J}(x)={\bar J}(x)/{\bar h}(x)$, from the above equations and (\ref{eq:02rec2}), we therefore obtain (\ref{eq:02rec-bJ(b)}). $\square$ \subsection{Evaluation of the truncated Jackson integral } The main result of this subsection is the evaluation of the regularization of the truncated Jackson integral using the $q$-difference equations (\ref{eq:02rec-bJ(b)}) in Proposition \ref{lem:02q-diff01} and its asymptotic behavior in a special direction of the parameters.
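For $n=1$ the truncated integral $\bar J(b_1)$ is a ${}_2\phi_1$ series that the classical $q$-Gauss summation evaluates in closed form, so the product evaluation obtained below can be tested numerically. The following sketch uses illustrative parameters in the convergence region $q<|a_1a_2b_1b_2|$ and assumes the convention $\theta(z)=(z)_\infty(q/z)_\infty$:

```python
import math

def qpoch(z, q):
    """Infinite q-Pochhammer symbol (z; q)_infty for |q| < 1."""
    p, k = 1.0, 0
    while q**k > 1e-18:
        p *= 1.0 - z * q**k
        k += 1
    return p

def theta(z, q):
    """theta(z) = (z; q)_infty (q/z; q)_infty (assumed convention)."""
    return qpoch(z, q) * qpoch(q / z, q)

q, a1, a2, b1, b2 = 0.3, 0.7, 0.8, 0.9, 0.85       # q < a1*a2*b1*b2
s = 1.0 - math.log(a1 * a2 * b1 * b2) / math.log(q)

# truncated sum bar J(b1) for n = 1
barJ = (1 - q) * sum(
    (b1 * q**nu) ** s
    * qpoch(q ** (1 + nu), q) * qpoch(q * b1 * q**nu / b2, q)
    / (qpoch(a1 * b1 * q**nu, q) * qpoch(a2 * b1 * q**nu, q))
    for nu in range(400)
)

# product side: the n = 1 constant times bar h(b1)
C = (1 - q) * qpoch(q, q) * math.prod(
    qpoch(q / (a * b), q) for a in (a1, a2) for b in (b1, b2)
) / qpoch(q / (a1 * a2 * b1 * b2), q)
barh = b1**s / (theta(a1 * b1, q) * theta(a2 * b1, q))

assert abs(barJ / (C * barh) - 1) < 1e-8
```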
\begin{thm} \label{thm:02main03} For $x=(\,\widehat{b}_i)$, $i=1,2,\ldots, n+1$, the regularized Jackson integral $\bar{\cal J}(x)$ is evaluated as \begin{equation} \label{eq:02cal J(b)} \bar{\cal J}(\,\widehat{b}_i)= (1-q)^n \frac{(q)_\infty^n \prod_{j=1}^{n+1}\prod_{k=1}^{n+1}(qa_j^{-1}b_k^{-1})_\infty} {(qa_1^{-1}\cdots a_{n+1}^{-1}b_1^{-1}\cdots b_{n+1}^{-1})_\infty}. \end{equation} \end{thm} {\bf Proof.} Without loss of generality, it suffices to show (\ref{eq:02cal J(b)}) in the case $x=(\,\widehat{b}_{n+1})$, i.e., the case $x=b=(b_1,\ldots,b_n)\in (\mathbb{C}^*)^{n}$. We denote by $C$ the right-hand side of (\ref{eq:02cal J(b)}). Then it is immediate to confirm that $C$ as a function of $a_j$ and $b_j$ satisfies the same $q$-difference equations as (\ref{eq:02rec-bJ(b)}) of $\bar {\cal J}(b)$. Therefore the ratio $\bar {\cal J}(b)/C$ is invariant under the $q$-shift with respect to $a_j$ and $b_j$. Next, for an integer $N$, let $T^N$ be the $q$-shift operator for a special direction defined as $$ T^N: a_i\to q^{-nN}a_i\ (i=1,2,\ldots,n+1);\ b_j \to q^{(n+1)N}b_j\ (j=1,2,\ldots,n);\ b_{n+1}\to q^{-nN}b_{n+1}. 
$$ Then, by definition $T^N \bar J(b)$ is written as \begin{eqnarray*} &&T^N \bar J(b) =(1-q)^n\sum_{(\nu_1,\ldots,\nu_n)\in \mathbb{N}^n} (b_1b_2\cdots b_nq^{\nu_1+\cdots+\nu_n+n(n+1)N}) ^{1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}+nN} \nonumber\\ &&\quad \times\prod_{i=1}^n\Big( \frac{(b_{n+1}^{-1}b_iq^{1+\nu_i+(2n+1)N})_\infty}{(a_{n+1}b_iq^{\nu_i+N})_\infty} \prod_{j=1}^{n} \frac{(b_j^{-1}b_iq^{1+\nu_i})_\infty}{(a_jb_iq^{\nu_i+N})_\infty}\Big) \prod_{1\le i<j\le n}q^{(n+1)N}(b_iq^{\nu_i}-b_jq^{\nu_j}), \end{eqnarray*} so that the leading term of the asymptotic behavior of $T^N \bar J(b)$ as $N\to +\infty$ is given by the term corresponding to $(\nu_1,\ldots,\nu_n)=(0,\ldots,0)$ in the above sum, which is \begin{eqnarray} T^N \bar J(b) &\sim& (1-q)^n(b_1b_2\cdots b_nq^{n(n+1)N}) ^{1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}+nN} \nonumber\\ &&\times\prod_{i=1}^n\prod_{j=1}^{n}(qb_j^{-1}b_i)_\infty \prod_{1\le i<j\le n}q^{(n+1)N}(b_i-b_j) \nonumber\\ &=& (b_1b_2\cdots b_nq^{n(n+1)N}) ^{1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}+nN} \nonumber\\ &&\times (1-q)^n(q)_\infty^n \prod_{1\le i<j\le n}q^{(n+1)N} b_i\theta(b_j/b_i) \quad\quad(N\to +\infty). 
\label{eq:02TNJ} \end{eqnarray} On the other hand, from (\ref{eq:02barh(z)}), $\bar h(b)C$ is written as \begin{eqnarray*} \bar h(b)C&=&(b_1b_2\cdots b_n) ^{1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}}(1-q)^n(q)_\infty^n \prod_{1\le i<j\le n}b_i\theta(b_j/b_i)\\ &&\times \frac{\prod_{i=1}^{n+1}(qa_i^{-1}b_{n+1}^{-1})_\infty} {(qa_1^{-1}\cdots a_{n+1}^{-1}b_1^{-1}\cdots b_{n+1}^{-1})_\infty \prod_{i=1}^n\prod_{j=1}^{n+1}(a_jb_i)_\infty}, \end{eqnarray*} so that we have \begin{eqnarray} T^N\Big(\bar h(b)C\Big)&=&(b_1b_2\cdots b_nq^{n(n+1)N}) ^{1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}+nN} \nonumber\\ && \times (1-q)^n(q)_\infty^n \prod_{1\le i<j\le n}q^{(n+1)N}b_i\theta(b_j/b_i) \nonumber\\ &&\times \frac{\prod_{i=1}^{n+1}(a_i^{-1}b_{n+1}^{-1}q^{1+2nN})_\infty} {(a_1^{-1}\cdots a_{n+1}^{-1}b_1^{-1}\cdots b_{n+1}^{-1}q^{1+nN})_\infty \prod_{i=1}^n\prod_{j=1}^{n+1}(a_jb_iq^N)_\infty}. \label{eq:02TNhC} \end{eqnarray} As we saw, the ratio $\bar {\cal J}(b)/C$ is invariant under the $q$-shift with respect to $a_j$ and $b_j$. Thus $\bar {\cal J}(b)/C$ is also invariant under the $q$-shift $T^N$. Therefore, comparing (\ref{eq:02TNJ}) with (\ref{eq:02TNhC}), we obtain \begin{eqnarray*} \frac{\bar {\cal J}(b)}{C}&=&T^N\frac{\bar {\cal J}(b)}{C}=\frac{T^N\bar J(b)}{T^N\bar h (b)C} = \lim_{N\to +\infty}\frac{T^N\bar J(b)}{T^N\bar h (b)C}\\[2pt] &=&\lim_{N\to +\infty} \frac {(a_1^{-1}\cdots a_{n+1}^{-1}b_1^{-1}\cdots b_{n+1}^{-1}q^{1+nN})_\infty \prod_{i=1}^n\prod_{j=1}^{n+1}(a_jb_iq^N)_\infty} {\prod_{i=1}^{n+1}(a_i^{-1}b_{n+1}^{-1}q^{1+2nN})_\infty} \\[2pt] &=&1, \end{eqnarray*} and thus $\bar {\cal J}(b)=C$. 
$\square$ \subsection{A remark on the relation between $\bar J(b)$ and $\bar I(b)$} As an application of the $q$-difference equations (\ref{eq:02rec2}) for $\bar J(x)$, we can show that the product formula (\ref{eq:01I(b)}) for the Milne--Gustafson sum $\bar I(b)$ in Corollary \ref{cor:bar I(a-N;b)} (or (\ref{eq:01I(a)}) of $I(a)$ in Corollary \ref{cor:01I(a)} by their duality (\ref{eq:01a<-->b}) of parameters) is a special case of (\ref{eq:02cal J(b)}) in Theorem \ref{thm:02main03}. This indicates a way to prove the Milne--Gustafson summation formula from the product formula of the Jackson integral of Dixon--Anderson type. \begin{cor} \label{cor:04} For $b=(b_1,b_2,\ldots,b_n)$, the truncated Jackson integral $\bar J(b)$ of Dixon--Anderson type is expressed as \begin{equation} \label{eq:02JandI} \bar J(b)=\bar I(b)\prod_{i=1}^n\frac{(qa_i^{-1}b_{n+1}^{-1})_\infty}{(b_ia_{n+1})_\infty}, \end{equation} where $\bar I(b)$ is the truncated Jackson integral of Milne--Gustafson type defined by {\rm (\ref{eq:01bar I(x)})} with the setting $\alpha=\alpha_{n+1}+\beta_{n+1}$. In particular, $\bar I(b)$ is expressed as {\rm (\ref{eq:01I(b)})} in Corollary \ref{cor:bar I(a-N;b)}. \end{cor} {\bf Remark.} From (\ref{eq:02JandI}), $\bar I(b)$ is a limiting case of $\bar J(b)$ with the $q$-shift $a_{n+1}\to q^N a_{n+1}$ and $b_{n+1}\to q^{-N} b_{n+1}$ $(N\to +\infty)$. Conversely, the product formula (\ref{eq:02cal J(b)}) of $\bar J(b)$ in Theorem \ref{thm:02main03} is reconstructed from the product formula (\ref{eq:01I(b)}) of $\bar I(b)$ via the connection (\ref{eq:02JandI}). \\ \noindent {\bf Proof.} From (\ref{eq:02rec2}) the recurrence relation of $\bar J(b)$ with respect to the $q$-shift $a_{n+1}\to qa_{n+1}$ and $b_{n+1}\to q^{-1} b_{n+1}$ is written as \begin{equation*} \bar J(b)=T_{b_{n+1}}^{-1}T_{a_{n+1}}\bar J(b)\times \prod_{i=1}^{n}\frac{1-qa_i^{-1}b_{n+1}^{-1}}{1-b_ia_{n+1}}. 
\end{equation*} By repeated use of this equation we have \begin{equation} \label{eq:02JandI1} \bar J(b)=T_{b_{n+1}}^{-N}T_{a_{n+1}}^N\bar J(b)\times \prod_{i=1}^n\frac{(qa_i^{-1}b_{n+1}^{-1})_N}{(b_ia_{n+1})_N} =\lim_{N\to \infty}T_{b_{n+1}}^{-N}T_{a_{n+1}}^N\bar J(b)\times \prod_{i=1}^n\frac{(qa_i^{-1}b_{n+1}^{-1})_\infty}{(b_ia_{n+1})_\infty}. \end{equation} Moreover, by definition $\displaystyle \lim_{N\to \infty}T_{a_{n+1}}^NT_{b_{n+1}}^{-N}\bar J(b)$ is written as \begin{eqnarray} \lim_{N\to \infty}T_{b_{n+1}}^{-N}T_{a_{n+1}}^N\bar J(b) &=&\lim_{N\to \infty} (1-q)^n\!\!\!\!\! \sum_{(\nu_1,\ldots,\nu_n)\in \mathbb{N}^n} (b_1b_2\cdots b_n q^{\nu_1+\cdots+\nu_n})^{1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}} \nonumber\\ && \times\prod_{i=1}^n\Big( \frac{(b_{n+1}^{-1}b_iq^{1+\nu_i+N})_\infty}{(a_{n+1}b_iq^{\nu_i+N})_\infty} \prod_{j=1}^{n}\frac{(b_j^{-1}b_iq^{1+\nu_i})_\infty}{(a_jb_iq^{\nu_i})_\infty}\Big) \prod_{1\le i<j\le n}(b_iq^{\nu_i}-b_jq^{\nu_j}) \nonumber\\ &=& (1-q)^n\!\!\!\!\! \sum_{(\nu_1,\ldots,\nu_n)\in \mathbb{N}^n} (b_1b_2\cdots b_n q^{\nu_1+\cdots+\nu_n})^{1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}} \nonumber\\ && \times\prod_{i=1}^n \prod_{j=1}^{n}\frac{(b_j^{-1}b_iq^{1+\nu_i})_\infty}{(a_jb_iq^{\nu_i})_\infty} \prod_{1\le i<j\le n}(b_iq^{\nu_i}-b_jq^{\nu_j}), \label{eq:02JandI2} \end{eqnarray} which exactly coincides with the definition of $\bar I(b)$ under the setting $\alpha=\alpha_{n+1}+\beta_{n+1}$. From (\ref{eq:02JandI1}) and (\ref{eq:02JandI2}), we therefore obtain (\ref{eq:02JandI}). 
Next, using (\ref{eq:02JandI}), (\ref{eq:02h(z)}) and (\ref{eq:02cal J(b)}) of Theorem \ref{thm:02main03}, the sum $\bar I(b)$ is conversely calculated as \begin{eqnarray*} \bar I(b)&=&\prod_{i=1}^n\frac{(b_ia_{n+1})_\infty}{(qa_i^{-1}b_{n+1}^{-1})_\infty}\bar J(b) =\prod_{i=1}^n\frac{(b_ia_{n+1})_\infty}{(qa_i^{-1}b_{n+1}^{-1})_\infty}\bar{\cal J}(b)\bar h(b)\\ &=& \prod_{i=1}^n\frac{(b_ia_{n+1})_\infty}{(qa_i^{-1}b_{n+1}^{-1})_\infty} \times(1-q)^n \frac{(q)_\infty^n \prod_{i=1}^{n+1}\prod_{j=1}^{n+1}(qa_i^{-1}b_j^{-1})_\infty} {(qa_1^{-1}\cdots a_{n+1}^{-1}b_1^{-1}\cdots b_{n+1}^{-1})_\infty}\\ &&\quad\times(b_1b_2\cdots b_n)^{1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}} \frac{\prod_{1\le i<j\le n}b_i\theta(b_j/b_i)} {\prod_{i=1}^{n}\prod_{j=1}^{n+1}\theta(a_jb_i)} \\ &=&(1-q)^n \frac{(b_1b_2\cdots b_n)^{1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}}} {(qa_1^{-1}\cdots a_{n+1}^{-1}b_1^{-1}\cdots b_{n+1}^{-1})_\infty} \frac{(q)_\infty^n (qa_{n+1}^{-1}b_{n+1}^{-1})_\infty} {\prod_{i=1}^{n}\prod_{j=1}^{n}(a_ib_j)_\infty} \prod_{1\le i<j\le n}b_i\theta(b_j/b_i), \end{eqnarray*} which exactly coincides with (\ref{eq:01I(b)}) of Corollary \ref{cor:bar I(a-N;b)} under the setting $\alpha=\alpha_{n+1}+\beta_{n+1}$. $\square$ \subsection{Evaluation of the constant $C_0$} \label{subsection:02-5} In this subsection we will evaluate the constant $C_0$ in (\ref{eq:02J(x)}) of Theorem \ref{thm:02}. From the expression (\ref{eq:02J(x)}), the constant $C_0$ is written as the sum of functions, \begin{equation} \label{eq:02C-sum1} C_0=\sum_{k=1}^{n+1}(-1)^{k-1}g_k(x_1,x_2,\ldots,x_{n+1}), \end{equation} where $$\displaystyle g_k(x_1,x_2,\ldots,x_{n+1}) :=J(\widehat{x}_k) \frac {\prod_{i=1}^{n+1}\prod_{j=1}^{n+1}\theta(x_ib_j)} {\theta(x_1x_2\cdots x_{n+1}b_1b_2\cdots b_{n+1})\prod_{1\le i<j\le n+1}x_j\theta(x_i/x_j)} . 
$$ Using (\ref{eq:02h(z)}) and (\ref{eq:02ref2}), we have \begin{eqnarray*} g_k(x_1,x_2,\ldots,x_{n+1})&=& \frac {\bar {\cal J}(\widehat{x}_k^{\,-1})h(\widehat{x}_k)\prod_{i=1}^{n+1}\prod_{j=1}^{n+1}\theta(x_ib_j)} {\theta(x_1x_2\cdots x_{n+1}b_1b_2\cdots b_{n+1})\prod_{1\le i<j\le n+1}x_j\theta(x_i/x_j)}\\ &=& \frac {\bar {\cal J}(\widehat{x}_k^{\,-1})x_1\cdots x_{k-1}x_{k+1}\cdots x_{n+1}\prod_{j=1}^{n+1}\theta(x_kb_j)} {\theta(x_1x_2\cdots x_{n+1}b_1b_2\cdots b_{n+1}) \prod_{i=1}^{k-1}x_k\theta(x_i/x_k)\prod_{j=k+1}^{n+1}x_j\theta(x_k/x_j)}\\ &=& (-1)^{k-1}\bar {\cal J}(\widehat{x}_k^{\,-1}) \frac {\theta(x_kb_k)} {\theta(x_1x_2\cdots x_{n+1}b_1b_2\cdots b_{n+1}) } \prod_{1\le i\le n+1\atop i\ne k}\frac{\theta(x_kb_i)}{\theta(x_k/x_i)}. \end{eqnarray*} Combining this with (\ref{eq:02C-sum1}) shows \begin{equation} \label{eq:02C-sum2} C_0=\sum_{k=1}^{n+1} \bar {\cal J}(\widehat{x}_k^{\,-1}) \frac {\theta(x_kb_k)} {\theta(x_1x_2\cdots x_{n+1}b_1b_2\cdots b_{n+1}) } \prod_{1\le i\le n+1\atop i\ne k}\frac{\theta(x_kb_i)}{\theta(x_k/x_i)}. \end{equation} Since $C_0$ is a constant independent of $(x_1,x_2,\ldots,x_{n+1})$, it suffices to calculate the right-hand side of (\ref{eq:02C-sum2}) in the specific case $$ x_1=b_1^{-1},x_2=b_2^{-1},\cdots,x_n=b_n^{-1} \quad\mbox{and}\quad x_{n+1}\not\in \{b_1^{-1},b_2^{-1},\ldots,b_{n+1}^{-1}\}. $$ We now impose this condition. Noting that $\theta(x_kb_k)=0$ if $k\ne n+1$, we see from (\ref{eq:02C-sum2}) that $$ C_0=\bar{\cal J}(\,\widehat{b}_{n+1}) \frac {\theta(x_{n+1}b_{n+1})} {\theta(b_1^{-1}b_2^{-1}\cdots b_n^{-1}x_{n+1}b_1b_2\cdots b_{n+1}) } \prod_{i=1}^n\frac{\theta(x_{n+1}b_i)}{\theta(x_{n+1}/b_i^{-1})} =\bar{\cal J}(\,\widehat{b}_{n+1}). $$ Since $\bar {\cal J}(\,\widehat{b}_{n+1})$ has already been evaluated as (\ref{eq:02cal J(b)}) in Theorem \ref{thm:02main03}, we obtain $C_0$ as expressed in (\ref{eq:02C}) of Theorem \ref{thm:02}. 
$\square$ \section{Jackson integral of Gustafson's $A_n$-type} \label{section:03} In this section we give a proof of the product formula for Gustafson's $A_n$ sum. As is pointed out in \cite{Gu87}, the Milne--Gustafson sum $I(x)$ can also be deduced from Gustafson's $A_n$ sum. \subsection{Definitions and results} Let $a_i, b_i$ $(1\le i\le n+1)$ and $d$ be elements of $\mathbb{C}^*$. In this section we define $\Phi(z)$ and $\Delta(z)$ by \begin{equation} \label{eq:03Phi0} \Phi(z):=\prod_{i=1}^{n+1} \prod_{j=1}^{n+1}\frac{(qa_j^{-1}z_i)_\infty}{(b_jz_i)_\infty},\quad \Delta(z):=\prod_{1\le i<j\le n+1}(z_j-z_i), \end{equation} under the condition \begin{equation} \label{eq:03balance} z_1z_2\cdots z_{n+1}=d. \end{equation} (The functions $\Phi(z)$ and $\Delta(z)$ are regarded as functions on $(\mathbb{C}^*)^n$ of the $n$ variables $z_1,z_2,\ldots,z_n$ by substituting $z_{n+1}=dz_1^{-1}\cdots z_n^{-1}$ into the definition (\ref{eq:03Phi0}).) For $x=(x_1,x_2,\ldots,x_n)\in (\mathbb{C}^*)^n$, we define the sum $K(x)$ by \begin{equation} \label{eq:03K(x)1} K(x):=\int_0^{\mbox{\small $x$}\infty}\Phi(z)\Delta(z)\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}, \end{equation} which converges absolutely under the condition \begin{equation*} \label{eq:03condition01} q<|a_1a_2\cdots a_{n+1}b_1b_2\cdots b_{n+1}|. \end{equation*} (See \cite[Lemma 3.19]{Gu87}.) We call $K(x)$ the {\it Jackson integral of Gustafson's $A_n$-type}. If we set $x=a=(a_1,a_2,\ldots,a_n)$ in (\ref{eq:03K(x)1}), then we call $K(a)$ the {\it truncated} Jackson integral. For an arbitrary $x=(x_1,x_2,\ldots,x_n)\in (\mathbb{C}^*)^n$ we define the function $h(x)$ by \begin{equation} \label{eq:03h(x)1} h(x):=(-1)^n\frac{\prod_{1\le i<j\le n}x_j\theta(x_i/x_j)} {\prod_{i=1}^{n}\prod_{j=1}^{n+1}\theta(b_jx_i)} \frac{\prod_{i=1}^{n}x_i\theta(dx_1^{-1}\cdots x_n^{-1}/x_i)} {\prod_{j=1}^{n+1}\theta(b_jdx_1^{-1}\cdots x_n^{-1})}. 
\end{equation} We remark that, if we denote by $x_{n+1}$ the combination of variables $dx_1^{-1}x_2^{-1}\cdots x_{n}^{-1}$, then $h(x)$ is expressed as \begin{equation} \label{eq:03h(x)2} h(x)=\frac{\prod_{1\le i<j\le n+1}x_j\theta(x_i/x_j)} {\prod_{i=1}^{n+1}\prod_{j=1}^{n+1}\theta(b_jx_i)}. \end{equation} We define the {\it regularization} ${\cal K}(x)$ of the sum $K(x)$ by ${\cal K}(x):=K(x)/h(x)$. Since $h(x)$ is invariant under the $q$-shift $x_i\to qx_i$ $(1\le i\le n)$, the regularization ${\cal K}(x)$ is also expressed as \begin{equation*} {\cal K}(x)=\frac{K(x)}{h(x)} =\int_0^{\mbox{\small $x$}\infty}\frac{\Phi(z)\Delta(z)}{h(z)} \,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}, \end{equation*} so that, for $x=(x_1,x_2,\ldots,x_n)\in (\mathbb{C}^*)^n$ we have a Macdonald-type sum expression (recall (\ref{eq:01cal I(x)4})) of the regularization ${\cal K}(x)$ as \begin{equation} \label{eq:03cal K(x)1} {\cal K}(x)=\int_0^{\mbox{\small $x$}\infty}\frac{\prod_{i=1}^{n+1}\prod_{j=1}^{n+1} (qa_j^{-1}z_i)_\infty(qb_j^{-1}z_i^{-1})_\infty} {\prod_{1\le i<j\le n+1}(qz_i/z_j)_\infty(qz_j/z_i)_\infty}\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}, \end{equation} where the variable $z_{n+1}$ in the integrand satisfies the condition (\ref{eq:03balance}) by definition. \begin{lem} \label{lem:03hol} The $q$-periodic function ${\cal K}(x)$ is holomorphic on $(\mathbb{C}^*)^n$, and is consequently a constant independent of $x\in (\mathbb{C}^*)^n$. \end{lem} {\bf Remark.} As we will see in Lemma \ref{lem:03q-diff1} below, ${\cal K}(x)$ satisfies the $q$-difference system of rank 1, which is independent of $x\in (\mathbb{C}^*)^n$. We regard the connection coefficient between a general solution ${\cal K}(x)$ of the system and a special solution ${\cal K}(a)$ as 1, i.e., \begin{equation} \label{eq:03cK(x)=cK(a)} {\cal K}(x)={\cal K}(a), \end{equation} for an arbitrary $x\in (\mathbb{C}^*)^n$, which is just the statement of Lemma \ref{lem:03hol}. 
In the same way, for $b^{-1}$ as specified in (\ref{eq:01x-1}) we can also consider the equation ${\cal K}(x)={\cal K}(b^{-1})$ as a connection formula.\\[2pt] \noindent {\bf Proof.} We temporarily write $x_{n+1}=dx_1^{-1}x_2^{-1}\cdots x_n^{-1}$. From the expression (\ref{eq:03cal K(x)1}), the function ${\cal K}(x)$ has poles only in the set $\{x=(x_1,x_2,\ldots,x_n)\in (\mathbb{C}^*)^n\,;\,\prod_{1\le i<j\le n+1}\theta(x_i/x_j)=0\}$. Then ${\cal K}(x)$ is written as \begin{equation} \label{eq:03cal K(x)2} {\cal K}(x)=\frac{f(x)}{\prod_{1\le i<j\le n+1}x_j\theta(x_i/x_j)} \end{equation} where $f(x)$ is some holomorphic function on $(\mathbb{C}^*)^n$. Since ${\cal K}(x)$ and $\prod_{1\le i<j\le n+1}x_j\theta(x_i/x_j)$ are symmetric and skew-symmetric under permutations of $x_1,x_2,\ldots,x_{n+1}$, respectively, the holomorphic function $f(x)$ is skew-symmetric. Thus $f(x)$ vanishes if $x_i=x_j$, and hence, by its quasi-periodicity under the $q$-shifts $x_i\to qx_i$, it vanishes whenever $x_i=q^kx_j$ $(k\in\mathbb{Z})$. This means that $f(x)$ is divisible by $x_j\theta(x_i/x_j)$. Since the holomorphic function $f(x)$ is divisible by $\prod_{1\le i<j\le n+1}x_j\theta(x_i/x_j)$, from the expression (\ref{eq:03cal K(x)2}), ${\cal K}(x)$ is a holomorphic function on $(\mathbb{C}^*)^n$. By definition of Jackson integrals, the holomorphic function ${\cal K}(x)$ is invariant under the $q$-shift $x_i\to qx_i$ $(1\le i\le n)$. Therefore ${\cal K}(x)$ is a constant independent of $x$. $\square$\\ The aim of this section is the evaluation of ${\cal K}(x)$ as a constant. 
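As an illustration of Lemma \ref{lem:03hol}, the constancy of ${\cal K}(x)=K(x)/h(x)$ can be observed numerically in the simplest case $n=1$. The sketch below is ours, not part of the original text: it uses the third-party \texttt{mpmath} library, truncates the infinite products and the bilateral Jackson sum, assumes the convention $\theta(w)=(w)_\infty(q/w)_\infty$ for the theta function, and picks ad hoc parameter values satisfying the convergence condition $q<|a_1a_2b_1b_2|$:

```python
# Numerical sketch (n = 1): K(x)/h(x) is independent of x.
# Parameter values, truncation orders and helper names are ad hoc choices.
from mpmath import mp, mpf

mp.dps = 40
q = mpf("0.2")
a = [mpf("1.4"), mpf("1.1")]   # a_1, a_2
b = [mpf("0.9"), mpf("1.2")]   # b_1, b_2
d = mpf("0.7")                 # note q < a_1 a_2 b_1 b_2, so the sum converges

def qpoch_inf(x, terms=250):
    """(x; q)_infinity, truncated numerically."""
    p = mpf(1)
    for k in range(terms):
        p *= 1 - x * q**k
    return p

def theta(x):
    """theta(x) = (x)_inf (q/x)_inf, the convention assumed here."""
    return qpoch_inf(x) * qpoch_inf(q / x)

def K(x, M=80):
    """Jackson integral K(x) for n = 1: (1-q) * bilateral sum over nu in Z,
    with z1 = x q^nu and z2 = d / z1 (so that z1 z2 = d)."""
    s = mpf(0)
    for nu in range(-M, M + 1):
        z1 = x * q**nu
        z2 = d / z1
        phi = mpf(1)
        for z in (z1, z2):
            for j in range(2):
                phi *= qpoch_inf(q * z / a[j]) / qpoch_inf(b[j] * z)
        s += phi * (z2 - z1)           # Delta(z) = z2 - z1
    return (1 - q) * s

def h(x):
    """h(x) for n = 1, transcribed from the definition of the regularization."""
    return -x * theta(d / x**2) / (theta(b[0] * x) * theta(b[1] * x)
                                   * theta(b[0] * d / x) * theta(b[1] * d / x))

r1 = K(mpf("1.0")) / h(mpf("1.0"))
r2 = K(mpf("0.85")) / h(mpf("0.85"))
assert abs(r1 / r2 - 1) < mpf(10) ** (-12)
```

Within the working precision the two regularized values agree, as the lemma asserts.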
The main theorem is stated as follows: \begin{thm}[Gustafson \cite{Gu87}] \label{thm:03main} For an arbitrary $x=(x_1,x_2,\cdots,x_n)\in (\mathbb{C}^*)^n$, the regularization ${\cal K}(x)$ is a constant independent of $x$, which is expressed as \begin{equation} \label{eq:03cal K(x)3} {\cal K}(x) =(1-q)^n \frac{(q)_\infty^n(qa_1^{-1}\cdots a_{n+1}^{-1}d)_\infty (qb_1^{-1}\cdots b_{n+1}^{-1}d^{-1})_\infty} {(qa_1^{-1}\cdots a_{n+1}^{-1}b_1^{-1}\cdots b_{n+1}^{-1})_\infty} \prod_{i=1}^{n+1}\prod_{j=1}^{n+1}(qa_i^{-1}b_j^{-1})_\infty. \end{equation} In other words, from the expression $K(x)={\cal K}(x)h(x)$, the sum $K(x)$ is expressed as \begin{eqnarray} \label{eq:03K(x)2} K(x) &=&(1-q)^n \frac{(q)_\infty^n(qa_1^{-1}\cdots a_{n+1}^{-1}d)_\infty (qb_1^{-1}\cdots b_{n+1}^{-1}d^{-1})_\infty} {(qa_1^{-1}\cdots a_{n+1}^{-1}b_1^{-1}\cdots b_{n+1}^{-1})_\infty} \prod_{i=1}^{n+1}\prod_{j=1}^{n+1}(qa_i^{-1}b_j^{-1})_\infty \nonumber\\ &&\times (-1)^n\frac{\prod_{1\le i<j\le n}x_j\theta(x_i/x_j)} {\prod_{i=1}^{n}\prod_{j=1}^{n+1}\theta(b_jx_i)} \frac{\prod_{i=1}^{n}x_i\theta(dx_1^{-1}\cdots x_n^{-1}/x_i)} {\prod_{j=1}^{n+1}\theta(b_jdx_1^{-1}\cdots x_n^{-1})}. \end{eqnarray} \end{thm} {\bf Proof.} Using (\ref{eq:03cK(x)=cK(a)}), the evaluation of ${\cal K}(x)$ is deduced from the special case ${\cal K}(a)$ of $a=(a_1,a_2,\cdots,a_n)\in (\mathbb{C}^*)^n$, which will be shown in Lemma \ref{lem:03main} below. We will also mention another way to evaluate ${\cal K}(x)$ in Subsection \ref{subsection:03-3}. 
$\square$\\[5pt] {\bf Remark 1.} If we write $x_{n+1}=dx_1^{-1}x_2^{-1}\cdots x_n^{-1}$, using (\ref{eq:03h(x)2}), the evaluation (\ref{eq:03K(x)2}) of $K(x)$ takes the more symmetric form \begin{eqnarray} \label{eq:03K(c)} K(x)&=&(1-q)^n \frac{(q)_\infty^n(qa_1^{-1}\cdots a_{n+1}^{-1}x_1\cdots x_{n+1})_\infty (qb_1^{-1}\cdots b_{n+1}^{-1}x_1^{-1}\cdots x_{n+1}^{-1})_\infty} {(qa_1^{-1}\cdots a_{n+1}^{-1}b_1^{-1}\cdots b_{n+1}^{-1})_\infty} \nonumber\\ &&\times \prod_{i=1}^{n+1}\prod_{j=1}^{n+1}(qa_i^{-1}b_j^{-1})_\infty \frac{\prod_{1\le i<j\le n+1}x_j\theta(x_i/x_j)} {\prod_{i=1}^{n+1}\prod_{j=1}^{n+1}\theta(b_ix_j)}. \end{eqnarray} It is easy to confirm that (\ref{eq:03K(c)}) exactly coincides with Gustafson's original evaluation \cite{Gu87}. For our context it is very important to clearly distinguish the part independent of $x$ from that dependent on $x$. This is why we use the expressions (\ref{eq:03cal K(x)3}) or (\ref{eq:03K(x)2}) rather than (\ref{eq:03K(c)}). \\[4pt] {\bf Remark 2.} As is pointed out in Gustafson's original paper \cite[p.1593]{Gu87}, the Milne--Gustafson sum $I(x)$ discussed in Section \ref{section:01} is immediately deduced from Gustafson's $A_n$ sum $K(x)$. In this sense, Gustafson's $A_n$ sum is regarded as an extension of the Milne--Gustafson sum.\\[4pt] Although the following lemma is a special case of Theorem \ref{thm:03main}, by (\ref{eq:03cK(x)=cK(a)}) it logically suffices to evaluate ${\cal K}(a)$ in order to prove Theorem \ref{thm:03main}. \begin{lem} \label{lem:03main} For the special point $x=a=(a_1,a_2,\cdots,a_n)$, the truncation ${\cal K}(a)$ is expressed as \begin{equation} \label{eq:03cal K(a)} {\cal K}(a) =(1-q)^n \frac{(q)_\infty^n(qa_1^{-1}\cdots a_{n+1}^{-1}d)_\infty (qb_1^{-1}\cdots b_{n+1}^{-1}d^{-1})_\infty} {(qa_1^{-1}\cdots a_{n+1}^{-1}b_1^{-1}\cdots b_{n+1}^{-1})_\infty} \prod_{i=1}^{n+1}\prod_{j=1}^{n+1}(qa_i^{-1}b_j^{-1})_\infty. 
\end{equation} \end{lem} {\bf Proof.} Subsection \ref{subsection:03-4} will be devoted to the proof of this lemma, by computing the asymptotic behavior of the truncated Jackson integral $K(a)$. $\square$ \\[4pt] \noindent {\bf Remark.} Notice that the parameter $a_{n+1}$ fixed in the definition of $\Phi(z)$ does not necessarily satisfy the relation $d=a_1a_2\cdots a_{n+1}$. Thus in general, $h(a)$ does not coincide with \begin{equation} \label{eq:03h(a)1} \frac{\prod_{1\le i<j\le n+1}a_j\theta(a_i/a_j)} {\prod_{i=1}^{n+1}\prod_{j=1}^{n+1}\theta(a_ib_j)}, \end{equation} and the correct expression of $h(a)$ follows by definition as \begin{equation} \label{eq:03h(a)2} h(a)=(-1)^n \frac{\prod_{1\le i<j\le n}a_j\theta(a_i/a_j)} {\prod_{i=1}^{n}\prod_{j=1}^{n+1}\theta(b_ja_i)} \frac{\prod_{i=1}^{n}a_i\theta(da_1^{-1}\cdots a_n^{-1}/a_i)} {\prod_{j=1}^{n+1}\theta(b_jda_1^{-1}\cdots a_n^{-1})}, \end{equation} which will be used later. However, if we set $d=a_1a_2\cdots a_{n+1}$, then $h(a)$ is written as (\ref{eq:03h(a)1}), so that $K(a)={\cal K}(a)h(a)$ is written more simply as $$ K(a) =(1-q)^n \frac{(q)_\infty^{n+1}\prod_{1\le i<j\le n+1}a_j\theta(a_i/a_j)} {\prod_{i=1}^{n+1}\prod_{j=1}^{n+1}(a_ib_j)_\infty}, $$ which is referred to as {\it Milne's fundamental theorem of $U(n)$ series} \cite{Mi85} in \cite[(2.2) p.421]{Ro04}. \subsection{$q$-difference equations} In this subsection we write down explicitly the $q$-difference equations satisfied by $K(x)$ with respect to the parameters. \begin{lem} \label{lem:03q-diff1} The recurrence relations for $K(x)$ are given by \begin{eqnarray} T_{a_j}K(x)&=&K(x) \frac{1-d\prod_{i=1}^{n+1}a_i^{-1}} {1-\prod_{i=1}^{n+1}a_i^{-1}b_i^{-1}} \prod_{i=1}^{n+1}(1-b_i^{-1}a_j^{-1}),\label{eq:03rec01}\\ T_{b_j}K(x)&=&K(x)\frac{1-d\prod_{i=1}^{n+1}b_i}{1-\prod_{i=1}^{n+1}a_ib_i}\prod_{i=1}^{n+1}(1-a_ib_j), \label{eq:03rec02} \end{eqnarray} for $j=1,2,\ldots, n+1$. 
The recurrence relations for ${\cal K}(x)$ are given by \begin{eqnarray} T_{a_j}{\cal K}(x)&=&{\cal K}(x) \frac{1-d\prod_{i=1}^{n+1}a_i^{-1}} {1-\prod_{i=1}^{n+1}a_i^{-1}b_i^{-1}} \prod_{i=1}^{n+1}(1-b_i^{-1}a_j^{-1}), \label{eq:03rec03}\\ T_{b_j}{\cal K}(x)&=&{\cal K}(x) \frac{1-d^{-1}\prod_{i=1}^{n+1}b_i^{-1}} {1-\prod_{i=1}^{n+1}a_i^{-1}b_i^{-1}} \prod_{i=1}^{n+1}(1-a_i^{-1}b_j^{-1}), \label{eq:03rec04} \end{eqnarray} for $j=1,2,\ldots, n+1$. \end{lem} The rest of this subsection is devoted to proving the above $q$-difference equations. Before proving Lemma \ref{lem:03q-diff1}, we will show two lemmas. In this section, $\Phi(z)$ in (\ref{eq:03Phi0}) is considered under the restriction (\ref{eq:03balance}). Because of this restriction, the basic lemma (Lemma \ref{lem:00nabla=0}) needs to be modified slightly, to Lemma \ref{lem:03nabla=0} stated below, in order to suit the setting of this section. For a function $\varphi(z)$ of $z=(z_1,z_2,\ldots,z_n) \in(\mathbb{C}^*)^n$, let $\nabla_{\!i,j}\varphi(z)$ ($1\le i,j\le n+1$) be the functions defined by \begin{equation} \label{eq:03nabla} (\nabla_{\!i,j}\varphi)(z):=\varphi(z)-\frac{T_{z_j}^{-1}T_{z_i}\Phi(z)}{\Phi(z)}T_{z_j}^{-1}T_{z_i}\varphi(z), \end{equation} where $T_{z_j}^{-1}T_{z_i}$ indicates the shift operator with respect to $z_i\to qz_i$ and $z_j\to q^{-1}z_j$ together. (Since we regard $\Phi(z)$ given by (\ref{eq:03Phi0}) as a function of $z=(z_1,z_2,\ldots,z_n) \in(\mathbb{C}^*)^n$ under the condition $z_{n+1}=dz_1^{-1}z_2^{-1}\cdots z_n^{-1}$, if we consider the individual shift $z_i\to qz_i$ ($1\le i\le n$), then the shift $z_{n+1}\to q^{-1}z_{n+1}$ occurs automatically. We formally write the shift of this situation by the symbol $T_{z_{n+1}}^{-1}T_{z_i}$ instead of $T_{z_i}$ for our convenience. In the same way, we use the symbol $T_{z_i}^{-1}T_{z_{n+1}}$ instead of $T_{z_i}^{-1}$.) 
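Before turning to the proof, the recurrence (\ref{eq:03rec02}) can be checked numerically in the case $n=1$. The sketch below is ours (\texttt{mpmath}-based, with ad hoc parameters and truncation orders); it compares $T_{b_1}K(x)$, computed by shifting $b_1\to qb_1$ in the sum, against the predicted multiple of $K(x)$:

```python
# Numerical check (n = 1) of the b-recurrence: T_{b_1} K(x) = K(x) *
#   (1 - d b_1 b_2)(1 - a_1 b_1)(1 - a_2 b_1) / (1 - a_1 a_2 b_1 b_2).
# Parameters and truncation orders are ad hoc choices.
from mpmath import mp, mpf

mp.dps = 40
q = mpf("0.2")
a = [mpf("1.4"), mpf("1.1")]
b = [mpf("0.9"), mpf("1.2")]
d = mpf("0.7")

def qpoch_inf(x, terms=320):
    """(x; q)_infinity, truncated numerically."""
    p = mpf(1)
    for k in range(terms):
        p *= 1 - x * q**k
    return p

def K(x, bb, M=150):
    """n = 1 Jackson integral with lower parameters bb = (b_1, b_2)."""
    s = mpf(0)
    for nu in range(-M, M + 1):
        z1 = x * q**nu
        z2 = d / z1
        phi = mpf(1)
        for z in (z1, z2):
            for j in range(2):
                phi *= qpoch_inf(q * z / a[j]) / qpoch_inf(bb[j] * z)
        s += phi * (z2 - z1)
    return (1 - q) * s

x = mpf("1.0")
lhs = K(x, [q * b[0], b[1]])               # T_{b_1} K(x): shift b_1 -> q b_1
coef = ((1 - d * b[0] * b[1]) / (1 - a[0] * a[1] * b[0] * b[1])
        * (1 - a[0] * b[0]) * (1 - a[1] * b[0]))
assert abs(lhs / (K(x, b) * coef) - 1) < mpf(10) ** (-10)
```

The analogous check of the $a$-recurrence only requires moving the shift to the upper parameters.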
In this section we use the symbol ${\cal A}$ as the skew-symmetrization with respect to $ S_{n+1}$ on $n+1$ variables $z_1,z_2,\ldots,z_{n+1}$, i.e., ${\cal A}f(z)=\sum_{\sigma\in S_{n+1}}(\mbox{{\rm sgn}}\,\sigma)\,\sigma f(z)$. Then we have \begin{lem} \label{lem:03nabla=0} For a function $\varphi(z)$ of $z=(z_1,z_2,\ldots,z_n) \in(\mathbb{C}^*)^n$, \begin{equation} \label{eq:03nabla=0} \int_0^{\mbox{\small $x$}\infty}\Phi(z)\nabla_{\!i,j}\varphi(z)\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}=0 \quad (1\le i,j\le n+1), \end{equation} if the integral converges. Moreover, for a function $\varphi(z)=\varphi(z_1,z_2,\ldots,z_{n+1})$ with $z_{n+1}=dz_1^{-1}z_2^{-1}\cdots z_n^{-1}$, \begin{equation} \label{eq:03A1} \int_0^{\mbox{\small $x$}\infty}\Phi(z){\cal A}\nabla_{\!i,j}\varphi(z)\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}=0. \end{equation} \end{lem} {\bf Proof.} If $i=n+1$ or $j=n+1$, then $T_{z_i}^{-1}T_{z_{n+1}}=T_{z_i}^{-1}$ or $T_{z_{n+1}}^{-1}T_{z_i}=T_{z_i}$, respectively, so that we can confirm (\ref{eq:03nabla=0}) in the same way as (\ref{eq:00nabla=0}) in Lemma \ref{lem:00nabla=0}. On the other hand, if $i\ne n+1$ and $j\ne n+1$, then from the definition (\ref{eq:03nabla}) of $\nabla_{\!i,j}$, (\ref{eq:03nabla=0}) is equivalent to $$ \int_0^{\mbox{\small $x$}\infty}\varphi(z)\Phi(z)\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n} =\int_0^{\mbox{\small $x$}\infty}T_{z_j}^{-1}T_{z_i}\varphi(z)\, T_{z_j}^{-1}T_{z_i}\Phi(z) \frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}, $$ which follows immediately from the fact that the Jackson integral is invariant under the $q$-shift $z_i\to qz_i$ ($1\le i\le n$). Next we will confirm (\ref{eq:03A1}). 
For $\varphi(z)=\varphi(z_1,z_2,\ldots,z_{n+1})$, we have $$ \Phi(z){\cal A}\nabla_{\!i,j}\varphi(z)=\Phi(z)\sum_{\sigma\in S_{n+1}}(\mbox{{\rm sgn}}\,\sigma)\,\sigma\nabla_{\!i,j}\varphi(z) =\sum_{\sigma\in S_{n+1}}(\mbox{{\rm sgn}}\,\sigma)\,\Phi(z)\nabla_{\!\sigma(i),\sigma(j)}\sigma\varphi(z), $$ and thus \begin{eqnarray*} &&\int_0^{\mbox{\small $x$}\infty}\Phi(z){\cal A}\nabla_{\!i,j}\varphi(z) \frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}\\ &&=\sum_{\sigma\in S_{n+1}}(\mbox{{\rm sgn}}\,\sigma) \int_0^{\mbox{\small $x$}\infty}\Phi(z) \nabla_{\!\sigma(i),\sigma(j)}\,\sigma\varphi(z) \frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}. \end{eqnarray*} Since $\sigma\varphi(z)$ is also a function on $(\mathbb{C}^*)^n$ under the condition $z_{n+1}=dz_1^{-1}z_2^{-1}\cdots z_n^{-1}$, using (\ref{eq:03nabla=0}), we see that all terms on the right-hand side of the above equation are equal to $0$. $\square$\\ For $c\in \mathbb{C}^*$ and $z=(z_1,z_2,\ldots,z_{n+1})\in (\mathbb{C}^*)^{n+1}$ we denote by $e(c;z)$ the symmetric polynomial of degree $n+1$ defined by $$ e(c;z):=\prod_{i=1}^{n+1}(1-c^{-1}z_i), $$ (cf.~(\ref{eq:02e(c;z)})) which has the property that $e(c;z)$ vanishes if $z_i=c$ for some $i=1,2,\ldots,n+1$. 
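As a concrete sanity check of the expansion (\ref{eq:03A-nabla-phi0}) appearing in Lemma \ref{lem:03q-diff2} below, the case $n=1$ can be verified in exact rational arithmetic: there $\varphi(z)=\prod_{i=1}^2(1-a_i^{-1}z_1)(1-b_iz_2)$, and ${\cal A}\nabla_{\!1,2}\varphi(z)=2\big(A(z_1)B(z_2)-B(z_1)A(z_2)\big)$ with $A(t)=\prod_{i=1}^2(1-a_i^{-1}t)$ and $B(t)=\prod_{i=1}^2(1-b_it)$. The following short script (ours) confirms the identity at random rational points:

```python
# Exact check (n = 1) of the expansion
#   cal{A} nabla phi(z) = (c0 e0(z) + c1 e(a1; z)) Delta(z),  Delta = z2 - z1.
# Parameter values are arbitrary nonzero rationals.
from fractions import Fraction as F
import random

random.seed(7)
def rnd():
    return F(random.randint(1, 9), random.randint(1, 9))

a = [rnd(), rnd()]
b = [rnd(), rnd()]

def A(t):   # prod_i (1 - a_i^{-1} t)
    return (1 - t / a[0]) * (1 - t / a[1])

def B(t):   # prod_i (1 - b_i t)
    return (1 - b[0] * t) * (1 - b[1] * t)

# c0, c1 for n = 1 (so (-1)^{n+1} = 1, (-1)^n = -1, a_1^n = a_1):
c0 = 2 * a[0] * b[0] * b[1] * (1 - 1 / (a[0] * b[0])) * (1 - 1 / (a[0] * b[1]))
c1 = -2 * a[0] * b[0] * b[1] * (1 - 1 / (a[0] * a[1] * b[0] * b[1]))

for _ in range(25):
    z1, z2 = rnd(), rnd()
    lhs = 2 * (A(z1) * B(z2) - B(z1) * A(z2))     # cal{A} nabla phi for n = 1
    e0 = 1 - z1 * z2 / (a[0] * a[1])
    ea1 = (1 - z1 / a[0]) * (1 - z2 / a[0])       # e(a_1; z)
    assert lhs == (c0 * e0 + c1 * ea1) * (z2 - z1)
```

Since both sides are polynomials of low degree, agreement at generic rational points already pins the identity down.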
\begin{lem} \label{lem:03q-diff2} If we put $\varphi(z)$ as \begin{equation} \label{eq:03varphi0} \varphi(z)= \prod_{i=1}^{n+1}(1- a_i^{-1}z_1)(1- b_iz_{n+1}) \times z_2\cdots z_n\prod_{1\le j<k\le n}(z_k-a_j), \end{equation} then ${\cal A}\nabla_{\!1,n+1}\varphi(z) $ is expanded as \begin{equation} \label{eq:03A-nabla-phi0} {\cal A}\nabla_{\!1,n+1}\varphi(z)=\Big(c_0e_0(z)+c_1e(a_1;z)\Big)\Delta(z), \end{equation} where the function $e_0(z)$ is defined by $$ e_0(z):=1-z_1z_2\cdots z_{n+1}a_1^{-1}a_2^{-1}\cdots a_{n+1}^{-1} $$ and the constants $c_0$ and $c_1$ are given by \begin{equation} \label{eq:03c0c1} c_0=(-1)^{n+1}2a_1^nb_1\cdots b_{n+1}\prod_{i=1}^{n+1}(1- a_1^{-1}b_i^{-1}), \quad c_1=(-1)^{n}2a_1^nb_1\cdots b_{n+1}(1-\prod_{i=1}^{n+1}a_i^{-1}b_i^{-1}). \end{equation} In particular, under the condition $z_1z_2\cdots z_{n+1}=d$, the function $e_0(z)$ is a constant and then \begin{equation} \label{eq:03c'0} c'_0:=c_0e_0(z)=(-1)^{n+1} 2a_1^nb_1\cdots b_{n+1}(1-d\,a_1^{-1}a_2^{-1}\cdots a_{n+1}^{-1})\prod_{i=1}^{n+1}(1- a_1^{-1}b_i^{-1}). \end{equation} \end{lem} {\bf Proof.} Since the ratio $T_{z_{n+1}}^{-1}T_{z_1}\Phi(z)/\Phi(z)$ is written as $$ \frac{T_{z_{n+1}}^{-1}T_{z_1}\Phi(z)}{\Phi(z)} =\prod_{j=1}^{n+1}\frac{(1-b_jz_1)(1-a_j^{-1}z_{n+1})}{(1-qa_j^{-1}z_1)(1-q^{-1}b_jz_{n+1})}, $$ for $\varphi(z)$ given by (\ref{eq:03varphi0}), using (\ref{eq:03nabla}) gives \begin{equation} \label{eq:03nabla-phi0} \nabla_{\!1,n+1}\varphi(z)= \Big( \prod_{i=1}^{n+1}(1- a_i^{-1}z_1)(1- b_iz_{n+1}) -\prod_{i=1}^{n+1}(1- b_iz_1)(1- a_i^{-1}z_{n+1})\Big) \times z_2\cdots z_n\prod_{1\le j<k\le n}(z_k-a_j). \end{equation} Let us use $\nabla$ instead of $\nabla_{\!1,n+1}$ for abbreviation. 
Taking account of the degree of $\nabla\varphi(z)$ as a polynomial in $z$, we can expand the skew-symmetrization ${\cal A}\nabla\varphi(z)$ as \begin{equation} \label{eq:03A-nabla-phi1} {\cal A}\nabla\varphi(z)=\Big(c_0e_0(z)+\sum_{i=1}^nc_ie(a_i;z)+c_{n+1}e(b_{n+1}^{-1};z)\Big)\Delta(z), \end{equation} where the coefficients $c_i$ $(i=0,1,\ldots,n+1)$ are some constants. Here we will confirm that $c_i$ vanishes if $i\ge 2$, and $c_0$ and $c_1$ are evaluated as (\ref{eq:03c0c1}). First we take $z_1=a_1, z_2=a_2,\ldots, z_n=a_n$ and $z_{n+1}=b_{n+1}^{-1}$. Then, from (\ref{eq:03A-nabla-phi1}) with the vanishing property of $e(a_i;z)$, we have $ {\cal A}\nabla\varphi(z)$ as \begin{eqnarray} \label{eq:03A-nabla-phi1-1} {\cal A}\nabla\varphi(z) &=&c_0e_0(z)\Delta(z) =c_0(1-a_{n+1}^{-1}b_{n+1}^{-1})\prod_{1\le j<k\le n}(a_k-a_j) \prod_{i=1}^n(b_{n+1}^{-1}-a_i)\nonumber\\ &=&(-1)^{n}c_0a_1\cdots a_n\prod_{i=1}^{n+1}(1-a_i^{-1}b_{n+1}^{-1})\prod_{1\le j<k\le n}(a_k-a_j). \end{eqnarray} On the other hand, from (\ref{eq:03nabla-phi0}), we have \begin{equation} \label{eq:03A-nabla-phi1-2} {\cal A}\nabla\varphi(z)= -2\prod_{i=1}^{n+1}(1- b_ia_1)(1- a_i^{-1}b_{n+1}^{-1})\times a_2\cdots a_n\prod_{1\le j<k\le n}(a_k-a_j). \end{equation} Comparing (\ref{eq:03A-nabla-phi1-1}) with (\ref{eq:03A-nabla-phi1-2}), we obtain $c_0$ in the expression (\ref{eq:03c0c1}). Next we take $z_1=a_1$, $z_2=a_2,\ldots, z_{n}=a_{n}$ and $z_{n+1}\not\in\{a_1,a_2,\ldots,a_{n}, b_{n+1}^{-1}\}$. Then, from (\ref{eq:03A-nabla-phi1}) and the vanishing property of $e(a_i;z)$, we have \begin{equation} \label{eq:03A-nabla-phi2} {\cal A}\nabla\varphi(z)=\Big(c_0e_0(z)+c_{n+1} e(b_{n+1}^{-1};z)\Big)\Delta(z). \end{equation} On the other hand, from (\ref{eq:03nabla-phi0}) and (\ref{eq:03A-nabla-phi1-2}), we have \begin{equation} \label{eq:03A-nabla-phi3} {\cal A}\nabla\varphi(z)= -2\prod_{i=1}^{n+1}(1- b_ia_1)(1- a_i^{-1}z_{n+1})\times a_2\cdots a_n\prod_{1\le j<k\le n}(a_k-a_j)=c_0e_0(z)\Delta(z). 
\end{equation} Comparing (\ref{eq:03A-nabla-phi2}) and (\ref{eq:03A-nabla-phi3}), we obtain $c_{n+1}=0$. Third we take $z_1=a_1$, $z_2=a_2,\ldots, z_{n-1}=a_{n-1}$, $z_{n}\not\in\{a_1,a_2,\ldots,a_{n},b_{n+1}^{-1}\}$ and $z_{n+1}=b_{n+1}^{-1}$. Then, from (\ref{eq:03A-nabla-phi1}) and the vanishing property of $e(a_i;z)$, we have \begin{equation} \label{eq:03A-nabla-phi4} {\cal A}\nabla\varphi(z)=\Big(c_0e_0(z)+c_n e(a_n;z)\Big)\Delta(z). \end{equation} On the other hand, from (\ref{eq:03nabla-phi0}), we have \begin{equation} \label{eq:03A-nabla-phi5} {\cal A}\nabla\varphi(z)=c_0e_0(z)\Delta(z). \end{equation} Comparing (\ref{eq:03A-nabla-phi4}) and (\ref{eq:03A-nabla-phi5}), we obtain $c_n=0$. In the same manner, using the skew-symmetry of ${\cal A}\nabla\varphi(z)$ and the form of $\nabla\varphi(z)$ in (\ref{eq:03nabla-phi0}), we also obtain $c_i=0$ if $i\ge 2$. Thus we obtain the expansion (\ref{eq:03A-nabla-phi0}). Lastly, we take $(z_1,z_2,\ldots,z_{n+1})=(b_1^{-1},b_2^{-1},\ldots,b_{n+1}^{-1})=b^{-1}$. From (\ref{eq:03nabla-phi0}), we have ${\cal A}\nabla\varphi(b^{-1})=0$. On the other hand, from (\ref{eq:03A-nabla-phi1}), we have \begin{equation*} {\cal A}\nabla\varphi(z)=\Big(c_0 e_0(b^{-1})+c_1 e(a_1;b^{-1})\Big)\Delta(z) \end{equation*} so that $$c_1=-c_0\frac{e_0(b^{-1})}{e(a_1;b^{-1})} =(-1)^{n} 2a_1^nb_1\cdots b_{n+1}(1-\prod_{i=1}^{n+1}a_i^{-1}b_i^{-1}),$$ which is what we want to show. $\square$\\[10pt] \noindent {\bf Proof of Lemma \ref{lem:03q-diff1}.} We will prove (\ref{eq:03rec01}) for $T_{a_j}$ first. Without loss of generality, it suffices to show that \begin{equation} \label{eq:03rec0} T_{a_1}K(x)=K(x) \frac{1-d\prod_{i=1}^{n+1}a_i^{-1}} {1-\prod_{i=1}^{n+1}a_i^{-1}b_i^{-1}} \prod_{i=1}^{n+1}(1-b_i^{-1}a_1^{-1}). 
\end{equation} Since we have $ T_{a_1}\Phi(z)/\Phi(z)= \prod_{i=1}^{n+1}(1-a_1^{-1}z_i)=e(a_1;z) $ by definition, $T_{a_1}K(x)$ is expressed by $$ T_{a_1}K(x)=\int_0^{\mbox{\small $x$}\infty}e(a_1;z)\Phi(z)\Delta(z)\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}. $$ Applying (\ref{eq:03A1}) in Lemma \ref{lem:03nabla=0} to the fact (\ref{eq:03A-nabla-phi0}) in Lemma \ref{lem:03q-diff2}, we obtain the relation $$c'_0K(x)+c_1T_{a_1}K(x)=0,$$ where $c_1$ and $c'_0$ are given in (\ref{eq:03c0c1}) and (\ref{eq:03c'0}). This relation coincides with (\ref{eq:03rec0}). Next we will show the $q$-difference equation (\ref{eq:03rec02}) for $T_{b_j}$ of the case $j=1$ in the same manner as above. Since we have $ T_{b_1}\Phi(z)/\Phi(z)= \prod_{i=1}^{n+1}(1-b_1z_i)=e(b_1^{-1};z)$, $T_{b_1}K(x)$ is expressed by $$ T_{b_1}K(x)=\int_0^{\mbox{\small $x$}\infty}e(b_1^{-1};z)\Phi(z)\Delta(z)\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}. $$ Here if we exchange $a_i$ with $b_i^{-1}$ $(i=1,2,\ldots,n+1)$ in the above proof of (\ref{eq:03rec0}) including that of Lemma \ref{lem:03q-diff2}, the way of argument is completely symmetric under this exchange. Therefore (\ref{eq:03rec02}) for $T_{b_1}$ is obtained by exchanging $a_i$ with $b_i^{-1}$ in the coefficient of (\ref{eq:03rec0}). Lastly we will confirm the $q$-difference equations (\ref{eq:03rec03}) and (\ref{eq:03rec04}) for ${\cal K}(x)$. From (\ref{eq:03h(x)1}), we have $$ T_{a_j}h(x)=h(x) \quad\mbox{and}\quad T_{b_j}h(x)=d(-b_j)^{n+1}h(x). $$ We therefore obtain (\ref{eq:03rec03}) and (\ref{eq:03rec04}) from (\ref{eq:03rec01}) and (\ref{eq:03rec02}), respectively, using the above equations. $\square$ \subsection{A remark on a relation to the Macdonald identity for $A_n^{(1)}$} \label{subsection:03-3} We can apply the recurrence relations (\ref{eq:03rec03}) and (\ref{eq:03rec04}) directly to the regularization ${\cal K}(x)$ to evaluate it. 
Actually, we can immediately obtain Theorem \ref{thm:03main} as a corollary of Lemma \ref{lem:03q-diff1}. \begin{cor} \label{cor:03cK(x)cK0} The sum ${\cal K}(x)$ {\rm (}expressed as {\rm (\ref{eq:03cal K(x)1}))} can be written \begin{equation} \label{eq:03cal K(x)4} {\cal K}(x)= {\cal K}_0(x)\frac{(qa_1^{-1}\cdots a_{n+1}^{-1}d)_\infty (qb_1^{-1}\cdots b_{n+1}^{-1}d^{-1})_\infty} {(qa_1^{-1}\cdots a_{n+1}^{-1}b_1^{-1}\cdots b_{n+1}^{-1})_\infty} \prod_{i=1}^{n+1}\prod_{j=1}^{n+1}(qa_i^{-1}b_j^{-1})_\infty, \end{equation} where ${\cal K}_0(x)$ is the Jackson integral defined by \begin{equation} \label{eq:03cal K0(x)} {\cal K}_0(x):=\int_0^{\mbox{\small $x$}\infty}\frac{1} {\prod_{1\le i<j\le n+1}(qz_i/z_j)_\infty(qz_j/z_i)_\infty}\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}. \end{equation} We remark that ${\cal K}_0(x)$ satisfies the Macdonald identity for $A_n^{(1)}$ in the form proved by Milne {\rm (}straightforward rewrite of {\rm \cite[Theorem 1.58]{Mi85}}{\rm )}, i.e., $${\cal K}_0(x)=(1-q)^n(q)_\infty^n.$$ \end{cor} {\bf Proof.} Let $T^N$ be the $q$-shift operator with respect to $a_i\to q^{-N}a_i$ and $b_i\to q^{-N}b_i$ $(i=1,2,\ldots,n+1)$. If we denote by $C$ the right-hand side of (\ref{eq:03cal K(x)4}) without the factor ${\cal K}_0(x)$, then we have $T^NC\to1$ $(N\to +\infty)$. From (\ref{eq:03rec03}) and (\ref{eq:03rec04}), ${\cal K}(x)$ and $C$ satisfy the same recurrence relations, so that the ratio ${\cal K}(x)/C$ is invariant under the $q$-shift $T^N$. 
Therefore we obtain $$ \frac{{\cal K}(x)}{C}=T^{N}\frac{{\cal K}(x)}{C}=\lim_{N\to +\infty}\frac{T^{N}{\cal K}(x)}{T^{N}C} =\lim_{N\to +\infty}T^N{\cal K}(x), $$ where $$\lim_{N\to +\infty}T^N{\cal K}(x) = \lim_{N\to +\infty} \int_0^{\mbox{\small $x$}\infty}\frac{\prod_{i=1}^{n+1}\prod_{j=1}^{n+1} (q^{1+N}a_j^{-1}z_i)_\infty(q^{1+N}b_j^{-1}z_i^{-1})_\infty} {\prod_{1\le i<j\le n+1}(qz_i/z_j)_\infty(qz_j/z_i)_\infty}\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}, $$ which coincides with (\ref{eq:03cal K0(x)}) of ${\cal K}_0(x)$. $\square$\\ Though (\ref{eq:03cal K(x)4}) of Corollary \ref{cor:03cK(x)cK0} shows the connection between ${\cal K}(x)$ and ${\cal K}_0(x)$, it is not our intention to evaluate ${\cal K}(x)$ via this connection using the Macdonald identity. In keeping with our viewpoint, our aim is to evaluate ${\cal K}(a)$ directly, by calculating the asymptotic behavior of the truncated Jackson integral. \subsection{Asymptotic behavior (Proof of Lemma \ref{lem:03main})} \label{subsection:03-4} In this subsection we will give a proof for Lemma \ref{lem:03main}. 
In order to calculate ${\cal K}(a)$, we deform $\Phi(z)$ to $\tilde\Phi(z)$ according to \begin{equation} \label{eq:03PkP} \Phi(z)=k(z)\tilde\Phi(z), \end{equation} where \begin{equation} \label{eq:03k(z)} k(z):=\prod_{j=1}^{n+1}z_{n+1}^{1-\alpha_j-\beta_j} \frac{\theta(qa_j^{-1}z_{n+1})}{\theta(b_jz_{n+1})} =\prod_{j=1}^{n+1} \frac{\theta(a_jd^{-1}z_1z_2\cdots z_n)} {(d^{-1}z_1z_2\cdots z_n)^{1-\alpha_j-\beta_j}\theta(qb_j^{-1}d^{-1}z_1z_2\cdots z_n)} \end{equation} and \begin{eqnarray} \label{eq:03tPhi} \tilde\Phi(z)&:=& (z_{n+1}^{-1})^{n+1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}} \nonumber\\ &&\times\prod_{i=1}^n \prod_{j=1}^{n+1}\frac{(qa_j^{-1}z_i)_\infty}{(b_jz_i)_\infty}\times \prod_{j=1}^{n+1} \frac{(qb_j^{-1}z_{n+1}^{-1})_\infty}{(a_jz_{n+1}^{-1})_\infty} \nonumber\\ &=& (d^{-1}z_1z_2\cdots z_n)^{n+1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}} \nonumber\\ &&\times\prod_{i=1}^n \prod_{j=1}^{n+1}\frac{(qa_j^{-1}z_i)_\infty}{(b_jz_i)_\infty}\times \prod_{j=1}^{n+1} \frac{(qb_j^{-1}d^{-1}z_1\cdots z_n)_\infty}{(a_jd^{-1}z_1\cdots z_n)_\infty}. \end{eqnarray} Here $\alpha_i$ and $\beta_i$ are given by $a_i=q^{\alpha_i}, b_i=q^{\beta_i}$. Then we define the Jackson integral $\tilde K(x)$ by \begin{equation} \label{eq:03tK} \tilde K(x):=\int_0^{\mbox{\small $x$}\infty} {\tilde\Phi}(z)\Delta(z)\,\frac{d_qz_1}{z_1}\wedge\cdots\wedge\frac{d_qz_n}{z_n}. \end{equation} Since $k(z)$ is invariant under the $q$-shift $z_i\to qz_i$, from (\ref{eq:03PkP}) and (\ref{eq:03tK}), we have \begin{equation} \label{eq:03KkK} K(x)=k(x){\tilde K}(x). \end{equation} We will calculate the asymptotic behavior of ${\tilde K}(x)$ instead of $K(x)$ when $x=a$. 
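The invariance of $k(z)$ under the $q$-shifts $z_i\to qz_i$, which is what makes the factorization (\ref{eq:03KkK}) possible, follows from the quasi-periodicity $\theta(qw)=-w^{-1}\theta(w)$ combined with $q^{1-\alpha_j-\beta_j}=q/(a_jb_j)$. A numerical sketch for $n=1$ (ours; \texttt{mpmath}-based with ad hoc positive parameters, assuming the convention $\theta(w)=(w)_\infty(q/w)_\infty$):

```python
# Numerical check (n = 1) that k(z) is invariant under z_1 -> q z_1.
# k depends on z only through w = z_1 / d; parameter values are ad hoc.
from mpmath import mp, mpf, log

mp.dps = 30
q = mpf("0.3")
a = [mpf("1.2"), mpf("0.7")]
b = [mpf("0.9"), mpf("1.1")]
d = mpf("0.8")

def qpoch_inf(x, terms=200):
    """(x; q)_infinity, truncated numerically."""
    p = mpf(1)
    for k in range(terms):
        p *= 1 - x * q**k
    return p

def theta(x):
    return qpoch_inf(x) * qpoch_inf(q / x)

alpha = [log(ai) / log(q) for ai in a]   # a_j = q^{alpha_j}
beta = [log(bi) / log(q) for bi in b]    # b_j = q^{beta_j}

def k_fn(z1):
    w = z1 / d
    val = mpf(1)
    for j in range(2):
        val *= theta(a[j] * w) / (w ** (1 - alpha[j] - beta[j]) * theta(q * w / b[j]))
    return val

z1 = mpf("0.5")
assert abs(k_fn(q * z1) / k_fn(z1) - 1) < mpf(10) ** (-20)
```

Each factor of $k(z)$ is separately invariant, since the multipliers coming from $\theta$ and from the power of $w$ cancel exactly.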
Since $\Delta(z)$ is written as $$ \Delta(z)=\prod_{i=1}^n(z_{n+1}-z_i)\prod_{1\le j<k\le n}(z_k-z_j) =\prod_{i=1}^n\frac{1-z_i(d^{-1}z_1\cdots z_{n})}{d^{-1}z_1\cdots z_{n}}\prod_{1\le j<k\le n}(z_k-z_j), $$ from (\ref{eq:03tPhi}), we have \begin{eqnarray*} \label{eq:03Phi2Delta} &&\tilde\Phi(z)\Delta(z)= (d^{-1}z_1z_2\cdots z_n)^{1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}} \prod_{1\le j<k\le n}(z_k-z_j)\\ &&\times \prod_{i=1}^n\Bigg( (1-\frac{z_i}{d}z_1z_2\cdots z_{n}) \prod_{j=1}^{n+1}\frac{(qa_j^{-1}z_i)_\infty}{(b_jz_i)_\infty}\Bigg)\times \prod_{j=1}^{n+1} \frac{(qb_j^{-1}d^{-1}z_1\cdots z_n)_\infty}{(a_jd^{-1}z_1\cdots z_n)_\infty}. \end{eqnarray*} For an integer $N$, let $T^N$ be the $q$-shift operator for a special direction, $$ T^N: b_i\to q^{-nN}b_i\ (i=1,2,\ldots,n+1);\ a_j \to q^{(n+1)N}a_j\ (j=1,2,\ldots,n);\ a_{n+1}\to q^{-nN}a_{n+1}. $$ Then, by definition $T^N \tilde K(a)$ is written explicitly as \begin{eqnarray*} &&T^N \tilde K(a) =(1-q)^n\!\!\!\! \sum_{(\nu_1,\ldots,\nu_n)\in \mathbb{N}^n} (d^{-1}a_1a_2\cdots a_nq^{\nu_1+\cdots+\nu_n+n(n+1)N}) ^{1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}+nN} \nonumber\\ &&\times\prod_{i=1}^n\Bigg[ (1-\frac{a_iq^{\nu_i}}{d}a_1a_2\cdots a_{n}q^{\nu_1+\cdots+\nu_n+(n+1)^2N}) \frac{(a_{n+1}^{-1}a_iq^{1+\nu_i+(2n+1)N})_\infty}{(b_{n+1}a_iq^{\nu_i+N})_\infty} \prod_{j=1}^{n}\frac{(a_j^{-1}a_iq^{1+\nu_i})_\infty}{(b_ja_iq^{\nu_i+N})_\infty}\Bigg] \nonumber\\ &&\times \frac{\prod_{j=1}^{n+1}(b_{j}^{-1}d^{-1}a_1a_2\cdots a_nq^{1+\nu_1+\cdots+\nu_n+n(n+2)N})_\infty} {(a_{n+1}d^{-1}a_1a_2\cdots a_nq^{\nu_1+\cdots+\nu_n+n^2N})_\infty \prod_{j=1}^{n}(a_jd^{-1}a_1a_2\cdots a_nq^{\nu_1+\cdots+\nu_n+(n+1)^2N})_\infty} \\ &&\times \prod_{1\le i<j\le n}q^{(n+1)N}(a_jq^{\nu_j}-a_iq^{\nu_i}), \end{eqnarray*} so that the leading term of the asymptotic behavior of $T^N \tilde K(a)$ as $N\to +\infty$ is given by the term corresponding to $(\nu_1,\ldots,\nu_n)=(0,\ldots,0)$ in the above sum, i.e., 
\begin{eqnarray} \label{eq:03TNbK(a)} T^N \tilde K(a) &\sim& (1-q)^n(d^{-1}a_1a_2\cdots a_nq^{n(n+1)N}) ^{1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}+nN} \nonumber\\ &&\times\prod_{i=1}^n\prod_{j=1}^{n}(qa_j^{-1}a_i)_\infty \prod_{1\le i<j\le n}q^{(n+1)N}(a_j-a_i) \nonumber\\ &=& (d^{-1}a_1a_2\cdots a_nq^{n(n+1)N}) ^{1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}+nN} \nonumber\\ &&\times (1-q)^n(q)_\infty^n \prod_{1\le i<j\le n}q^{(n+1)N} a_j\theta(a_i/a_j) \quad\quad(N\to +\infty). \end{eqnarray} On the other hand, if we set $C$ as the right-hand side of (\ref{eq:03cal K(a)}) of Lemma \ref{lem:03main}, then we have from (\ref{eq:03h(a)2}) and (\ref{eq:03k(z)}) \begin{eqnarray*} \frac{h(a)C}{k(a)}&=& (d^{-1}a_1a_2\cdots a_n)^{1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}} \prod_{1\le i<j\le n}a_j\theta(a_i/a_j)\\ && \times\frac{(1-q)^n(q)_\infty^n(qb_1^{-1}\cdots b_{n+1}^{-1}d^{-1})_\infty\prod_{j=1}^{n+1}(qa_{n+1}^{-1}b_j^{-1})_\infty} {(a_1\cdots a_{n+1}d^{-1})_\infty(qa_1^{-1}\cdots a_{n+1}^{-1}b_1^{-1}\cdots b_{n+1}^{-1})_\infty\prod_{i=1}^{n}\prod_{j=1}^{n+1}(b_ja_i)_\infty}, \end{eqnarray*} and thus \begin{eqnarray} \label{eq:03TNhCk} &&T^N\frac{h(a)C}{k(a)}= (d^{-1}a_1a_2\cdots a_nq^{n(n+1)N})^{1-\alpha_1-\cdots-\alpha_{n+1}-\beta_1-\cdots-\beta_{n+1}+nN} \prod_{1\le i<j\le n}q^{(n+1)N}a_j\theta(a_i/a_j) \nonumber\\ && \quad\times\frac{(1-q)^n(q)_\infty^n (b_1^{-1}\cdots b_{n+1}^{-1}d^{-1}q^{1+n(n+1)N})_\infty\prod_{j=1}^{n+1}(a_{n+1}^{-1}b_j^{-1}q^{1+2nN})_\infty} {(a_1\cdots a_{n+1}d^{-1}q^{n^2N})_\infty(a_1^{-1}\cdots a_{n+1}^{-1}b_1^{-1}\cdots b_{n+1}^{-1}q^{1+nN})_\infty\prod_{i=1}^{n}\prod_{j=1}^{n+1}(b_ja_iq^N)_\infty}. \end{eqnarray} Since ${\cal K}(a)$ and $C$ satisfy the same recurrence relations with respect to $a_i$ and $b_i$, the ratio ${\cal K}(a)/C$ is invariant under the $q$-shift $T^N$. 
From (\ref{eq:03TNbK(a)}) and (\ref{eq:03TNhCk}), we therefore obtain \begin{eqnarray*} &&\frac{{\cal K}(a)}{C}=T^N\frac{{\cal K}(a)}{C}=\frac{T^N\tilde K(a)}{T^Nh(a)C/k(a)}= \lim_{N\to +\infty}\frac{T^N\tilde K(a)}{T^Nh(a)C/k(a)}\\ &&=\lim_{N\to +\infty} \frac{(a_1\cdots a_{n+1}d^{-1}q^{n^2N})_\infty (a_1^{-1}\cdots a_{n+1}^{-1}b_1^{-1}\cdots b_{n+1}^{-1}q^{1+nN})_\infty \prod_{i=1}^{n}\prod_{j=1}^{n+1}(b_ja_iq^N)_\infty} {(b_1^{-1}\cdots b_{n+1}^{-1}d^{-1}q^{1+n(n+1)N})_\infty\prod_{j=1}^{n+1}(a_{n+1}^{-1}b_j^{-1}q^{1+2nN})_\infty}\\ &&=1, \end{eqnarray*} which completes the proof of Lemma \ref{lem:03main}. $\square$ \subsection*{Acknowledgements} This work was supported by the Australian Research Council and JSPS KAKENHI Grant Number 25400118.
\section{Introduction and Summary} \label{sec:intro} As a consistent theory of quantum gravity, string/M-theory is expected to provide microscopic understandings of quantum aspects of black holes (BHs). In a celebrated work \cite{Strominger:1996sh}, this hope was realized for extremal black holes in Minkowski spacetime, by reproducing the Bekenstein-Hawking entropy through counting D-brane bound states. Recently, this success was extended to black holes in asymptotically anti-de Sitter spacetime. In particular, the entropy of magnetically charged supersymmetric black holes in $AdS_4$ supergravity can be explained using the holographic principle \cite{Benini:2015eyy,Benini:2016rke,Hosseini:2016tor,Hosseini:2016ume,Cabo-Bizet:2017jsl,Azzurli:2017kxo,Hosseini:2017fjo,Benini:2017oxt,Toldo:2017qsh,Bobev:2017uzs}. The AdS/CFT correspondence \cite{Maldacena:1997re} states that quantum gravity in asymptotically $AdS_{4}$ spacetime should be dual to a conformal field theory (CFT) on the $3$-dimensional (3D) boundary. The field theories of interest have $\mathcal{N}\geq 2$ supersymmetry, and the black hole entropy on the gravity side turns out to be related to the so-called {\em topologically twisted indices} \cite{Gukov:2015sna,Benini:2015noa,Benini:2016hjo,Closset:2016arn} in the dual field theory. They are the partition functions (ptns) on supersymmetric curved backgrounds $\mathcal{M}_{g,p=0}:=\Sigma_g \times S^1$, with an appropriately chosen background magnetic flux coupled to the R-symmetry current. The magnetic flux is turned on along the Riemann surface $\Sigma_g$ of genus $g$. The twisted indices can then be computed using the supersymmetric localization technique. Recently, the realm of localizable 3D manifolds has been further extended in \cite{Closset:2017zgf}, and we now have formulae for twisted partition functions $Z_{g,p}$, i.e. the ptn on degree-$p$ $S^1$-bundles $\mathcal{M}_{g,p}$ (see \eqref{M-g,p}) over $\Sigma_g$.
In this letter, we study holographic duality for a large class of 3D $\mathcal{N}=2$ SCFTs $T_{N}[M]$, defined in \eqref{T-N-M} (see also \eqref{T-N-M-2}), arising from wrapped M5-branes on closed hyperbolic 3-manifolds $M$. The 3D theory is characterized by $N$, the number of M5-branes, and the choice of a 3-manifold $M$. For each $M$, there is an associated AdS$_4$/CFT$_3$ correspondence. The holographic dual of the wrapped M5-brane 3D SCFT was studied in \cite{pernici1985spontaneous,Gauntlett:2000ng}. Since there are infinitely many such 3-manifolds \cite{thurston1979geometry}, the wrapped M5-brane system provides a huge set of AdS$_4$/CFT$_3$ examples. We probe the holography using the twisted ptns. For $p=0$, the ptn becomes a twisted index which counts ground states of M5-branes on $ \Sigma_g \times M$. The counting is holographically dual to the microstate counting for a supersymmetric BH solution interpolating between the asymptotic $AdS_4$ and its near-horizon limit $AdS_2\times \Sigma_g$ \cite{Bobev:2017uzs}. The 3D-3D correspondence \cite{Dimofte:2010tz,Terashima:2011qi,Dimofte:2011ju} provides a novel way of analyzing the 3D $\mathcal{N}=2$ SCFTs. Schematically, the correspondence says \begin{align} \begin{split} &(\textrm{supersymmetric ptns of 3D $T_{N}[M]$ theory}) \\ &=(\textrm{$SL(N,\mathbb{C})$ Chern-Simons theory invariants on $M$})\;. \end{split} \end{align} Refer to \cite{Yagi:2013fda,Lee:2013ida,Cordova:2013cea,Beem:2012mb,Dimofte:2014zga,Gukov:2016gkn,Gukov:2017kmk,Mikhaylov:2017ngi} for more details on the 3D-3D dictionary. Via the correspondence, some supersymmetric quantities of the 3D $T_{N}[M]$ theory can be evaluated without relying on a field theoretic description of the 3D SCFT. For example, as summarized in Table~\ref{3d/3d correspondence for twisted ptns}, the twisted partition function $Z_{g,p}$ can be written in terms of basic perturbative invariants of the complex Chern-Simons (CS) theory.
These invariants are mathematically well-defined and have been extensively studied in the math literature. Combining the 3D-3D dictionary with mathematical results, we obtain the large $N$ behavior of the twisted partition functions contributed by two distinguished Bethe vacua in the $T_{N}[M]$ theory. The Bethe vacua of our interest correspond to two irreducible $SL(N,\mathbb{C})$ flat connections on $M$ in the 3D-3D dual complex CS theory. These two flat connections also have a natural interpretation in terms of hyperbolic geometry, see eq~\eqref{two irreducible flat connections}. They also give the global minimum and maximum of the absolute value of the fibering operator $\mathcal{F}$ appearing in the ptn computation \eqref{Twisted ptns from sum}, see \eqref{Bounds on F}. We confirm that the large $N$ twisted ptns from the two Bethe vacua nicely match the on-shell actions of the two Bolt-type solutions \cite{Toldo:2017qsh} in the gravity dual respectively. The comparison is summarized in Table~\ref{large N computations}, which is the main point of this letter. \section{3D $T_{N}[M]$ theory } A large class of 3D $\mathcal{N}=2$ SCFTs can be engineered through a twisted compactification of 6-dimensional SCFTs. They are labelled by the internal 3-manifolds $M$: \begin{align} \begin{split} &T_{N}[M] := (\textrm{Effective 3D $\mathcal{N}=2$ SCFT obtained from } \\ &\;\; \textrm{a twisted compactification of 6D $A_{N-1}$ $(2,0)$ theory} \\ &\;\; \textrm{on a 3-manifold $M$)}\;. \label{T-N-M} \end{split} \end{align} For simplicity, we assume $M$ is a closed (compact) hyperbolic 3-manifold without boundary. To preserve supersymmetry, we perform a partial topological twisting along the internal 3-manifold using the $SO(3)$ vector subgroup of the $SO(5)$ R-symmetry of the 6D theory. The twisting preserves a quarter of the supersymmetries and the resulting 3D theory becomes a 3D $\mathcal{N}=2$ SCFT with 4 supercharges.
The 6D theory describes the low-energy effective world-volume theory of $N$ coincident M5-branes in M-theory, and the 3D theory can be considered as an effective world-volume theory of $N$ coincident M5-branes wrapped on the compact 3-manifold $M$, i.e. \begin{align} \begin{split} & \textrm{$N$ coincident M5-branes on $\mathbb{R}^{3}\times M$} \\ & \textrm{in M-theory on $\mathbb{R}^{3} \times (T^*M) \times \mathbb{R}^2$} \\ & \xrightarrow{\textrm{ \;\;\; IR world-volume theory of M5-branes\;\;\; } } \textrm{$T_N[M]$ on $\mathbb{R}^{3}$}\;. \label{T-N-M-2} \end{split} \end{align} Here $T^*M$ is the cotangent bundle of $M$, which is a local Calabi-Yau. Let us comment on a subtle point in the setup. As emphasized in \cite{Gang:2018wek}, in taking the twisted compactification we need to choose a connected subset of the vacuum moduli-space of the theory defined on $\mathbb{R}^3$, in order to have a genuine 3D SCFT. For a hyperbolic $M$, there is a natural choice (which is actually a single point) which is expected to become a discrete set of vacua when the theory is put on $\mathbb{R}^2\times S^1$. This discrete set of vacua corresponds to a subset of irreducible $SL(N,\mathbb{C})$ flat connections on $M$. A field theoretic construction of the effective 3D gauge theory is proposed in \cite{Gang:2018wek}, extending the beautiful construction in \cite{Dimofte:2011ju,Dimofte:2013iv} for cusped 3-manifolds with at least one torus boundary component, by incorporating gauge-theoretical operations corresponding to Dehn filling (the removal of torus boundaries) on the 3-manifold. \section{$T_{N}[M]$ on $\mathcal{M}_{g,p}$ for even $p$} We now turn to the case where $T_{N}[M]$ is put on a large class of nontrivial backgrounds $\mathcal{M}_{g,p}$ \cite{Closset:2017zgf}: \begin{align} \begin{split} &\mathcal{M}_{g,p}:= (\textrm{$S^1$-bundle of degree $p$} \\ & \qquad \qquad \textrm{over a Riemann surface $\Sigma_g$ of genus $g$})\;, \\ & \textrm{i.e.
} S^1 \xrightarrow{\; p \;} \mathcal{M}_{g,p} \rightarrow \Sigma_g \;. \label{M-g,p} \end{split} \end{align} The metric can be written as \begin{align} ds^2 = \beta^2 (d \psi - p a(z,\bar{z}))^2 + 2 g_{z \bar{z}} dz d \bar{z}\;, \end{align} where $z,\bar{z}$ are local coordinates on the Riemann surface and $\psi \sim \psi+2\pi$ parameterizes the $S^1$-fiber of length $\beta$. $a$ is a 1-form on $\Sigma_g$ whose curvature $F_a :=da$ is normalized as \begin{align} \frac{1}{2\pi} \int_{\Sigma_g} d a =1\;. \end{align} To preserve some supersymmetry, we turn on the following background gauge field coupled to the $U(1)$ R-symmetry. \begin{align} A^{R}= \beta \nu_R (d \psi - p a) + n_R (\pi^* a)\;, \label{background U(1)-R} \end{align} with proper quantization conditions for $(\nu_R, n_R)$ \cite{Toldo:2017qsh}. Here $\pi^* a $ is a 1-form on $\mathcal{M}_{g,p}$ given as the pull-back of $a$ using the projection map $\pi : \mathcal{M}_{g,p} \rightarrow \Sigma_g $. For later comparison with the bolt-type solutions in the supergravity, we follow \cite{Toldo:2017qsh} and choose \begin{align} \nu_R = \frac{1}2\;, \quad n_R = \frac{p}2 + g-1\;, \quad p \in 2 \mathbb{Z}\;. \label{background U(1)-R-2} \end{align} Throughout the letter, we restrict our attention to the choice in \eqref{background U(1)-R-2} and some formulae below may not work for other cases. For example, the large $N$ computation in Table \ref{large N computations} gives an incorrect answer for the usual round $S^3$ case, which is $\mathcal{M}_{g=0,p=1}$. For small $N$ the effective 3d theory $T_{N}[M]$ might exhibit emergent symmetries in addition to the R-symmetry, as pointed out in \cite{Gang:2017lsr,Gang:2018wek}. When $N$ is large enough, on the other hand, there is no accidental symmetry and the $U(1)$ R-symmetry in the IR should be simply inherited from the compact $SO(2)$ subgroup of the $SO(5)$ R-symmetry in the 6D theory.
This implies that the $U(1)$ R-charge, $R$, should be properly quantized \begin{align} R (\mathcal{O}) \in \mathbb{Z}\;, \; \textrm{for any state $\mathcal{O}$ of $T_{N}[M]$ on $\Sigma_g$}\;. \label{quantization of R} \end{align} The Dirac quantization conditions for the $U(1)$ R-symmetry flux on $\Sigma_g$ are \begin{align} \begin{split} &R(\mathcal{O})\times n_R = R(\mathcal{O})\times \big{(} \frac{p}2 + g-1\big{)} \in \mathbb{Z} \;, \\ &\; \textrm{for any state $\mathcal{O}$ of $T_{N}[M]$ on $\Sigma_g $}\;. \end{split} \end{align} From \eqref{quantization of R}, we see that the Dirac quantization conditions are always satisfied for even $p$. In summary, for large enough $N$ we can put the 3D $T_{N}[M]$ theory on any $\mathcal{M}_{g \in \mathbb{Z}_{\geq 0},p \in 2\mathbb{Z}}$ with the supersymmetry-preserving background gauge field, given in \eqref{background U(1)-R} and \eqref{background U(1)-R-2}, coupled to the R-symmetry in the IR. \section{Holographic dual of $T_{N}[M]$ } The gravity dual description is given by the uplift of a certain magnetically charged $AdS_4$ solution in the maximally supersymmetric $D=7$ gauged supergravity \cite{pernici1985spontaneous,Gauntlett:2000ng}. Schematically, the $D=11$ solution is a product of $AdS_4$, the hyperbolic 3-manifold $M$, and a squashed 4-sphere $\tilde{S}^4$. Consistency of the truncation from $D=11$ down to minimal ${\cal N}=2, D=4$ gauged supergravity is established in \cite{Donos:2010ax}, which guarantees that we may replace the $AdS_4$ part with any nontrivial $D=4$ solution and still obtain an exact $D=11$ solution. The computation of the holographic free energy can also be done first in the $D=4$ setup, substituting the Newton constant \cite{Gang:2014ema} \begin{align} G_4 = \frac{3\pi^2}{2N^3 \textrm{vol}(M)}\;.
\label{4d G} \end{align} Here, the hyperbolic volume is defined as \begin{align} \begin{split} &\textrm{vol}(M) = (\textrm{hyperbolic volume of $M$}) \\ & :=(\textrm{volume measured in the unique hyperbolic metric}) \;.\nonumber \end{split} \end{align} The hyperbolic metric is normalized as $R_{\mu\nu} = - 2 g_{\mu\nu}$. Mostow's rigidity theorem \cite{mostow1968quasi} guarantees the uniqueness of the hyperbolic metric and thus the volume is actually a topological invariant. As gravity duals of the boundary theory put on ${\cal M}_{g,p}$, we utilize the supersymmetric AdS-Taub-NUT and bolt solutions constructed in \cite{Martelli:2012sz}. Since these solutions have a non-vanishing Maxwell field, which in the $D=11$ uplift appears as a twisting of the R-symmetry angle in $\tilde{S}^4$, one might worry about a conflict with the quantization conditions on $g,p$. It turns out that, since the R-symmetry angle is part of $\tilde{S}^4$ and has the standard periodicity of $2\pi$, the regularity condition for the $D=4$ NUT/Bolt solutions is enough. This is in line with the field theory side discussion, in particular \eqref{quantization of R}. A comment is in order here, in comparison with the uplifts involving Sasaki-Einstein 7-manifolds. In that case, the periodicity of the R-symmetry angle from the regularity of the Bolt solution should be compatible with the periodicity condition due to collapsing cycles in the K\"ahler-Einstein base manifold of the Sasaki-Einstein space. The readers are referred to \cite{Toldo:2017qsh} for more details, where the authors considered an explicit example of Sasaki-Einstein manifolds such as $V^{5,2}=SO(5)/SO(3)$.
\section{Twisted partition functions of $T_N[M]$ in 3D-3D correspondence } The twisted partition function $Z_{g,p}$ on the $\mathcal{M}_{g,p}$ for general 3D $\mathcal{N}=2$ SCFTs is given as the following finite sum \cite{Gukov:2016gkn,Closset:2017zgf} \begin{align} Z_{g,p} = \sum_\alpha Z^{\alpha}_{g,p} := \sum_\alpha (\mathcal{H}^\alpha)^{g-1} (\mathcal{F}^\alpha)^p\;. \label{Twisted ptns from sum} \end{align} Here $\alpha$ labels the so-called Bethe vacua \cite{Nekrasov:2014xaa} of the 3D theory. They are obtained by extremizing the effective 2d twisted superpotential in the compactification on $\mathbb{R}^2 \times S^1$. The number of vacua is equal to the Witten index \cite{Kim:2010mr,Intriligator:2013lca} of the 3D SCFT. $\mathcal{H}$ and $\mathcal{F}$ are called {\it handle-gluing} and {\it fibering} operators respectively. The explicit forms of $\mathcal{H}$ and $\mathcal{F}$ for any given ultra-violet (UV) Lagrangian are available in \cite{Closset:2017zgf}. Let us emphasize that the formula \eqref{Twisted ptns from sum} applied to the case of the $S^3$ partition function, which corresponds to $(g,p)=(0,1)$, is apparently {\it different} from the more familiar Coulomb branch integral expression \cite{Kapustin:2009kz}. But their equivalence is illustrated for a number of examples in \cite{Closset:2017zgf}. \begin{table}[h] \begin{center} \begin{tabular}{|c|c|} \hline 3D $T_{N}[M]$ theory & $SL(N, \mathbb{C})$ CS theory \\ \hline \hline Bethe vacuum $\alpha$ & \;\; Irreducible flat connection $\mathcal{A}^{\alpha}$\;\; \\ \hline Handle gluing operator $\mathcal{H}^{\alpha}$ & $\exp (-2 S_1^{\alpha} )$ \\ \hline Fibering operator $\mathcal{F}^{\alpha}$ & $\exp (i S_0^{\alpha}/(2\pi) )$ \\ \hline \end{tabular} \caption {3d-3d dictionaries for basic ingredients in the twisted partition function computation. $S^{\alpha}_{n=0,1}$ are perturbative invariants of the complex Chern-Simons theory around a flat-connection $\mathcal{A}^\alpha$, see eq~\eqref{S0-S1}.
} \label{3d/3d correspondence for twisted ptns} \end{center} \end{table} Now let us specialize to the $T_{N}[M]$ theories in \eqref{T-N-M}. The twisted ptns for these theories can be analyzed using the 3D-3D dictionaries summarized in Table~\ref{3d/3d correspondence for twisted ptns}. Twisted ptns in the 3D-3D correspondence were studied in \cite{Gukov:2016gkn,Gukov:2017kmk}. In the table, the $\{S_n^{\alpha}\}_{n=0}^\infty$ represent terms in the loop expansion of the complex CS partition function around a flat-connection $\mathcal{A}^{\alpha}$ \cite{Gukov:2003na,Dimofte:2009yn,Gukov:2006ze,Gang:2017cwq}: \begin{align} \label{S0,S1 and S2} \begin{split} &Z_{\rm CS \; pert}^{\alpha} :=\int \frac{D (\delta \mathcal{A})}{(\textrm{gauge})} e^{- \frac{1}{2\hbar} CS [\mathcal{A}^{\alpha} + \delta \mathcal{A};M]} \\ &\xrightarrow{\textrm{\;\; as $\hbar$ goes to 0}\;\;} \exp \left(\frac{1} \hbar S^{\alpha}_0 + S^{\alpha}_1 + \ldots + \hbar^{n-1} S^\alpha_n + \ldots \right)\;. \end{split} \end{align} The Chern-Simons functional is \begin{align} CS[\mathcal{A},M]:= \int_M \textrm{Tr} (\mathcal{A}\wedge d \mathcal{A} + \frac{2}3 \mathcal{A}^3)\;. \end{align} Note that the counterparts of ${\cal F}$ and ${\cal H}$ are simply the {\it tree level} and {\it one-loop} contributions in perturbation theory! More explicitly, the perturbative coefficients are given as \begin{align} \begin{split} &S_0^{\alpha} = - \frac{1}2 CS [\mathcal{A}^\alpha,M] \;, \\ &S_1^{\alpha} := \frac{1}2 \log \textrm{Tor}_{R={\rm adjoint}}[\mathcal{A}^\alpha, M]\;. \end{split} \label{S0-S1} \end{align} $\textrm{Tor}_R [\mathcal{A}^\alpha, M]$ is the Ray-Singer torsion of an associated vector bundle in a representation $R \in \textrm{Hom}[SL(N,\mathbb{C}) \rightarrow {GL}(V_R)]$ twisted by a flat connection $\mathcal{A}^{\alpha}$. Here $V_R$ is the vector space for the representation $R$ and $GL(V_R)$ is the general linear group on $V_R$.
The analytic torsion is defined as follows \cite{ray1971r,Gukov:2006ze,Gukov:2011qp} \begin{align} \textrm{Tor}_{ R} [ \mathcal{A}^\alpha, M] := \frac{[\textrm{det}'\Delta_0 (R, \mathcal{A}^\alpha)]^{3/2}}{[\textrm{det}'\Delta_1 (R, \mathcal{A}^\alpha)]^{1/2}}\;. \end{align} Here $\Delta_n (R, \mathcal{A}^\alpha )$ is a Laplacian acting on $V_R$-valued $n$-forms twisted by a flat connection $\mathcal{A}^\alpha$. ${\rm det}' \Delta$ denotes the zeta function regularized determinant of the Laplacian $\Delta$. For the one-loop part, the denominator comes from gauge field fluctuations $\delta \mathcal{A}$ while the numerator comes from the ghosts associated to a gauge fixing \cite{witten1989quantum}. The 3D-3D dictionary in Table~\ref{3d/3d correspondence for twisted ptns} can be derived by combining several known results in the literature. The Bethe vacua (vacua on $\mathbb{R}^2 \times S^1$) of the 3D $T_{N}[M]$ theory are in one-to-one correspondence with a subset of irreducible flat connections on $M$ \cite{Dimofte:2010tz,Gang:2018wek}. According to the 3D-3D dictionary, the asymptotic expansion $Z^{\alpha}_{\rm CS \; pert}$ in \eqref{S0,S1 and S2} is equal to the perturbative expansion of the holomorphic block $B^{\alpha} (q)$ \cite{Dimofte:2010tz,Beem:2012mb} associated to the Bethe vacuum $\alpha$ in the limit $q \rightarrow 1$, \begin{align} \begin{split} &Z^{\alpha}_{\rm CS \; pert} (\hbar) \simeq B^{\alpha} (q:=e^{\hbar}), \\ &\textrm{as an asymptotic expansion in $\hbar \rightarrow 0$}. \end{split} \end{align} For a general 3D $\mathcal{N}=2$ theory, the asymptotic expansion coefficients $S_0$ and $S_1$ of the holomorphic block are related to the operators $\mathcal{F}$ and $\mathcal{H}$ as given in Table~\ref{3d/3d correspondence for twisted ptns} \cite{Closset:2018ghr,to-appear}.
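Putting the dictionary together, the contribution of a single Bethe vacuum to \eqref{Twisted ptns from sum} can be assembled directly from the perturbative invariants $S_0^\alpha$ and $S_1^\alpha$, via $\mathcal{H}^\alpha=e^{-2S_1^\alpha}$ and $\mathcal{F}^\alpha=e^{iS_0^\alpha/2\pi}$. A minimal numerical sketch (the $(S_0,S_1)$ pairs below are illustrative placeholders, not actual Chern-Simons invariants of any 3-manifold):

```python
# Sketch of Z_{g,p} = sum_alpha H_alpha^(g-1) F_alpha^p with the dictionary
# H_alpha = exp(-2 S_1^alpha), F_alpha = exp(i S_0^alpha / (2 pi)).
import cmath
import math

def twisted_ptn(invariants, g, p):
    """Sum over Bethe vacua, given (S_0, S_1) per vacuum."""
    Z = 0
    for S0, S1 in invariants:
        H = cmath.exp(-2 * S1)                  # handle-gluing operator
        F = cmath.exp(1j * S0 / (2 * math.pi))  # fibering operator
        Z += H**(g - 1) * F**p
    return Z

vacua = [(2.0 + 1.0j, -0.3 + 0.1j), (-1.5 + 0.4j, 0.2 - 0.2j)]  # hypothetical data
# For g = 1, p = 0 every vacuum contributes H^0 F^0 = 1, so Z simply counts the
# Bethe vacua -- the Witten index of the 3D theory, as stated in the text.
assert abs(twisted_ptn(vacua, g=1, p=0) - len(vacua)) < 1e-12
```

For $p=0$ and general $g$ this reduces to the topologically twisted index $\sum_\alpha (\mathcal{H}^\alpha)^{g-1}$.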
\section{Large N twisted partition functions around two Bethe-vacua and its holographic dual} For every hyperbolic 3-manifold $M$, there are two characteristic irreducible $SL(N, \mathbb{C})$ flat connections $\mathcal{A}^{\rm geom}_N$ and $\mathcal{A}^{\overline{\rm geom}}_N$ which can be constructed from the hyperbolic structure on $M$, \begin{align} \mathcal{A}^{\rm geom}_N := \rho_N \cdot (\omega + i e)\;, \quad \mathcal{A}^{\overline{\rm geom}}_N := \rho_N \cdot (\omega - i e)\;. \label{two irreducible flat connections} \end{align} Here $\omega$ and $e$ are respectively the spin-connection and vielbein of the unique hyperbolic metric on $M$. They are both locally $so(3)$-valued 1-forms and the complex combinations $\omega \pm i e$ form $SL(2,\mathbb{C})$ flat connections on $M$. $\rho_N$ is the $N$-dimensional irreducible representation of $sl(2,\mathbb{C}) = su(2)_{\mathbb{C}}$, and obviously $\rho_N \cdot (\omega \pm i e)$ are then also irreducible $SL(N,\mathbb{C})$ flat connections. A crucial property of these two flat connections is that they take the minimum (maximum) value of $\textrm{Im}[S_0]$ among all $SL(N,\mathbb{C})$ flat connections. Namely, \begin{align} \textrm{Im}[S_0^{\overline{\rm geom}}]<\textrm{Im}[S_0^\alpha]<\textrm{Im}[S_0^{\rm geom}] \;, \label{Bounds on Im[S0]} \end{align} for any flat connection $\mathcal{A}^\alpha$ which is neither $\mathcal{A}^{\overline{\rm geom}}$ nor $\mathcal{A}^{\rm geom}$. Combined with the 3D-3D dictionary in Table~\ref{3d/3d correspondence for twisted ptns}, this implies \begin{align} |\mathcal{F}^{\rm geom}|<|\mathcal{F}^{\alpha}|<|\mathcal{F}^{\overline{\rm geom}}| \;, \label{Bounds on F} \end{align} for any Bethe vacuum $\alpha$ which is neither $(\overline{\rm geom})$ nor $({\rm geom})$.
Classical actions $S_0^{\alpha}$ for the two connections above can be computed as follows \begin{align} \begin{split} &\textrm{Im}[S_0^{\rm geom}] = -\frac{1}2 \textrm{Im} [CS (\mathcal{A}_{N}^{\rm geom})] = -\frac{1}2 \textrm{Im} [CS (\rho_N \cdot (\omega + i e))] \\ & =-\frac{1}2 \frac{\textrm{Tr}[\rho_N \cdot (T^a) \rho_N \cdot (T^b)]}{\textrm{Tr}[T^a T^b]} \textrm{Im} [CS (\omega + ie)] \\ & = -\frac{1}2 \frac{N^3 -N}{6} \textrm{Im} [CS (\omega + i e)] =\frac{N^3-N}{6} \textrm{vol}(M)\;, \; \;\textrm{and} \\ &\textrm{Im}[S_0^{\overline{\rm geom}}] = - \textrm{Im}[S_0^{\rm geom}]=-\frac{N^3-N}{6} \textrm{vol}(M)\;. \label{large N classical} \end{split} \end{align} In the second line, $T^{a} \;(a=1,2,3)$ are Pauli matrices and $\rho_N \cdot (T^a)$ are generators in the $N$-dimensional irreducible representation. From a simple group theoretical fact \begin{align} \frac{\textrm{Tr}[\rho_N \cdot (T^a) \rho_N \cdot (T^b)]}{\textrm{Tr}[T^a T^b]} = \frac{N^3 -N}6 \;, \end{align} the expected $N^3$-scaling of the $T_{N}[M]$ theory follows. In the third line of \eqref{large N classical}, we use the fact that the imaginary part of the Chern-Simons functional of $\mathcal{A} = \omega + i e$ is equal to the Einstein-Hilbert action with unit negative cosmological constant up to an overall numerical factor \cite{witten19882+}. The action for the unique hyperbolic metric is minus twice the hyperbolic volume of the 3-manifold. The large $N$ asymptotic behavior of the 1-loop coefficients, $S_1^{{\rm geom}}$ and $S_1^{\overline{\rm geom}}$, can be analyzed using the following mathematical theorem \cite{muller2012asymptotics}, \begin{align} \begin{split} &\log |\textrm{Tor}_{R=\rho_{2m+1}} [\mathcal{A}^{\rm geom}_{N=2},M]| \\ &\xrightarrow{\textrm{ as $m$ goes to $\infty$ } }-\frac{1}\pi m^2 \textrm{vol}(M) +o (m)\;. \label{A math thm} \end{split} \end{align} Here $\rho_{2m+1}$ is the $(2m+1)$-dimensional irreducible representation of $sl(2,\mathbb{C}) = su(2)_{\mathbb{C}}$.
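The group theoretical factor above can be verified directly: the image of the Pauli matrix $\sigma_3$ in the $N$-dimensional irreducible representation has eigenvalues $2m$ with $m=-j,\ldots,j$ and $j=(N-1)/2$, while $\textrm{Tr}[\sigma_3^2]=2$. A quick exact check of $(N^3-N)/6$:

```python
# Exact check of Tr[rho_N(T^a) rho_N(T^b)] / Tr[T^a T^b] = (N^3 - N)/6,
# using the diagonal a = b = 3 component: rho_N(sigma_3) has eigenvalues
# 2m for m = -j, ..., j with j = (N-1)/2, and Tr[sigma_3^2] = 2.
from fractions import Fraction

def embedding_index(N):
    two_j = N - 1
    eigs = range(-two_j, two_j + 1, 2)                # eigenvalues 2m of rho_N(sigma_3)
    return Fraction(sum(e * e for e in eigs), 2)      # Tr[rho(s3)^2] / Tr[s3^2]

for N in range(2, 12):
    assert embedding_index(N) == Fraction(N**3 - N, 6)
```

For $N=2$ this gives $1$ (the defining representation), and the cubic growth in $N$ is the origin of the expected $N^3$-scaling.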
Combining the theorem with the following branching rule, \begin{align} \begin{split} &\big{(}\textrm{adjoint of $sl(N, \mathbb{C})$} \big{)} = \bigoplus_{m=1}^{N-1} \rho_{2m+1} \textrm{ of $sl(2,\mathbb{C})$}\;, \\ & \textrm{when the $sl(2,\mathbb{C})$ is embedded into $sl(N, \mathbb{C})$ via $\rho_N$,} \end{split} \end{align} we have the following large $N$ behavior of the 1-loop coefficients \begin{align} \begin{split} &\textrm{Re}[S_1^{\rm geom}] =\frac{1}2 \log |\textrm{Tor}_{R={\rm adjoint}}[\mathcal{A}_N^{\rm geom}, M] | \\ &= \frac{1}2 \sum_{m=1}^{N-1} \log |\textrm{Tor}_{R=\rho_{2m+1}} [\mathcal{A}^{\rm geom}_{N=2},M]| \\ &=- \frac{1}{2\pi} \textrm{vol}(M)\sum_{m=1}^{N-1} m^2 + o(N^2) = -\frac{N^3+o(N^2)}{6 \pi} \textrm{vol}(M)\;,\;\; \\ &\textrm{and} \\ &\textrm{Re}[S_1^{\overline{\rm geom}}] = \textrm{Re}[S_1^{ \rm geom}]=-\frac{N^3+o(N^2)}{6 \pi} \textrm{vol}(M)\;. \label{large N 1-loop} \end{split} \end{align} Combining the 3D-3D dictionaries in Table~\ref{3d/3d correspondence for twisted ptns} with the large $N$ analysis in \eqref{large N classical} and \eqref{large N 1-loop}, we finally obtain the following universal large $N$ behavior of the twisted ptns \eqref{Twisted ptns from sum} \begin{align} \begin{split} &F^{\rm geom}_{g,p} :=-\log |Z^{\rm geom}_{g,p}(T_N[M])| \\ &= \frac{ 4(1-g) N^3+p N^3 }{12\pi} \textrm{vol}(M)+ o(N^2) \;, \\ &F^{\overline{\rm geom}}_{g,p}:=-\log |Z^{\overline{\rm geom}}_{g,p}(T_N[M])| \\ &= \frac{ 4(1-g) N^3 -pN^3 }{12\pi} \textrm{vol}(M)+ o(N^2) \;. \end{split} \end{align} They nicely match the on-shell actions $I^{Bolt_\pm}_{g,p}$ of the $Bolt_\pm $ solutions in \cite{Toldo:2017qsh}. The large $N$ computations are summarized in Table~\ref{large N computations}.
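The assembly of these large $N$ expressions can be checked numerically. Using the dictionary, $-\log|Z^{\alpha}_{g,p}| = 2(g-1)\,\textrm{Re}[S_1^{\alpha}] + \frac{p}{2\pi}\textrm{Im}[S_0^{\alpha}]$, and on the gravity side the $Bolt_-$ on-shell action is $\pi(4(1-g)+p)/(8G_4)$ with $G_4$ as in \eqref{4d G}. A sketch with placeholder values of $N$ and $\textrm{vol}(M)$ (not tied to any specific manifold):

```python
# Assemble -log|Z^{geom}_{g,p}| = 2(g-1) Re[S_1] + (p/2pi) Im[S_0] from the
# large-N invariants and compare with the Bolt_- on-shell action
# I = pi (4(1-g)+p)/(8 G_4), where G_4 = 3 pi^2 / (2 N^3 vol(M)).
import math

N, vol = 500, 2.5      # placeholder values, purely illustrative
g, p = 0, 2

im_S0 = (N**3 - N) / 6 * vol           # geom branch, eq. (large N classical)
re_S1 = -N**3 / (6 * math.pi) * vol    # leading term, eq. (large N 1-loop)
F_gp = 2 * (g - 1) * re_S1 + p / (2 * math.pi) * im_S0

G4 = 3 * math.pi**2 / (2 * N**3 * vol)
I_bolt_minus = math.pi * (4 * (1 - g) + p) / (8 * G4)

# Agreement up to the subleading O(N) pieces dropped in the large-N limit:
assert abs(F_gp / I_bolt_minus - 1) < 1e-4
```

The residual mismatch is the $-N$ piece of $(N^3-N)/6$, of relative size $\sim 1/N^2$, consistent with the stated $o(N^2)$ accuracy.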
\begin{table}[h] \begin{center} \begin{tabular}{|c|c|} \hline M-theory on $AdS_4 \times M \times \tilde{S}^4$ & $SL(N, \mathbb{C})$ CS theory on $M$ \\ \hline \hline $Bolt_+$ solution\; & Flat connection $ \mathcal{A}_N^{\overline{\rm geom}}$ \\ $I^{Bolt_+}_{g,p} = \frac{\pi(4(1-g)-p)}{8G_4} $ & $F^{\overline{\rm geom}}_{g,p} = \frac{ 4(1-g) N^3 -pN^3 }{12\pi} \textrm{vol}(M) $ \\ \hline $Bolt_-$ solution\; & Flat connection $ \mathcal{A}_N^{\rm geom}$ \\ $I^{Bolt_-}_{g,p} = \frac{\pi(4(1-g)+p)}{8G_4} $ & $F^{\rm geom}_{g,p} = \frac{ 4(1-g) N^3 +pN^3 }{12\pi} \textrm{vol}(M) $ \\ \hline \end{tabular} \caption { The M-theory background is holographically dual to the 3D $T_{N}[M]$ theory while the complex Chern-Simons theory is 3D-3D dual to the $T_{N}[M]$ theory. The 4d Newton constant $G_4$ is given in \eqref{4d G}. } \label{large N computations} \end{center} \end{table} \section{Comparison with large $N$ $S^3_b$-ptn} The prescription to compute the twisted partition function $Z_{g,p}$ for $T_{N}[M]$ through the 3D-3D correspondence naturally shares several ingredients with the corresponding computation of a squashed 3-sphere partition function $Z_{b} (T_N [M])$ studied in \cite{Gang:2014qla,Gang:2014ema}. The squashed 3-sphere $S^3_b$ of our interest is a supersymmetric curved background introduced in \cite{Hama:2011ea}, defined as \begin{align} S^3_b = \big{\{} (z,w) \in \mathbb{C}^2 \;:\; b^2 |z|^2 +\frac{1}{b^2}|w|^2=1 \big{\}}\;. \end{align} Setting $b=1$ gives the usual round 3-sphere. According to the 3D-3D relation \cite{Terashima:2011qi,Dimofte:2011ju}, the extreme squashing limit $b \in \mathbb{R}\rightarrow 0$ corresponds to a weakly coupled limit of the Chern-Simons theory.
More concretely, $Z_b$ is determined by the perturbative invariants $S_n^{\overline{\rm geom}}$ in \eqref{S0,S1 and S2} around the flat connection $\mathcal{A}_N^{\overline{\rm geom}}$ \cite{Bae:2016jpi,Mikhaylov:2017ngi,Gang:2017hbs}, \begin{align} \begin{split} &F_b (T_{N}[M]):=-\log |Z_{b}(T_N[M])| \\ &\xrightarrow{\; \textrm{as $\hbar:= 2\pi i b^2$ goes to $0$}\;} \\ &-\textrm{Re}\bigg{[}\left(\frac{1}{\hbar} S_0^{\overline{\rm geom}} + \ldots+ \hbar^{n-1} S_n^{\overline{\rm geom}} +\ldots \right)\bigg{]} \;. \end{split} \end{align} Combined with the large $N$ behaviors of $S_n^{\overline{\rm geom}}$ for $n=0,1$ in eqs. \eqref{large N classical} and \eqref{large N 1-loop}, we see that the asymptotic expansion is compatible with the free energy $I_{b}^{\rm gravity}$ on the gravity dual side \cite{Gang:2014qla}, \begin{align} \begin{split} &I_{b}^{\rm gravity} = \frac{\pi (b+b^{-1})^2}{8 G_4} = \frac{N^3}{12\pi }(b+b^{-1})^2 \textrm{vol}(M) \\ & \xrightarrow{\; \textrm{as $\hbar:= 2\pi i b^2$ goes to $0$}\;} \\ &\frac{i N^3 \textrm{vol}(M)}{6\hbar} + \frac{N^3 \textrm{vol}(M)}{6\pi} - \frac{i N^3 \textrm{vol}(M)}{24 \pi^2} \hbar \;, \label{Ib-gravity} \end{split} \end{align} up to $o(\hbar^0)$. Motivated by the comparison, it was further conjectured that \cite{Gang:2014qla} \begin{align} \lim_{N \rightarrow \infty} \frac{1}{N^3} S_2^{\overline{\rm geom}} = \frac{i\, \textrm{vol}(M)}{24 \pi^2}\;, \quad \lim_{N \rightarrow \infty} \frac{1}{N^3} S_{n\geq 3}^{\overline{\rm geom}} = 0 \; . \end{align} This conjecture was checked numerically for a number of concrete examples.
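The small-$\hbar$ expansion in \eqref{Ib-gravity} is in fact an exact three-term identity once $b^2=\hbar/(2\pi i)$ is substituted into $(b+b^{-1})^2 = b^2+2+b^{-2}$. A quick numerical confirmation, with placeholder values of $N$, $\textrm{vol}(M)$ and $b$:

```python
# Check that pi (b + 1/b)^2 / (8 G_4) = (N^3 vol / 12 pi)(b^2 + 2 + b^-2)
# equals the three displayed terms of eq. (Ib-gravity) under hbar = 2 pi i b^2.
import math

N, vol, b = 11, 1.7, 0.23          # placeholder values, purely illustrative
pref = N**3 * vol / (12 * math.pi)

lhs = pref * (b + 1 / b)**2        # I_b^gravity in closed form
hbar = 2 * math.pi * 1j * b**2
rhs = (1j * N**3 * vol / (6 * hbar)            # 1/hbar term  (tree level, S_0)
       + N**3 * vol / (6 * math.pi)            # hbar^0 term  (one loop, S_1)
       - 1j * N**3 * vol * hbar / (24 * math.pi**2))  # hbar^1 term (S_2)

assert abs(lhs - rhs) < 1e-9 * abs(lhs)
```

Each displayed term is real after the substitution, matching $-\textrm{Re}[\hbar^{n-1}S_n^{\overline{\rm geom}}]$ term by term for $n=0,1,2$.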
Now we compare the two large $N$ analyses and see \begin{align} \begin{split} &\lim_{N\rightarrow \infty}\frac{1}{N^3}F_b (T_{N}[M]) \\ &=\lim_{N\rightarrow \infty} \frac{1}{N^3} \left(- \frac{ \textrm{Im}[S^{\overline{\rm geom}}_0]}{2 \pi b^2}-\textrm{Re}[S^{\overline{\rm geom}}_1]+(2\pi b^2) \textrm{Im}[S^{\overline{\rm geom}}_2] \right)\;, \\ & \textrm{and} \\ &\lim_{N\rightarrow \infty}\frac{1}{N^3}F_{g,p}^{\overline{\rm geom}}(T_N[M]) \\ & = \lim_{N\rightarrow \infty} \frac{1}{N^3} \left(2(g-1) \textrm{Re}[S_1^{\overline{\rm geom}}]) +\frac{p}{2\pi} \textrm{Im}[S_0^{\overline{\rm geom}}]\right). \end{split} \end{align} From the comparison, we obtain a general relation of the following form between the twisted and the squashed $S^3$ partition functions in the large $N$ limit \begin{align} \lim_{N\rightarrow \infty}\frac{F_{g,p}^{\overline{\rm geom}}}{F_{b=1}} = (1-g) -\frac{p}4\;. \label{universal large N relation} \end{align} This relation holds for every 3D $T_{N}[M]$ theory. To arrive at this conclusion, we use the following universal relation between perturbative invariants \begin{align} \lim_{N\rightarrow \infty}\frac{\pi \textrm{Re}[S_1^{\overline{\rm geom}}]}{\textrm{Im}[S_0^{\overline{\rm geom}}]} = \lim_{N\rightarrow \infty}\frac{-4\pi^2 \textrm{Im}[S_2^{\overline{\rm geom}}]}{\textrm{Im}[S_0^{\overline{\rm geom}}]}=1\;, \end{align} which follows from the fact \cite{Martelli:2011fu} \begin{align} \begin{split} &\lim_{N\rightarrow \infty} \frac{F_{b}}{F_{b=1}} \\ &= \lim_{N\rightarrow \infty} \frac{-\frac{1}{2\pi}\textrm{Im}[S^{\overline{\rm geom}}_0]b^{-2}-\textrm{Re}[S^{\overline{\rm geom}}_1]+(2\pi)\textrm{Im}[S^{\overline{\rm geom}}_2]b^2}{-\frac{1}{2\pi}\textrm{Im}[S^{\overline{\rm geom}}_0]-\textrm{Re}[S^{\overline{\rm geom}}_1]+(2\pi)\textrm{Im}[S^{\overline{\rm geom}}_2]} \\ &=\frac{1}4 (b^{-2}+ 2+b^2)\;.
\nonumber \end{split} \end{align} The same universal relation \eqref{universal large N relation} for $p=0$ was observed in \cite{Hosseini:2016tor,Bobev:2017uzs} for a different class of 3D $\mathcal{N}=2$ SCFTs. \section{Conclusion} In this letter, we probe a large class of AdS$_4$/CFT$_3$ dualities associated with M5-branes wrapped on 3-manifolds by computing large $N$ twisted ptns. For M2 and D2-branes and their Chern-Simons-matter theories, the large $N$ computations have been performed already in \cite{Benini:2015eyy,Benini:2016rke,Hosseini:2016tor,Hosseini:2016ume,Cabo-Bizet:2017jsl,Azzurli:2017kxo,Hosseini:2017fjo,Benini:2017oxt,Toldo:2017qsh,Bobev:2017uzs}. A nice feature of our analysis is that it maps the large $N$ analysis to a mathematical problem, via the 3D-3D correspondence, which can be solved using known mathematical results such as \eqref{Bounds on Im[S0]}, \eqref{large N classical} and \eqref{large N 1-loop}. The results hold for any closed hyperbolic 3-manifold $M$ and one does not need to perform the large $N$ analysis for each individual AdS$_4$/CFT$_3$ model associated to each $M$. We can also give a simple explanation of why there is a universal relation (see \eqref{universal large N relation}) between the twisted ptns and the $S^3$-ptn in the large $N$ limit. Both types of ptns are related to the same perturbative invariants of a complex CS theory through the 3D-3D correspondence in the large $N$ limit. We hope that the improvement made in our analysis may provide a better way of understanding the sub-leading corrections to the large $N$ twisted ptns, which might be related to quantum corrections to the Bekenstein-Hawking entropy. \section{Acknowledgments} We would like to thank Seok Kim, Sunjin Choi, Chiung Hwang, Jaewon Song, Masahito Yamazaki and Victor Mikhaylov for interesting discussions on related works. This work was initiated while the authors were visiting APCTP, Pohang for a workshop ``Strings, Branes and Gauge theories", 16-25 July 2018.
We thank APCTP for hospitality. The work of DG was supported by Samsung Science and Technology Foundation under Project Number SSTBA1402-08. The research of NK was supported by NRF grant 2015R1D1A1A09059301. \bibliographystyle{ytphys}
\section{Presentation of the Model\label{Presentation of the Model}} The most general form of a translation-invariant model for fermions with two-body interactions in a cubic box $\Lambda _{L}\doteq \{\mathbb{Z}\cap \left[ -L,L\right] \}^{d}$ ($d$-dimensional crystal) of volume $|\Lambda _{L}|$, $L\in \mathbb{N}_{0}$, is given in momentum space by
\begin{equation}
\mathrm{H}_{L}^{Full}=\underset{k\in \Lambda _{L}^{\ast },\ \mathrm{s}\in \mathrm{S}}{\sum }\left( \varepsilon _{k}-\mu \right) \tilde{a}_{k,\mathrm{s}}^{\ast }\tilde{a}_{k,\mathrm{s}}+\frac{1}{\left\vert \Lambda _{L}\right\vert }\underset{\mathrm{s}_{1},\mathrm{s}_{2},\mathrm{s}_{3},\mathrm{s}_{4}\in \mathrm{S}}{\underset{k,k^{\prime },q\in \Lambda _{L}^{\ast }}{\sum }}g_{\mathrm{s}_{1},\mathrm{s}_{2},\mathrm{s}_{3},\mathrm{s}_{4}}\left( k,k^{\prime },q\right) \tilde{a}_{k+q,\mathrm{s}_{1}}^{\ast }\tilde{a}_{k^{\prime }-q,\mathrm{s}_{2}}^{\ast }\tilde{a}_{k^{\prime },\mathrm{s}_{3}}\tilde{a}_{k,\mathrm{s}_{4}}\ . \label{hamil general}
\end{equation}
See \cite[Eq. (2.1)]{Metzner}. Here, $\mathrm{S}$ is some finite (spin) set representing the internal degrees of freedom of the quantum particles and $\Lambda _{L}^{\ast }$ is the reciprocal lattice of quasi-momenta (periodic boundary conditions) associated with $\Lambda _{L}$. The operator $\tilde{a}_{k,\mathrm{s}}^{\ast }$ (respectively $\tilde{a}_{k,\mathrm{s}}$) creates (respectively annihilates) a fermion with spin $\mathrm{s}\in \mathrm{S}$ and (quasi-) momentum $k\in \Lambda _{L}^{\ast }$, the function $\varepsilon _{k}$ represents the kinetic energy of a fermion with (quasi-) momentum $k$, and the real number $\mu $ is the chemical potential. The last term of (\ref{hamil general}) corresponds to a translation-invariant two-body interaction written in momentum space. One important example of a fermionic system with long-range interactions is given in the scope of the celebrated BCS theory, proposed in 1957 to explain conventional type I superconductors.
The lattice version of this theory is obtained from (\ref{hamil general}) by taking $\mathrm{S}\doteq \{\uparrow ,\downarrow \}$ and imposing
\begin{equation*}
g_{\mathrm{s}_{1},\mathrm{s}_{2},\mathrm{s}_{3},\mathrm{s}_{4}}\left( k,k^{\prime },q\right) =\delta _{k,-k^{\prime }}\delta _{\mathrm{s}_{1},\uparrow }\delta _{\mathrm{s}_{2},\downarrow }\delta _{\mathrm{s}_{3},\downarrow }\delta _{\mathrm{s}_{4},\uparrow }f\left( k,-k,q\right)
\end{equation*}
for some function $f$: It corresponds to the so-called (reduced) BCS Hamiltonian
\begin{equation}
\mathrm{H}_{L}^{BCS}\doteq \sum\limits_{k\in \Lambda _{L}^{\ast }}\left( \varepsilon _{k}-\mu \right) \left( \tilde{a}_{k,\uparrow }^{\ast }\tilde{a}_{k,\uparrow }+\tilde{a}_{k,\downarrow }^{\ast }\tilde{a}_{k,\downarrow }\right) -\frac{1}{\left\vert \Lambda _{L}\right\vert }\sum_{k,q\in \Lambda _{L}^{\ast }}\gamma _{k,q}\tilde{a}_{k,\uparrow }^{\ast }\tilde{a}_{-k,\downarrow }^{\ast }\tilde{a}_{-q,\downarrow }\tilde{a}_{q,\uparrow }\ , \label{BCS Hamilt}
\end{equation}
where $\gamma _{k,q}$ is a positive\footnote{The positivity of $\gamma _{k,q}$ imposes constraints on the choice of the function $f$.} function. Because of the term $\delta _{k,-k^{\prime }}$, the interaction of this model has a long-range character in position space. The simple choice $\gamma _{k,q}=\gamma >0$ in (\ref{BCS Hamilt}) is still physically very interesting since, even when $\varepsilon _{k}=0$, the BCS Hamiltonian qualitatively displays most of the basic properties of real conventional type I superconductors. See, e.g., \cite[Chapter VII, Section 4]{Thou}. The case $\varepsilon _{k}=0$ is known as the strong-coupling limit of the BCS model. The dynamical properties of the BCS Hamiltonian $\mathrm{H}_{L}^{BCS}$ with $\gamma _{k,q}=\gamma >0$ can be \emph{explicitly} computed from results of \cite{BruPedra-MFII,BruPedra-MFIII}, but we prefer here to consider another BCS-type model including the Hubbard interaction, this being a much richer example.
An important physical fact not taken into account in the BCS theory is the Coulomb interaction between electrons or holes, which can imply strong correlations, as in cuprates with the universally observed Mott transition at zero doping. This problem was of course already addressed in theoretical physics right after the emergence of the Fr\"{o}hlich model and the BCS theory, see, e.g., \cite{Bogoliubov-tolman shirkov}. We present below a model, named here the strong-coupling BCS-Hubbard Hamiltonian, which is rigorously studied at equilibrium in \cite{BruPedra1} in order to understand the possible thermodynamic impact of the Coulomb repulsion on ($s$-wave) superconductivity. An interesting mathematical outcome of \cite{BruPedra1} on the strong-coupling BCS-Hubbard Hamiltonian is the existence of a superconductor-Mott insulator phase transition, as in cuprates, which must be doped to become superconductors. The results of \cite{BruPedra1} are based on an \emph{exact} study of the phase diagram of the strong-coupling BCS-Hubbard model defined, in a cubic box $\Lambda _{L}\doteq \{\mathbb{Z}\cap \left[ -L,L\right] \}^{d}$ ($d\in \mathbb{N}$) of volume $|\Lambda _{L}|$ for $L\in \mathbb{N}_{0}$, by the Hamiltonian
\begin{equation}
\mathrm{H}_{L}\doteq \sum_{x\in \Lambda _{L}}\left( 2\lambda n_{x,\uparrow }n_{x,\downarrow }-\mu \left( n_{x,\uparrow }+n_{x,\downarrow }\right) -h\left( n_{x,\uparrow }-n_{x,\downarrow }\right) \right) -\frac{\gamma }{\left\vert \Lambda _{L}\right\vert }\sum_{x,y\in \Lambda _{L}}a_{x,\uparrow }^{\ast }a_{x,\downarrow }^{\ast }a_{y,\downarrow }a_{y,\uparrow } \label{strong coupling ham}
\end{equation}
for real parameters $\mu ,h\in \mathbb{R}$ and $\lambda ,\gamma \geq 0$. The operator $a_{x,\mathrm{s}}^{\ast }$ (resp. $a_{x,\mathrm{s}}$) creates (resp.
annihilates) a fermion with spin $\mathrm{s}\in \{\uparrow ,\downarrow \}$ at lattice position $x\in \mathbb{Z}^{d}$, $d=1,2,3,\ldots $, whereas $n_{x,\mathrm{s}}\doteq a_{x,\mathrm{s}}^{\ast }a_{x,\mathrm{s}}$ is the particle number operator at position $x$ and spin $\mathrm{s}$. They are linear operators acting on the fermion Fock space $\mathcal{F}_{\Lambda _{L}}$, where
\begin{equation}
\mathcal{F}_{\Lambda }\doteq \bigwedge \mathbb{C}^{\Lambda \times \{\uparrow ,\downarrow \}}\equiv \mathbb{C}^{2^{\Lambda \times \{\uparrow ,\downarrow \}}} \label{fock}
\end{equation}
for any $\Lambda \subseteq \mathbb{Z}^{d}$ and $d\in \mathbb{N}$. The first term on the right-hand side of (\ref{strong coupling ham}) represents the (screened) Coulomb repulsion, as in the celebrated Hubbard model. The second term corresponds to the strong-coupling limit of the kinetic energy, also called the \textquotedblleft atomic limit\textquotedblright\ in the context of the Hubbard model. The third term is the interaction between the spins and the external magnetic field $h$. The last term is the BCS interaction written in the $x$-space, since
\begin{equation}
\frac{\gamma }{\left\vert \Lambda _{L}\right\vert }\sum_{x,y\in \Lambda _{L}}a_{x,\uparrow }^{\ast }a_{x,\downarrow }^{\ast }a_{y,\downarrow }a_{y,\uparrow }=\frac{\gamma }{\left\vert \Lambda _{L}\right\vert }\sum_{k,q\in \Lambda _{L}^{\ast }}\tilde{a}_{k,\uparrow }^{\ast }\tilde{a}_{-k,\downarrow }^{\ast }\tilde{a}_{q,\downarrow }\tilde{a}_{-q,\uparrow }\ . \label{BCS interactions}
\end{equation}
See (\ref{BCS Hamilt}) with $\gamma _{k,q}=\gamma >0$. This homogeneous BCS interaction should be seen as a long-range effective interaction, the precise mediators of which are not relevant, i.e., they could be phonons, as in conventional type I superconductors, or anything else.
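The identity between the position- and momentum-space forms of the homogeneous BCS interaction can be checked numerically on a small lattice. The following Python/NumPy sketch (not part of the original analysis; the three-site chain and the Jordan-Wigner encoding are our own illustrative choices) builds the fermionic operators explicitly and compares the two sums.

```python
import numpy as np

# Jordan-Wigner representation of M fermionic modes: a_j carries a string of
# Pauli-Z operators on the modes preceding it, enforcing anticommutation.
def jw_annihilators(M):
    Z = np.diag([1.0, -1.0]).astype(complex)
    s = np.array([[0, 1], [0, 0]], dtype=complex)   # |1> -> |0> on one mode
    I2 = np.eye(2, dtype=complex)
    ops = []
    for j in range(M):
        factors = [Z] * j + [s] + [I2] * (M - j - 1)
        op = factors[0]
        for f in factors[1:]:
            op = np.kron(op, f)
        ops.append(op)
    return ops

Ns = 3                            # three lattice sites, two spins per site
modes = jw_annihilators(2 * Ns)   # mode index: 2*x + spin (0 = up, 1 = down)
a = {(x, s): modes[2 * x + s] for x in range(Ns) for s in (0, 1)}
dag = lambda A: A.conj().T

# Discrete Fourier transform to momentum modes: a~_{k,s} = N^{-1/2} sum_x e^{-ikx} a_{x,s}
ks = 2 * np.pi * np.arange(Ns) / Ns
at = {(n, s): sum(np.exp(-1j * ks[n] * x) * a[(x, s)] for x in range(Ns)) / np.sqrt(Ns)
      for n in range(Ns) for s in (0, 1)}

# Position-space BCS interaction: sum_{x,y} a*_{x,up} a*_{x,dn} a_{y,dn} a_{y,up}
X = sum(dag(a[(x, 0)]) @ dag(a[(x, 1)]) @ a[(y, 1)] @ a[(y, 0)]
        for x in range(Ns) for y in range(Ns))

# Momentum-space form: sum_{k,q} a~*_{k,up} a~*_{-k,dn} a~_{q,dn} a~_{-q,up}
K = sum(dag(at[(n, 0)]) @ dag(at[((-n) % Ns, 1)]) @ at[(m, 1)] @ at[((-m) % Ns, 0)]
        for n in range(Ns) for m in range(Ns))

max_diff = np.max(np.abs(X - K))   # should vanish up to rounding
```

The identity holds because summing $\tilde{a}_{k,\uparrow }^{\ast }\tilde{a}_{-k,\downarrow }^{\ast }$ over $k$ collapses, by the discrete Fourier orthogonality, to the diagonal sum $\sum_{x}a_{x,\uparrow }^{\ast }a_{x,\downarrow }^{\ast }$, and similarly for the annihilation part.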
\section{Approximating Hamiltonians} The thermodynamic impact of the Coulomb repulsion on $s$-wave superconductors is analyzed in \cite{BruPedra1} via a rigorous study of equilibrium and ground states of the strong-coupling BCS-Hubbard Hamiltonian: A Hamiltonian like $\mathrm{H}_{L}$ defines, in the thermodynamic limit $L\rightarrow \infty $, a free-energy density functional on a suitable set of states of the CAR algebra of the lattice $\mathbb{Z}^{d}$. See \cite[Section 6.2]{BruPedra1} for more details. Minimizers $\omega $ of the free-energy density are called equilibrium states of the model and, for any $L\in \mathbb{N}_{0}$, the Gibbs state $\omega ^{(L)}$, defined on the algebra $\mathcal{B}(\mathcal{F}_{\Lambda _{L}})$ of linear operators acting on the fermion Fock space $\mathcal{F}_{\Lambda _{L}}$ (\ref{fock}) by
\begin{equation}
\omega ^{(L)}\left( A\right) \doteq \mathrm{Trace}_{\mathcal{F}_{\Lambda _{L}}}\left( A\frac{\mathrm{e}^{-\beta \mathrm{H}_{L}}}{\mathrm{Trace}_{\mathcal{F}_{\Lambda _{L}}}\left( \mathrm{e}^{-\beta \mathrm{H}_{L}}\right) }\right) \ ,\qquad A\in \mathcal{B}\left( \mathcal{F}_{\Lambda _{L}}\right) \ , \label{gibbs1}
\end{equation}
at inverse temperature $\beta >0$, converges\footnote{In the weak$^{\ast }$ topology.} in the thermodynamic limit $L\rightarrow \infty $ to a well-defined equilibrium state.
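In the smallest case, where the box reduces to a single site, the Gibbs state (\ref{gibbs1}) can be written down completely explicitly. The following Python/NumPy sketch (our own illustration, with arbitrary parameter values) builds the strong-coupling BCS-Hubbard Hamiltonian on one site, forms the Gibbs density matrix via the spectral theorem, and checks that it is indeed a state, i.e., positive and of unit trace.

```python
import numpy as np

# One lattice site: two fermionic modes (spin up, spin down), Fock dimension 4,
# in a Jordan-Wigner encoding.
Z = np.diag([1.0, -1.0]).astype(complex)
s = np.array([[0, 1], [0, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
a_up, a_dn = np.kron(s, I2), np.kron(Z, s)
dag = lambda A: A.conj().T
n_up, n_dn = dag(a_up) @ a_up, dag(a_dn) @ a_dn

# Strong-coupling BCS-Hubbard Hamiltonian on a single site (|Lambda_L| = 1);
# the parameter values below are illustrative only.
lam, mu, h, gamma, beta = 0.5, 1.0, 0.1, 2.0, 3.0
H = (2 * lam * n_up @ n_dn - mu * (n_up + n_dn) - h * (n_up - n_dn)
     - gamma * dag(a_up) @ dag(a_dn) @ a_dn @ a_up)

# Gibbs density matrix e^{-beta H} / Trace(e^{-beta H}), via diagonalization.
E, U = np.linalg.eigh(H)
w = np.exp(-beta * E)
rho = (U * (w / w.sum())) @ dag(U)

trace_rho = np.trace(rho).real
min_eig = np.linalg.eigvalsh(rho).min()
electron_density = np.trace(rho @ (n_up + n_dn)).real   # lies in [0, 2]
```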
The important point in such an analysis is the study of a variational problem over complex numbers: By the so-called approximating Hamiltonian method \cite{approx-hamil-method0,approx-hamil-method,approx-hamil-method2}, one uses an approximation of the Hamiltonian which, in the case of the strong-coupling BCS-Hubbard Hamiltonian, is equal to the $c$-dependent Hamiltonian
\begin{equation}
\mathrm{H}_{L}\left( c\right) \doteq \sum_{x\in \Lambda _{L}}\left( 2\lambda n_{x,\uparrow }n_{x,\downarrow }-\mu \left( n_{x,\uparrow }+n_{x,\downarrow }\right) -h\left( n_{x,\uparrow }-n_{x,\downarrow }\right) -\gamma \left( ca_{x,\uparrow }^{\ast }a_{x,\downarrow }^{\ast }+\bar{c}a_{x,\downarrow }a_{x,\uparrow }\right) \right) \label{Hamiltonian BCS-Hubbard approx}
\end{equation}
with $c\in \mathbb{C}$. The main advantage of using this $c$-dependent Hamiltonian, in comparison with $\mathrm{H}_{L}$, is the fact that it is a sum of shifts of the same on-site operator. For an appropriate choice of the (order) parameter $c\in \mathbb{C}$, it leads to the exact thermodynamics of the strong-coupling BCS-Hubbard model in the limit $L\rightarrow \infty $: At inverse temperature $\beta >0$,
\begin{equation}
\lim_{L\rightarrow \infty }\frac{1}{\beta \left\vert \Lambda _{L}\right\vert }\ln \mathrm{Trace}_{\mathcal{F}_{\Lambda _{L}}}\left( \mathrm{e}^{-\beta \mathrm{H}_{L}}\right) =\underset{c\in \mathbb{C}}{\sup }\left\{ -\gamma |c|^{2}+\lim_{L\rightarrow \infty }\left\{ \frac{1}{\beta \left\vert \Lambda _{L}\right\vert }\ln \mathrm{Trace}_{\mathcal{F}_{\Lambda _{L}}}\left( \mathrm{e}^{-\beta \mathrm{H}_{L}\left( c\right) }\right) \right\} \right\} \label{var pb}
\end{equation}
and the (exact) Gibbs state $\omega ^{(L)}$ converges\footnote{In the weak$^{\ast }$ topology.} to a convex combination\footnote{More precisely, it converges to the barycenter of a Choquet measure.} of the thermodynamic limit $L\rightarrow \infty $ of the (approximating) Gibbs states $\omega ^{(L,\mathfrak{d})}$ defined by
\begin{equation}
\omega ^{(L,\mathfrak{d})}\left( A\right) \doteq \mathrm{Trace}_{\mathcal{F}_{\Lambda _{L}}}\left( A\frac{\mathrm{e}^{-\beta \mathrm{H}_{L}\left( \mathfrak{d}\right) }}{\mathrm{Trace}_{\mathcal{F}_{\Lambda _{L}}}\left( \mathrm{e}^{-\beta \mathrm{H}_{L}\left( \mathfrak{d}\right) }\right) }\right) \ ,\qquad A\in \mathcal{B}\left( \mathcal{F}_{\Lambda _{L}}\right) \ , \label{gibbs2}
\end{equation}
the complex number $\mathfrak{d}\in \mathbb{C}$ being a solution to the variational problem (\ref{var pb}). Since $\gamma \geq 0$, this can heuristically be seen from the inequality
\begin{equation*}
\gamma \left\vert \Lambda _{L}\right\vert \left\vert c\right\vert ^{2}+\mathrm{H}_{L}\left( c\right) -\mathrm{H}_{L}=\gamma \left( \mathfrak{c}_{0}^{\ast }-\sqrt{\left\vert \Lambda _{L}\right\vert }\bar{c}\right) \left( \mathfrak{c}_{0}-\sqrt{\left\vert \Lambda _{L}\right\vert }c\right) \geq 0\ ,
\end{equation*}
where
\begin{equation}
\mathfrak{c}_{0}\doteq \frac{1}{\sqrt{\left\vert \Lambda _{L}\right\vert }}\sum_{x\in \Lambda _{L}}a_{x,\downarrow }a_{x,\uparrow } \label{dynamics approx00}
\end{equation}
(resp. $\mathfrak{c}_{0}^{\ast }$) annihilates (resp. creates) one Cooper pair within the condensate, i.e., in the zero mode for fermion pairs. This suggests the proven fact \cite[Theorem 3.1]{BruPedra1} that
\begin{equation}
\left\vert \mathfrak{d}\right\vert ^{2}=\lim_{L\rightarrow \infty }\frac{\omega ^{(L)}\left( \mathfrak{c}_{0}^{\ast }\mathfrak{c}_{0}\right) }{\left\vert \Lambda _{L}\right\vert } \label{dynamics approx0}
\end{equation}
for any\footnote{This implies that any solution $\mathfrak{d}$ to the variational problem (\ref{var pb}) must have the same absolute value.} $\mathfrak{d}\in \mathbb{C}$ solution to the variational problem (\ref{var pb}).
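On a single site ($\left\vert \Lambda _{L}\right\vert =1$, so that $\mathfrak{c}_{0}=a_{0,\downarrow }a_{0,\uparrow }$), this operator identity and the positivity of its right-hand side can be checked directly. The following Python/NumPy sketch is our own illustration; the parameter values and the order parameter $c$ are arbitrary.

```python
import numpy as np

# Single site (|Lambda_L| = 1): check the identity
#   gamma |c|^2 + H_L(c) - H_L = gamma (c0* - cbar)(c0 - c) >= 0.
Z = np.diag([1.0, -1.0]).astype(complex)
s = np.array([[0, 1], [0, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
a_up, a_dn = np.kron(s, I2), np.kron(Z, s)
dag = lambda A: A.conj().T
n_up, n_dn = dag(a_up) @ a_up, dag(a_dn) @ a_dn
I4 = np.eye(4, dtype=complex)

lam, mu, h, gamma = 0.7, 0.3, 0.2, 1.5       # illustrative values
c = 0.4 - 0.25j                              # arbitrary order parameter
c0 = a_dn @ a_up                             # Cooper-pair annihilator (zero mode)

H_onsite = 2 * lam * n_up @ n_dn - mu * (n_up + n_dn) - h * (n_up - n_dn)
H_full = H_onsite - gamma * dag(c0) @ c0                      # H_L for |Lambda_L| = 1
H_c = H_onsite - gamma * (c * dag(c0) + np.conj(c) * c0)      # H_L(c)

lhs = gamma * abs(c) ** 2 * I4 + H_c - H_full
rhs = gamma * dag(c0 - c * I4) @ (c0 - c * I4)

identity_err = np.max(np.abs(lhs - rhs))
min_eig = np.linalg.eigvalsh(lhs).min()      # >= 0, since lhs = gamma A* A
```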
The parameter $\left\vert \mathfrak{d}\right\vert ^{2}$ is the condensate density of Cooper pairs and so $\mathfrak{d}\neq 0$ corresponds to the existence of a superconducting phase, which is shown to exist for sufficiently large $\gamma \geq 0$. See also \cite[Figs. 1,2,3]{BruPedra1}. \section{Dynamical Problem in the Thermodynamic Limit} As usual, a Hamiltonian like the strong-coupling BCS-Hubbard model drives a dynamics in the Heisenberg picture of quantum mechanics: The corresponding time evolution is, for $L\in \mathbb{N}_{0}$, a continuous group $\{\tau _{t}^{(L)}\}_{t\in {\mathbb{R}}}$ of $\ast $-automorphisms of the algebra $\mathcal{B}(\mathcal{F}_{\Lambda _{L}})$ of linear operators acting on the fermion Fock space $\mathcal{F}_{\Lambda _{L}}$ (see (\ref{fock})), defined by
\begin{equation}
\tau _{t}^{(L)}(A)\doteq \mathrm{e}^{it\mathrm{H}_{L}}A\mathrm{e}^{-it\mathrm{H}_{L}}\ ,\qquad A\in \mathcal{B}(\mathcal{F}_{\Lambda _{L}}),\ t\in {\mathbb{R}}\ . \label{dynamics full}
\end{equation}
The generator of this time evolution is the linear operator $\delta _{L}$ defined on $\mathcal{B}(\mathcal{F}_{\Lambda _{L}})$ by
\begin{equation*}
\delta _{L}\left( A\right) \doteq i[\mathrm{H}_{L},A]\doteq i\left( \mathrm{H}_{L}A-A\mathrm{H}_{L}\right) \ ,\qquad A\in \mathcal{B}(\mathcal{F}_{\Lambda _{L}})\ .
\end{equation*}
If $\gamma =0$ then it is well known that the thermodynamic limit of $\{\tau _{t}^{(L)}\}_{t\in {\mathbb{R}}}$ exists as a strongly continuous group $\{\tau _{t}\}_{t\in {\mathbb{R}}}$ of $\ast $-automorphisms of the CAR algebra of the infinite lattice. If $\gamma >0$ then the situation is not that obvious.
A first guess is to approximate $\{\tau _{t}^{(L)}\}_{t\in {\mathbb{R}}}$ by $\{\tau _{t}^{(L,c)}\}_{t\in {\mathbb{R}}}$, where
\begin{equation}
\tau _{t}^{(L,c)}(A)\doteq \mathrm{e}^{it\mathrm{H}_{L}\left( c\right) }A\mathrm{e}^{-it\mathrm{H}_{L}\left( c\right) }\ ,\qquad A\in \mathcal{B}(\mathcal{F}_{\Lambda _{L}}),\ t\in {\mathbb{R}}\ , \label{dynamics approx}
\end{equation}
for any $L\in \mathbb{N}_{0}$ and some complex number $c\in \mathbb{C}$. In this case, the linear operator
\begin{equation}
\delta _{L,c}\left( A\right) \doteq i[\mathrm{H}_{L}\left( c\right) ,A]\ ,\qquad A\in \mathcal{B}(\mathcal{F}_{\Lambda _{L}})\ , \label{generator approx}
\end{equation}
is the generator of the dynamics $\{\tau _{t}^{(L,c)}\}_{t\in {\mathbb{R}}}$. A natural choice for $c\in \mathbb{C}$ would be a solution to the variational problem (\ref{var pb}), but what if the solution is not unique? As a matter of fact, as explained in \cite[Section 4.3]{BruPedra-MFII}, in the thermodynamic limit $L\rightarrow \infty $, the finite-volume dynamics $\{\tau _{t}^{(L)}\}_{t\in {\mathbb{R}}}$ does \emph{not} converge within the CAR $C^{\ast }$-algebra of the infinite lattice for $\gamma >0$, even if $\mathfrak{d}=0$ were the unique solution to the variational problem (\ref{var pb})! Observe, moreover, that the variational problem (\ref{var pb}) depends on the temperature, whereas the time evolution (\ref{dynamics full}) does not. The validity of the Bogoliubov approximation (\ref{dynamics approx}) with respect to the full dynamics (\ref{dynamics full}) was an open question that Thirring and Wehrl \cite{T1,T2} solved in 1967 for the special case $\mathrm{H}_{L}|_{\mu =\lambda =h=0}$, which is an exactly solvable permutation-invariant model for any $\gamma \in \mathbb{R}$.
An attempt to generalize Thirring and Wehrl's results to a general class of fermionic models, including the BCS theory, was made in 1978 \cite{Hemmen78}, but at the cost of technical assumptions that are difficult to verify in practice. This research direction was strongly developed by many authors until 1992, see \cite{Bona75,Sewell83,Rieckers84,Morchio87,Bona87,Duffner-Rieckers88,Bona88,Bona89,Bona90,Unnerstall90,Unnerstall90b,Unnerstall90-open,Bona91,Duffield1991,BagarelloMorchio92,Duffield-Werner1,Duffield-Werner2,Duffield-Werner3,Duffield-Werner4,Duffield-Werner5}. All these papers study dynamical properties of \emph{permutation-invariant} quantum-spin systems with mean-field interactions. Our results \cite{BruPedra-MFII,BruPedra-MFIII} represent a significant generalization of such previous results to possibly non-permutation-invariant lattice-fermion or quantum-spin systems. To understand what happens with the infinite-volume dynamics, we now come back to our pedagogical example, that is, the strong-coupling BCS-Hubbard model. \section{Self-Consistency Equations} Instead of considering the Heisenberg picture, let us now consider the Schr\"{o}dinger picture of quantum mechanics. In this case, recall that, at fixed $L\in \mathbb{N}_{0}$, a finite-volume state $\rho ^{(L)}$ is defined by
\begin{equation*}
\rho ^{(L)}\left( A\right) \doteq \mathrm{Trace}_{\mathcal{F}_{\Lambda _{L}}}\left( \mathrm{d}^{(L)}A\right) \ ,\qquad A\in \mathcal{B}(\mathcal{F}_{\Lambda _{L}})\ ,
\end{equation*}
for a uniquely defined positive operator $\mathrm{d}^{(L)}\in \mathcal{B}(\mathcal{F}_{\Lambda _{L}})$ satisfying $\mathrm{Trace}_{\mathcal{F}_{\Lambda _{L}}}(\mathrm{d}^{(L)})=1$, named the density matrix of $\rho ^{(L)}$. Compare with (\ref{gibbs1}) and (\ref{gibbs2}).
At $L\in \mathbb{N}_{0}$, the time evolution of any finite-volume state is
\begin{equation}
\rho _{t}^{(L)}\doteq \rho ^{(L)}\circ \tau _{t}^{(L)}\ ,\qquad t\in {\mathbb{R}}\ , \label{rho}
\end{equation}
which corresponds to a time-dependent density matrix equal to $\mathrm{d}_{t}^{(L)}=\tau _{-t}^{(L)}(\mathrm{d}^{(L)})$. The thermodynamic limit of (\ref{rho}) for periodic initial states can be explicitly computed, as explained in \cite[Section 4.3.2]{Bru-pedra-MF-IV}. It refers to a \emph{non-linear} state-dependent dynamics related to \emph{self-consistency}: By (\ref{fock}), $\mathcal{B}\left( \mathcal{F}_{\{0\}}\right) $ can be identified with the set $\mathrm{Mat}(4,\mathbb{C})$ of complex $4\times 4$ matrices, in some orthonormal basis\footnote{For instance, $\left( 1,0,0,0\right) $ is the vacuum; $\left( 0,1,0,0\right) $ and $\left( 0,0,1,0\right) $ correspond to one fermion with spin $\uparrow $ and $\downarrow $, respectively; $\left( 0,0,0,1\right) $ refers to two fermions with opposite spins.}.
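This identification can be made completely explicit. The following Python/NumPy sketch (our own illustration) writes $a_{0,\uparrow }$ and $a_{0,\downarrow }$ as $4\times 4$ matrices in the basis of the footnote, with one consistent choice of signs, and verifies the canonical anticommutation relations (CAR).

```python
import numpy as np

# Basis of the footnote: index 0 = vacuum, 1 = spin up, 2 = spin down,
# 3 = doubly occupied (|up,dn> := a*_up a*_dn |0>).
a_up = np.zeros((4, 4), dtype=complex)
a_dn = np.zeros((4, 4), dtype=complex)
a_up[0, 1] = 1.0    # a_up |up>    = |0>
a_up[2, 3] = 1.0    # a_up |up,dn> = |dn>
a_dn[0, 2] = 1.0    # a_dn |dn>    = |0>
a_dn[1, 3] = -1.0   # a_dn |up,dn> = -|up>  (sign forced by anticommutation)
dag = lambda A: A.conj().T
anti = lambda A, B: A @ B + B @ A
I4 = np.eye(4)

# Canonical anticommutation relations:
#   {a_s, a*_{s'}} = delta_{s,s'},  {a_s, a_{s'}} = 0.
car_err = max(
    np.max(np.abs(anti(a_up, dag(a_up)) - I4)),
    np.max(np.abs(anti(a_dn, dag(a_dn)) - I4)),
    np.max(np.abs(anti(a_up, a_dn))),
    np.max(np.abs(anti(a_up, dag(a_dn)))),
    np.max(np.abs(a_up @ a_up)),
    np.max(np.abs(a_dn @ a_dn)),
)
```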
For any continuous family $\omega \doteq (\omega _{t})_{t\in \mathbb{R}}$ of states acting on $\mathcal{B}\left( \mathcal{F}_{\{0\}}\right) $, we define the finite-volume \emph{non-autonomous} dynamics $(\tau _{t,s}^{(L,\omega )})_{s,t\in \mathbb{R}}$ by the Dyson-Phillips series
\begin{equation*}
\tau _{t,s}^{(L,\omega )}\equiv \text{\textquotedblleft }\exp \left( \int_{s}^{t}\delta _{L}^{\omega _{u}}\mathrm{d}u\right) \text{\textquotedblright }\doteq \mathbf{1}_{\mathcal{B}(\mathcal{F}_{\Lambda _{L}})}+\sum\limits_{k\in {\mathbb{N}}}\int_{s}^{t}\mathrm{d}t_{1}\cdots \int_{s}^{t_{k-1}}\mathrm{d}t_{k}\delta _{L}^{\omega _{t_{k}}}\circ \cdots \circ \delta _{L}^{\omega _{t_{1}}}
\end{equation*}
acting on $\mathcal{B}(\mathcal{F}_{\Lambda _{L}})$ for any $s,t\in \mathbb{R}$, with $\mathbf{1}_{\mathcal{B}(\mathcal{F}_{\Lambda _{L}})}$ being the identity mapping of $\mathcal{B}(\mathcal{F}_{\Lambda _{L}})$ and where $\delta _{L}^{\rho }$ is the generator of the group $\{\tau _{t}^{(L,c)}\}_{t\in {\mathbb{R}}}$, defined by (\ref{generator approx}) for $c=\rho (a_{0,\uparrow }a_{0,\downarrow })$. Compare with (\ref{dynamics approx0}) and (\ref{dynamics approx00}). Note that, for every continuous family $\omega \doteq (\omega _{t})_{t\in \mathbb{R}}$ of on-site (even) states acting on $\mathcal{B}\left( \mathcal{F}_{\{0\}}\right) $, $s,t\in \mathbb{R}$, $L_{0}\in \mathbb{N}_{0}$ and all integers $L\geq L_{0}$,
\begin{equation}
\tau _{t,s}^{(L,\omega )}\left( A\right) =\tau _{t,s}^{(L_{0},\omega )}\left( A\right) \ ,\qquad A\in \mathcal{B}(\mathcal{F}_{\Lambda _{L_{0}}})\ .
\label{idiot}
\end{equation}
It follows that the family $\{\tau _{t,s}^{(L,\omega )}\}_{s,t\in {\mathbb{R}}}$ strongly converges in the thermodynamic limit $L\rightarrow \infty $ to a strongly continuous two-parameter family $\{\tau _{t,s}^{\omega }\}_{s,t\in {\mathbb{R}}}$ of $\ast $-automorphisms of the CAR algebra of the lattice. With these observations, we are in a position to give the self-consistency equations: By \cite[Eq. (19)]{Bru-pedra-MF-IV}, for any fixed initial (even) state $\rho $ on $\mathcal{B}\left( \mathcal{F}_{\{0\}}\right) $ at $t=0$, there is a unique family $(\mathbf{\varpi }(t,\rho ))_{t\in \mathbb{R}}$ of (on-site) states acting on $\mathcal{B}\left( \mathcal{F}_{\{0\}}\right) $ such that
\begin{equation}
\mathbf{\varpi }(t,\rho )=\rho \circ \tau _{t,0}^{\mathbf{\varpi }(\cdot ,\rho )}\ ,\qquad t\in {\mathbb{R}}\ . \label{self-consitency}
\end{equation}
Observe that (\ref{self-consitency}) is an equation on a finite-dimensional space, see (\ref{fock}). \section{Infinite-Volume Dynamics of Product States} For simplicity, as initial state (at $t=0$), take a finite-volume product\footnote{The product state $\rho ^{(L)}$ is (well-) defined by $\rho ^{(L)}(\alpha _{x_{1}}(A_{1})\cdots \alpha _{x_{n}}(A_{n}))=\rho (A_{1})\cdots \rho (A_{n})$ for all $A_{1},\ldots ,A_{n}\in \mathcal{B}\left( \mathcal{F}_{\{0\}}\right) $ and all $x_{1},\ldots ,x_{n}\in \Lambda _{L}$ such that $x_{i}\not=x_{j}$ for $i\not=j$, where $\alpha _{x_{j}}(A_{j})\in \mathcal{B}\left( \mathcal{F}_{\{x_{j}\}}\right) $ is the $x_{j}$-translated copy of $A_{j}$ for all $j\in \{1,\ldots ,n\}$.} state $\rho ^{(L)}\doteq \otimes _{\Lambda _{L}}\rho $ associated with an even\footnote{Even states are the physically relevant ones.
Even means that the expectation value of any odd monomial in $\{a_{0,\mathrm{s}}^{\ast },a_{0,\mathrm{s}}\}_{\mathrm{s}\in \{\uparrow ,\downarrow \}}$ with respect to the on-site state $\rho $ is zero.} state $\rho $ on $\mathcal{B}\left( \mathcal{F}_{\{0\}}\right) $. An example of finite-volume product states is given by the approximating Gibbs states (\ref{gibbs2}). Then, in this case, as explained in \cite[Section 4.4]{Bru-pedra-MF-IV}, for any $t\in \mathbb{R}$, $L_{0}\in \mathbb{N}_{0}$ and $A\in \mathcal{B}(\mathcal{F}_{\Lambda _{L_{0}}})$, one has that
\begin{equation}
\rho _{t}\left( A\right) \doteq \lim_{L\rightarrow \infty }\rho _{t}^{(L)}\left( A\right) =\lim_{L\rightarrow \infty }\rho ^{(\infty )}\circ \tau _{t}^{(L)}\left( A\right) =\rho ^{(\infty )}\circ \tau _{t,0}^{\mathbf{\varpi }(\cdot ,\rho )}\left( A\right) \ , \label{eq restrictedsimple}
\end{equation}
with $\rho _{t}^{(L)},\mathbf{\varpi }(\cdot ,\rho )$ being respectively defined by (\ref{rho}) and (\ref{self-consitency}), and where $\rho ^{(\infty )}\doteq \otimes _{\mathbb{Z}^{d}}\rho $ is the (infinite-volume) product state associated with the even state $\rho $ on $\mathcal{B}\left( \mathcal{F}_{\{0\}}\right) $. For any $t\in \mathbb{R}$, the limit state $\rho _{t}$ is again a product state and hence it is completely determined by its restriction to the single lattice site $(0,\ldots ,0)\in \mathbb{Z}^{d}$, that is, by the on-site state $\mathbf{\varpi }(t,\rho )$ for all $t\in \mathbb{R}$. Below, we give the explicit expressions for the time evolution of the most important physical quantities related to this model, in this situation, for any time $t\in \mathbb{R}$: \begin{itemize} \item[(i)] Electron density: \begin{equation*} \mathrm{d}(\rho )\doteq \rho \left( n_{0,\uparrow }+n_{0,\downarrow }\right) =\rho _{t=0}\left( n_{0,\uparrow }+n_{0,\downarrow }\right) =\rho _{t}\left( n_{0,\uparrow }+n_{0,\downarrow }\right) \in \lbrack 0,2].
\end{equation*} \item[(ii)] Magnetization density: \begin{equation*} \mathrm{m}(\rho )\doteq \rho \left( n_{0,\uparrow }-n_{0,\downarrow }\right) =\rho _{t=0}\left( n_{0,\uparrow }-n_{0,\downarrow }\right) =\rho _{t}\left( n_{0,\uparrow }-n_{0,\downarrow }\right) \in \lbrack -1,1]. \end{equation*} \item[(iii)] Coulomb correlation density: \begin{equation*} \mathrm{w}(\rho )\doteq \rho \left( n_{0,\uparrow }n_{0,\downarrow }\right) =\rho _{t=0}\left( n_{0,\uparrow }n_{0,\downarrow }\right) =\rho _{t}\left( n_{0,\uparrow }n_{0,\downarrow }\right) \in \lbrack 0,1]. \end{equation*} \item[(iv)] Cooper field and condensate densities:
\begin{equation*}
\rho _{t}\left( a_{0,\downarrow }a_{0,\uparrow }\right) =\sqrt{\mathrm{\kappa }(\rho )}\mathrm{e}^{i\left( t\mathrm{\nu }(\rho )+\theta (\rho )\right) }\quad \text{with}\quad \mathrm{\nu }(\rho )\doteq 2\left( \mu -\lambda \right) +\gamma \left( 1-\mathrm{d}(\rho )\right)
\end{equation*}
and $\mathrm{\kappa }(\rho )\in \lbrack 0,1]$, $\theta (\rho )\in \lbrack -\pi ,\pi )$ such that $\rho \left( a_{0,\downarrow }a_{0,\uparrow }\right) =\sqrt{\mathrm{\kappa }(\rho )}\mathrm{e}^{i\theta (\rho )}$. \end{itemize} \noindent See \cite[Lemma 1]{Bru-pedra-MF-IV}. In the special case $\lambda =0$, i.e., without the Hubbard interaction, (i)-(iv) reproduce the results of \cite[Section A]{Bona89} on the strong-coupling BCS model, written in that paper as a permutation-invariant quantum-spin model. From Assertions (i)-(iv), observe that we recover the equation of a symmetric \emph{rotor} in classical mechanics: Fix an even on-site state $\rho $.
For any $t\in \mathbb{R}$, define the 3D vector $(\Omega _{1}(t),\Omega _{2}(t),\Omega _{3}(t))$ by
\begin{equation*}
\rho _{t}\left( a_{0,\downarrow }a_{0,\uparrow }\right) =\Omega _{1}(t)+i\Omega _{2}(t)\quad \text{and}\quad \Omega _{3}\left( t\right) \doteq 2\left( \mu -\lambda \right) +\gamma \left( 1-\rho _{t}\left( n_{0,\uparrow }+n_{0,\downarrow }\right) \right) .
\end{equation*}
Then, this 3D vector satisfies, for any time $t\in \mathbb{R}$, the following system of ODEs:
\begin{equation*}
\left\{
\begin{array}{l}
\dot{\Omega}_{1}\left( t\right) =-\Omega _{3}\left( t\right) \Omega _{2}\left( t\right) \ , \\
\dot{\Omega}_{2}\left( t\right) =\Omega _{3}\left( t\right) \Omega _{1}\left( t\right) \ , \\
\dot{\Omega}_{3}\left( t\right) =0\ ,
\end{array}
\right.
\end{equation*}
which describes the time evolution of the angular momentum of a symmetric rotor in classical mechanics. In fact, by seeing quantum states as elements of a state space of classical mechanics, this dynamics can be written in terms of Poisson brackets, i.e., as some \emph{Liouville equation} of classical mechanics, as proven in \cite[Corollary 6.11]{BruPedra-MFII} for any translation-invariant long-range model. Moreover, \cite{BruPedra-MFII,BruPedra-MFIII} show that long-range dynamics in infinite volume are equivalent to intricate combinations of classical and quantum short-range dynamics, opening new theoretical perspectives, as explained in \cite{Bru-pedra-MF-I}. This phenomenon is a direct consequence of the highly non-local character of long-range, or mean-field, interactions. Assertions (i)-(iv) lead to the exact dynamics of a physical system prepared in a product state at initial time and driven by the strong-coupling BCS-Hubbard Hamiltonian.
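Assertion (iv) and the rotor picture can be illustrated numerically on one site: integrating the self-consistent evolution $\partial _{t}\mathrm{d}_{t}=-i[\mathrm{H}(c(t)),\mathrm{d}_{t}]$ of a density matrix, with $c(t)$ the instantaneous Cooper-pair expectation, one checks that $|c(t)|$ stays constant while its phase rotates at the frequency $\mathrm{\nu }$. The following Python/NumPy sketch is our own illustration: the parameters and the initial (even) density matrix are arbitrary, and we take $c(t)$ to be the expectation of $a_{0,\downarrow }a_{0,\uparrow }$, matching the convention used for $\mathfrak{c}_{0}$.

```python
import numpy as np

# One-site self-consistent dynamics:  dD/dt = -i [H(c(t)), D],  c(t) = Tr(D c0).
Z = np.diag([1.0, -1.0]).astype(complex)
s = np.array([[0, 1], [0, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
a_up, a_dn = np.kron(s, I2), np.kron(Z, s)
dag = lambda A: A.conj().T
n_up, n_dn = dag(a_up) @ a_up, dag(a_dn) @ a_dn
c0 = a_dn @ a_up                 # Cooper-pair annihilator on one site

lam, mu, h, gamma = 0.4, 1.1, 0.3, 2.0   # illustrative values

def H_of(c):
    return (2 * lam * n_up @ n_dn - mu * (n_up + n_dn) - h * (n_up - n_dn)
            - gamma * (c * dag(c0) + np.conj(c) * c0))

def flow(D):
    Hc = H_of(np.trace(D @ c0))
    return -1j * (Hc @ D - D @ Hc)

# Even initial density matrix: coherence only between the vacuum and the
# doubly occupied state (diagonal: vacuum, singly occupied (2), pair).
D = np.diag([0.45, 0.15, 0.10, 0.30]).astype(complex)
D[3, 0] = D[0, 3] = 0.2

dens = np.trace(D @ (n_up + n_dn)).real          # conserved by the flow
nu = 2 * (mu - lam) + gamma * (1 - dens)         # frequency of assertion (iv)
c_init = np.trace(D @ c0)

# Fourth-order Runge-Kutta integration up to time T.
dt, T = 1e-3, 2.0
for _ in range(int(T / dt)):
    k1 = flow(D)
    k2 = flow(D + 0.5 * dt * k1)
    k3 = flow(D + 0.5 * dt * k2)
    k4 = flow(D + dt * k3)
    D = D + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

c_final = np.trace(D @ c0)
phase_err = abs(c_final - c_init * np.exp(1j * nu * T))
```

Along the self-consistent flow, the populations of the vacuum and doubly occupied states are constant, so the Cooper field performs a pure phase rotation, which is exactly the precession of $(\Omega _{1},\Omega _{2})$ around the conserved $\Omega _{3}$.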
This set of states is still restrictive, and our results \cite{BruPedra-MFII,BruPedra-MFIII} go beyond this simple case by allowing us to consider \emph{general periodic states as initial states}, in contrast with all previous results on lattice-Fermi or quantum-spin systems with long-range, or mean-field, interactions. See \cite[Section 2.6]{Bru-pedra-MF-IV}. \bigskip \noindent \textit{Acknowledgments:} This work is supported by CNPq (308337/2017-4), FAPESP (2017/22340-9), as well as by the Basque Government through the grant IT641-13 and the BERC 2018-2021 program, and by the Spanish Ministry of Science, Innovation and Universities: BCAM Severo Ochoa accreditation SEV-2017-0718, MTM2017-82160-C2-2-P.
\def\iiiint{\DOTSI\intno@4 \FN@\ints@}% \def\idotsint{\DOTSI\intno@\z@\FN@\ints@}% \def\ints@{\findlimits@\ints@@}% \newif\iflimtoken@ \newif\iflimits@ \def\findlimits@{\limtoken@true\ifx\next\limits\limits@true \else\ifx\next\nolimits\limits@false\else \limtoken@false\ifx\ilimits@\nolimits\limits@false\else \ifinner\limits@false\else\limits@true\fi\fi\fi\fi}% \def\multint@{\int\ifnum\intno@=\z@\intdots@ \else\intkern@\fi \ifnum\intno@>\tw@\int\intkern@\fi \ifnum\intno@>\thr@@\int\intkern@\fi \int \def\multintlimits@{\intop\ifnum\intno@=\z@\intdots@\else\intkern@\fi \ifnum\intno@>\tw@\intop\intkern@\fi \ifnum\intno@>\thr@@\intop\intkern@\fi\intop}% \def\intic@{% \mathchoice{\hskip.5em}{\hskip.4em}{\hskip.4em}{\hskip.4em}}% \def\negintic@{\mathchoice {\hskip-.5em}{\hskip-.4em}{\hskip-.4em}{\hskip-.4em}}% \def\ints@@{\iflimtoken@ \def\ints@@@{\iflimits@\negintic@ \mathop{\intic@\multintlimits@}\limits \else\multint@\nolimits\fi \eat@ \else \def\ints@@@{\iflimits@\negintic@ \mathop{\intic@\multintlimits@}\limits\else \multint@\nolimits\fi}\fi\ints@@@}% \def\intkern@{\mathchoice{\!\!\!}{\!\!}{\!\!}{\!\!}}% \def\plaincdots@{\mathinner{\cdotp\cdotp\cdotp}}% \def\intdots@{\mathchoice{\plaincdots@}% {{\cdotp}\mkern1.5mu{\cdotp}\mkern1.5mu{\cdotp}}% {{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}% {{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}}% \def\RIfM@{\relax\protect\ifmmode} \def\RIfM@\expandafter\text@\else\expandafter\mbox\fi{\RIfM@\expandafter\RIfM@\expandafter\text@\else\expandafter\mbox\fi@\else\expandafter\mbox\fi} \let\nfss@text\RIfM@\expandafter\text@\else\expandafter\mbox\fi \def\RIfM@\expandafter\text@\else\expandafter\mbox\fi@#1{\mathchoice {\textdef@\displaystyle\f@size{#1}}% {\textdef@\textstyle\tf@size{\firstchoice@false #1}}% {\textdef@\textstyle\sf@size{\firstchoice@false #1}}% {\textdef@\textstyle \ssf@size{\firstchoice@false #1}}% \glb@settings} \def\textdef@#1#2#3{\hbox{{% \everymath{#1}% \let\f@size#2\selectfont #3}}} \newif\iffirstchoice@ 
\firstchoice@true \def\Let@{\relax\iffalse{\fi\let\\=\cr\iffalse}\fi}% \def\vspace@{\def\vspace##1{\crcr\noalign{\vskip##1\relax}}}% \def\multilimits@{\bgroup\vspace@\Let@ \baselineskip\fontdimen10 \scriptfont\tw@ \advance\baselineskip\fontdimen12 \scriptfont\tw@ \lineskip\thr@@\fontdimen8 \scriptfont\thr@@ \lineskiplimit\lineskip \vbox\bgroup\ialign\bgroup\hfil$\m@th\scriptstyle{##}$\hfil\crcr}% \def\Sb{_\multilimits@}% \def\endSb{\crcr\egroup\egroup\egroup}% \def\Sp{^\multilimits@}% \let\endSp\endSb \newdimen\ex@ \ex@.2326ex \def\rightarrowfill@#1{$#1\m@th\mathord-\mkern-6mu\cleaders \hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill \mkern-6mu\mathord\rightarrow$}% \def\leftarrowfill@#1{$#1\m@th\mathord\leftarrow\mkern-6mu\cleaders \hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill\mkern-6mu\mathord-$}% \def\leftrightarrowfill@#1{$#1\m@th\mathord\leftarrow \mkern-6mu\cleaders \hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill \mkern-6mu\mathord\rightarrow$}% \def\overrightarrow{\mathpalette\overrightarrow@}% \def\overrightarrow@#1#2{\vbox{\ialign{##\crcr\rightarrowfill@#1\crcr \noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}% \let\overarrow\overrightarrow \def\overleftarrow{\mathpalette\overleftarrow@}% \def\overleftarrow@#1#2{\vbox{\ialign{##\crcr\leftarrowfill@#1\crcr \noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}% \def\overleftrightarrow{\mathpalette\overleftrightarrow@}% \def\overleftrightarrow@#1#2{\vbox{\ialign{##\crcr \leftrightarrowfill@#1\crcr \noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}% \def\underrightarrow{\mathpalette\underrightarrow@}% \def\underrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil $\crcr\noalign{\nointerlineskip}\rightarrowfill@#1\crcr}}}% \let\underarrow\underrightarrow \def\underleftarrow{\mathpalette\underleftarrow@}% \def\underleftarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil $\crcr\noalign{\nointerlineskip}\leftarrowfill@#1\crcr}}}% 
\def\underleftrightarrow{\mathpalette\underleftrightarrow@}% \def\underleftrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th \hfil#1#2\hfil$\crcr \noalign{\nointerlineskip}\leftrightarrowfill@#1\crcr}}}% \def\qopnamewl@#1{\mathop{\operator@font#1}\nlimits@} \let\nlimits@\displaylimits \def\setboxz@h{\setbox\z@\hbox} \def\varlim@#1#2{\mathop{\vtop{\ialign{##\crcr \hfil$#1\m@th\operator@font lim$\hfil\crcr \noalign{\nointerlineskip}#2#1\crcr \noalign{\nointerlineskip\kern-\ex@}\crcr}}}} \def\rightarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@ $#1\copy\z@\mkern-6mu\cleaders \hbox{$#1\mkern-2mu\box\z@\mkern-2mu$}\hfill \mkern-6mu\mathord\rightarrow$} \def\leftarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@ $#1\mathord\leftarrow\mkern-6mu\cleaders \hbox{$#1\mkern-2mu\copy\z@\mkern-2mu$}\hfill \mkern-6mu\box\z@$} \def\qopnamewl@{proj\,lim}{\qopnamewl@{proj\,lim}} \def\qopnamewl@{inj\,lim}{\qopnamewl@{inj\,lim}} \def\mathpalette\varlim@\rightarrowfill@{\mathpalette\varlim@\rightarrowfill@} \def\mathpalette\varlim@\leftarrowfill@{\mathpalette\varlim@\leftarrowfill@} \def\mathpalette\varliminf@{}{\mathpalette\mathpalette\varliminf@{}@{}} \def\mathpalette\varliminf@{}@#1{\mathop{\underline{\vrule\@depth.2\ex@\@width\z@ \hbox{$#1\m@th\operator@font lim$}}}} \def\mathpalette\varlimsup@{}{\mathpalette\mathpalette\varlimsup@{}@{}} \def\mathpalette\varlimsup@{}@#1{\mathop{\overline {\hbox{$#1\m@th\operator@font lim$}}}} \def\tfrac#1#2{{\textstyle {#1 \over #2}}}% \def\dfrac#1#2{{\displaystyle {#1 \over #2}}}% \def\binom#1#2{{#1 \choose #2}}% \def\tbinom#1#2{{\textstyle {#1 \choose #2}}}% \def\dbinom#1#2{{\displaystyle {#1 \choose #2}}}% \def\QATOP#1#2{{#1 \atop #2}}% \def\QTATOP#1#2{{\textstyle {#1 \atop #2}}}% \def\QDATOP#1#2{{\displaystyle {#1 \atop #2}}}% \def\QABOVE#1#2#3{{#2 \above#1 #3}}% \def\QTABOVE#1#2#3{{\textstyle {#2 \above#1 #3}}}% \def\QDABOVE#1#2#3{{\displaystyle {#2 \above#1 #3}}}% \def\QOVERD#1#2#3#4{{#3 \overwithdelims#1#2 #4}}% \def\QTOVERD#1#2#3#4{{\textstyle {#3 
\overwithdelims#1#2 #4}}}% \def\QDOVERD#1#2#3#4{{\displaystyle {#3 \overwithdelims#1#2 #4}}}% \def\QATOPD#1#2#3#4{{#3 \atopwithdelims#1#2 #4}}% \def\QTATOPD#1#2#3#4{{\textstyle {#3 \atopwithdelims#1#2 #4}}}% \def\QDATOPD#1#2#3#4{{\displaystyle {#3 \atopwithdelims#1#2 #4}}}% \def\QABOVED#1#2#3#4#5{{#4 \abovewithdelims#1#2#3 #5}}% \def\QTABOVED#1#2#3#4#5{{\textstyle {#4 \abovewithdelims#1#2#3 #5}}}% \def\QDABOVED#1#2#3#4#5{{\displaystyle {#4 \abovewithdelims#1#2#3 #5}}}% \def\tint{\mathop{\textstyle \int}}% \def\tiint{\mathop{\textstyle \iint }}% \def\tiiint{\mathop{\textstyle \iiint }}% \def\tiiiint{\mathop{\textstyle \iiiint }}% \def\tidotsint{\mathop{\textstyle \idotsint }}% \def\toint{\mathop{\textstyle \oint}}% \def\tsum{\mathop{\textstyle \sum }}% \def\tprod{\mathop{\textstyle \prod }}% \def\tbigcap{\mathop{\textstyle \bigcap }}% \def\tbigwedge{\mathop{\textstyle \bigwedge }}% \def\tbigoplus{\mathop{\textstyle \bigoplus }}% \def\tbigodot{\mathop{\textstyle \bigodot }}% \def\tbigsqcup{\mathop{\textstyle \bigsqcup }}% \def\tcoprod{\mathop{\textstyle \coprod }}% \def\tbigcup{\mathop{\textstyle \bigcup }}% \def\tbigvee{\mathop{\textstyle \bigvee }}% \def\tbigotimes{\mathop{\textstyle \bigotimes }}% \def\tbiguplus{\mathop{\textstyle \biguplus }}% \def\dint{\mathop{\displaystyle \int}}% \def\diint{\mathop{\displaystyle \iint }}% \def\diiint{\mathop{\displaystyle \iiint }}% \def\diiiint{\mathop{\displaystyle \iiiint }}% \def\didotsint{\mathop{\displaystyle \idotsint }}% \def\doint{\mathop{\displaystyle \oint}}% \def\dsum{\mathop{\displaystyle \sum }}% \def\dprod{\mathop{\displaystyle \prod }}% \def\dbigcap{\mathop{\displaystyle \bigcap }}% \def\dbigwedge{\mathop{\displaystyle \bigwedge }}% \def\dbigoplus{\mathop{\displaystyle \bigoplus }}% \def\dbigodot{\mathop{\displaystyle \bigodot }}% \def\dbigsqcup{\mathop{\displaystyle \bigsqcup }}% \def\dcoprod{\mathop{\displaystyle \coprod }}% \def\dbigcup{\mathop{\displaystyle \bigcup }}% \def\dbigvee{\mathop{\displaystyle 
\bigvee }}% \def\dbigotimes{\mathop{\displaystyle \bigotimes }}% \def\dbiguplus{\mathop{\displaystyle \biguplus }}% \def\stackunder#1#2{\mathrel{\mathop{#2}\limits_{#1}}}% \begingroup \catcode `|=0 \catcode `[= 1 \catcode`]=2 \catcode `\{=12 \catcode `\}=12 \catcode`\\=12 |gdef|@alignverbatim#1\end{align}[#1|end[align]] |gdef|@salignverbatim#1\end{align*}[#1|end[align*]] |gdef|@alignatverbatim#1\end{alignat}[#1|end[alignat]] |gdef|@salignatverbatim#1\end{alignat*}[#1|end[alignat*]] |gdef|@xalignatverbatim#1\end{xalignat}[#1|end[xalignat]] |gdef|@sxalignatverbatim#1\end{xalignat*}[#1|end[xalignat*]] |gdef|@gatherverbatim#1\end{gather}[#1|end[gather]] |gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]] |gdef|@gatherverbatim#1\end{gather}[#1|end[gather]] |gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]] |gdef|@multilineverbatim#1\end{multiline}[#1|end[multiline]] |gdef|@smultilineverbatim#1\end{multiline*}[#1|end[multiline*]] |gdef|@arraxverbatim#1\end{arrax}[#1|end[arrax]] |gdef|@sarraxverbatim#1\end{arrax*}[#1|end[arrax*]] |gdef|@tabulaxverbatim#1\end{tabulax}[#1|end[tabulax]] |gdef|@stabulaxverbatim#1\end{tabulax*}[#1|end[tabulax*]] |endgroup \def\align{\@verbatim \frenchspacing\@vobeyspaces \@alignverbatim You are using the "align" environment in a style in which it is not defined.} \let\endalign=\endtrivlist \@namedef{align*}{\@verbatim\@salignverbatim You are using the "align*" environment in a style in which it is not defined.} \expandafter\let\csname endalign*\endcsname =\endtrivlist \def\alignat{\@verbatim \frenchspacing\@vobeyspaces \@alignatverbatim You are using the "alignat" environment in a style in which it is not defined.} \let\endalignat=\endtrivlist \@namedef{alignat*}{\@verbatim\@salignatverbatim You are using the "alignat*" environment in a style in which it is not defined.} \expandafter\let\csname endalignat*\endcsname =\endtrivlist \def\xalignat{\@verbatim \frenchspacing\@vobeyspaces \@xalignatverbatim You are using the "xalignat" 
environment in a style in which it is not defined.} \let\endxalignat=\endtrivlist \@namedef{xalignat*}{\@verbatim\@sxalignatverbatim You are using the "xalignat*" environment in a style in which it is not defined.} \expandafter\let\csname endxalignat*\endcsname =\endtrivlist \def\gather{\@verbatim \frenchspacing\@vobeyspaces \@gatherverbatim You are using the "gather" environment in a style in which it is not defined.} \let\endgather=\endtrivlist \@namedef{gather*}{\@verbatim\@sgatherverbatim You are using the "gather*" environment in a style in which it is not defined.} \expandafter\let\csname endgather*\endcsname =\endtrivlist \def\multiline{\@verbatim \frenchspacing\@vobeyspaces \@multilineverbatim You are using the "multiline" environment in a style in which it is not defined.} \let\endmultiline=\endtrivlist \@namedef{multiline*}{\@verbatim\@smultilineverbatim You are using the "multiline*" environment in a style in which it is not defined.} \expandafter\let\csname endmultiline*\endcsname =\endtrivlist \def\arrax{\@verbatim \frenchspacing\@vobeyspaces \@arraxverbatim You are using a type of "array" construct that is only allowed in AmS-LaTeX.} \let\endarrax=\endtrivlist \def\tabulax{\@verbatim \frenchspacing\@vobeyspaces \@tabulaxverbatim You are using a type of "tabular" construct that is only allowed in AmS-LaTeX.} \let\endtabulax=\endtrivlist \@namedef{arrax*}{\@verbatim\@sarraxverbatim You are using a type of "array*" construct that is only allowed in AmS-LaTeX.} \expandafter\let\csname endarrax*\endcsname =\endtrivlist \@namedef{tabulax*}{\@verbatim\@stabulaxverbatim You are using a type of "tabular*" construct that is only allowed in AmS-LaTeX.} \expandafter\let\csname endtabulax*\endcsname =\endtrivlist \def\@@eqncr{\let\@tempa\relax \ifcase\@eqcnt \def\@tempa{& & &}\or \def\@tempa{& &}% \else \def\@tempa{&}\fi \@tempa \if@eqnsw \iftag@ \@taggnum \else \@eqnnum\stepcounter{equation}% \fi \fi \global\@ifnextchar*{\@tagstar}{\@tag}@false \global\@eqnswtrue 
\global\@eqcnt\z@\cr} \def\endequation{% \ifmmode\ifinner \iftag@ \addtocounter{equation}{-1} $\hfil \displaywidth\linewidth\@taggnum\egroup \endtrivlist \global\@ifnextchar*{\@tagstar}{\@tag}@false \global\@ignoretrue \else $\hfil \displaywidth\linewidth\@eqnnum\egroup \endtrivlist \global\@ifnextchar*{\@tagstar}{\@tag}@false \global\@ignoretrue \fi \else \iftag@ \addtocounter{equation}{-1} \eqno \hbox{\@taggnum} \global\@ifnextchar*{\@tagstar}{\@tag}@false% $$\global\@ignoretrue \else \eqno \hbox{\@eqnnum $$\global\@ignoretrue \fi \fi\fi } \newif\iftag@ \@ifnextchar*{\@tagstar}{\@tag}@false \def\@ifnextchar*{\@tagstar}{\@tag}{\@ifnextchar*{\@tagstar}{\@tag}} \def\@tag#1{% \global\@ifnextchar*{\@tagstar}{\@tag}@true \global\def\@taggnum{(#1)}} \def\@tagstar*#1{% \global\@ifnextchar*{\@tagstar}{\@tag}@true \global\def\@taggnum{#1 } \makeatother \endinput
\section{Introduction and Statement of the Main Result}\label{sec.1} Consider a Schr\"odinger operator \begin{equation}\label{eq:so} [H \psi](x) = - \psi''(x) + V(x) \psi(x) \end{equation} in $L^2(\mathbb{R})$. Associated with such an operator is spectral information, such as the spectrum and the spectral measures. For the most part, the spectral analysis of operators of this kind breaks into two branches, namely direct spectral analysis, where the potential is given and one seeks information about the spectrum and/or the spectral measures, and inverse spectral theory, where information about the spectrum and/or the spectral measures is given and one seeks information about the potential. In very special cases one can obtain two-way results of this kind, where certain classes of potentials are in one-to-one correspondence with certain classes of spectra and/or spectral measures. Results of this special nature are called ``gems of spectral theory'' in Barry Simon's monograph \cite{Simon}. Most results in the spectral theory of Schr\"odinger (or related) operators, however, are not of this special nature, and they are strictly one-way results. That is, for a certain class of potentials one can prove certain spectral properties, or conversely, for a certain class of potentials defined via the spectral properties of the associated operators, one can prove certain statements. Whenever this situation arises, there is a gap in our understanding of the spectral problem at hand, and we do not have a complete characterization of the class of potentials or spectral features that correspond to the class on the other side under consideration. In this case there is a natural interest in closing this gap. \medskip After these general introductory remarks, let us be more concrete. 
Assuming that $V$ is almost periodic (i.e., the set of its translates is relatively compact in the uniform topology), there is extensive literature on the direct spectral problem, that is, proving statements about the spectrum of the operator $H$ and the type of the spectral measures associated with it. We direct the reader to the recent surveys \cite{D, JM} and the references therein. The appearance of Cantor sets as spectra turns out to be typical, and the spectral measures can be of all possible types, but they tend to be purely absolutely continuous for small potentials or for large energies, and pure point for large potentials and small energies, provided that the potential has sufficient regularity properties (in the highly irregular case, the appearance of purely singular continuous spectral measures is typical). The case of regular small quasi-periodic potentials is quite well understood; see, for example, \cite{DG, E}. Here, the spectrum is indeed a Cantor set and the spectral measures are purely absolutely continuous. It is known, due to work of Kotani \cite{K84, K97} and Remling \cite{R07}, that, as a consequence of absolute continuity, the operators in question are reflectionless, that is, the boundary values of the diagonal elements of the Green function are purely imaginary Lebesgue almost everywhere on the (absolutely continuous) spectrum. In the converse direction, there has been extensive work on reflectionless Schr\"odinger operators for certain prescribed spectra. Here one fixes a set ${\mathcal{S}}$ and considers the set of all potentials $V$ such that the associated Schr\"odinger operator has spectrum ${\mathcal{S}}$ and is reflectionless on it. The goal is then to find out as much as possible about this class of potentials. Fundamental work in this direction can be found, for example, in \cite{Cr, SY}. 
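For the reader's convenience, let us record one standard way of stating the reflectionless condition in formulas (the notation $G$ is introduced here only for this remark): denoting by $G(z; x, y)$ the integral kernel of the resolvent $(H - z)^{-1}$, the operator $H$ is reflectionless on a Borel set $\mathcal{S}$ if
$$
\mathrm{Re} \, G(E + i0; x, x) = 0 \quad \text{for Lebesgue-a.e. } E \in \mathcal{S} \text{ and every } x \in \mathbb{R},
$$
where $G(E + i0; x, x)$ denotes the boundary value from the upper half-plane.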
It is natural to ask about conditions on ${\mathcal{S}}$ that ensure that all the potentials associated with it are almost periodic and all spectral measures are purely absolutely continuous. It turns out that the following condition (cf.~\cite{Ca}) does the job. \begin{defn} A closed set $$ \mathcal{E} = \mathbb{R} \setminus \bigcup_n (E^-_n,E^+_n) $$ is called homogeneous if there is $\tau > 0$ such that for any $E \in \mathcal{E}$ and any $\sigma > 0$, we have $|(E - \sigma, E + \sigma) \cap \mathcal{E}| > \tau \sigma$. \end{defn} Assuming finite total gap length,\footnote{Finite total gap length means that the sum of the lengths of the bounded gaps of the spectrum is finite.} it was shown by Sodin and Yuditskii \cite{SY} that the homogeneity of ${\mathcal{S}}$ implies the almost periodicity of the associated potentials, and it was shown by Gesztesy and Yuditskii \cite{GY} that the homogeneity of ${\mathcal{S}}$ implies the absolute continuity of the associated spectral measures (see also the more recent paper \cite{PR}). On the other hand, it is known that neither consequence holds without a suitable assumption on ${\mathcal{S}}$, such as, for example, homogeneity. Namely, Poltoratski and Remling study conditions on the set related to the presence of associated reflectionless measures with a non-trivial singular component \cite{PR} and, working out the continuum analog of work by Volberg and Yuditskii \cite{VY}, Damanik and Yuditskii showed that there are sets ${\mathcal{S}}$ for which no associated potential is almost periodic \cite{DY}.\footnote{The sets in \cite{DY} are essentially explicit. With a non-explicit set, the statement can also be derived from \cite{A4}.} Thus, one has good ways of arriving at almost periodicity and absolute continuity from either side, that is, one has conditions on potentials that ensure these properties, and one has spectral conditions that ensure these properties as well. 
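To illustrate the definition of homogeneity, we include a simple example (ours, for illustration only) of a closed set that fails to be homogeneous:
$$
\mathcal{E} = \{ 0 \} \cup \bigcup_{n \ge 1} \big[ 2^{-n}, (1 + 4^{-n}) 2^{-n} \big].
$$
The $n$-th band has length $8^{-n}$, so for $E = 0$ and $\sigma = 2^{-n}$ we get
$$
\big| (E - \sigma, E + \sigma) \cap \mathcal{E} \big| = \sum_{k > n} 8^{-k} = \frac{8^{-n}}{7}, \quad \text{and hence} \quad \frac{|(E - \sigma, E + \sigma) \cap \mathcal{E}|}{\sigma} = \frac{4^{-n}}{7} \to 0,
$$
so no $\tau > 0$ can work at $E = 0$: the bands shrink too fast relative to their distance from the accumulation point.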
Can one find a link between these two rather different sets of results? It is the main purpose of this paper to establish such a link. Namely, we will show that the operators studied in the direct spectral analysis of \cite{DG} have homogeneous spectra. Since they also clearly have finite total gap length (and the operators are reflectionless, as pointed out above), this puts the operators studied in \cite{DG} inside the scope of the relevant literature on inverse spectral analysis, and in particular the papers \cite{GY, SY}. We mention in passing that another link of this nature will be established in \cite{DGL1, DGL2}. Namely, it is shown there that if ${\mathcal{S}}$ is of the type that arises in \cite{DG}, then all potentials for which the associated operator is reflectionless are again of the type studied in \cite{DG}. In particular, they are all regular and quasi-periodic with the same frequency vector. That is, for these sets ${\mathcal{S}}$, the potentials studied in the inverse spectral theory approach are put inside the scope of the work \cite{DG} on the direct spectral problem. \bigskip Let us now proceed to the statement of the main result of this paper. Let $U(\theta)$ be a real function on the torus $\mathbb{T}^\nu$, $ U(\theta) = \sum_{n \in \mathbb{Z}^\nu \setminus \{ 0 \}} c(n) e^{2 \pi i n\theta}\ , \quad \theta \in \mathbb{T}^\nu. $ Let $\omega = (\omega_1, \dots, \omega_\nu) \in \mathbb{R}^\nu$. Assume that the following Diophantine condition holds, \begin{equation}\label{eq:1PAI7-5-85a} |n \omega| \ge a_0 |n|^{-b_0}, \quad n \in \mathbb{Z}^\nu \setminus \{ 0 \} \end{equation} for some $ 0 < a_0 < 1,\quad \nu < b_0 < \infty. $ Let $V(x) = U(x \omega)$ and consider the Schr\"odinger operator \eqref{eq:so}. 
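For orientation, let us note a concrete instance in which the Diophantine condition \eqref{eq:1PAI7-5-85a} can be verified by hand (this example is included for illustration only and is not used below). Take $\nu = 2$ and $\omega = (1, \alpha)$ with $\alpha = (\sqrt{5} - 1)/2$ the golden mean. Since $\alpha$ is badly approximable, there is a constant $0 < c \le 1$ such that $|n_1 + n_2 \alpha| \ge c (1 + |n_2|)^{-1}$ for all $n = (n_1, n_2) \in \mathbb{Z}^2 \setminus \{ 0 \}$, and therefore
$$
|n \omega| = |n_1 + n_2 \alpha| \ge \frac{c}{1 + |n|} \ge \frac{c}{2} |n|^{-3},
$$
so that \eqref{eq:1PAI7-5-85a} holds with $a_0 = c/2$ and $b_0 = 3 > \nu$.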
Assume that $U$ is real-analytic, that is, the Fourier coefficients $c(n)$ obey \begin{align*} \overline{c(n)} & = c(-n), \quad n \in \mathbb{Z}^\nu \setminus \{ 0 \}, \\ |c(n)| & \le \varepsilon \exp(-\kappa_0 |n|), \quad n \in \mathbb{Z}^\nu \setminus \{ 0 \}, \end{align*} with $\varepsilon > 0$, $0 < \kappa_0 \le 1$. Our main result reads as follows: \begin{thmh} There exists $\varepsilon_0 = \varepsilon_0(\kappa_0, a_0, b_0) > 0$ such that for $0 < \varepsilon < \varepsilon_0$, the spectrum of the operator $H$ is homogeneous with $\tau = 1/2$. \end{thmh} The homogeneity of the spectrum is a consequence of detailed quantitative results we can establish for the structure of the gaps of the spectrum. Since the latter results are of independent interest, we state them separately in the following theorem. \begin{thmg} There exists $\varepsilon_0 = \varepsilon_0(\kappa_0, a_0, b_0) > 0$ such that for $0 < \varepsilon < \varepsilon_0$, the gaps in the spectrum of the operator $H$ can be labeled as $G_m = (E_m^-, E_m^+)$, $m \in \mathbb{Z}^\nu \setminus \{ 0 \}$, $G_0 = (-\infty, \underline{E})$, so that the following conditions hold: \begin{enumerate}[{\rm (i)}] \item For every $m \in \mathbb{Z}^\nu \setminus \{ 0 \}$, we have $$ E^+_m - E^-_m \le 2 \varepsilon \exp \Big( -\frac{\kappa_0}{2} |m| \Big). $$ \item For every $m, m' \in \mathbb{Z}^\nu \setminus \{ 0 \}$ with $m' \neq m$ and $|m'| \ge |m|$, we have $$ \mathop{\rm{dist}} ([E_m^-, E_m^+], [E_{m'}^-, E_{m'}^+]) \ge a |m'|^{-b}, $$ where $a, b > 0$ are constants depending on $a_0, b_0, \kappa_0, \nu$. \item For every $m \in \mathbb{Z}^\nu \setminus \{ 0 \}$, we have $$ E_m^- - \underline{E} \ge a |m|^{-b}. $$ \end{enumerate} \end{thmg} In the setting described above, Damanik and Goldstein established in \cite{DG} a rather detailed description of the spectrum and the generalized eigenfunctions, which turn out to be of Floquet type. 
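We note in passing that part (i) of Theorem~G already gives the finite total gap length that will be relevant below: summing the estimate over $m$ yields
$$
\sum_{m \in \mathbb{Z}^\nu \setminus \{ 0 \}} (E_m^+ - E_m^-) \le 2 \varepsilon \sum_{m \in \mathbb{Z}^\nu \setminus \{ 0 \}} \exp \Big( -\frac{\kappa_0}{2} |m| \Big) = C(\kappa_0, \nu) \, \varepsilon < \infty.
$$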
As a consequence of this description and the work of Kotani and Remling mentioned above \cite{K84, K97, R07}, it follows that the operator $H$ is indeed reflectionless on its spectrum, which in turn has finite total gap length by the estimates in \cite{DG}. Thus, the only missing piece for one to apply the theory of Gesztesy, Sodin and Yuditskii \cite{GY, SY} is the homogeneity of the spectrum. This missing piece is established by Theorem~H. On the more conceptual level discussed earlier, Theorem~H therefore provides a link between the direct and the inverse spectral theory approach to almost periodicity and absolute continuity. \bigskip The structure of the remainder of the paper is as follows. We prove Theorems~H and G in Section~\ref{sec.2} and discuss natural questions for further study in Section~\ref{sec.3}. \section{Proof of Theorems H and G}\label{sec.2} In this section we prove Theorems~H and G. The main results from \cite{DG} play a crucial role in these proofs. These results, given in Theorems~A and B in \cite{DG} and restated below, describe the spectrum and the generalized eigenfunctions of the operator $H$ with a small analytic quasi-periodic potential $V$ and establish a two-way connection between the decay of the Fourier coefficients of $V$ and the size of the gaps of the spectrum of $H$. Let us now recall these results. Set \begin{align*} k_n & = -n\omega/2, \quad n \in \mathbb{Z}^\nu \setminus \{0\}, \quad \mathcal{K}(\omega) = \{ k_n : n \in \mathbb{Z}^\nu \setminus \{0\} \}, \\ \mathfrak{J}_n & = ( k_n - \delta(n), k_n + \delta(n) ), \quad \delta(n) = a_0 (1 + |n|)^{-b_0-3}, \quad n \in \mathbb{Z}^\nu \setminus \{0\}, \\ \mathfrak{R}(k) & = \{ n \in \mathbb{Z}^\nu \setminus \{0\} : k \in \mathfrak{J}_n \}, \quad \mathfrak{G} = \{ k : |\mathfrak{R}(k)| < \infty \}, \end{align*} where $a_0, b_0$ are as in the Diophantine condition \eqref{eq:1PAI7-5-85a}. Let $k \in \mathfrak{G}$ be such that $|\mathfrak{R}(k)| > 0$. 
Due to the Diophantine condition, one can enumerate the points of $\mathfrak{R}(k)$ as $n^{(\ell)}(k)$, $\ell = 0, \dots, \ell(k)$, $1 + \ell(k) = |\mathfrak{R}(k)|$, so that $|n^{(\ell)}(k)| < |n^{(\ell+1)}(k)|$. Set \begin{align*} T_{m}(n) & = m - n, \quad m, n \in \mathbb{Z}^\nu, \\ \mathfrak{m}^{(0)}(k) & = \{ 0, n^{(0)}(k) \}, \\ \mathfrak{m}^{(\ell)}(k) & = \mathfrak{m}^{(\ell-1)}(k) \cup T_{n^{(\ell)}(k)}(\mathfrak{m}^{(\ell-1)}(k)), \quad \ell = 1, \dots, \ell(k). \end{align*} The following pair of theorems was established in \cite{DG}. \begin{thma} There exists $\varepsilon_0 = \varepsilon_0(\kappa_0, a_0, b_0) > 0$ such that for $0 < \varepsilon < \varepsilon_0$ and $k \in \mathfrak{G} \setminus \frac{\omega}{2}(\mathbb{Z}^\nu \setminus \{0\})$, there exist $E(k) \in \mathbb{R}$ and $\varphi(k) := (\varphi(n;k))_{n \in \mathbb{Z}^\nu}$ such that the following conditions hold: $(a)$ $\varphi(0; k) = 1$, \begin{align*} |\varphi(n; k)| & \le \varepsilon^{1/2} \sum_{m \in \mathfrak{m}^{(\ell(k))}(k)} \exp \Big( -\frac{7}{8} \kappa_0 |n-m| \Big), \quad \text{ $n \notin \mathfrak{m}^{(\ell(k))}(k)$}, \\ |\varphi(m;k)| & \le 2 \quad \text{for any $m \in \mathfrak{m}^{(\ell(k))}(k)$.} \end{align*} $(b)$ The function $$ \psi(k, x) = \sum_{n \in \mathbb{Z}^\nu} \varphi(n; k) e^{2 \pi i x (n \omega + k)} $$ is well-defined and obeys $ - \psi''(k,x) + V(x) \psi(k,x) = E(k) \psi(k,x). $ $(c)$ $$ E(k) = E(-k), \quad \varphi(n;-k) = \overline{\varphi(-n; k)}, \quad \psi(-k, x) = \overline{\psi(k, x)}, $$ \begin{equation}\label{eq:6Ekk1EGT11} \begin{split} (k^{(0)})^2 (k - k_1)^2 < E(k) - E(k_1) < 2k (k - k_1) + 2 \varepsilon \sum_{k_1 < k_{n} < k} \delta(n), \quad 0 < k - k_1 < 1/4, \; k_1 > 0, \end{split} \end{equation} where $k^{(0)} := \min(\varepsilon_0, k/1024)$. 
$(d)$ The spectrum of $H$ consists of the following set, $$ {\mathcal{S}} = [E(0) , \infty) \setminus \bigcup_{m \in \mathbb{Z}^\nu \setminus \{0\} : E^-( k_m) < E^+( k_m)} (E^-( k_m), E^+( k_m)), $$ where $$ E^\pm(k_m) = \lim_{k \to k_m \pm 0, \; k \in \mathfrak{G} \setminus \mathcal{K}(\omega)} E(k), \quad \text{ for $k_m>0$.} $$ \end{thma} \begin{thmb} {\rm (a)} The gaps $(E^-(k_m), E^+(k_m))$ in Theorem~A obey $E^+(k_m) - E^-(k_m) \le 2 \varepsilon \exp(-\frac{\kappa_0}{2} |m|)$. {\rm (b)} Using the notation from Theorem~A, there exists $\varepsilon^{(0)} > 0$ such that if the gaps $(E^-(k_m), E^+(k_m))$ obey $E^+(k_m) - E^-(k_m) \le \varepsilon \exp(-\kappa |m|)$ with $0 < \varepsilon < \varepsilon^{(0)}$, $\kappa > 4 \kappa_0$, then, in fact, the Fourier coefficients $c(m)$ obey $|c(m)| \le \varepsilon^{1/2} \exp(-\frac{\kappa}{2} |m|)$. \end{thmb} We are now in a position to prove Theorems~G and H. \begin{proof}[Proof of Theorem~G] Consider the $\varepsilon_0 = \varepsilon_0(\kappa_0, a_0, b_0) > 0$ from Theorem~A and label the gaps as in part (d) of Theorem~A. (i) This statement follows from part (a) of Theorem~B. (ii) Recall that $$ |m \omega| \ge a_0 |m|^{-b_0}, \quad m \neq 0. $$ In what follows we denote by $a_j$ constants depending on $a_0, b_0, \kappa_0, \nu$. Let $m' \neq m$, $|m'| \ge |m|$ be arbitrary. Then, $$ |k_m - k_{m'}| = |(m-m')\omega|/2 \ge a_0 (2 |m'|)^{-b_0}/2 \ge a_1 |m'|^{-b_0}. $$ Assume, for instance, that $k_{m'} > k_{m} > 0$. Due to \eqref{eq:6Ekk1EGT11} in Theorem~A, we have $$ E^-(k_{m'}) - E^+(k_{m}) > (k^{(0)})^2 (k_{m'} - k_m)^2 \ge (k^{(0)})^2 a_2 |m'|^{-2b_0}, $$ where $$ k^{(0)} := \min (\varepsilon_0, k_{m} /1024) \ge \varepsilon_0 a_3 |m'|^{-b_0}. $$ Thus, $$ E^-(k_{m'}) - E^+(k_{m}) > (k^{(0)})^2 (k_{m'} - k_m)^2 \ge \varepsilon_0^2 a_4 |m'|^{-4 b_0} = a_5 |m'|^{-4 b_0}, $$ which means that $$ \mathop{\rm{dist}} ([E_m^-, E_m^+], [E_{m'}^-, E_{m'}^+]) \ge a |m'|^{-b}. $$ The remaining cases are completely similar. 
(iii) The proof of this statement is completely similar to the proof of (ii) and we omit it. \end{proof} \begin{proof}[Proof of Theorem~H] Let $E \in \mathcal{S}$, $\sigma > 0$. Set $ \mathfrak{C}(E,\sigma) = \{ m \neq 0 : (E^-_m, E^+_m) \cap (E - \sigma, E + \sigma) \neq \emptyset \}. $ Using the notation from Theorem~A, assume first that $ (-\infty, \underline{E}) \cap (E - \sigma, E + \sigma) = \emptyset. $ Pick $m_0 = m_0(E, \sigma)$ so that $|m_0| = \min_{m \in \mathfrak{C}(E, \sigma)} |m|$. Note that for any $m \in \mathfrak{C}(E, \sigma) \setminus \{ m_0 \}$, we have $ \mathop{\rm{dist}} ([E_m^-, E_m^+], [E_{m_0}^-, E_{m_0}^+]) \le 2 \sigma. $ On the other hand, since $|m| \ge |m_0|$, part (ii) of Theorem~G gives $ \mathop{\rm{dist}} ([E_m^-, E_m^+], [E_{m_0}^-, E_{m_0}^+]) \ge a |m|^{-b}, $ where $a, b > 0$ are constants depending on $a_0, b_0, \kappa_0, \nu$. Therefore, $ |m| \ge \alpha \sigma^{-\beta}, $ where $\alpha, \beta > 0$ are constants depending on $a_0, b_0, \kappa_0, \nu$. Due to part (i) of Theorem~G, \begin{equation}\label{eq:3gapslenthsrep1} E^+_{m} - E^-_{m} \le 2 \varepsilon \exp \Big( -\frac{\kappa_0}{2} |m| \Big). \end{equation} Thus, \begin{align*} \sum_{m \in \mathfrak{C}(E, \sigma) \setminus \{ m_0 \}} \big| (E^-_{m}, E^+_{m}) \cap (E - \sigma, E + \sigma) \big| & \le \sum_{m \in \mathfrak{C}(E, \sigma) \setminus \{ m_0 \}} E^+_m - E^-_m \\ & \le 2 \varepsilon \sum_{|m| \ge \alpha \sigma^{-\beta}} \exp \Big( -\frac{\kappa_0}{2} |m| \Big) \\ & < \sigma/2, \end{align*} provided $\sigma \le \sigma_0(a_0, b_0, \kappa_0, \nu)$. Note that since $E \in \mathcal{S}$, $E \notin (E_{m_0}^-, E_{m_0}^+)$. Hence, $ \big| (E^-_{m_0}, E^+_{m_0}) \cap (E - \sigma, E + \sigma) \big| \le \sigma. 
$ Thus, \begin{align*} \big| (E - \sigma, E + \sigma) \cap \mathcal{S} \big| & \ge 2 \sigma - \big| (-\infty, \underline{E}) \cap (E - \sigma, E + \sigma) \big| - \big| (E^-_{m_0}, E^+_{m_0}) \cap (E - \sigma, E + \sigma) \big| \\ & \qquad - \sum_{m \in \mathfrak{C}(E, \sigma) \setminus \{ m_0 \}} \big| (E^-_{m}, E^+_{m}) \cap (E - \sigma, E + \sigma) \big| \\ & > 2 \sigma - 0 - \sigma - \sigma/2 \\ & = \sigma/2, \end{align*} provided $\sigma \le \sigma_0(a_0, b_0, \kappa_0, \nu)$. Now assume that $ (-\infty, \underline{E}) \cap (E - \sigma, E + \sigma) \neq \emptyset. $ Note that for any $m \in \mathfrak{C}(E, \sigma)$, we have $ E_m^- - \underline{E} \le 2 \sigma. $ On the other hand, by part (iii) of Theorem~G, we have $ E_m^- - \underline{E} \ge a |m|^{-b}, $ where $a, b > 0$ are constants depending on $a_0, b_0, \kappa_0, \nu$. Therefore, $ |m| \ge \alpha \sigma^{-\beta}, $ where $\alpha, \beta > 0$ are constants depending on $a_0, b_0, \kappa_0, \nu$. Just as above, we may conclude that $ \sum_{m \in \mathfrak{C}(E, \sigma)} \big| (E^-_{m}, E^+_{m}) \cap (E - \sigma, E + \sigma) \big| < \sigma/2, $ provided $\sigma \le \sigma_0(a_0, b_0, \kappa_0, \nu)$. Note that since $E \in \mathcal{S}$, $E \notin (-\infty, \underline{E})$. Hence, $ \big| (-\infty, \underline{E}) \cap (E - \sigma, E + \sigma) \big| \le \sigma. $ Combining these estimates as in the first case, we again obtain $ \big| (E - \sigma, E + \sigma) \cap \mathcal{S} \big| > \sigma/2 $ for $\sigma \le \sigma_0(a_0, b_0, \kappa_0, \nu)$. It remains to consider large $\sigma$. Due to \eqref{eq:3gapslenthsrep1}, $ \sum_m (E^+_{m} - E^-_{m}) < C(\kappa_0, \nu) \varepsilon < \sigma_0(a_0, b_0, \kappa_0, \nu)/2, $ provided $0 < \varepsilon < \varepsilon_0$ with $\varepsilon_0 = \varepsilon_0(\kappa_0, a_0, b_0)$ small enough. 
Therefore, for any interval $(E - \sigma, E + \sigma)$ with $E \in \mathcal{S}$ and $\sigma > \sigma_0(a_0, b_0, \kappa_0, \nu)$, we have \begin{align*} \big| (E - \sigma, E + \sigma) \cap \mathcal{S} \big| & \ge 2 \sigma - \big| (-\infty, \underline{E}) \cap (E - \sigma, E + \sigma) \big| - \sum_{m} \big| (E^-_{m}, E^+_{m}) \big| \\ & > 2 \sigma - \sigma - \sigma/2 \\ & = \sigma/2, \end{align*} which shows that the desired estimate holds in the second case as well. This concludes the proof. \end{proof} \section{Some Remarks and Open Problems}\label{sec.3} We have seen that Schr\"odinger operators with small analytic quasi-periodic potentials have homogeneous spectrum. It is natural to ask whether this property persists if $\varepsilon$ is increased somewhat. It is known that, as $\varepsilon$ is increased, the pure absolute continuity of the spectrum does not persist. Indeed for large enough coupling, the spectrum will be pure point with exponentially localized eigenfunctions (i.e., Anderson localization holds) in the lower energy region. This is particularly well understood for the discrete counterpart of this problem, but we expect very strongly that the continuum versions of the statements known in the discrete case indeed do hold. Moreover, judging again by the analogy with the discrete case, the transition from absolute continuity to localization may well go through a critical regime at which the homogeneity of the spectrum breaks down.\footnote{Concretely, for the almost Mathieu operator this transition occurs at coupling $\lambda = 1$, and at this value of the coupling constant, the spectrum has zero Lebesgue measure.} It is of course of interest to compare the values of the coupling constant where we experience a breakdown of absolute continuity and homogeneity, respectively. Let us state one question in this spirit explicitly. 
\begin{quest}\label{q.1} Suppose that $\varepsilon_1 > 0$ is such that for $0 < \varepsilon < \varepsilon_1$, the spectrum of $H$ is purely absolutely continuous with generalized eigenfunctions of Floquet type for almost every energy in the spectrum. Is it true that for each $0 < \varepsilon < \varepsilon_1$, the spectrum of $H$ is homogeneous? \end{quest} Another interesting research direction is to explore the same issues in the discrete setting. Given that inverse spectral theory plays a role in this study, one needs to work with the class of Jacobi matrices, that is, operators $$ [J \psi]_n = a_n \psi_{n+1} + b_n \psi_n + a_{n-1} \psi_{n-1} $$ in $\ell^2(\mathbb{Z})$ with $a_n > 0$ and $b_n \in \mathbb{R}$. The inverse spectral theory aspects were worked out by Sodin and Yuditskii as well \cite{SY2}. The connection between absolute continuity and reflectionlessness follows from the work of Kotani \cite{K97} and Remling \cite{R11}. There is a very large number of results on the direct spectral problem for Jacobi matrices with almost periodic coefficients; we refer the reader again to the recent surveys \cite{D, JM} and the references therein. In fact, there are more results in the discrete setting than in the continuum setting. For example, Avila's global theory for analytic quasi-periodic one-frequency potentials \cite{A1, A2, A3} currently exists only in the discrete setting. This suggests that one should try to establish discrete versions of \cite{DG, DGL1, DGL2} and of the present paper. By far the most heavily studied discrete Schr\"odinger operator with a quasi-periodic potential is the almost Mathieu operator, that is, the Jacobi matrix with $a_n = 1$ and $b_n = 2 \lambda \cos (2 \pi [n\omega + \theta])$ with $\lambda > 0$, $\omega \in \mathbb{R} \setminus \mathbb{Q}$ and $\theta \in \mathbb{R}$.
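As a purely illustrative aside (not part of the argument of this paper), the spectrum of a finite truncation of such a Jacobi matrix can be probed numerically. The sketch below uses the standard Sturm-sequence pivot count for symmetric tridiagonal matrices; the truncation size, coupling and frequency are our choices, not values from the text.

```python
import math

def count_below(x, a, b):
    """Sturm-sequence count: number of eigenvalues of the symmetric
    tridiagonal matrix (diagonal b, off-diagonal a) strictly below x,
    via the signs of the pivots in the LDL^T factorization of T - xI."""
    count, d = 0, 1.0
    for i in range(len(b)):
        off = a[i - 1] ** 2 if i > 0 else 0.0
        d = b[i] - x - off / d
        if d == 0.0:
            d = 1e-300  # nudge off an exact pivot
        if d < 0:
            count += 1
    return count

# Finite truncation of the almost Mathieu operator:
# a_n = 1, b_n = 2*lam*cos(2*pi*(n*omega + theta)).
N, lam, theta = 500, 0.5, 0.0
omega = (math.sqrt(5) - 1) / 2          # golden-mean frequency
a = [1.0] * (N - 1)
b = [2 * lam * math.cos(2 * math.pi * (n * omega + theta)) for n in range(N)]

# All eigenvalues lie in [-2 - 2*lam, 2 + 2*lam] (Gershgorin bound).
assert count_below(-2 - 2 * lam - 1e-9, a, b) == 0
assert count_below(2 + 2 * lam + 1e-9, a, b) == N
```

Bisecting `count_below` locates individual eigenvalues, which accumulate on the spectrum as the truncation grows.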
It is known that the spectrum of this operator does not depend on $\theta$ and may therefore be denoted by $\sigma_{\lambda,\omega}$; its Lebesgue measure is $\mathrm{Leb}(\sigma_{\lambda,\omega}) = 4 | 1 - \lambda |$. Moreover, all spectral measures are purely absolutely continuous for $\lambda < 1$ and purely singular for $\lambda \ge 1$. There is a rather large number of papers contributing to these statements, compare \cite{D, JM}. In particular, the operator (has spectrum of positive Lebesgue measure and) is reflectionless when $\lambda < 1$. On the other hand, it is not known whether there are any parameter values for which the spectrum of this operator is homogeneous. Given that almost everything about this operator is known due to extensive investigation over many decades, this absence of understanding is somewhat unsettling. Let us state this explicitly as a question. \begin{quest}\label{q.2} For which values of $\lambda$ is the spectrum of the almost Mathieu operator homogeneous? \end{quest} Working out the discrete analog of \cite{DG} and the present paper as suggested above could potentially solve this problem in the regime of small $\lambda$. We intend to address this in a forthcoming paper. Note that by Aubry duality, the spectrum at $\lambda$ is homogeneous if and only if it is homogeneous at $1/\lambda$. Thus, we expect homogeneity to hold at least for $\lambda$ sufficiently small and sufficiently large. However, homogeneity will not hold for all $\lambda > 0$. At the critical value $\lambda = 1$, the spectrum has zero Lebesgue measure and hence homogeneity fails for trivial reasons. It is far from clear at this point whether homogeneity will hold for all $\lambda < 1$ or whether it breaks down at some smaller threshold. In this context, the work of Helffer and Sj\"ostrand \cite{HS} may be relevant. In that paper, the gap structure for the case of critical coupling $\lambda = 1$ and suitable frequencies $\omega$ was studied in great detail.
If it is possible to extend their analysis to non-critical $\lambda$, this may allow one to address the homogeneity issue in the almost critical regime. This appears to be quite difficult, however. \section*{Acknowledgment} We thank Bernard Helffer, Svetlana Jitomirskaya and Johannes Sj\"ostrand for useful conversations about the homogeneity question for the almost Mathieu operator.
\section{Introduction} Relativistic ejections (jets) are a common consequence of accretion processes onto stellar-mass black holes. In the low/hard state (LHS) and in the quiescent state of black hole candidates (BHCs) a compact, steady jet is present. The jet is highly quenched in the high/soft state (HSS) of BHCs (see \cite[Fender 2010]{Fender2010} for a review).\\ \cite[Corbel et al. (2003)]{Corbel2003} and \cite[Gallo et al. (2003)]{GFP2003} found that the radio luminosity of many BHCs in the LHS correlates over several orders of magnitude with the X-ray luminosity. They proposed that a correlation of the form $L_{R} \propto L_X^{0.58\pm0.16}$ (where $L_R$ and $L_X$ are the radio and X-ray luminosities) could be universal and also valid for sources in quiescence (\cite[Gallo et al. 2006]{Gallo2006}). This relation describes a coupling between the accretion processes and the ejection mechanisms. A similar correlation also holds between the X-ray and the optical/infrared (IR) luminosities (\cite[Russell et al. 2006]{Russell2006}).\\ However, in the past few years, several ``radio quiet'' outliers have been found (\cite[Xue \& Cui 2007]{Xue_Cui2007}; \cite[Gallo 2007]{Gallo2007}). These sources seem to feature similar X-ray luminosities to other BHCs but are characterized by a radio emission that, at a given X-ray luminosity, is fainter than expected from the radio/X-ray correlation. It is possible that a correlation with similar slope but lower normalization than the other BHCs could describe this discrepancy, at least in a few sources (e.g. \cite[Soleri et al. 2010]{Soleri2010}). If confirmed, this would suggest that some other parameters might be tuning the accretion-ejection coupling, allowing accretion flows with similar radiative efficiency to produce a broad range of outflows.\\ \cite[Casella \& Pe'er (2009)]{PG_Asaf09} suggested that different values of the jet magnetic field can cause a quenching of the radio emission, without influencing the energy output in the X-rays.
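The notion of "similar slope but lower normalization" can be made concrete with a minimal numerical sketch; the slope and normalization values below are purely illustrative, not fits from the cited papers.

```python
def radio_luminosity(lx, norm, slope=0.6):
    """Radio luminosity predicted by a power-law radio/X-ray
    correlation L_R = norm * L_X**slope (arbitrary luminosity units)."""
    return norm * lx ** slope

lx = 1e-4                                  # illustrative X-ray luminosity
standard = radio_luminosity(lx, norm=1e-3)
quiet = radio_luminosity(lx, norm=1e-4)    # same slope, lower normalization

# A "radio quiet" source is fainter in radio at the same X-ray luminosity,
# here by exactly the ratio of the two normalizations.
assert quiet < standard
assert abs(quiet / standard - 0.1) < 1e-12
```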
\cite[Fender et al. (2010)]{FGR2010} showed that, if our measurements of the spin and our estimates of the jet power are correct, the spin does not play any role in powering jets from BHCs. In these conference proceedings we investigate whether the values of some binary parameters and the outburst properties of 17 BHCs (listed in Figure \ref{fig:toy_model}, left-hand panel) are connected to the compact steady-jet power. We will follow the approach presented in \cite[Fender et al. (2010)]{FGR2010} and use the normalizations of the radio/X-ray and IR/X-ray correlations as a proxy for the jet power. The data used to calculate the normalizations are from \cite[Gallo et al. (2003)]{GFP2003}, \cite[Gallo et al. (2006)]{Gallo2006}, \cite[Gallo (2007)]{Gallo2007}, \cite[Russell et al. (2006)]{Russell2006}, \cite[Russell et al. (2007)]{Russell2007} and \cite[Soleri et al. (2010)]{Soleri2010}. We develop a jet-toy model to study whether de-boosting effects can explain the scatter around the radio/X-ray correlation. We also compare the ``radio quiet'' BHCs to the accreting neutron star (NS) X-ray binaries. \section{BHC properties and jet power} Since the accretion disc occupies $\sim 70\%$ of the Roche lobe of the black hole, we calculated the size of the Roche lobe of the accretor as a measure of the disc size. Figure \ref{fig:toy_model} (left-hand panel) shows the radio and near-IR normalizations as a function of the size of the Roche lobe of the black hole and the orbital period of the binary. To test whether there is any correlation between the jet power and these two orbital parameters, we calculated the Spearman rank correlation coefficients. The values of the correlation coefficients $\rho$, as well as the null hypothesis probabilities (the probability that the data are not correlated), are reported in Table \ref{tab:log_spearman}. Clearly, no correlation is present.
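For readers wishing to reproduce this kind of test, a minimal sketch of the Spearman rank correlation (the Pearson correlation of the ranks, with tied values sharing their mean rank) is given below; the sample data are illustrative, not the measured normalizations of the paper.

```python
def ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = mean_rank
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman rank correlation coefficient rho."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Illustrative data: orbital periods (days) vs. log radio normalizations.
periods = [0.2, 0.4, 1.7, 2.6, 5.1, 6.5, 33.9]
norms = [-3.1, -2.0, -2.9, -1.5, -2.4, -1.8, -2.7]
rho = spearman(periods, norms)
assert -1.0 <= rho <= 1.0
```

The significance (null hypothesis probability) would then be read off the Spearman distribution for the given sample size.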
\begin{figure}[tb] \begin{center} \includegraphics[width=7.5cm]{normalisations_VS_orbit_param.ps} \includegraphics[width=5.5cm]{Gallo_boosting_scatter.ps} \caption{{\it Left-hand panel:} Radio and near-IR normalizations as a function of the orbital period and the size of the Roche lobe of the BHCs. Our BHC sample is listed in the inset. The inset also shows a key to the symbols. {\it Right-hand panel:} Values of the radio luminosity expected from our toy model for 10 viewing angles, in Eddington units. See the left-hand panel for a key to the symbols.} \label{fig:toy_model} \end{center} \end{figure} We also investigate the dependence of the radio and near-IR normalizations on the inclination between the jet axis and the line of sight. We will refer to this angle $i$ as either inclination or viewing angle. \cite[Casella et al. (2010)]{Casella2010} recently showed that compact steady jets from BHCs can have rather high bulk Lorentz factors $\Gamma >2$. This result suggests that de-boosting effects can become important, not only at high viewing angles. In our analysis we assume that the X-ray emission is un-beamed (but see \cite[Fender 2010]{Fender2010} and references therein). To test if a correlation exists, we calculated the Spearman coefficients $\rho$. We show them in Table \ref{tab:log_spearman}. In the case of the near-IR normalizations, we obtained $\rho \sim -0.9$, with a probability for the null hypothesis of $\sim 2 \%$. This suggests that there is an anticorrelation between the inclination angle and the near-IR normalization. However, the small number of data points (only 7) might have biased this result. During an outburst, BHCs usually show a transition to the HSS. However, some sources spend the whole outburst in the LHS (or in the LHS and in the intermediate states), without making a transition to the soft states (see \cite[Belloni 2009]{Belloni2009} and references therein).
We investigated whether the type of outburst (LHS only or with a transition to the soft states) affects the jet power, but we could not find any obvious dependence. \section{A jet-toy model} Here we define a jet-toy model. The aim is to test whether a dependence of the bulk Lorentz factor of the jet $\Gamma$ on the accretion-powered X-ray luminosity might qualitatively describe the scatter around the radio/X-ray correlation and the ``radio quiet'' BHC population. We will consider a bulk Lorentz factor $\Gamma$ that becomes larger than $\sim 1.4$ above $\sim 0.1 \%$ of the Eddington luminosity $L_{Edd}$. This assumption is based on the fact that compact steady jets are usually thought to be mildly relativistic ($\Gamma \leq 2$, but see \cite[Casella et al. 2010]{Casella2010}) in the LHS, while major relativistic ejections ($\Gamma \geq 2$) are tentatively associated with the transition from the hard to the soft states. We considered a uniform distribution of 10 values of $\cos i$ between 0 and 1. Figure \ref{fig:toy_model} (right-hand panel) illustrates the results from our toy model: it clearly results in a distribution in the ($L_X,L_R$) plane which broadens at higher luminosities. The toy model can qualitatively reproduce the scatter around the radio/X-ray correlation. \begin{table} \begin{center} \caption{Spearman rank correlation test for our data points.
The number of data points, as well as the probabilities for the null hypothesis are reported.} \label{tab:log_spearman} {\scriptsize \begin{tabular}{l c c c} \hline Normalization & Number of points & Spearman coefficient $\rho$ & Probability for the null hypothesis (\%) \\ \multicolumn{4}{c}{{\bf Size of the Roche lobe}} \\ radio & 13 & 0.5 & 11.0 \\ near IR & 7 & 0.3 & 43.0 \\ \multicolumn{4}{c}{{\bf Orbital period}} \\ radio & 15 & 0.2 & 54.2 \\ near IR & 7 & 0.4 & 33.2 \\ \multicolumn{4}{c}{{\bf Inclination angle}} \\ radio & 13 & -0.2 & 44.7 \\ near IR & 7 & -0.9 & 2.4 \\ \hline \end{tabular} } \end{center} \end{table} \section{Comparison with neutron stars} NSs are known to be fainter in radio than BHCs, given a certain X-ray luminosity, by a factor $\gtrsim 30$ (see e.g. \cite[Migliari \& Fender 2006]{MigliariFender2006}). This difference in radio power can be reduced to a factor $\gtrsim 7$ if a mass correction from the fundamental plane of black hole activity (\cite[Merloni et al. 2003]{Merloni2003}) is applied. We will now compare the ``radio quiet'' BHCs to the population of NSs that have been detected in radio. Our sample of NSs includes the same data points as in \cite[Migliari \& Fender (2006)]{MigliariFender2006} with the addition of points from recent observations of Aql~X-1, 4U~0614-091 and IGR~J00291+5934 (\cite[Tudose et al. 2009]{Tudose2009}, \cite[Migliari et al. 2010]{Migliari2010} and \cite[Lewis et al. 2010]{Lewis2010}, respectively). To test whether the ``radio quiet'' BHCs and the NSs are statistically distinguishable in the ($L_X,L_R$) plane, we performed a two-dimensional Kolmogorov-Smirnov (K-S) test. The K-S test shows that the probability that the ``radio quiet'' BHCs and the NSs are statistically indistinguishable (i.e. the probability of the null hypothesis) is different from 0, despite being small ($P \sim 0.13 \%$), if a mass correction is applied. 
If we do not apply a mass correction, the probability of the null hypothesis is consistent with 0: the two groups constitute two different populations. \section{Conclusions} We examined three characteristic parameters of BHCs and the properties of their outbursts to test whether they regulate the energy output in the jet. If our estimates of the jet power are correct, neither the orbital period nor the size of the accretion disc is related to the radio and near-IR jet power. We also could not find any association between the jet power and the type of outburst (with or without a transition to the HSS). We did not find any association between the viewing angles and the jet power inferred from radio observations. The jet power obtained from near-IR measurements decreases when the inclination angle increases. This result could favour a scenario in which the jet decelerates as it moves from the IR-emitting to the radio-emitting region.\\ We defined a jet-toy model in which the jet Lorentz factor becomes larger than $\sim 1$ above $0.1 \% \, L_{Edd}$. The model results in a distribution in the ($L_X,L_R$) plane which broadens at high luminosities. However, the model has several limitations; for instance, it cannot reproduce the measured inclination angles of the BHCs in our sample.\\ We finally compared the ``radio quiet'' BHCs to the NSs. A two-dimensional K-S test cannot completely rule out the possibility that the two families are statistically indistinguishable in the ($L_X,L_R$) plane, if a mass correction is applied. This result suggests that some ``radio quiet'' BHCs could actually be NSs; alternatively, it suggests that some BHCs feature a disc-jet coupling similar to NSs.
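The de-boosting ingredient of the toy model can be sketched as follows. The Doppler factor $\delta = [\Gamma(1-\beta\cos i)]^{-1}$ is standard; the boosting index of 2 and the specific Lorentz factor values 1.05 and 2.0 are assumptions of this sketch, with only the step in $\Gamma$ at $0.1\%\,L_{Edd}$ taken from the text.

```python
import math

def doppler_factor(gamma, cos_i):
    """Doppler factor delta = 1 / (Gamma * (1 - beta * cos i))."""
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    return 1.0 / (gamma * (1.0 - beta * cos_i))

def observed_radio(l_intrinsic, lx_edd, cos_i, k=2.0):
    """Observed (de-)boosted radio luminosity.  The step in Gamma at
    0.1% L_Edd and the values 1.05 / 2.0 are assumed, not fitted."""
    gamma = 2.0 if lx_edd > 1e-3 else 1.05
    return l_intrinsic * doppler_factor(gamma, cos_i) ** k

# Ten uniformly spaced values of cos(i), as in the toy model.
cos_grid = [(j + 0.5) / 10 for j in range(10)]

low = [observed_radio(1.0, 1e-4, c) for c in cos_grid]   # below the step
high = [observed_radio(1.0, 1e-2, c) for c in cos_grid]  # above the step

# A faster jet spreads the observed luminosities over a wider range,
# broadening the (L_X, L_R) distribution at high luminosity.
assert max(high) / min(high) > max(low) / min(low)
```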
\section{Introduction} Computational game theory studies the algorithmic nature of conflicting entities and establishes {\em equilibria}: states of balance that minimise the negative effects among players. The field has attracted much attention in the past 10-15 years due to applications in multi-agent systems, electronic markets and social networks \cite{PC2001,RTV2007,SK2008}. In this paper, we investigate the problem of software architecture design from a game theory perspective. In particular, we provide a novel model, called {\em decomposition game}, which captures interactions among software requirements and derives a software architecture through equilibria. The architecture of a software system lays out its basic composition. As software systems become larger, quality attributes such as performance, reliability, usability and security play an increasingly important role. It is commonly believed that architecture design heavily influences these quality attributes \cite{B07}. A major objective of architecture design is therefore the assurance of non-functional requirements through compositional decisions. In other words, we need to answer the following question: {\em What architecture best fulfills the desirable software requirements?} There is, however, usually no ``perfect'' architecture that fulfills every requirement. For example, performance and security are both key non-functional requirements, which may demand fast response time to the users, and the application of a sophisticated encryption algorithm, respectively. These two requirements are in intrinsic conflict, as a strong focus on one will negatively impact the fulfilment of the other. A main task of the software architect, therefore, is to balance such ``interactions'' among requirements and decide on appropriate tradeoffs among conflicting requirements.
While it is common practice to decide on software architecture designs through the designers' experience and intuition, formal approaches for architecture design are desirable as they facilitate standardisation and automation of this process, providing rigorous guidelines and allowing automatic analysis and verification \cite{GD2003}. Notable formal methods in software architecture include a large number of formal architecture description languages (ADLs), which are useful tools in communicating and modeling architectures. However, as argued by \cite{WH2005}, industry adoption of ADLs is rare due to limitations in usability and formality. Other algorithmic methods for software architecture design include employing hierarchical clustering algorithms to decompose components based on their common attributes \cite{LXZ2007}, as well as quantifying tradeoffs between requirements \cite{AADHMH2011}. In this paper, we propose to use computational game theory as a mathematical foundation for conceptualising software architecture designs from requirements. Our motivation comes from the following two lines of research: \paragraph{(1). Attribute driven design (ADD)}: ADD is a systematic method for software architecture design. The method was invented by Bass, Klein and Bachmann in \cite{BLMF2002} and subsequently updated and improved through a sequence of works \cite{G01,WR2006}. The goal is to assist designers in analysing quality attribute tradeoffs and to provide design suggestions and guidance. Inputs to ADD are functional and non-functional requirements, as well as design constraints; outputs are conceptual architectures which outline coarse-grained system compositions. The method involves a sequence of well-defined steps that recursively decompose a system into components, subcomponents, and so on. These steps are not algorithmic: they are meant to be followed by system designers based on their experience and understanding of design principles.
As mentioned by the authors in \cite{BLMF2002}, an ongoing effort is to investigate rigorous approaches to producing conceptual architectures from requirements, hence enabling automated design recommendation under the ADD framework. To this end, we initiate a game-theoretic study to formulate the interactions among software requirements so that a conceptual architecture can be obtained in an algorithmic way. \paragraph{(2). Coalition game theory}: A coalition game is one where players exercise collaborative strategies, and competition takes place among coalitions of players rather than individuals. In ADD, we can imagine that each requirement is ``handled'' by a player, whose goal is to set up a coalition with others to maximise the collective payoff. The set of coalitions then defines components in a system decomposition, which entails a software architecture. This fits into the language of coalition games. However, the usual axioms in coalition games specify super-additivity and monotonicity, that is, the combination of two coalitions is always more beneficial than each separate coalition, and the payoff increases as a coalition grows in size. Such assumptions are not suitable in this context, as the combination of two conflicting requirements may result in a lower payoff. Hence a new game model is necessary to reflect the conflicting nature of requirements. In this respect, we propose that our model also enriches the theory of coalition games. \paragraph*{\bf Our contribution.} We provide a formal framework which, following the ADD paradigm \cite{BLMF2002}, recursively decomposes a system into sub-systems; the final decomposition reveals design elements in a software architecture. The basis of the framework is an algorithmic realisation of ADD. A crucial task in this algorithmic realisation is {\em system decomposition}, which derives a rational decomposition of an attribute primitive. We model system decomposition using a game, which we call a {\em decomposition game}.
The game takes into account {\em interactions} between requirements, which express the positive (enhancement) or negative (canceling) effects they exert on each other. A {\em solution concept} (equilibrium) defines a rational decomposition, which is based on the notions of {\em cohesion} and {\em expansion-freedom}. We demonstrate that any such game has a solution, and that a solution may not be unique. We also investigate algorithms that compute solutions for the decomposition game. Finding cohesive coalitions with maximal payoff turns out to be NP-hard (Thm.~\ref{thm:NPComplete}). Hence we propose a relaxed notion of {\em $k$-cohesion} for $k\geq 1$, and present a polynomial time algorithm for finding a $k$-cohesive solution of the game (Thm.~\ref{thm:algo}). To demonstrate the practical significance of our framework, we implement the framework and perform a case study on a real-world Cafeteria Ordering System. \paragraph*{\bf Paper organisation.} Section~\ref{sec:ADD} introduces the formal ADD framework. Section~\ref{sec:game} discusses the decomposition game and its solution concept. Section~\ref{sec:algorithm} presents algorithms for solving decomposition games. Section~\ref{sec:case} presents the case study. Section~\ref{sec:related} discusses related work and finally Section~\ref{sec:conclusion} concludes with future work. \section{Algorithmic Attribute Driven Design (ADD) Process} \label{sec:ADD} ADD is a general framework for transforming software requirements into a {\em conceptual software architecture}. Pioneers of this approach introduced it through several well-formed but informally defined concepts and steps \cite{BLMF2002,WR2006}. A natural question is whether it can be made more algorithmic, providing unbiased, mathematically grounded outputs. To answer this question, one would first need to translate the original informal descriptions into a mathematical language.
\vspace*{-0.9cm} \subsection{Software Requirements and Constraints} \paragraph*{\bf Functional requirements.} Functional requirements are specifications of the tasks the system performs (e.g. ``the system must notify the user once a new email arrives''). A functional requirement does not stand alone; often, it acts with other functional requirements to express certain combined functionality (e.g. ``the user should log in before making a booking''). Thus, a functionality may depend on other functionalities. We use a partial ordering $(\mathsf{F}, \prec)$ to denote the functional requirements, where each $r\in \mathsf{F}$ is a functional requirement, and $r_1\prec r_2$ denotes that $r_1$ depends on $r_2$. Note that $\prec$ is a transitive relation. \paragraph*{\bf Non-functional requirements.} Non-functional requirements specify the desired quality attributes; ADD uses {\em general scenarios} and {\em scenarios} as their standard representations. A general scenario is a high-level description of what it means to achieve a non-functional requirement \cite{BLMF2002}. For example, the general scenario ``\emph{A failure occurs and the system notifies the user; the system continues to perform in a degraded manner}'' refers to the availability attribute. There has been an effort to document all common general scenarios; a rather full list is given in \cite{G01}. Note that a general scenario is vaguely phrased and is meant to serve as a template for more concrete ``instantiations'' of quality attributes. Such ``instantiations'' are called scenarios. More abstractly, we use a pair $(\mathsf{S}, \approx)$ to denote the non-functional requirements, where $\mathsf{S}$ is a set of scenarios and $\approx$ is an equivalence relation on $\mathsf{S}$ denoting the {\em general scenario relation}: $q_1\approx q_2$ means that $q_1$ and $q_2$ instantiate the same general scenario.
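Since $\prec$ is required to be transitive, a dependency relation supplied as bare pairs must be closed under transitivity before use; a minimal sketch (the requirement names are invented for illustration):

```python
def transitive_closure(pairs):
    """Close a dependency relation under transitivity:
    (r1, r2) and (r2, r3) in the relation imply (r1, r3)."""
    rel = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d))
                    changed = True
    return rel

# Illustrative: "payment" depends on "booking", which depends on "login".
prec = transitive_closure({("payment", "booking"), ("booking", "login")})
assert ("payment", "login") in prec   # dependency propagates transitively
```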
\paragraph*{\bf Design constraints.} Design constraints are factors that must be taken into account and enforce certain design outcomes. A design constraint may affect both functional and non-functional requirements. More abstractly, we use a collection of sets $\mathsf{C}\subseteq 2^{\mathsf{F}\cup \mathsf{S}}$ to denote the set of design constraints, where each set $c\in \mathsf{C}$ is a design constraint. Intuitively, if two requirements $r_1,r_2$ belong to the same $c\in \mathsf{C}$, then they are constrained by the same design constraint $c$. \paragraph*{\bf Derived Functionalities.} The enforcement of certain quality attributes may lead to additional functionalities. For example, to ensure availability, it may be necessary to add extra functionalities to detect failure and automatically bypass failed modules. Hence we introduce a {\em derivation relation} $\hookrightarrow\subseteq \mathsf{S} \times \mathsf{F}$ such that $r\hookrightarrow s$ means the functional requirement $s$ is derived from the scenario $r$. \subsection{Attribute Primitives} \begin{figure} \centering \caption{\label{fig:requirement}\small Example~\ref{exp:requirement}: The requirements, constraints and their relations.} \includegraphics[width=6cm]{definition.png} \end{figure} The intended outcome of ADD describes the {\em design elements}, i.e., subsystems, components or connectors. It is important to note that the goal of ADD is not the complete automation of the design process, but rather to provide useful guidance. Thus, the conceptual view reveals only the organisational structure but not the concrete design. An attribute primitive is a set of design elements that collaboratively perform certain functionalities and meet one or more quality requirements; it is also the minimal combination with respect to these goals \cite{BLMF2002}. Examples of attribute primitives include data routers, firewalls, virtual machines, interpreters and so on.
ADD prescribes a list of attribute primitives together with descriptions of their properties and side effects (such as in \cite{G01}). Hence, ADD can essentially be viewed as assigning the right attribute primitives to the right requirement combinations. Note also that an attribute primitive may be broken down further. \begin{definition}[Attribute primitive] An {\em attribute primitive} is a tuple $$\mathcal A=(\mathsf{F}, \mathsf{S}, \mathsf{C}, \prec,\approx, \hookrightarrow)\vspace*{-0.2cm}$$ where $\mathsf{F}$ is a set of functional requirements, $\mathsf{S}$ is a set of scenarios, $\mathsf{C}\subseteq 2^{\mathsf{F}\cup \mathsf{S}}$ is a set of design constraints, $\prec$ is the dependency relation on $\mathsf{F}$, $\approx$ is the general scenario relation on $\mathsf{S}$, and $\hookrightarrow\subseteq \mathsf{S}\times \mathsf{F}$ is a derivation relation. \end{definition} % Let $\mathcal A=(\mathsf{F}, \mathsf{S}, \mathsf{C}, \prec,\approx,\hookrightarrow)$ be an attribute primitive. We also need the following definitions: \begin{itemize} \item A {\em requirement} of $\mathcal A$ is an element of the set $\mathsf{R}\coloneqq\mathsf{F}\cup \mathsf{S}$. \item For $r\in \mathsf{F}$, the {\em dependency set} of $r$ is the set $f(r)\coloneqq\{r'\in \mathsf{F}\mid r\preceq r'\}$. \item For $r\in \mathsf{S}$, the {\em general scenario} of $r$ is the set $g(r)\coloneqq\{r'\in \mathsf{S}\mid r\approx r'\}$, i.e., the $\approx$-equivalence class of $r$. \item For $r\in \mathsf{R}$, the {\em constraints} of $r$ form the set $c(r)\coloneqq\{t\in \mathsf{C}\mid r\in t\}$. \item For $r\in \mathsf{S}$, the {\em derived set} of $r$ is $d(r) \coloneqq \{s\in \mathsf{F}\mid r\hookrightarrow s\}$, and for $s\in \mathsf{F}$, let $d^{-1}(s) \coloneqq \{r\in \mathsf{S} \mid r\hookrightarrow s\}$. \end{itemize} % \begin{definition}[Design element] A {\em design element} of $\mathcal A$ is a subset $D\subseteq \mathsf{R}$.
A {\em decomposition} of $\mathcal A$ is a sequence of design elements $ \vec{D}=(D_1, D_2,\ldots, D_k)$ where $k\geq 1$, $\bigcup_{1\leq i\leq k} D_i = \mathsf{R}$, and $D_i\cap D_j=\varnothing$ for any $i\neq j$. \end{definition}\vspace*{-0.2cm} \begin{example}\label{exp:requirement} Fig.~\ref{fig:requirement} shows an attribute primitive $\mathcal A=(\mathsf{F}, \mathsf{S}, \mathsf{C}, \prec,\approx,\hookrightarrow)$ with:\vspace*{-0.2cm} \begin{itemize} \item $\mathsf{F} = \{f_1, f_2, f_3\}$ and $\mathsf{S} = \{q_1, q_2, q_3\}$ are the requirements; \item $\mathsf{C} = \{c_1, c_2\}$ where $c_1=\{q_1,q_3\}$, $c_2=\{q_1\}$; \item $f_1 \prec f_2$, $q_1 \approx q_2$, $q_1 \hookrightarrow f_1$, $q_1 \hookrightarrow f_2$, $q_2 \hookrightarrow f_1$, $q_2 \hookrightarrow f_2$, $q_3 \hookrightarrow f_3$. \end{itemize} \end{example} \subsection{The ADD Procedure} Essentially, ADD provides a means for system decomposition: the entire system is treated as an attribute primitive, which is the input. At each step, the procedure decomposes an attribute primitive $\mathcal A$ by identifying a decomposition $(D_1,D_2,\ldots,D_k)$. The process then maps each resulting design element $D_i$ to an attribute primitive $\mathcal A_i=({\mathsf{F}}_i, {\mathsf{S}}_i, \mathsf{C}_i, \prec_i, \approx_i, \hookrightarrow_i)$, which contains all elements in $D_i$ and may require some further requirements and constraints. Hence we require that $D_i\subseteq {\mathsf{F}}_i \cup {\mathsf{S}}_i$ and that $\prec_i$, $\approx_i$, $\mathsf{C}_i$, $\hookrightarrow_i$ are consistent with $\prec$, $\approx$, $\mathsf{C}$ and $\hookrightarrow$ on $D_i$, respectively; in this case we say that $\mathcal A_i$ is {\em consistent} with $D_i$. Thus the attribute primitive $\mathcal A$ is decomposed into $k$ attribute primitives $\mathcal A_1,\mathcal A_2,\ldots,\mathcal A_k$.
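The partition conditions in the definition of a decomposition (the design elements cover $\mathsf{R}$ and are pairwise disjoint) can be checked mechanically; a small sketch using the requirement names of Example~\ref{exp:requirement} (the function name is ours):

```python
def is_decomposition(design_elements, requirements):
    """Check that the design elements cover all requirements
    and are pairwise disjoint."""
    seen = set()
    for d in design_elements:
        if d & seen:          # overlaps an earlier element
            return False
        seen |= d
    return seen == requirements

R = {"f1", "f2", "f3", "q1", "q2", "q3"}
assert is_decomposition([{"f1", "f2", "q1", "q2"}, {"f3", "q3"}], R)
assert not is_decomposition([{"f1", "f2"}, {"f2", "f3"}], R)  # overlap
assert not is_decomposition([{"f1"}], R)                       # not covering
```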
On each $\mathcal A_i$ where $1\leq i\leq k$, the designer may choose to either terminate the process, or start a new step recursively to further decompose $\mathcal A_i$. See Procedure~\ref{prc:ADD}. \begin{algorithm} \caption{$\mathsf{ADD}(\mathcal A)$ (General Plan)} \label{prc:ADD} \begin{algorithmic}[1] \State $(D_1,D_2,\ldots,D_k)\gets \mathsf{Decompose}(\mathcal A)$ // compute a rational decomposition of $\mathcal A$ \For{$1\leq i\leq k$} \State $\mathcal A_i \leftarrow $ an attribute primitive consistent with $D_i$ \If{$\mathcal A_i$ needs further decomposition} \State $\mathsf{ADD}(\mathcal A_i)$ \EndIf \EndFor \end{algorithmic} \end{algorithm} We point out that the ADD procedure, as presented by its original proponents, involves numerous additional stages other than the ones described above \cite{WR2006}. The reason we choose this simplified description is that we believe these are the steps that can be rigorously presented, and they abstractly capture most of the steps mentioned in the original informal description. The $\mathsf{Decompose}(\mathcal A)$ operation produces a rational decomposition $(D_1,\ldots,D_k)$ of the input attribute primitive $\mathcal A$ that satisfies the requirements of $\mathcal A$. We also note that $\mathsf{Decompose}(\mathcal A)$ constitutes a crucial step in the ADD process, as the decomposition determines to a large extent how well the quality attributes are met. This step is also a challenging one as interactions among quality attributes create potential conflicts. Thus, in the next section, we define a game model which allows us to automate the $\mathsf{Decompose}(\mathcal A)$ operation. \section{Decomposition Games}\label{sec:game} \subsection{Requirement Relevance}\label{subsec:interaction} The $\mathsf{Decompose}(\mathcal A)$ procedure looks for a rational decomposition that meets the requirements in $\mathcal A$ as much as possible.
Let $\mathcal A=(\mathsf{F}, \mathsf{S}, \mathsf{C}, \prec,\approx,\hookrightarrow)$ be an attribute primitive. Relevance between requirements is determined by the relations $\prec,\approx,\hookrightarrow$ and the constraint set $\mathsf{C}$. In the following, the {\em Jaccard index} $J(S_1,S_2)$ measures the similarity between two sets $S_1,S_2$ with \[ J(S_1,S_2) = \frac{|S_1\cap S_2|}{|S_1\cup S_2|}. \] Intuitively, the relevance of a requirement $r$ to other requirements is influenced by the ``links'' between $r$ and the functional and non-functional requirements, as well as the design constraints. \begin{definition}[Relevance] Two requirements $r_1,r_2\in \mathsf{R}$ are {\em relevant} if \begin{itemize} \item $r_1,r_2\in \mathsf{F}$, and either $d^{-1}(r_1)\cap d^{-1}(r_2)\neq \varnothing$ (derived from some common scenario), or $f(r_1)\cap f(r_2)\neq \varnothing$ (relevant through dependency), or $c(r_1)\cap c(r_2)\neq \varnothing$ (share some common design constraints); \item $r_1,r_2\in \mathsf{S}$, and either $r_1\approx r_2$ (instantiate the same general scenario), or $d(r_1)\cap d(r_2)\neq \varnothing$ (jointly derive some functionality), or $c(r_1)\cap c(r_2)\neq \varnothing$; \item $r_1\in \mathsf{F}$, $r_2\in \mathsf{S}$, and either $f(r_1) \cap d(r_2) \neq \varnothing$ ($r_1$ depends on a requirement that is derived from $r_2$), or $c(r_1)\cap c(r_2) \neq \varnothing$. \end{itemize} \end{definition} If two requirements are relevant, their relevance depends on the overlaps between their derived sets, dependency sets and constraints. If two requirements are not relevant, then we regard them as having a negative relevance $\lambda<0$, which represents a ``penalty'' one pays when two irrelevant requirements end up in the same design element.
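To make the relevance test concrete, here is a small Python sketch. The set encoding of Example~\ref{exp:requirement} and the convention $J(\varnothing,\varnothing)=0$ are our own assumptions for illustration:

```python
def jaccard(s1, s2):
    """Jaccard index J(S1, S2) = |S1 ∩ S2| / |S1 ∪ S2|.
    We adopt the convention J(∅, ∅) = 0 (an assumption; this corner
    case is left implicit above)."""
    union = s1 | s2
    return len(s1 & s2) / len(union) if union else 0.0

# Hypothetical encoding of Example 1: derived sets d(.) and constraint
# sets c(.) of the scenarios q1, q2, q3 as plain Python sets.
d = {'q1': {'f1', 'f2'}, 'q2': {'f1', 'f2'}, 'q3': {'f3'}}
c = {'q1': {'c1', 'c2'}, 'q2': set(), 'q3': {'c1'}}

def scenarios_relevant(r1, r2, same_general_scenario=False):
    """Two scenarios are relevant iff r1 ≈ r2, or d(r1) ∩ d(r2) ≠ ∅,
    or c(r1) ∩ c(r2) ≠ ∅."""
    return same_general_scenario or bool(d[r1] & d[r2]) or bool(c[r1] & c[r2])
```

For instance, $q_1$ and $q_3$ come out as relevant solely through the shared constraint $c_1$.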
\begin{definition} We define the {\em relevance index} $\sigma(r_1,r_2)$ of $r_1\neq r_2\in \mathsf{R}$ as follows: \begin{enumerate} \item if two functional requirements $r_1,r_2\in \mathsf{F}$ are relevant, then\vspace*{-0.2cm} \[ \sigma(r_1,r_2) = \alpha J(d^{-1}(r_1),d^{-1}(r_2)) + \beta J(f(r_1),f(r_2)) + \gamma J(c(r_1),c(r_2));\vspace*{-0.2cm} \] \item if two scenarios $r_1,r_2\in \mathsf{S}$ are relevant, then\vspace*{-0.2cm} \[ \sigma(r_1,r_2) = \beta J(d(r_1),d(r_2)) + \gamma J(c(r_1),c(r_2));\vspace*{-0.2cm} \] \item if $r_1\in \mathsf{F}$ and $r_2\in \mathsf{S}$ are relevant, then\vspace*{-0.2cm} \[ \sigma(r_1,r_2) = \sigma(r_2,r_1)= \beta J(f(r_1), d(r_2)) + \gamma J(c(r_1),c(r_2));\vspace*{-0.2cm} \] \item otherwise, $\sigma(r_1,r_2)=\lambda$. \end{enumerate}\vspace*{-0.1cm} The constants $\alpha,\beta,\gamma$ are positive real numbers that represent weights on the overlaps in $r_1,r_2$'s derived sets, dependency sets and constraints, respectively. We require $\alpha\!+\!\beta\!+\!\gamma\!=\!1$. \end{definition} For simplicity, we do not include these constants in expressing the function $\sigma$, nor in any subsequent notion that depends on $\sigma$ (thus saving us from writing ``$\sigma(r_1,r_2,\alpha,\beta,\gamma,\lambda)$''). \begin{example}\label{exp:relevance} Continuing from $\mathcal A$ in Example~\ref{exp:requirement}, to emphasise the non-functional requirements we give a larger weight to $\alpha$, setting $\alpha = 0.5$, $\beta=0.4$, $\gamma=0.1$. We also set $\lambda=-0.5$. Then $\sigma(r_1,r_2)\!=\!0.4\!\times\!\frac{2}{2}\!=\!0.4$ for any $(r_1,r_2)\!\in\!\{(q_1,q_2),(q_3,f_3)\}\!\cup\!(\{q_1,q_2\}\!\times\!\{f_1,f_2\})$; $\sigma(q_1, q_3)\!=\!0.1\!\times\!\frac{1}{2}\!=\!0.05$; $\sigma(f_1, f_2)\!=\!0.5 \times \frac{2}{2} + 0.4 \times \frac{2}{2} = 0.9$; and the relevance between any other pair is $-0.5$. Fig.~\ref{fig:relevance}(a) illustrates the (positive) relevance in a weighted graph.
\end{example} \subsection{Decomposition Games} \label{subsec:game} We employ notions from coalition games to define what constitutes a {\em rational} decomposition. In a coalition game, players cooperate to form coalitions which achieve certain collective payoffs \cite{MO2008}.\vspace*{-0.1cm} \begin{definition}[Coalition game] A {\em coalition game} is a pair $(P,\nu)$ where $P$ is a finite set of players, and each subset $D\subseteq P$ is a {\em coalition}; $\nu: 2^P \to \mathbb{R}$ is a {\em payoff function} associating with every $D\subseteq P$ a real value $\nu(D)$ satisfying $\nu(\varnothing)=0$.\vspace*{-0.1cm} \end{definition} This provides the setup for decompositions: \ Imagine a coalition game consisting of $|\mathsf{R}|$ agents as players, where each agent is in charge of a different requirement. The players form coalitions, which correspond to sets of requirements, i.e., design elements. The payoff function associates with every coalition a numerical value, which is the payoff gained by each member of the coalition. Therefore, an equilibrium of the game amounts to a decomposition with the right balance among all requirements -- this is regarded as a rational decomposition. It remains to define the payoff function. Naturally, the payoff of a coalition is determined by the {\em interactions} among its members. Take $r_1,r_2\in D$. If one of $r_1,r_2$ is a functional requirement, then their interaction is defined by their relevance index $\sigma(r_1,r_2)$, as higher relevance means a higher level of interaction. Suppose now that both $r_1,r_2$ are scenarios (non-functional). Then the interaction becomes more complicated, as a quality attribute may enhance or impair another quality attribute.
In \cite[Chapter 14]{WKJ2013}, the authors identified effects acting from one quality attribute to another, which are expressed by a {\em tradeoff matrix} $T$: \begin{itemize} \item $T$ has dimension $m\times m$ where $m$ is the number of general scenarios; \item for $ i\neq j \in \{1,\ldots,m\}$, the $(i,j)$-entry $T_{i,j}\in \{-1,0,1\}$. \end{itemize} Let $g_1,g_2,\ldots,g_m$ be general scenarios. $T_{i,j}=1$ (resp.\ $T_{i,j}=-1$) means $g_i$ has a positive (resp.\ negative) effect on $g_j$, and $T_{i,j}=0$ means no effect. E.g., the tradeoff matrix defined on six common quality attributes is {\[ \begin{array}{c|cccccc} & \text{\scriptsize \ Perfo.\ } & \text{\scriptsize Modif.\ } & \text{\scriptsize Secur.\ } & \text{\scriptsize Avail.\ } & \text{\scriptsize Testa.\ } & \text{\scriptsize Usabi.\ } \\ \hline \text{\scriptsize Performance} &0 & -1 & 0 & 0 & 0 & -1 \\ \text{\scriptsize Modifiability} &-1 & 0 & 0 & 1 & 1 & 0 \\ \text{\scriptsize Security} &-1 & 0 & 0 & 1 & -1 & -1 \\ \text{\scriptsize Availability} &0 & 0 & 0 & 0 & 0 & 0 \\ \text{\scriptsize Testability} &0 & 1 & 1 & 1 & 0 & 1 \\ \text{\scriptsize Usability} &-1 & 0 & 0 & 0 & -1 & 0 \end{array} \]} Note that the matrix is not necessarily symmetric: The effect from $g_1$ to $g_2$ may be different from the effect from $g_2$ to $g_1$. For example, an improvement in system performance may not affect security, but increasing security will almost always adversely impact performance. We assume that the matrix $T$ is given prior to ADD; this assumption is reasonable as there is an effective map from any general scenario to the main quality attribute it tries to capture. We use this tradeoff matrix to define the interaction between two scenarios in $\mathsf{S}$. \begin{definition}[Coalitional relevance] For a coalition $D\subseteq \mathsf{R}$ and $r\in D$, the {\em coalitional relevance} of $r$ in $D$ is the total relevance from $r$ to all other requirements in $D$, i.e., $\rho(r,D)=\sum_{s\in D, s\neq r} \sigma(r,s)$.
\end{definition} \begin{definition}[Effect factor] For scenarios $r_1,r_2$ in the same coalition $D$, the {\em effect factor} from $r_1$ to $r_2$ expresses the effect of $r_1$ towards $r_2$, i.e., \[ \varepsilon(r_1,r_2,D) = \begin{cases} -|\rho(r_1,D)| & \text{ if $T(g(r_1),g(r_2))=-1$} \\ 0 & \text{ if $T(g(r_1),g(r_2))=0$} \\ \rho(r_1,D) & \text{ if $T(g(r_1),g(r_2))=1$}\vspace*{-0.2cm} \end{cases} \] \end{definition} We are now ready to define the interaction between two requirements $r_1,r_2\in \mathsf{R}$. \begin{definition}[Interaction] Let $r_1\!\neq\!r_2\!\in\!\mathsf{R}$ be requirements. The {\em interaction} between $r_1,r_2$ is simply the relevance $\sigma(r_1,r_2)$ if one of $r_1,r_2$ is functional; otherwise (both $r_1,r_2$ are non-functional), it is the sum of their effect factors, i.e.,\vspace*{-0.2cm} \[ \text{the {\em interaction} } \nu(r_1,r_2,D) \coloneqq \begin{cases} \sigma(r_1,r_2) & \text{ if $\{r_1,r_2\}\cap \mathsf{F}\neq \varnothing$}\\ \varepsilon(r_1,r_2,D)+\varepsilon(r_2,r_1,D) & \text{ otherwise.} \end{cases} \] The {\em coalition utility} $\nu(D)$ of any coalition $D\subseteq \mathsf{R}$ is defined as the sum of interactions among all (unordered) pairs of requirements in the coalition, i.e.,\vspace*{-0.2cm} \[ \nu(D) = \sum_{r_1\neq r_2\in D} \nu(r_1,r_2,D). \] \end{definition} \begin{definition}[Decomposition games (DG)] Let $\mathcal A=(\mathsf{F}, \mathsf{S}, \mathsf{C},\prec, \approx, \hookrightarrow)$ be an attribute primitive. The {\em DG} $G_\mathcal A$ is the coalition game $(\mathsf{F}\cup \mathsf{S}, \nu)$ where $\nu:2^{\mathsf{F}\cup \mathsf{S}}\to \mathbb{R}$ is the coalition utility function. \end{definition} \begin{figure} \begin{center}\caption{\label{fig:relevance} \small (a) Weights on the edges are relevance values (function $\sigma$) between requirements in Example~\ref{exp:relevance}; the diagram omits the negatively weighted pairs. \ (b) The decomposition $\{S_1,S_2\}$ is a solution with $\nu(S_1)=2.5$, $\nu(S_2)=0.4$.
The coalition $C$ has $\nu(C)=-1$.} \end{center} \includegraphics[width=12.0cm]{relevance.png} \end{figure} \begin{example}[Coalition Utility]\label{exp:utility} Continuing the setting in Example~\ref{exp:relevance}, let the general scenarios be $g_1=\{q_1,q_2\}$ and $g_2=\{q_3\}$. We assume the matrix $T$ specifies $T(g_1,g_2)=1$ and $T(g_2,g_1)=-1$. Consider the coalition $C=\{q_1,q_3,f_3\}$. We have: \[ \rho(q_1,C)=0.05-0.5=-0.45; \text{ and }\rho(q_3,C)=0.4+0.05=0.45. \] So $\varepsilon (q_1,q_3,C)=-0.45\times 1=-0.45$ and $\varepsilon(q_3,q_1,C)=0.45\times (-1)=-0.45$. Thus $\nu(q_1, q_3, C) =-0.45-0.45= -0.9$. Therefore, $\nu(C) = \sigma(q_1,f_3)+\sigma(q_3,f_3)+(-0.9)=(-0.5)+0.4+(-0.9) = -1$, but $\nu(C \setminus \{q_1\}) = \nu(\{q_3,f_3\})=\sigma(q_3,f_3)=0.4$; see Fig.~\ref{fig:relevance}(b). As it turns out, despite the fact that the matrix $T$ indicates that $q_1$ acts positively on $q_3$, and that $q_1,q_3$ have a positive (0.05) relevance, adding $q_1$ to the coalition $\{q_3,f_3\}$ drastically decreases the coalition utility. \end{example} \subsection{Solution Concept}\label{subsec:solution} We point out some major differences between decomposition and typical coalition games: Firstly, in coalition game theory, one normally assumes the axioms of superadditivity ($\nu(D_1\cup D_2) \geq \nu(D_1)+\nu(D_2)$) and monotonicity ($D_1\subseteq D_2 \Rightarrow \nu(D_1)\leq \nu(D_2)$), which obviously need not hold for decomposition as players may counteract each other, reducing their combined utility. Secondly, the typical solution concepts in coalition games (such as Pareto optimality and the Shapley value) focus on the distribution of payoffs to each individual player, assuming a grand coalition consisting of all players. In decomposition such a grand coalition is normally not desirable, and the focus is on the overall payoff of each coalition $D$, rather than on the individual requirements.
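The arithmetic in Example~\ref{exp:utility} is easy to mechanise. The following Python sketch hard-codes the relevance values of Example~\ref{exp:relevance} (the dictionary encoding and all names are our own assumptions, not part of the model):

```python
# Sketch reproducing the coalition utility computation of the example above.
sigma = {}  # symmetric pairwise relevance sigma(r1, r2)
def set_sigma(a, b, v):
    sigma[(a, b)] = sigma[(b, a)] = v

set_sigma('q1', 'q3', 0.05)
set_sigma('q3', 'f3', 0.4)
set_sigma('q1', 'f3', -0.5)   # lambda: q1 and f3 are irrelevant

scenarios = {'q1', 'q3'}                 # f3 is functional
g = {'q1': 'g1', 'q3': 'g2'}             # general scenario of each scenario
T = {('g1', 'g2'): 1, ('g2', 'g1'): -1}  # tradeoff matrix entries

def rho(r, D):
    """Coalitional relevance: total relevance from r to the rest of D."""
    return sum(sigma[(r, s)] for s in D if s != r)

def eps(r1, r2, D):
    """Effect factor from r1 to r2 within coalition D."""
    t = T.get((g[r1], g[r2]), 0)
    if t == -1:
        return -abs(rho(r1, D))
    return rho(r1, D) if t == 1 else 0.0

def nu_pair(r1, r2, D):
    """Interaction: relevance if one requirement is functional,
    sum of effect factors if both are scenarios."""
    if r1 not in scenarios or r2 not in scenarios:
        return sigma[(r1, r2)]
    return eps(r1, r2, D) + eps(r2, r1, D)

def nu(D):
    """Coalition utility: sum of interactions over unordered pairs."""
    rs = sorted(D)
    return sum(nu_pair(rs[i], rs[j], D)
               for i in range(len(rs)) for j in range(i + 1, len(rs)))
```

Here $\nu(\{q_1,q_3,f_3\})$ evaluates to $-1$ and $\nu(\{q_3,f_3\})$ to $0.4$, matching the example.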
The above differences motivate us to consider a different solution concept for a DG $G_\mathcal A$. At any instance of the game, the players form a decomposition $(D_1,D_2,\ldots,D_k)$. We assume that the players may perform two collaborative strategies: \begin{enumerate} \item {\em Merge strategy}: Two coalitions may choose to merge if they would obtain a higher combined payoff. \item {\em Bind strategy}: Players within the same coalition may form a sub-coalition if they would obtain a higher payoff. \end{enumerate} \begin{example}[A Dilemma]\label{exp:dilemma} We present an example demonstrating the dynamics of a DG $G_\mathcal A$. This example shows a real-world dilemma: As a coalition pursues higher utility through expansion (merging with others), it may be better to choose a ``less-aggressive'' expansion strategy over the ``more-aggressive'' counterpart, even though the latter clearly brings a higher payoff. Assume the following situation (which is clearly plausible in an attribute primitive): \begin{itemize} \item $\mathsf{R}=\{d_1,d_2,d_3,d_4\}$ where $\mathsf{S}=\{d_1,d_4\}$ and $d_1\not\approx d_4$. \item We set $\sigma(d_1, d_2)=\sigma(d_1, d_3)=\sigma(d_2,d_3)=0.1$, and $\sigma(d_2, d_4)=0.5$. \item The tradeoff matrix indicates $T(g(d_1), g(d_4))=0$, $T(g(d_4), g(d_1))=-1$. \item And, $d_1$ and $d_4$ are irrelevant, namely $\sigma(d_1, d_4)=\lambda = -0.7$. \vspace*{-0.2cm} \end{itemize} Suppose we start with the decomposition $\{S=\{d_1,d_2\}, \{d_3\}, \{d_4\}\}$. Then $\nu(S)=\nu(d_1, d_2,S)=\sigma(d_1, d_2)=0.1$. Coalition $S$ has two merge strategies: \begin{itemize} \item[(1)] For $S_1 = S\cup \{d_3\}$: $\nu(d_1, d_2,S_1) = \sigma(d_1, d_2) = 0.1$, $\nu(d_1, d_3,S_1)\!=\!\sigma(d_1, d_3)\!=\!0.1$, $\nu(d_2, d_3, S_1)\!=\!\sigma(d_2, d_3)\!=\!0.1$. Thus $\nu(S_1)\!=\!0.3$.
\item[(2)] For $S_2 = S\cup \{d_4\}$: $\nu(d_1,d_4,S_2)\!=\!\varepsilon(d_4, d_1,S_2)\!=\!-|{-0.7\!+\!0.5}|\!=\!-0.2$, $\nu(d_1, d_2,S_2)\!=\!\sigma(d_1, d_2)\!=\!0.1$, $\nu(d_2, d_4,S_2)\!=\!\sigma(d_2, d_4)\!=\!0.5$. Hence $\nu(S_2)\!=\!0.1\!-\!0.2\!+\!0.5\!=\!0.4$. \end{itemize} Merging with $\{d_4\}$ clearly results in a higher payoff for the combined coalition. However, if this merge happens, as $\nu\left(\{d_2, d_4\}\right)=0.5>\nu(S_2)=0.4$, $d_2$ and $d_4$ would choose to bind together, hence leaving $S_2$. This would be undesirable if $d_1$ is a critical non-functional requirement for $d_2$. \end{example} Example~\ref{exp:dilemma} shows that a solution concept should be a decomposition where no ``expansion'' or ``crumbling'' occurs to any coalition. Formally, we define the following solution concepts: \begin{definition}[Solution] Let $\vec{D}=(D_1,\ldots,D_k)$ be a decomposition of $\mathcal A$. \begin{enumerate} \item A coalition $D\subseteq \mathsf{R}$ is {\em cohesive} if for all $C\subsetneq D$, $\nu(C) < \nu(D)$; $\vec{D}$ is {\em cohesive} if so is every $D_i$. \item A coalition $D_i$ is {\em expansion-free} with respect to $\vec{D}$ if $\max\{\nu(D_i),\nu(D_j)\}\geq\nu(D_i\cup D_j)$ for every $j\neq i$; $\vec{D}$ is {\em expansion-free} if so is every $D_i$. \end{enumerate} A {\em solution} of a DG is a decomposition that is both cohesive and expansion-free. \end{definition} \begin{example}[Solution]\label{exp:solution} Continuing from Example~\ref{exp:utility}, the utilities for \[ S_1 = \{q_1, q_2, f_1, f_2\} \text{ \qquad and \qquad } S_2 = \{q_3, f_3\} \text{ \qquad are }\colon\] \begin{itemize} \item $S_1$: $\nu(q_1, q_2,S_1)= 0$, $\nu(q_1, f_1,S_1)=\nu(q_1, f_2,S_1)=\nu(q_2, f_1,S_1)=0.4$, \\ $\nu(q_2, f_2,S_1)=0.4$, $\nu(f_1, f_2,S_1)=0.9$. Thus $ \nu(S_1)=0.4\times 4+0.9=2.5$. \item $S_2$: $\nu(q_3, f_3,S_2)=0.4$. Thus $\nu(S_2)=0.4$. \end{itemize} Both $S_1$ and $S_2$ are cohesive.
Furthermore, we have $\nu(q_1,q_3,\mathsf{R})\!=\!0.75\!-\!1.05\!=\!-0.3$ and $\nu(q_2,q_3,\mathsf{R})=0.2-1.05=-0.85$. Thus $\nu(\mathsf{R})\!=\!2.9\!-\!0.5\!\times\!6\!-\!0.85\!-\!0.3\!=\!-1.25$. Consequently, $\{S_1,S_2\}$ is also expansion-free, and is thus a solution of the game. \end{example} A solution of a DG $G_\mathcal A$ corresponds to a rational decomposition of the attribute primitive $\mathcal A$. As shown by Thm.~\ref{thm:existence}, any attribute primitive admits a solution; rather expectedly, a solution may not be unique (Prop.~\ref{prp:unique}). \begin{theorem}[Solution Existence]\label{thm:existence} There exists a solution in any DG $G_\mathcal A$. \end{theorem} \begin{proof} We show existence of a solution by construction. Let $(D_1,D_2,\ldots,D_k)$ be a longest sequence such that for any $i=1,\ldots,k$, $D_i$ is a minimal coalition with maximal utility in $\mathsf{R}\setminus (D_1\cup\cdots\cup D_{i-1})$ (i.e., $\forall D\subseteq \mathsf{R}\setminus (D_1\cup\cdots\cup D_{i-1}): \nu(D_i)\geq \nu(D)$ and $\forall D\subsetneq D_i: \nu(D_i)>\nu(D)$). We claim that $\vec{D}=(D_1,\ldots,D_k)$ is a solution in $G_\mathcal A$. Indeed, for any $1\leq i\leq k$, any proper subset of $D_i$ would have payoff strictly smaller than $\nu(D_i)$ by minimality of $D_i$. Thus $\vec{D}$ is cohesive. Moreover, if $\nu(D_i\cup D_j)>\max\{\nu(D_i),\nu(D_j)\}$ for some $i\neq j$, then $D_{\min\{i,j\}}$ does not have maximal utility in $\mathsf{R}\setminus(D_1\cup\cdots\cup D_{\min\{i,j\}-1})$. Hence $\vec{D}$ is expansion-free. \qed \end{proof} \begin{proposition}\label{prp:unique} The solution of a DG may not be unique. \end{proposition} \begin{proof} Let $\mathcal A=(\mathsf{F},\mathsf{S},\mathsf{C}, \prec,\approx,\hookrightarrow)$ be an attribute primitive where $\mathsf{S}=\varnothing$ and $\mathsf{F}=\{d_1,d_2,\ldots,d_6\}$.
We may define $\mathsf{C}, \prec,\approx,\hookrightarrow$ in such a way that \begin{itemize} \item for all $\{i,j\}\subseteq \{1,2,3,4\}$ and $\{i,j\}\subseteq \{4,5,6\}$, $i\neq j \Rightarrow \nu(\{d_i,d_j\})=0.1$; \item for all $i\in \{1,2,3\}$, $j\in \{5,6\}$, $\nu(\{d_i,d_j\})=-0.1$. \end{itemize} Consider $\vec{C}\!=\!\{C_1\!=\!\{d_1,d_2,d_3\},C_2\!=\!\{d_4,d_5,d_6\}\} \text{ and } \vec{D}\!=\!\{D_1\!=\!\{d_1,d_2,d_3,d_4\},$ $D_2\!=\!\{d_5,d_6\}\}$. Note that $\nu(C_1)=0.3$ and $\nu(C_2) = 0.3$; $\vec{C}$ is cohesive and $\vec{C}$ is expansion-free as $\nu(\mathsf{F})=0.3=\nu(C_1)$. Note also that $\nu(D_1)=0.6$ and $\nu(D_2)=0.1$; $\vec{D}$ is cohesive and $\vec{D}$ is expansion-free as $\nu(D_1)>\nu(\mathsf{F})$.\qed \end{proof} \section{Solving Decomposition Games} \label{sec:algorithm} Based on our game model, the operation $\mathsf{Decompose}(\mathcal A)$ in Procedure~\ref{prc:ADD} is reduced to the following $\mathsf{DG}$ problem: \qquad INPUT: An attribute primitive $\mathcal A=(\mathsf{F},\mathsf{S}, \mathsf{C},\prec,\approx,\hookrightarrow)$ \qquad OUTPUT: A solution $\vec{D}=(D_1,D_2,\ldots,D_k)$ of the game $G_\mathcal A$ \noindent Here, we measure computational complexity with respect to the number of requirements in $\mathsf{F}\cup \mathsf{S}$. The proof of Theorem~\ref{thm:existence} already implies an algorithm for solving the $\mathsf{DG}$ problem: check all subsets of $\mathsf{R}$ to identify a minimal set with maximal utility; remove it from $\mathsf{R}$ and repeat. However, it is clear that this algorithm takes exponential time. We will demonstrate below that a polynomial-time algorithm for this problem is, unfortunately, unlikely to exist.
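For very small instances, the construction in the proof of Thm.~\ref{thm:existence} can be run directly. A brute-force Python sketch, with the utility function $\nu$ supplied as a black box (function and variable names are our own):

```python
from itertools import combinations

def solve_exhaustive(R, nu):
    """Greedy construction from the existence proof: repeatedly extract a
    minimal coalition of maximal utility among the remaining requirements.
    Enumerates all subsets, hence exponential in |R|."""
    remaining, solution = set(R), []
    while remaining:
        subsets = [frozenset(s)
                   for size in range(1, len(remaining) + 1)
                   for s in combinations(sorted(remaining), size)]
        best = max(nu(s) for s in subsets)
        # smallest coalition attaining the maximal utility: this makes it
        # cohesive (a proper subset attaining `best` would be smaller)
        D = min((s for s in subsets if nu(s) == best), key=len)
        solution.append(set(D))
        remaining -= D
    return solution
```

The resulting decomposition is cohesive and expansion-free by the argument of the proof, but the subset enumeration limits this sketch to toy examples.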
We consider the decision problem $\mathsf{DG\_D}$: {\em Given $\mathcal A$ and a number $w>0$, is there a solution $\vec{D}$ of $G_\mathcal A$ in which the highest utility of a coalition reaches $w$?} Recall that the payoff function $\nu$ of $G_\mathcal A$ is defined assuming constants $\alpha,\beta,\gamma>0$ and $\lambda<0$. The theorem below holds assuming $\lambda<-\gamma$. \begin{theorem}\label{thm:NPComplete} The $\mathsf{DG\_D}$ problem is NP-hard. \end{theorem} \begin{proof} The proof is via a reduction from the maximal clique problem, which is a well-known NP-hard problem. Given an undirected graph $H=(V, E)$, we construct an attribute primitive $\mathcal A$ such that any cohesive coalition in $G_\mathcal A$ reveals a clique in $H$. Suppose $V=\{1,2,\ldots,n\}$. The requirements of $\mathcal A$ consist of $n^2$ scenarios: $\mathsf{R}=\mathsf{S}\coloneqq \{a_{i,i'}\mid 1\leq\! i\!\leq\! n, 1\leq\! i'\!\leq\! n\}$. In particular, all requirements are non-functional. We define an edge relation $E'$ on $\mathsf{S}$ such that \begin{enumerate} \item $(i,j)\in E$ iff $(a_{i,i'},a_{j,j'})\in E'$ for some $1\leq i' \leq n$ and $1\leq j'\leq n$; \item if $(a_{i,i'},a_{j,j'})\in E'$ then $(a_{i,i''},a_{j,j''})\notin E'$ for any $(i'',j'')\neq (i',j')$; \item any $a_{i,i'}$ is attached to at most one edge in $E'$. \end{enumerate} Note that such a relation $E'$ exists as any node $i\in V$ is connected with at most $n-1$ other nodes in $H$. Intuitively, the set of requirements $A_i=\{a_{i,1},\ldots,a_{i,n}\}$ serves as a ``meta-node'' that corresponds to the node $i$ in $H$. In constructing $\mathcal A$, we may define the general scenarios in such a way that \begin{itemize} \item $T(g(a_{i,j_1}), g(a_{i,j_2}))=0$ for any $1\leq i\leq n$ and $j_1\neq j_2$; \item $T(g(a_{i_1,j_1}), g(a_{i_2,j_2}))=-1$ for any $(i_1,i_2)\notin E$;
\item $T(g(a_{i_1,j_1}), g(a_{i_2,j_2}))=1$ for any $(a_{i_1,j_1},a_{i_2,j_2})\in E'$; \item $T(g(a_{i_1,j_1}), g(a_{i_2,j_2}))=0$ for any $(i_1,i_2)\in E$ but $(a_{i_1,j_1},a_{i_2,j_2})\notin E'$. \end{itemize} For every $1\leq i\leq n$ and $1\leq j<j'\leq n$, put in a constraint $c_{i}(j,j')=\{a_{i,j}, a_{i,j'}\}$. Thus the relevance between $a_{i,j}$ and $a_{i,j'}$ is \vspace*{-0.1cm} \[ \sigma(a_{i,j},a_{i,j'})=\gamma\,\frac{|c(a_{i,j})\cap c(a_{i,j'})|}{|c(a_{i,j})\cup c(a_{i,j'})|}=\frac{\gamma}{2(n-1)}. \] Furthermore, if $i\neq i'$, then for any $j,j'$ we set $\sigma(a_{i,j},a_{i',j'})=\lambda$. Suppose $U=\{i_1,\ldots,i_\ell\}$ induces a complete subgraph of $H$. We define the {\em meta-clique coalition} of $U$ as \[ D_U=\bigcup_{1\leq j\leq \ell} A_{i_j}. \] By the above definition, for any $1\!\leq\!s\!<\!t\!\leq\!\ell$, take $j,j'$ such that $(a_{i_s,j},a_{i_t,j'})\in E'$. \begin{align*} \nu(a_{i_s,j},a_{i_t,j'},D_U) & = \varepsilon(a_{i_s,j},a_{i_t,j'},D_U) + \varepsilon(a_{i_t,j'},a_{i_s,j},D_U) \\ & = \rho(a_{i_s,j},D_U) + \rho(a_{i_t,j'},D_U)\\ & = (n-1)\times \frac{\gamma}{2(n-1)} + (n-1)\times \frac{\gamma}{2(n-1)} =\gamma \end{align*} Thus $\nu(D_U)=\frac{\ell(\ell-1)\gamma}{2}$. Taking out any element from $D_U$ results in a strict decrease in utility, and hence $D_U$ is cohesive. Now take any coalition $D\subseteq \mathsf{R}$ that contains two requirements $a_{i,i'}$, $a_{j,j'}$ such that $(i,j)\notin E$. Let $s=|A_i\cap D|$ and $t=|A_j\cap D|$. Note also that $\sigma(a_{j,j'},a_{i,i''}) = \lambda$ for any $a_{i,i''}\in A_i\cap D$. Therefore we have\vspace*{-0.1cm} \[ \nu(D)- \nu(D\setminus \{a_{j,j'}\}) \leq \gamma+2 \nu(a_{j,j'},a_{i,i'},D)\times s \leq \gamma+2\lambda +\gamma = 2(\lambda+\gamma)<0. \]\vspace*{-0.1cm} The last inequality above is by the assumption that $\lambda<-\gamma$. Thus $D$ is not cohesive. By the above argument, a coalition $D\subseteq \mathsf{R}$ is cohesive in $G_\mathcal A$ iff $D$ is the meta-clique coalition $D_U$ for some clique $U$ in $H$.
Furthermore, a decomposition $\vec{D}=(D_1,D_2,\ldots,D_k)$ is a solution in $G_\mathcal A$ iff $V$ can be partitioned into sets $U_1,\ldots,U_k$ where each $U_i$ is a clique, and $D_i=D_{U_i}$ for all $1\!\leq\!i\!\leq\!k$. In particular, $H$ has a clique with $\ell$ nodes if and only if $G_\mathcal A$ has a solution that contains a coalition whose utility reaches $\frac{\ell(\ell-1)\gamma}{2}$. This finishes the reduction.\qed \end{proof} Theorem~\ref{thm:NPComplete} shows that, in a sense, identifying a ``best'' solution in a DG $G_\mathcal A$ is hard. The main difficulty comes from the fact that one would have to examine all subsets of players to find an optimal cohesive coalition. This calls for a relaxed notion of a solution that is computationally feasible. To this end we introduce the notion of {\em $k$-cohesive coalitions}. Fix $k\in \mathbb N$ and enforce this rule: Binding can only take place on $k$ or fewer players. That is, a coalition $C$ is {\em $k$-cohesive} whenever $\nu(C)$ is greater than the utility of any proper subset with at most $k$ players.\vspace*{-0.1cm} \begin{definition} Fix $k\in \mathbb N$. In a DG $G_\mathcal A = (\mathsf{F} \cup \mathsf{S}, \nu)$, we say a coalition $D \subseteq \mathsf{F} \cup \mathsf{S}$ is {\em $k$-cohesive} if $\nu(D') < \nu(D)$ for all $D' \subset D$ with $|D'| \leq k$. A decomposition $\vec{D}$ is {\em $k$-cohesive} if every coalition in $\vec{D}$ is $k$-cohesive; if $\vec{D}$ is also expansion-free, then it is a {\em $k$-cohesive solution} of the game $G_\mathcal A$. \end{definition} \paragraph*{\textit{Remark.}} In a sense, the value $k$ in the above definition indicates a level of {\em expected cohesion} in the decomposition process. A higher value of $k$ implies less restricted binding within any coalition, which results in higher ``sensitivity'' of design elements to conflicts. In a software tool which performs ADD based on DG, the level $k$ may be used as an additional parameter.
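Checking $k$-cohesion only requires enumerating subsets of size at most $k$; a Python sketch (the function names are our own, and $\nu$ is again supplied externally):

```python
from itertools import combinations

def is_k_cohesive(D, nu, k):
    """D is k-cohesive iff every nonempty proper subset D' with |D'| <= k
    has strictly smaller utility than D."""
    members = sorted(D)
    top = nu(set(D))
    return all(nu(set(s)) < top
               for size in range(1, min(k, len(members) - 1) + 1)
               for s in combinations(members, size))
```

With $k$ fixed, the number of candidate subsets is $O(n^k)$, which is the source of the running-time bound claimed in Thm.~\ref{thm:algo}.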
\begin{algorithm} \caption{\label{proc:kcombine}$\mathsf{DGame}(\mathcal A,k)$} \begin{algorithmic}[1] \INPUT Attribute primitive $\mathcal A$, $k>0$ \OUTPUT Attribute Decomposition $\vec{D}$ \State $\vec{D} \leftarrow \mathsf{Cohesive}(\mathcal A, k)$ \State $\mathsf{Combine}\leftarrow \mathsf{true}$ \While{$\mathsf{Combine}$} \State $\mathsf{Combine}\leftarrow \mathsf{false}$ \For{$(D,D') \in \vec{D}^2$, $D\neq D'$} \If{$\nu(D' \cup D) > \nu(D)$ and $\nu(D' \cup D) > \nu(D')$} \State $D \leftarrow D' \cup D$ and remove $D'$ from $\vec{D}$ \State $\mathsf{Combine}\leftarrow \mathsf{true}$ \EndIf \EndFor \EndWhile \State {\bf return} $\vec{D}$ \end{algorithmic} \end{algorithm} \vspace*{-1cm} \begin{algorithm} \caption{\label{prc:kCohesive}$\mathsf{Cohesive}(\mathcal A, k)$} \begin{algorithmic}[1] \INPUT Attribute primitive $\mathcal A$, $k>0$ \OUTPUT Attribute Decomposition $\vec{D}$ \State $\vec{D} \leftarrow [\ ]$, $R \leftarrow \mathsf{F} \cup \mathsf{S}$ \While{$|R| > 0$} \State $S \leftarrow \mathsf{max}(R,k)$ // compute a maximally $k$-cohesive coalition \State $R \leftarrow R\setminus S$ \State $\vec{D} \leftarrow \mathtt{[}\vec{D},S\mathtt{]}$ \EndWhile \State {\bf return} $\vec{D}$ \end{algorithmic} \end{algorithm} Let $R$ be a set of requirements. A coalition $D$ is called {\em maximally $k$-cohesive} in $R$ if $|D|\leq k$, $D$ is $k$-cohesive and $\nu(D)\geq \nu(D')$ for any $D'\subseteq R$ with $|D'|\leq k$. Suppose the operation $\mathsf{max}(R,k)$ computes a maximally $k$-cohesive set in $R$. The algorithm $\mathsf{DGame}(\mathcal A,k)$ (Proc.~\ref{proc:kcombine}), which uses $\mathsf{Cohesive}(\mathcal A, k)$ (Proc.~\ref{prc:kCohesive}) as a subroutine, computes a $k$-cohesive solution of $G_\mathcal A$. Note that the $\mathsf{Cohesive}(\mathcal A,k)$ operation maintains a list $\vec{D}$, which, when returned, denotes a decomposition.
Note also that the returned $\vec{D}=(D_1,\ldots,D_m)$ satisfies the following condition: \vspace*{-0.2cm} \[ \forall 1\leq i\leq m: D_i \text{ is maximally $k$-cohesive in $D_{i}\cup \cdots \cup D_m$} \] We call this $\vec{D}$ a {\em maximally $k$-cohesive decomposition}. \begin{lemma}\label{lem:ksolution} Suppose $\vec{D}$ is a maximally $k$-cohesive decomposition. Take any $1\!\leq\!i\!<\!j\!\leq\!m$. If $\nu(D_i\cup D_j)>\max\{\nu(D_i),\nu(D_j)\}$ then $D_i\cup D_j$ is $k$-cohesive. \end{lemma} \begin{proof} Let $S_i=\bigcup_{i\leq l\leq m} D_l$ for any $i=1,\ldots, m$. Suppose $\nu(D_i\cup D_j)>\max\{\nu(D_i),\nu(D_j)\}$ for $1\leq i<j\leq m$. By assumption $D_i$ is maximally $k$-cohesive in $S_i$. For any set $U\subseteq D_i\cup D_j\subseteq S_i$ such that $|U|\leq k$, we have $\nu(U)\leq \nu(D_i) < \nu(D_i\cup D_j)$. Hence $D_i\cup D_j$ is also $k$-cohesive. \qed \end{proof} \begin{theorem}\label{thm:algo} Given an attribute primitive $\mathcal A$, the $\mathsf{DGame}(\mathcal A,k)$ algorithm computes a $k$-cohesive solution of the decomposition game $G_\mathcal A$ in time $O(n^{k})$, where $n$ is the number of requirements in $\mathcal A$. \end{theorem} \begin{proof} The $\mathsf{DGame}(\mathcal A,k)$ algorithm calls $\mathsf{Cohesive}(\mathcal A,k)$ to produce a maximally $k$-cohesive decomposition $\vec{D}$, and then performs several iterations to ``combine'' the coalitions in $\vec{D}$. By Lemma~\ref{lem:ksolution}, the decomposition $\vec{D}$ after each iteration remains $k$-cohesive. There is a point when for all $D,D'\in \vec{D}$ we have $\nu(D\cup D') \leq \max\{\nu(D),\nu(D')\}$. At this moment, the {\bf while}-loop terminates and $\vec{D}$ is expansion-free. The time complexity is justified as there are $O(n^k)$ subsets of $\mathsf{F}\cup \mathsf{S}$ with size $\leq k$. Thus computing a maximally $k$-cohesive decomposition takes time $O(n^{k})$.
\qed \end{proof} \section{Case Study: Cafeteria Ordering System} \label{sec:case} \begin{wrapfigure}{r}{5.5cm}\vspace*{-1.1cm} \caption{\label{fig:cafe} \small Interactions between requirements in the COS \cite{WKJ2013}. Blue edges indicate positive interactions and red edges indicate negative interactions.} \includegraphics[width=6cm]{relationships.png} \vspace*{-1cm} \end{wrapfigure} To demonstrate the applicability of our game model in the real world, we build a DG for a cafeteria ordering system (COS). A COS permits employees of a company to order meals from the company cafeteria online and is a module of a larger cafeteria management system. The requirements of the project have been produced through a systematic requirement engineering process and are well documented (see \cite[Appendix C]{WKJ2013} for full details). Since the COS is a subsystem within a larger system, the requirements also incorporate interfaces with other subsystems of the overall system. The initial attribute primitive has 60 requirements with $|\mathsf{S}|=11$, $|\mathsf{F}|=49$, and 7 design constraints. Non-functional requirements conflict with each other, e.g., the general scenario $\mathsf{USE}$ conflicts with the general scenario $\mathsf{PER}$. Also the requirements exhibit some complex relationships, e.g., $\mathsf{SEC1} \hookrightarrow\mathsf{Order.Pay.Deduct}$. We demonstrate the complicated interactions among requirements using a complete graph whose nodes are the requirements in $\mathsf{R}=\mathsf{S}\cup \mathsf{F}$; see Fig.~\ref{fig:cafe}. The edges are in two colours: $(r_1,r_2)$ is coloured blue if $\nu(r_1,r_2, \mathsf{R})>0$ and red if $\nu(r_1,r_2,\mathsf{R})<0$. {\em (For completeness, we include descriptions of constraints and requirements in the APPENDIX.)} We run the $\mathsf{DGame}(\mathcal A,k)$ algorithm to identify a $k$-cohesive solution for different levels $k$ of expected cohesion.
In order to clearly identify sub-components, we give a higher penalty $\lambda$ between conflicting requirements: $\alpha=0.4$, $\beta=0.3$, $\gamma=0.3$, $\lambda=-1.3$. We choose $k\in \{1,\ldots,7\}$. As argued above, setting a higher value of $k$ should in principle improve the quality of the output decomposition, although this also means a longer computation time. We implement our algorithm using Java on a laptop with Intel Core i7-3630QM CPU 2.4GHz 8.0GB RAM. The running time for different values of $k$ is: 503 milliseconds for $k=3$ and approximately $1140$ seconds for $k=6$. \begin{table}\caption{\footnotesize Resulting $3$- and $6$-cohesive solutions, ordered by payoff values.} {\scriptsize \centering \begin{tabular}{|c|c|c|} \hline & $3$-Cohesive Solution & $6$-Cohesive Solution \\ \hline Coalition 0& \begin{tabular}{@{}c@{}} AVL1 ROB1 SAF1 SEC(1,2,4) USE(1,2)\\ Order.Confirm Order.Menu.Data \\Order.Deliver.(Select,Location)\\ Order.Pay Order.Place Order.Retrieve \\ Order.Units.Multiple UI2 UI3 \end{tabular} & \begin{tabular}{@{}c@{}} AVL1 ROB1 SAF1 SEC(1,2,4) PER(1,2,3)\\ USE(1,2) Order.Confirm Order.Deliver \\Order.Deliver.(Select,Location) \\ Order.Menu.Date Order.Pay \\ Order.Retrieve Order.Place Order.Units \\Order.Units.Multiple UI2 UI3 \end{tabular} \\\hline Coalition 1& \begin{tabular}{@{}c@{}}PER(1,2,3) Order.Units.TooMany \\Order.Deliver.(Times,Notimes)\\ Order.Place.(Cutoff,Data,Register,No)\\ Order.Pay.(OK,NG) Order.Done.Failure \\Order.Confirm.(Prompt,Response,More)\end{tabular}& \begin{tabular}{@{}c@{}}Order.pay.(Deliver,Pickup,Deduct)\\Order.Done.Patron SI2.2 SI2.3 \end{tabular} \\\hline Coalition 2& \begin{tabular}{@{}c@{}}Order.pay.(Deliver,Pickup,Deduct)\\Order.Done.Patron SI2.2 SI2.3 \end{tabular} & \begin{tabular}{@{}c@{}} Order.Units.TooMany \\Order.Deliver.(Times,Notimes)\\ Order.Place.(Cutoff,Data,Register,No)\\ Order.Pay.(OK,NG) Order.Done.Failure \\Order.Confirm.(Prompt,Response,More)\end{tabular} \\ \hline Coalition 3& 
\begin{tabular}{@{}c@{}} Order.Menu Order.Unit Order.Done\\Order.Done.(Menu,Times,Cafeteria) \\Order.Done.(Store,Inventory)\\Order.Deliver Order.Menu.Available\\Order.Confirm.Display\\Order.Pay.Method SI1.3 SI2.5 CI2\end{tabular} & \begin{tabular}{@{}c@{}} Order.Done\\Order.Done.(Menu,Times,Cafeteria) \\Order.Done.(Store,Inventory)\\ SI1.3 SI2.1 SI2.4 SI2.5 CI1 CI2\end{tabular} \\ \hline Coalition 4 & SI1.1 SI1.2 & Order.Menu.Available SI1.1 SI1.2 \\\hline \end{tabular} } \end{table} \paragraph*{\bf Cohesion level $k= 3$.} The $3$-cohesive solution consists of 5 coalitions. An examination of the requirements in each coalition reveals the following: {\em Coalition 0} relates to usability and ensures availability of user interactions; it apparently corresponds to a user interface module. {\em Coalition 1} is performance-oriented and is separated from the usability requirements; it thus corresponds to a back-end module that handles all the internal operations. {\em Coalition 2} deals with the payroll system outside the COS and defines a controlling interface from the COS to payroll. {\em Coalition 3} consists of several functional requirements that control the life cycle of the COS. {\em Coalition 4} is an interface for accessing the inventory system outside the COS. It is clear that this solution separates the control, user input and computation modules, and fits the MVC (Model-View-Controller) architectural pattern. In addition, there is a design constraint that requires the use of Java and the Oracle database engine. So, we instantiate the design elements as in Fig.~\ref{fig:difficulty3}. \begin{figure} \begin{center} \caption{\label{fig:difficulty3} {\footnotesize The $3$-cohesive solution. \emph{Coalition 0}: The Java Spring framework uses server pages as the user interface and provides a powerful encryption infrastructure (Spring Crypto Module). Server pages are suitable for implementing an interactive user interface.
{\em Coalition 1}: Enterprise Java Bean (EJB) is a middleware (residing in the application server) used to communicate between different components. It provides rich features for processing HTTP requests. {\em Coalition 2}: The COS uses a package solution from the corresponding payroll system. {\em Coalition 3}: A servlet is a controller in the Java application server that separates business logic from control. {\em Coalition 4}: A web service interface outside the COS.}} \includegraphics[width=11cm]{difficulty3_instantiation.png} \end{center} \end{figure} \paragraph*{\bf Cohesion level $k=6$.} The $6$-cohesive solution also contains five coalitions, with a structure similar to its $3$-cohesive counterpart. There are, nevertheless, several important differences. Firstly, the performance ($\mathsf{PER}$) scenarios now belong to coalition 0. This means that some performance-related computation is moved to the front-end. This is reasonable, as it lightens the computation load of the back-end and thus improves performance and availability. Secondly, the functional requirement $\mathsf{Order.Menu.Available}$ is moved to coalition 4, which is the interface between the COS and the inventory system. This requirement specifies that the menu should only display those food items that are available in inventory. Instead of server pages, we use scripting to reduce the server's computation load. This can be achieved by changing the front-end to a JavaScript-oriented design. The main difficulty is that extra effort is needed when using JavaScript to communicate with the web server (e.g., via AJAX) in order to ensure usability, performance and security. We instantiate design elements as in Fig.~\ref{fig:difficulty6}. \begin{figure} \begin{center}\caption{\label{fig:difficulty6} \footnotesize The $6$-cohesive solution. {\em Coalition 0} uses JavaScript as the front end for the user interface. It also takes over some computation from the server in order to achieve better performance.
{\em Coalition 1} is an interface for accessing the payroll system. {\em Coalition 2} ensures the business logic in the COS. {\em Coalition 3} coordinates input from the front end (coalition 0) to the back end (coalition 1). {\em Coalition 4} is an interface for accessing the inventory system.} \includegraphics[width=11cm]{difficulty6_instantiation.png} \end{center} \end{figure} \section{Related Work}\label{sec:related} Bass, Klein and Bachman introduce ADD as a general framework for developing a conceptual architecture in \cite{BLMF2002}. They argue that non-functional requirements should drive decision making throughout the entire design process. Furthermore, as non-functional requirements often provide a high-level view of a software system, ADD should follow an iterative decomposition process. They further improve their method in \cite{WR2006} by clarifying how ADD is carried out in a real-life project. The techniques for evaluating tradeoffs between quality attributes in these works are largely empirical. Kazman et al. first investigate tradeoffs between quality attributes in \cite{KR1998}. They collect and analyse design elements that affect multiple quality attributes. This approach aims to mitigate risks residing in a software architecture and to refine the design through this process. The study is further developed in \cite{ZL2005}, which gives a quantitative tradeoff analysis for non-functional requirements and prioritises non-functional requirements during the ADD process. In \cite{AADHMH2011}, a different approach is provided to elicit non-functional requirements. The work follows the paradigm in \cite{WR2006} but generates more detailed and concrete designs; it computes tradeoffs between non-functional requirements based on the relationships between non-functional and functional requirements. There are also other algorithmic methods for software architecture design from the perspective of component decomposition.
For example, in \cite{LXZ2007}, the authors propose a hierarchical clustering algorithm to decompose functional requirements and non-functional requirements. They label each component with a set of attributes and identify similarities between components based on their common attributes. Their approach, however, does not emphasize the enhancement and conflict relations between attributes. \section{Conclusion and Future work} \label{sec:conclusion} The use of computational games in software architecture design is a novel technique, and this work aims to contribute to this line of research. We proposed a game-based approach that not only builds on established software architecture research (ADD), but is also shown, through a case study, to provide reasonable design guidelines for a real-world application. We suggest that this framework would be useful for: \begin{itemize} \item designing a software system that involves a large number of functionalities and quality attributes, which would otherwise result in a complicated architecture design; \item designing a software system that hinges on the satisfaction of certain core quality attributes; \item evaluating and analysing the rationale of an architecture design in a formal way, and identifying potential risks of a design. \end{itemize} It is noted that the framework described here assumes the completion of requirement analysis. In real life, requirements are usually identified as the software is implemented (e.g., in the agile software development methodology). It would thus be interesting to develop a dynamic version of the game model, which supports architectural design using incremental refinements. Another direction for future work is to develop a mechanism that maps coalitions generated by the algorithm to appropriate attribute primitives. This would then lead to a full automation of the ADD process, linking requirements to conceptual architecture designs.
\section{Introduction} \label{Section:Introduction} The {\em core of a graph} $G$, $C(G)$, was first introduced by Dulmage and Mendelsohn \cite{dulmage1958coverings,dulmage1959coverings} as part of their theory on the decomposition of bipartite graphs. If $T$ is a tree and $T=C(T)$, then $T$ is a {\em BC-tree}, also known as the {\em block-cutpoint-tree} or the {\em bicolorable tree}, introduced by Harary, Plummer, and Prins \cite{har67,harary1966block}. A subtree of $T$ that is also a BC-tree is called a {\em BC-subtree} of $T$. It is known that a BC-tree, generally denoted by $T_{BC}$, has the interesting property that the distance between any two leaves is even. BC-trees relate to a wide variety of subjects: $T_{BC}$ has a unique minimum cover, and $T_{BC}$ is the block-cutpoint-tree of some connected graph. Two connected graphs have the same block-graph and the same cutpoint-graph if and only if they have the same block-cutpoint-tree. Barefoot \cite{barefoot2002block} showed that a tree has a BC-tree partition if and only if it does not have a perfect matching. Recently, various concepts and algorithms related to BC-tree partitions have also been presented. For instance, there exists a maximum proper partial 0-1 coloring in a BC-tree such that the edges colored by 0 form a maximum matching \cite{mkr06}. BC-trees have promising potential not only in mathematics \cite{heath1999stack,AGGL2009}, but also in information science \cite{KeithPaton1971,EM2006,doerr2008directed,christou2012computing} and chemistry \cite{nakayama1983,barnard1991comparison}. The number of subtrees of a tree, first studied in \cite{sze05}, is one of the graph invariants of trees that has received much attention. In particular, the extremal trees among certain categories of trees that minimize or maximize this number have been vigorously explored.
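Since the defining property above (all leaf-to-leaf distances even) is easy to test, the BC-subtrees of a small tree can be counted by brute force. The following sketch is an illustration only, not part of the original development; it enumerates the connected vertex subsets (which, for a tree, are exactly its subtrees) and excludes trivial one- and two-vertex subtrees, following the convention used later in the text:

```python
# Brute-force BC-subtree counting for small trees (adjacency-dict input).
from itertools import combinations

def connected_subsets(tree):
    """Yield every nonempty vertex subset inducing a connected subgraph."""
    verts = list(tree)
    for k in range(1, len(verts) + 1):
        for sub in combinations(verts, k):
            s, seen, stack = set(sub), set(), [sub[0]]
            while stack:                       # DFS restricted to the subset
                v = stack.pop()
                if v not in seen:
                    seen.add(v)
                    stack += [u for u in tree[v] if u in s]
            if seen == s:
                yield s

def is_bc_subtree(tree, s):
    """True iff the subtree induced by s is a BC-tree (even leaf distances)."""
    if len(s) < 3:                             # trivial subtrees are not BC-trees
        return False
    adj = {v: [u for u in tree[v] if u in s] for v in s}
    leaves = [v for v in s if len(adj[v]) == 1]
    def dist(a, b):                            # BFS distance inside the subtree
        frontier, seen, d = {a}, {a}, 0
        while b not in frontier:
            frontier = {u for v in frontier for u in adj[v]} - seen
            seen |= frontier
            d += 1
        return d
    return all(dist(a, b) % 2 == 0 for a, b in combinations(leaves, 2))

# P_5: the BC-subtrees are exactly the even-length subpaths (3 + 1 of them).
p5 = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(sum(is_bc_subtree(p5, s) for s in connected_subsets(p5)))  # 4
```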
As a related concept, the number of subtrees containing at least one leaf has also been studied and shown to be related to the bound of ``acceptable residue configurations'' \cite{knu}. The analogue of such studies for BC-subtrees, however, seems to have eluded attention. Corresponding to the number of substructures or distances, various ``middle parts'' of a tree have received attention as an effort to understand the differences and similarities between different graph invariants. See for instance \cite{sze05,adam,barefoot,jordan}. The introduction of BC-subtrees inspires the concept of the {\it BC-subtree-core} as the set of vertices contained in the most BC-subtrees. In Section \ref{sec:leafnumber}, we provide constructive algorithms that generate BC-trees with given order and number of leaves (whenever possible). In Section \ref{sec:Extremaltrees}, we consider the extremal trees with respect to the number of BC-subtrees or leaf-containing BC-subtrees. The focus then turns to the ``middle part'' of a tree with respect to the number of BC-subtrees in Section~\ref{sec:mid}. We conclude with some comments and questions in Section \ref{Sec:Conclusion}. \section{Constructing BC-trees with given number of leaves} \label{sec:leafnumber} Harary and Plummer \cite{har67} established upper and lower bounds for the cardinality of the core of any graph $G$ with $p$ vertices that has a non-empty core. The authors further showed that for any integer $r$ within the above bounds, there exists a graph $G$ with $p$ vertices and $r$ ``lines'' (edges) in its core, and presented the specific constructions. Restricting our attention to BC-trees, first note that all leaves of a BC-tree belong to the same one of the two independent sets of this bipartite graph (since the distance between any pair of leaves is even). Letting an {\em $l$-BC-tree} denote a BC-tree with $l$ leaves, it is easy to see the following. \begin{proposition} There is no 2-BC-tree on $p$ vertices if $p$ is even.
\label{theorem:no-2-BC-tree} \end{proposition} The next observation asserts that a BC-tree cannot have exactly two internal vertices. \begin{proposition} There exists no $(p-2)$-BC-tree on $p$ vertices. \label{theorem:no-p-2-BC-tree} \end{proposition} \begin{proof} Assume, for contradiction, that there exists a $(p-2)$-BC-tree $T_{BC}$ on $p$ vertices. Then $T_{BC}$ has exactly two internal vertices, which must be adjacent, and every other vertex is a leaf adjacent to one of them. Each internal vertex has at least one leaf neighbor (otherwise it would itself be a leaf). The distance between two leaves with different internal neighbors is then 3, which is odd, a contradiction. \end{proof} Consequently, Propositions~\ref{theorem:no-2-BC-tree} and \ref{theorem:no-p-2-BC-tree} immediately imply the following. \begin{corollary} Let $T$ be a BC-tree of order $p\geq 3$ and $l(T)$ the number of leaves of $T$. Then $l(T) \neq p-2$ and \begin{itemize} \item $2\leq l(T)\leq p-1$ when $p$ is odd; \item $3\leq l(T)\leq p-1$ when $p$ is even. \end{itemize} \label{corollary:leaf-bounds} \end{corollary} In fact, the conditions in Corollary~\ref{corollary:leaf-bounds} are also sufficient for the existence of a BC-tree. The constructive algorithms are provided below and illustrated in Fig.~\ref{fig:constructbc}. \begin{theorem} There exists an $r$-BC-tree of order $p$ if and only if $r \neq p-2$ and \begin{itemize} \item $2\leq r \leq p-1$ when $p$ is odd; \item $3\leq r \leq p-1$ when $p$ is even. \end{itemize} \end{theorem} \begin{algorithm*} \caption{Constructing $l$-BC-trees ($l = p - 1, p - 3, p - 4,\ldots,\lceil\frac{p-1}{2}\rceil$) on $p$ vertices} \label{Algorithm:Constructing1} \begin{algorithmic}[1] \STATE Initialize with $T_D$ denoting a star centered at $u_0$ with $p-1$ leaves, one of which is labeled $u_{p-1}$. Let $T_\Delta$ be a set with $T_\Delta =\{T_D\}$. \STATE If $d_{T_D } (u_0 ) = 2$ or $d_{T_D } (u_0 ) = 3$, go to Step \ref{Algorithm:Constructing1:Output}. \STATE Reorganization. \label{Algorithm:Constructing1:Reorganization} \BODY \STATE Choose two neighbors $q,r \neq u_{p-1}$ of $u_0$.
\STATE $T_D := T_D - u_0q - u_0r + u_{p-1}q + qr$ and $T_\Delta :=T_\Delta \cup \{T_D\}$. \ENDBODY \STATE If $d_{T_D } (u_0 ) \leq3$, go to Step \ref{Algorithm:Constructing1:Output}. Otherwise, go to Step \ref{Algorithm:Constructing1:Reorganization}. \STATE Output the tree set $T_\Delta$. \label{Algorithm:Constructing1:Output} \end{algorithmic} \end{algorithm*} \begin{algorithm*} \caption{Constructing $l$-BC-trees ($ l= \lceil\frac{p-1}{2}\rceil- 1,\ldots,2 $ when $p$ is odd, $l=\lceil\frac{p-1}{2}\rceil- 1,\ldots,3$ when $p$ is even) on $p$ vertices} \label{Algorithm:Constructing2} \begin{algorithmic}[1] \STATE Initialize with the last $T_D$ generated by the previous algorithm, which has $\lceil\frac{p-1}{2}\rceil$ leaves. Let $T_\delta$ be a set with $T_\delta =\{T_D\}$, and choose a neighbor $u_1 \neq u_{p-1}$ of $u_0$. For simplicity, let $m^*$ be the reference vertex and set $m^*=u_1$. \STATE If $d_{T_D } (u_{p-1} ) = 2$, go to Step \ref{Algorithm:Constructing2:Output}. \STATE Reorganization. \label{Algorithm:Constructing2:Reorganization} \BODY \STATE Choose a pendant path $< u_{p - 1}, q, r >$ with $q \neq u_0$. \STATE $T_D := T_D - u_{p-1}q + m^*q$, let $T_\delta :=T_\delta \cup \{T_D\}$ and $m^*=r$. \ENDBODY \STATE If $d_{T_D } (u_{p-1} ) = 2$, go to Step \ref{Algorithm:Constructing2:Output}. Otherwise, go to Step \ref{Algorithm:Constructing2:Reorganization}. \STATE Output the tree set $T_\delta$.
\label{Algorithm:Constructing2:Output} \end{algorithmic} \end{algorithm*} \begin{figure*}[htb] \centering \subfigure[Illustration of the procedure for constructing $l$-BC-trees ($l = p - 1,p - 3,p - 4,\ldots,\lceil\frac{p-1}{2}\rceil$) on $p$ vertices]{ \label{fig:constructbc_a} \includegraphics[width=0.85\textwidth]{constructing1.eps}} \subfigure[Illustration of the procedure for constructing $l$-BC-trees ($ l=\lceil\frac{p-1}{2}\rceil- 1,\ldots,2 $)~($p$ is odd) or ($l=\lceil\frac{p-1}{2}\rceil- 1,\ldots,3$)~($p$ is even) on $p$ vertices]{ \label{fig:constructbc_b} \includegraphics[width=0.9\textwidth]{constructing2.eps}} \caption{Construction of $l$-BC-trees on $p$ vertices} \label{fig:constructbc} \end{figure*} \begin{remark} Although it is also straightforward to directly define such BC-trees of a given order with different numbers of leaves, the above constructive algorithms are of interest from a computational point of view. \end{remark} \section{Extremal trees with respect to the number of BC-subtrees} \label{sec:Extremaltrees} It is well known that among general trees of given order, the star maximizes the number of subtrees and the path minimizes this number (see for instance \cite{sze05}). The explicit formulas for these numbers can also be easily obtained. \begin{lemma}(\cite{sze05}) The path $P_n$ has ${n+1 \choose 2}$ subtrees, fewer than any other tree on $n$ vertices. The star $K_{1,n-1}$ has $2^{n-1}+n-1$ subtrees, more than any other tree on $n$ vertices. \label{lemma:szely-wang} \end{lemma} \subsection{The extremality of star and path} Turning our attention to BC-subtrees, denote by $\eta (T)$ (resp.\ $\eta_{BC} (T)$) the number of subtrees (resp.\ BC-subtrees) of $T$, and by $\eta_{BC} (T,v)$ the number of BC-subtrees of $T$ containing $v$. In what follows we show that the path and star, as one would expect, are also extremal with respect to the number of BC-subtrees.
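As a quick sanity check of Lemma~\ref{lemma:szely-wang} (a brute-force sketch for illustration, not part of the original paper): the subtrees of a tree are exactly its nonempty connected vertex subsets, so both counts can be verified directly for small $n$:

```python
# Verify the subtree counts of Lemma 3.1 by exhaustive enumeration (small n only).
from itertools import combinations

def num_subtrees(tree):
    """Count the subtrees of a tree, i.e. its nonempty connected vertex subsets."""
    verts, total = list(tree), 0
    for k in range(1, len(verts) + 1):
        for sub in combinations(verts, k):
            s, seen, stack = set(sub), set(), [sub[0]]
            while stack:                      # DFS restricted to the subset
                v = stack.pop()
                if v not in seen:
                    seen.add(v)
                    stack += [u for u in tree[v] if u in s]
            total += (seen == s)              # connected <=> subset is a subtree
    return total

n = 6
path = {i: [j for j in (i - 1, i + 1) if 1 <= j <= n] for i in range(1, n + 1)}
star = {0: list(range(1, n))}                 # center 0 with n-1 leaves
for i in range(1, n):
    star[i] = [0]
print(num_subtrees(path), (n + 1) * n // 2)      # 21 21
print(num_subtrees(star), 2 ** (n - 1) + n - 1)  # 37 37
```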
\begin{theorem}The star $K_{1,n-1}$ has $2^{n-1}-n$ BC-subtrees, more than any other tree on $n$ vertices. \label{theorem:star-BC-subtree} \end{theorem} \begin{proof} By definition, it is not difficult to obtain that $$ \eta_{BC}(K_{1,n-1}) = 2^{n-1}-n , $$ i.e., all subtrees of $K_{1,n-1}$ except for the single-vertex and two-vertex subtrees. For any tree $T$ with $n$ vertices, the single-vertex and two-vertex subtrees are evidently not BC-trees. Hence \begin{align*} \eta_{BC} (T) & \le \eta (T) - (|V(T)| + |E(T)|) = \eta (T) - (2n - 1) \\ & \le (2^{n - 1} + n - 1) - (2n-1) = \eta_{BC}(K_{1,n-1}) \end{align*} by Lemma \ref{lemma:szely-wang}. Furthermore, if $T$ is not the star $K_{1,n - 1}$, then there is at least one path $P$ of length 3 in $T$. This path $P$ is a non-BC-subtree of $T$ with more than two vertices, and hence \begin{equation} \eta_{BC} (T) \le \eta (T) - (|V(T)| + |E(T)|) - 1 < \eta_{BC} (K_{1,n - 1} ). \nonumber \end{equation} \end{proof} \begin{theorem}The number of BC-subtrees of the path $P_n$ is \[\eta_{BC} (P_n )=\begin{cases} n(n-2)/4&n\equiv0(mod~2),\\ (n-1)^2/4&n\equiv1(mod~2), \end{cases} \] less than that of any other $n$-vertex tree. \label{theorem:path-BC-subtree} \end{theorem} \begin{proof} Again, it is easy to obtain $$\eta_{BC} (P_n )={n(n - 2)/4} $$ for even $n$ and $$\eta_{BC} (P_n ) ={(n - 1)^2/4} $$ for odd $n$, i.e., the number of nontrivial subpaths of even length. Now let $T$ be an $n$-vertex tree that is not a path, where $n \geq 4$ (the cases for small values of $n$ are trivial). For any $u \in V(T)$, let \[ E_u (T) = \{ v \in V(T)|d(u,v)\;{\rm{is}}\;{\rm{even}}\} \] \[ O_u (T) = \{ v \in V(T)|d(u,v)\;{\rm{is}}\;{\rm{odd}}\} \] where $d(u,v)$ is the distance between $u$ and $v$. Let $|E_u (T)| = p$ and $|O_u (T)| = q$, so that $p+q=n$. It is easy to see that the path between any two vertices in $E_u (T)$ (resp.\ $O_u (T)$) is a BC-subtree of $T$.
Consequently, there are ${p \choose 2}$ BC-subpaths with two endpoints in $E_u (T)$ and ${q \choose 2}$ BC-subpaths with two endpoints in $O_u (T)$. Since $T$ is not a path, there is at least one vertex $v$ with degree at least three. The subtree induced by $v$ and its neighbors is also a BC-subtree of $T$, hence \[ \eta_{BC} (T) \geq {p \choose 2}+{q \choose 2}+ 1. \] Note that we have $p \geq 2$ and $q \geq 2$ unless $T$ is a star, in which case $\eta_{BC} (T) > \eta_{BC} (P_n)$ from Theorem~\ref{theorem:star-BC-subtree}. Assuming now $p \geq 2$ and $q\geq 2$, \begin{itemize} \item If $n$ is odd, we have \[ {p \choose 2}+{q \choose 2} + 1 - \frac{{(n - 1)^2 }}{4} = \frac{{(p - q)^2 + 3}}{4} > 0 ; \] \item If $n$ is even, we have \[ {p \choose 2}+{q \choose 2}+ 1 - \frac{{n(n - 2)}}{4} = \frac{{(p - q)^2 }}{4} + 1 > 0. \] \end{itemize} \end{proof} \subsection{Leaf-containing BC-subtrees} In the rest of this section, we consider BC-subtrees containing at least one leaf, analogue of the concept of leaf-containing subtrees. We denote by $\eta_{BC}^* (T)$ the number of BC-subtrees of $T$ that contain at least a leaf of $T$. \begin{theorem} For any tree $T$ on $n \geq 3$ vertices, we have $$ n-2 \le \eta_{BC}^*(T) \le 2^{n - 1} - n $$ with equalities at the upper (lower) bound if and only if $T$ is a star (path). \label{theorem:starpath-BC-subtree2} \end{theorem} \begin{proof} The sharp upper bound simply follows from Theorem~\ref{theorem:star-BC-subtree} and the fact that every BC-subtree of a star $T$ contains some leaf of $T$. From Theorem~\ref{theorem:path-BC-subtree} it is easy to see that $$\eta_{BC}^ * (P_n) = \eta_{BC} (P_n ) - \eta_{BC} (P_{n - 2} ) = n-2 . $$ Now consider a tree $T$ of order $n$ and the two partites $X, Y$ of $T$ as a bipartite graph, denote by $n_X$ ($n_Y$) and $l_X$ ($l_Y$) the number of vertices (leaves) in $X$ and $Y$ respectively. 
\begin{itemize} \item If $l_X >0$ and $l_Y > 0$, let $w$ be a leaf in $X$, then the path connecting $w$ and any other vertex $u \in X - \{ w \}$ is a leaf-containing BC-subtree of $T$, yielding at least $(n_X - 1)$ of such subtrees. Similarly, there are at least $(n_Y - 1)$ leaf-containing BC-subtrees of $T$ formed by the paths connecting pairs of vertices in $Y$. Hence $$ \eta_{BC}^*(T) \geq n_X - 1 + n_Y - 1 = n-2 $$ with equality if and only if $l_X = l_Y = 1$, in which case $T$ is a path. \item Otherwise, assume without loss of generality that $l_X \geq 2$ and $l_Y = 0$. Let $n_k$ denote the number of vertices whose closest leaf is at distance $k$, i.e., $n_0=l_X$, $n_1$ is the number of internal vertices (in $Y$) adjacent to leaves. Then, by noting that all leaves are at even distance from each other, there is no edge between vertices counted by $n_k$ for any $k$. We have $$ l_X = n_0 \geq n_1 \geq n_2 \geq n_3 \ldots \geq n_{s-1} \geq n_s = 1 \hbox{ for some $s$.} $$ Note that $s$ is the largest distance between any internal vertex and leaf, thus $n_{s-1} > n_s$, implying that (for either odd or even $s$) $$ n_X = n_0 + n_2 + n_4 + \ldots \geq 1 + n_1 + n_3 + n_5 + \ldots = 1 + n_Y . $$ Consequently $n_X \geq \frac{n+1}{2}$. Considering the paths connecting a pair of vertices in $X$ with at least one of which being a leaf yields $$ \eta_{BC}^* (T) = (n_X - l_X)l_X + {l_X \choose 2} = \left( n_X - \frac{l_X + 1}{2} \right) l_X \geq n-2, $$ with equality if and only if $l_X=2$ and $n_X = \frac{n+1}{2}$, in which case $T$ is a path of even length. \end{itemize} \end{proof} \section{The BC-subtree-core of a tree} \label{sec:mid} In \cite{sze05}, the ``middle part'' of a tree $T$ with respect to the number of subtrees (i.e. the set of vertices contained in most subtrees) was defined as the {\em subtree-core} of $T$. 
It is shown that this set contains one or two adjacent vertices, a fact analogous to those of the {\em center} and {\em centroid} of a tree \cite{adam, jordan}. These facts were all proved by establishing the ``concavity'' or ``convexity'' of the corresponding counting function along any path of a tree. In terms of the number of subtrees, the following was shown in \cite{sze05}. \begin{proposition}\label{prop:sze} (\cite{sze05}) For any three vertices $x, y, z$ of $T$ such that $xy, yz \in E(T)$, $y$ is contained in more subtrees than those containing $x$ or those containing $z$. \end{proposition} Inspired by the concept of subtree-core, we consider the {\em BC-subtree-core} of a tree $T$ as the set of vertices maximizing $\eta_{BC} (T,v)$ (recall that $\eta_{BC} (T,v)$ is the number of BC-subtrees of $T$ containing $v$). Unlike other ``middle parts'' of a tree, simple examination of a path shows that the BC-subtree-core is more complicated. Let $P_n$ be a path with $V(P_n) = \{v_i|i = 1,2,\dots,n\}$, simple calculation shows \begin{equation} \eta_{BC} (P_n,v_i)=\begin{cases} {\frac{i(n+1-i)}{2}-1}&i\equiv0(mod~2),\\ \lfloor\frac{i(n+1-i)-1}{2}\rfloor&i\equiv1(mod~2). \end{cases} \label{equ:oddeven} \end{equation} Consequently, one easily finds the BC-subtree-core of $P_n$ depending on $n$ modulo 4: \begin{itemize} \item when $n=4k$, $\eta_{BC} (P_n,v_i)$ is maximized at $v_\frac{n}{2}$ and $v_\frac{(n+2)}{2}$ with the maximum value $\frac{(n^2+2n-8)}{8}$; \item when $n=4k+2$, $\eta_{BC} (P_n,v_i)$ is maximized at $v_\frac{n}{2}$ and $v_\frac{(n+2)}{2}$ with the maximum value $\lfloor\frac{(n^2+2n-4)}{8}\rfloor$ ; \item when $n=4k+1$, $\eta_{BC} (P_n,v_i)$ is maximized at $v_\frac{(n+1)}{2}$ with the maximum value $\frac{(n^2+2n-3)}{8}$; \item when $n=4k+3$, $\eta_{BC} (P_n,v_i)$ is maximized at $v_\frac{(n-1)}{2}$, $v_\frac{(n+1)}{2}$ and $v_\frac{(n+3)}{2}$ with the maximum value $\frac{(n^2+2n-7)}{8}$. 
\end{itemize} Noting that every BC-subtree of the star $K_{1,n-1}$ must contain the center, the following is obvious. \begin{proposition} The BC-subtree-core of the star $K_{1,n-1}(n>3)$ contains exactly the center vertex. \end{proposition} In general, the BC-subtree-core of a tree need not contain only adjacent vertices (unlike in the cases of a path or star). In Fig.~\ref{fig:bccorecounterexam}, simple calculations show that $\eta_{BC} (T_0,v_i)=11$ $(i=1,2,3,4)$, $\eta_{BC} (T_0,u_i)=18$ $(i=1,2)$, $\eta_{BC} (T_0,x)=\eta_{BC} (T_0,y)=21$, and $\eta_{BC} (T_0,z)=19$. \begin{figure*}[htbp] \centering \includegraphics[width=0.45\textwidth]{bccorecounterexam.eps}\\ \caption{The BC-subtree-core does not necessarily contain adjacent vertices.} \label{fig:bccorecounterexam} \end{figure*} From observing both a path and Fig.~\ref{fig:bccorecounterexam}, it is obvious that one cannot hope for an analogue of Proposition~\ref{prop:sze}. With the special characteristics of BC-subtrees in mind, it is also interesting to ask a similar question for ``alternative'' vertices on a path. That is, for consecutive vertices $a,b,c,d,e$ on a path of a tree $T$ (i.e., $ab,bc,cd,de \in E(T)$), is $c$ necessarily contained in more BC-subtrees than $a$ and $e$? Unfortunately, in Fig.~\ref{fig:alterbccorecounterexam}, simple calculation shows that $\eta_{BC} (T,v_i)=39$ $(i=1,2,3,4,5,6)$, $\eta_{BC} (T,u_i)=69$ $(i=1,2)$, $\eta_{BC} (T,z)=67$, and $\eta_{BC} (T,x)=\eta_{BC} (T,y)=73$. \begin{figure*}[htbp] \centering \includegraphics[width=0.45\textwidth]{alterbccorecounterexam.eps}\\ \caption{No concavity exists even for alternative vertices on a path} \label{fig:alterbccorecounterexam} \end{figure*} There are also many examples where the BC-subtree-core differs from the subtree-core, the center, and/or the centroid. Consider the path $T=P_3$ with vertex set $\{x,y,z\}$, where $x$ and $z$ are leaves.
Then it is easy to see that the center, the centroid, and the subtree-core are all $\{y\}$, while the BC-subtree-core is $\{x,y,z\}$. Another natural question is: must the subtree-core be a subset of the BC-subtree-core? Again, in Fig.~\ref{fig:bccorecounterexam}, we have $f_{T_0}(v_i)=17$ $(i=1,2,3,4)$, $f_{T_0}(u_i)=32$ $(i=1,2)$, $f_{T_0}(x)=f_{T_0}(y)=35$, and $f_{T_0}(z)=36$, where $f_{T_0}(v)$ denotes the number of subtrees of $T_0$ containing $v$. Therefore, by definition the subtree-core of $T_0$ is $\{z\}$ while the BC-subtree-core is $\{x,y\}$. \section{Concluding remarks} \label{Sec:Conclusion} Questions related to BC-trees and BC-subtrees are considered. Motivated by the study of the graph core, algorithms are provided that construct BC-trees with any admissible number of leaves. Consequently, the necessary and sufficient conditions on the number of leaves for the existence of a BC-tree follow immediately. As analogues of results on subtrees and distance-based graph invariants, the path and star are shown to be extremal with respect to the number of BC-subtrees. Although the results are similar in nature to previous work, considering BC-subtrees turns out to be less trivial. Regarding the number of subtrees, Yan and Yeh \cite{yan06} presented a linear-time algorithm for computing the sum of weights of subtrees of $T$ using generating functions. Considering the analogous question for BC-subtrees would be interesting and natural. Viewed as a topological index or graph invariant, the number of BC-subtrees can also be studied for other categories of trees, such as trees with given degree sequences. Based on the same concept, the ``middle part'' of a tree is defined and named the BC-subtree-core, analogous to those based on subtrees or distance-based invariants. A brief discussion is provided on this topic, with examples showing that the BC-subtree-core behaves rather differently from all previously known ``middle parts'' of a tree.
Nevertheless, the evidence suggests the following questions: \begin{itemize} \item Knowing that a BC-subtree-core does not necessarily contain adjacent vertices, must a BC-subtree-core (containing at least two vertices) contain vertices of distance at most 2? \item Knowing that the subtree-core is not necessarily a subset of the BC-subtree-core, must they be adjacent (when they contain different vertices)? \end{itemize} The first question seems natural considering the special property of BC-subtrees; the second question is related to one that asks how far apart different ``middle parts'' can be. Also inspired by the work on subtrees, the extremal trees with respect to the number of leaf-containing BC-subtrees are characterized. A similar ``middle part'' concept can also be defined: it is not difficult to make analogous elementary observations on the set of vertices contained in the most leaf-containing BC-subtrees. We skip the details here. \section{Acknowledgment} This work is supported in part by the Simons Foundation (Grant No. 245307), the National Natural Science Foundation of China (Grant No. 61173035), and the Program for New Century Excellent Talents in University (Grant No. NCET-11-0861).
\section{Introduction} The considerably high capital and operational costs of semiconductor fabrication have motivated most semiconductor companies to outsource fabrication to off-shore foundries. Despite the reduced cost and other benefits, this trend has led to ever-increasing security risks such as IC counterfeiting, piracy and unauthorized overproduction by the contract foundries \cite{subramanyan2015evaluating}. The overall financial risk caused by such counterfeit and unauthorized ICs was estimated to be over \$169 billion per year \cite{Doe:2009:Misc}. The major threat arises from attackers reverse engineering (RE) an IC and fully identifying its functionality through brute-force approaches, by applying test inputs and observing the outputs. To prevent such reverse engineering, \emph{hardware obfuscation} techniques have been extensively researched in recent years \cite{yasin2017provably}. The general idea is to introduce ambiguity into the functionality of the IC through obfuscation, so that an attacker cannot easily determine its behaviour while the original functionality is preserved. Such techniques were highly effective until the advent of advanced attacks. These attacks exploit the fact that there are only a limited number of gate types (e.g., AND, OR, XOR) in an IC, so an attacker can brute force all possible combinations of types for the obfuscated gates to find the one that functions identically to the targeted IC. As brute force is usually prohibitively expensive, more efficient methods such as Boolean satisfiability (SAT)-based attacks have recently been proposed and have attracted enormous attention \cite{liu2016oracle}. The runtime of a SAT attack to reverse engineer an IC depends heavily on the complexity of the obfuscated IC, and can vary from milliseconds to years or more depending on the number and location of the obfuscated gates.
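As a toy illustration of the brute-force attack described above (the circuit, key gates, and oracle below are invented for this example and are not from any real benchmark), consider a two-bit key inserted via XOR key gates; the attacker simply enumerates keys and keeps those consistent with the oracle on all inputs:

```python
# Toy key-recovery by brute force (not a real SAT attack).
from itertools import product

def original(a, b, c):
    """The I/O oracle: the unmodified IC."""
    return (a & b) ^ c

def obfuscated(a, b, c, k):
    """The locked netlist with two inserted XOR key gates k = (k0, k1)."""
    return ((a & b) ^ k[0]) ^ (c ^ k[1])

def brute_force_keys(n_key_bits=2):
    """Return every key consistent with the oracle on all primary inputs."""
    keys = []
    for k in product((0, 1), repeat=n_key_bits):
        if all(obfuscated(a, b, c, k) == original(a, b, c)
               for a, b, c in product((0, 1), repeat=3)):
            keys.append(k)
    return keys

print(brute_force_keys())  # [(0, 0), (1, 1)] -- two functionally correct keys
```

Note that the search is exponential in the number of key bits, which is why SAT-based attacks that prune the key space with distinguishing input patterns are used in practice; the example also shows that several keys can be functionally equivalent.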
Therefore, a successful obfuscation defence is one that increases the time required to reverse engineer the design (ideally to many years). However, obfuscation comes at a substantial cost in money, power, and area, and this trade-off requires us to search for optimal positions for the obfuscated gates instead of purely increasing their quantity. The obfuscation policy tries to select a set of gates such that maximum obfuscation can be achieved while incurring minimal overheads. Although such selection can significantly influence the deobfuscation runtime, until now it has generally been based on human heuristics or experience, which is arbitrary and sub-optimal \cite{khaleghi2018hardware}. This is mainly because it is infeasible to try all the different ways of obfuscation by trial and error: there are millions of combinations to try, and the runtime of each trial (i.e., running the attacker) can be days, weeks, or years. To address this issue, this paper focuses on efficient and scalable ways to estimate the runtime of an attacker to reverse engineer an obfuscated IC. This research topic is highly under-explored because of its significant challenges: \textbf{1) Difficulty in characterizing the hidden and sophisticated algorithmic mechanisms of attackers.} Over recent years, a large number of deobfuscation methods based on various techniques have been proposed \cite{khaleghi2018hardware}. In order to practically defeat obfuscation schemes, methods with increasingly sophisticated theories, rules, and heuristics have been proposed and adopted. The behaviour of such highly nonlinear and strongly coupled systems is prohibitively difficult for conventional simple models (e.g., linear regression and support vector machines \cite{bishop2014pattern}) to characterize.
\textbf{2) Challenge in extracting determinant features from discrete and graph-structured ICs.} The inputs of the runtime estimation problem are the IC and the gates selected for obfuscation, where the first input is a heterogeneous graph while the second is a vector with discrete values. Conventional feature extraction methods cannot be applied intuitively to such data without significant information loss. Hence, it is highly challenging to formulate and seamlessly integrate them into mathematical forms that can be input to conventional computational and machine learning models. \textbf{3) Requirement of high efficiency and scalability for deobfuscation runtime estimation.} The key to the defence against deobfuscation is speed. The faster the defender can estimate the deobfuscation runtime for each candidate set of obfuscated gates, the more candidate sets the defender can evaluate, and hence the better the obfuscation effect will be. Moreover, the estimation speed must not be sensitive to different obfuscation strategies, in order to keep the defender's strategy controllable. This work addresses all the above challenges and proposes the first generic framework for deobfuscation runtime prediction, based on graph deep learning techniques. In recent years, deep learning methods have achieved immense success in complex cognitive tasks such as object recognition and machine translation \cite{Wang'19,Lechner_IGSC'19}, which motivates their generalization to graph-structured data \cite{kipf2017semi}. By concretely formulating ICs and the obfuscated gates as multi-attributed graphs, this work innovatively leverages and extends state-of-the-art graph deep learning methods such as Graph Convolutional Neural Networks (GCNs) \cite{kipf2017semi} to instantiate a graph regressor.
Such an end-to-end deep graph regressor can characterize the underlying and sophisticated cognitive process of the attacker when deobfuscating ICs. To leverage the power of GCNs and handle the aforementioned issues, we extend them by adjusting the connectivity representation, inspired by domain facts. Our enhanced GCN can automatically extract the discriminative features that determine the deobfuscation runtime, achieving accurate runtime prediction. After being trained, this deobfuscation runtime estimator predicts almost instantly by simply performing a feed-forward propagation. The major contributions of this paper are: \begin{itemize} \item Proposing a new framework, ICNet, for deobfuscation runtime estimation based on graph deep learning. \item Developing a new multi-attributed graph convolutional neural network for graph regression. \item Conducting systematic experimental evaluations and analyses on real-world datasets (ISCAS-85 benchmark). \end{itemize} The rest of the paper is organized as follows. Section \ref{sec:related_work} reviews existing work. Section \ref{sec:method} elaborates the proposed graph learning model for SAT runtime prediction. In Section \ref{sec:evaluation}, experiments on real-world data are presented. The paper concludes by summarizing the study's important findings in Section \ref{sec:conclusion}. \vspace{-5pt} \section{Background and Related Work}\label{sec:related_work} We discuss logic obfuscation and SAT attacks, followed by graph convolutional networks and the relevant works.
\vspace{-5pt} \subsection{Logic Obfuscation and SAT Attacks} Logic obfuscation, often referred to as logic locking \cite{logiclock}, is a hardware security solution that helps hide the IP using key-programmable logic gates. The activation of the obfuscated IP is accomplished in a trusted regime before releasing the product into the market, thereby reducing the probability of the attacker obtaining the secret configuration keys. During the activation phase, the correct key is applied to these key-programmable gates to recover the correct functionality of the IC/IP. Besides, the correct key is stored in the IC in a tamper-proof memory. Obfuscation schemes try to minimize the probability of an attacker determining the correct key, thereby curbing the ongoing piracy of legitimate IPs. However, the SAT attack shows that contemporary obfuscation schemes can be broken \cite{Subramanyan} to retrieve the correct key. To perform the SAT attack, the attacker is required to have access to an activated (functional) IC and the obfuscated netlist. The SAT attack first tries to find a Distinguishing Input Pattern (DIP) X${_i}$, an input that produces different outputs ($Y_1 \neq Y_2$) when different key values (K${_1}$, K${_2}$) are applied. This DIP can then be used to distinguish the correct and incorrect keys. The number of DIPs discovered during the SAT-based attack equals the number of iterations needed to unlock the obfuscated design. In each iteration, a constraint is added to the SAT solver, until the SAT solver cannot find a satisfying assignment; this results in finding the correct key. Different SAT-hard schemes such as \cite{Kolhe:2019:CLO:3299874.3319496,iccad:2019:CLO:3299874.3319496} have been proposed. Furthermore, new obfuscation schemes that focus on the non-Boolean behaviour of circuits \cite{delay_lock}, which are not convertible to a SAT circuit, have been proposed for SAT resilience. Some of such defences include adding cycles into the design \cite{SRCLock}.
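The DIP-based key pruning described above can be illustrated with a deliberately tiny, hypothetical locked circuit (a single XOR key gate, invented for this sketch); the real attack delegates the DIP search and constraint accumulation to a SAT solver rather than enumerating inputs:

```python
from itertools import product

# Toy locked circuit: y = (x1 AND x2) XOR k, with a 1-bit key k
def locked(x1, x2, k):
    return (x1 & x2) ^ k

CORRECT_KEY = 0                       # known only to the activated IC
def oracle(x1, x2):                   # oracle the attacker can query
    return locked(x1, x2, CORRECT_KEY)

candidate_keys = {0, 1}
dips = []
# DIP loop: find an input on which surviving keys disagree, query the
# oracle, and discard every key inconsistent with the oracle's output.
while len(candidate_keys) > 1:
    dip = next((x for x in product([0, 1], repeat=2)
                if len({locked(*x, k) for k in candidate_keys}) > 1), None)
    if dip is None:                   # remaining keys are equivalent
        break
    dips.append(dip)
    y = oracle(*dip)
    candidate_keys = {k for k in candidate_keys if locked(*dip, k) == y}

recovered_key = candidate_keys.pop()
```

Each appended DIP plays the role of the per-iteration constraint added to the solver; the loop terminates once no distinguishing input remains.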
Adding cycles into the design may cause the SAT attack to get stuck in an infinite loop; however, advanced SAT-based attacks such as cycSAT \cite{cycSAT} can extract the correct key despite such defences. To ensure that a proposed defence is robust against SAT attacks, defenders need to run rigorous simulations, which can range from a few minutes up to a few days. Furthermore, this is exacerbated when the defender verifies large, real-world circuits. This work proposes the use of graph convolutional networks (GCNs) to alleviate the need to run the attack in order to verify whether the defence is strong enough. The work in \cite{Neurosat} utilizes a neural network with single-bit supervision to predict whether a given circuit in Conjunctive Normal Form (CNF) can be decrypted or not. However, it is limited to a few kinds of SAT solvers and cannot be applied to SAT-hard solutions such as SMT-SAT \cite{kimia'19}, a superset of SAT attacks. With the proposed GCN-based predictor, the defender can determine the deobfuscation time in a single run of the GCN, which consumes only a few seconds. We introduce the GCN below. \vspace{-12pt} \subsection{Graph Convolutional Networks} Spectral graph theory is the study of the properties of a graph in relation to the characteristic polynomial, eigenvalues, and eigenvectors of matrices associated with the graph. Many graph and geometric convolution methods have been proposed recently. The spectral convolution methods \cite{defferrard2016convolutional,kipf2017semi} are the mainstream graph convolution algorithms. Their theory is based on graph Fourier analysis \cite{shuman2013emerging}. The polynomial approximation was first proposed by \cite{hammond2011wavelets}.
Inspired by this, graph convolutional neural networks \cite{defferrard2016convolutional} are a successful attempt at generalizing the powerful convolutional neural networks (CNNs), which deal with Euclidean data, to modelling graph-structured data. Kipf and Welling proposed a simplified type \cite{kipf2017semi}, called graph convolutional networks (GCNs). The GCN model naturally integrates the connectivity patterns and feature attributes of graph-structured data and significantly outperforms many state-of-the-art methods. It is therefore promising to apply GNNs to the circuit problem, since ICs can be naturally represented as a graph with connectivity among gates. \section{Proposed Model for Runtime Prediction}\label{sec:method} This section introduces the problem setting and presents deobfuscation time prediction through the proposed ICNet. \vspace{-10pt} \subsection{Problem Setting} First, a circuit is modeled as a graph network $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{W})$, where $\mathcal{V}$ is a set of $n$ vertexes (gates), $\mathcal{E}$ represents links among gates and $\mathcal{W} = [w_{ij}] \in \{0,1\}^{n\times n}$ is an unweighted adjacency matrix. A signal $\X$ defined on the nodes is regarded as a matrix $\X \in \mathbb{R}^{n\times F}$. A graph structure representation, i.e., the combinatorial graph Laplacian, is defined as $\Ll= D-\mathcal{W} \in \mathbb{R}^{n\times n}$ where $D$ is the degree matrix. Accordingly, we formulate the estimation of the running time on an IC as a regression task. Specifically, the model accepts the graph structure along with gate features as input, and predicts the running time: \begin{equation} \small Y=f(\mathcal{G}, \X)\Theta, \label{eq:problem} \end{equation}where $f$ is a function integrating the graph structure $\mathcal{G}$ and gate features $\X$.
$\mathcal{G}$ is often represented by the graph Laplacian $\Ll$ in graph theory \cite{chung1997spectral}. $\Theta$ indicates the parameters of the fully connected neural network layers connecting $f$ and the actual runtime $Y$. The purpose of $\Theta$ is (1) fitting the dimension of $Y$ and (2) generalizing the logic pattern between $Y$ and $f$. The goal of Eq. \ref{eq:problem} is to learn $f$ and $\Theta$ so that the difference between $Y$ and $f(\mathcal{G}, \X)\Theta$ is minimized. However, there exists no straightforward relationship among the number of obfuscated gates, the types of obfuscated gates and other factors to determine the deobfuscation time. This makes it hard to efficiently estimate the SAT-attack runtime with traditional machine learning models. Hence, we draw on a thread of works called graph neural networks, or geometric deep learning, to address the deobfuscation estimation problem, as the netlist can be perceived as a graph representation of various logical elements. \begin{figure}[!htb] \small \centering \includegraphics[width=0.32\textwidth]{gcn} \caption{GCN workflow.} \label{fig:gcn_example1} \end{figure} \vspace{-10pt} The graph convolutional network (GCN) is a recently emerging technique that integrates graph structure and node attributes, and its general process is as follows: it determines a complete set of orthonormal eigenvectors (frequency components) and their associated ordered real non-negative eigenvalues, identified as the weights of these frequency components. Specifically, the Laplacian is first diagonalized by the Fourier basis $\UT$: $\Ll = \Uu \Lambda \UT$ where $\Lambda$ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, i.e., ${\displaystyle \Lambda _{ii}=\lambda _{i}}$. The graph Fourier transform of a signal $\X\in \mathbb{R}^{n\times F}$ is defined as $\hat{\X}=\UT \X \in \mathbb{R}^{n\times F}$ and its inverse as $\X=\Uu \hat{\X}$ \cite{shuman2013emerging,shuman2016vertex}.
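As a concrete numerical sketch of the diagonalization and transform just defined (the 4-gate wiring below is hypothetical), a few lines of linear algebra suffice:

```python
import numpy as np

# Hypothetical 4-gate undirected wiring
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W       # combinatorial graph Laplacian L = D - W

# L = U Lambda U^T; U is orthonormal because L is real symmetric
lam, U = np.linalg.eigh(L)

X = np.random.default_rng(0).normal(size=(4, 2))  # a 2-feature gate signal
X_hat = U.T @ X                      # graph Fourier transform
X_rec = U @ X_hat                    # inverse transform recovers the signal
```

The eigenvalues returned in `lam` are the non-negative frequencies mentioned above, and `U @ X_hat` reconstructs `X` exactly.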
To enable the formulation of fundamental operations such as filtering in the vertex domain, the convolution operator on a graph is defined in the Fourier domain such that $f_{1}*f_{2}=\Uu \left[\left(\UT f_{1} \right) \otimes \left(\UT f_{2}\right)\right]$, where $\otimes$ is the element-wise product and $f_{1}, f_{2}$ are two signals defined on the vertex domain. The intuitive workflow of the GCN is shown in Figure \ref{fig:gcn_example1}. It follows that a vertex signal $f_{2}=\X$ (gate features) is filtered by a spectral signal $\hat{f_{1}}=\UT f_{1}=\g$ (graph structure) as: \begin{equation*} \scriptsize \g * \X = \Uu \left[\g(\LBD)\odot \left(\UT f_{2}\right)\right] = \Uu \g(\LBD) \UT \X. \end{equation*} \vspace{-10pt} \vspace{-10pt} \subsection{Proposed Model: ICNet} Our proposed method, ICNet, is a neural network based on the graph convolution operator. As shown in Figure \ref{fig:icnet}, ICNet encodes the obfuscated circuit into two components: \begin{figure*}[!hbtp] \centering \includegraphics[width=6.6in]{icnet.png} \caption{Illustration of the ICNet structure: two graph convolutions ($\mathcal{W}\X\Theta_{\mathbf{GCN}}$) followed by ReLU activation, and attention layers for features ($\Theta_{feat}$) and gates ($\Theta_{gate}$), respectively.} \label{fig:icnet} \vspace{-20pt} \end{figure*} \begin{itemize} \item \textbf{Graph structure $\mathcal{G}$}: A complete set of local connections is often used to represent the graph structure \cite{chung1997spectral}. Typically a graph Laplacian is employed, since it contains the gate-wise connections. \item \textbf{Gate features $\X$}: Gate-level information is encoded as a numerical vector of input features. Such information can include the gate type, whether the gate is obfuscated, and so on. \end{itemize} By applying the GCN, we can easily build a model that learns the relationship between the circuit and the deobfuscation time automatically.
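To make the two components concrete, the following sketch encodes a small hypothetical netlist (gate names and wiring invented for illustration) into the graph structure $\mathcal{W}$ and a gate feature matrix $\X$:

```python
import numpy as np

GATE_TYPES = ["AND", "NOR", "NOT", "NAND", "OR", "XOR"]

# Hypothetical 4-gate netlist: (gate type, fan-in gate indices)
netlist = [("AND", []), ("OR", []), ("XOR", [0, 1]), ("NAND", [2])]
obfuscated = {2}                          # indices of key-locked gates

n = len(netlist)
W = np.zeros((n, n))                      # adjacency matrix (graph structure)
for j, (_, fanin) in enumerate(netlist):
    for i in fanin:
        W[i, j] = W[j, i] = 1.0           # undirected gate-wise connection

# Gate features X: [obfuscation mask | one-hot gate type]
X = np.zeros((n, 1 + len(GATE_TYPES)))
for j, (gtype, _) in enumerate(netlist):
    X[j, 0] = float(j in obfuscated)
    X[j, 1 + GATE_TYPES.index(gtype)] = 1.0
```

The resulting pair $(\mathcal{W}, \X)$ is exactly the model input of Eq. \ref{eq:problem}.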
However, the GCN suffers from several issues: \textbf{(1)} The original graph convolutional operator is not suitable for circuits, since the graph Laplacian makes the graph convolutional operator behave as label propagation, i.e., it assumes the attributes of each gate are similar to those of its neighbours. This is called the smoothness assumption \cite{li2018deeper}, and it does not fit the fact that, in theory, the gate type or encryption location of a gate does not determine its neighbours' attributes. The issue arises because the graph Laplacian is used during the graph convolution operation: each node's own contribution is counted with a negative sign (the $-N_{i}$ term in row $i$ of the graph Laplacian), while the weighted sum of its neighbours is counted as $N_{i}$. Consequently, these terms cancel out when the gate representations are aggregated using sum, and the model can hardly learn the relationship between their sum (residues) and the actual runtime. \textbf{(2)} The default setting of the GCN aggregates gates and their features using the mean function, which is not supported by any domain knowledge and is unlikely to cover the actual pattern over features or gates. To solve these issues, our model employs several policies to enhance the traditional GCN for circuit learning. \begin{itemize} \item \textbf{Graph representation $\mathcal{G}=A$}: Our model uses the adjacency matrix $A$ instead of the graph Laplacian. This representation avoids the intrinsic smoothness assumption, which is not compatible with ICs. \item \textbf{Feature aggregation ($\Theta_{feat}$)}: The mean function is a typical method for aggregating node features. However, the mean function does not account for the magnitude of the sum. A more flexible way is to learn the feature aggregation automatically with a neural network. \item \textbf{Gate aggregation ($\Theta_{gate}$)}: Similarly, the mean function can also be used to aggregate gate representations.
Due to the complicated nature of real-world aggregation, another neural network is designed to learn the gate aggregation function for more flexibility. \end{itemize} Our model is based on the GCN, which simplifies the layer parameters of the graph convolutional operator and applies an approximation technique to boost efficiency. GCNs, as a state-of-the-art deep learning method for graphs, focus on processing graph signals defined on undirected graphs. Following the analysis above, the graph Laplacian is replaced with the adjacency matrix. To fit the whole-graph level regression task, the proposed method designs two aggregation neural networks. Formally, it is denoted as: \begin{equation} \begin{aligned} Y=&f(\mathcal{G}, \X) \Theta &&{\scriptstyle{\text{(GNN definition, Eq. \ref{eq:problem})}}}\\ =& \underbrace{\mathbf{GCN}(\mathcal{W}, \X)}_\text{f=GCN}\Theta &&{\scriptstyle{\text{(apply GCN with adjacency matrix)}}} \\ =& \mathbf{GCN}(\mathcal{W}, \X)\Theta_{feat}\Theta_{gate}&&{\scriptstyle{(\Theta\rightarrow\{\Theta_{feat}, \Theta_{gate}\})}} \\ = & \underbrace{\sigma(\mathcal{W}\X\Theta_{\mathbf{GCN}})}_\text{GCN}\Theta_{feat}\Theta_{gate}, &&{\scriptstyle{\text{(rewrite GCN in matrix form)}}} \end{aligned} \end{equation} where the activation $\sigma$ is implemented by the ReLU function. The running time tends to grow at an exponential rate as the number of encrypted gates increases. Therefore, the model is modified as: \begin{align} Y = \exp({\mathcal{W}\X\Theta_{\mathbf{GCN}}\Theta_{feat}\Theta_{gate}}) \label{eq:icnet} \end{align} As illustrated in Fig. \ref{fig:icnet}, the proposed ICNet conducts two graph convolution operations (GCN) to fuse the information from the graph structure and gate features. Then two sets of neural networks perform the feature and gate aggregation.
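A minimal numpy sketch of Eq. \ref{eq:icnet} follows; the dimensions and random weights are invented, a single convolution is used for brevity, and $\Theta_{feat}$, $\Theta_{gate}$ are reduced to plain linear maps. It also demonstrates why the adjacency matrix is preferred over the Laplacian:

```python
import numpy as np

rng = np.random.default_rng(1)
n, F, h = 4, 7, 8                        # gates, input features, hidden width

# Hypothetical adjacency matrix and random gate features / weights
W = np.array([[0, 1, 1, 0], [1, 0, 1, 0],
              [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(n, F))
Th_gcn = 0.1 * rng.normal(size=(F, h))   # graph-convolution weights
Th_feat = 0.1 * rng.normal(size=(h, 1))  # feature-aggregation weights
Th_gate = 0.1 * rng.normal(size=(n, 1))  # gate-aggregation weights

relu = lambda z: np.maximum(z, 0.0)

H = relu(W @ X @ Th_gcn)                 # (n, h) gate representations
per_gate = H @ Th_feat                   # (n, 1) feature aggregation
Y = np.exp(Th_gate.T @ per_gate).item()  # gate aggregation + exp -> scalar

# Issue (1) with the Laplacian: its column sums are zero, so sum-pooling
# of L @ X cancels out regardless of the gate features X.
L = np.diag(W.sum(axis=1)) - W
assert np.allclose((L @ X).sum(axis=0), 0.0)
```

The final `exp` guarantees a positive runtime estimate, matching the exponential growth noted above.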
To further increase the model's interpretability, we replace the fully connected layers $\Theta_{feat}$ and $\Theta_{gate}$ as follows. Generally, the \textit{sum} or \textit{mean} function is a typical method for aggregating node attributes into a lower-dimensional vector. This weighs the vote from each gate equally, which does not fit the theory. For example, an encrypted gate should be weighted higher, since it imposes more difficulty on the deobfuscation task; gate types also have a significant impact on the runtime \cite{subramanyan2015evaluating}. Therefore, a more flexible way is to build a neural network that automatically learns the attribute aggregation. To fit the whole-graph level regression task, the proposed method designs two aggregation components based on the soft attention mechanism \cite{velivckovic2018graph}, at the feature level and the gate level. Formally, the feature-based attention is calculated as: { \begin{equation} \begin{aligned} a_{i}=\frac{\exp(e_{i})}{\sum_{j} \exp(e_{j})}, \quad e_{i}= \theta_{i}\mathbf{F}_{i}, \label{attention} \end{aligned} \end{equation}}where $\mathbf{F}_{i}$ represents the $i$th feature after $\mathbf{GCN}$, $\theta_{i}$ is the weight parameter for $\mathbf{F}_{i}$, and $a_{i}$ is the corresponding attention weight; the output of this layer is thus $\sum_{i}a_{i}\mathbf{F}_{i}$. This attention shows which feature contributes more to the obfuscation time. Similarly, gate-wise attention is utilized for gate-level aggregation by setting $\mathbf{F}_{i}$ to the $i$th gate in \eqref{attention}.
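The gate-level variant of \eqref{attention} can be sketched as follows (hypothetical shapes, with $\mathbf{F}_i$ taken as the $i$th gate representation and the scores computed by a learnable linear map):

```python
import numpy as np

rng = np.random.default_rng(2)
n, h = 4, 8
H = rng.normal(size=(n, h))      # gate representations after the GCN

# Soft attention: score each gate, softmax the scores, and pool the
# gate representations with the resulting weights.
theta = rng.normal(size=(h,))    # learnable scoring parameters
e = H @ theta                    # one scalar score per gate
a = np.exp(e) / np.exp(e).sum()  # softmax attention weights a_i
pooled = a @ H                   # attention-weighted aggregation, shape (h,)
```

Inspecting `a` after training is what enables the interpretability analysis in the case study: larger weights mark the gates (or features, in the feature-level variant) that contribute more to the predicted runtime.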
\vspace{-10pt} \begin{algorithm}[!h] \small \caption{ICNet} \label{algo:fgan} \SetAlgoLined \KwIn{An integrated circuit graph $\mathcal{G}=\{\mathcal{V}, \mathcal{E}\}$, the gate feature set $x_{j}(i)$, $i \in \{1, 2, \dots, |\mathcal{V}|\}$, for each encryption instance $D_{j}$, and the real runtime $Y_{j}$ for instance $D_{j}$} \KwOut{A neural network function with parameters $\Theta_{\mathbf{GCN}}$, $\Theta_{feat}$ and $\Theta_{gate}$} // Data preparation \\ Calculate $\mathcal{W}$, the adjacency matrix of $\mathcal{G}$ \\ Split the encryption instances $D$ into a training set $D_{train}$ and a testing set $D_{test}$ \\ Split both $D_{train}$ and $D_{test}$ into batches $d_{train}$ and $d_{test}$ \\ // Update ICNet\\ $\theta=\{\Theta_{\mathbf{GCN}}, \Theta_{feat},\Theta_{gate}\}$\\ Initialize $\theta$ with a Gaussian or uniform distribution. \\ \Repeat{$\delta$ convergence}{ Randomly select one batch $d_{train}=\{x_{d1}, x_{d2}, \dots\}$ \\ Calculate the predicted runtime $\hat{Y}$ \Comment{Eqs. \ref{eq:icnet} and \ref{attention}}\\ Calculate the residues $\delta= Y-\hat{Y} $ \\ Compute derivatives to update the parameters: $ \theta \leftarrow \theta - \beta\nabla_{\theta}\delta^{2}$, where $\beta$ is the learning rate } \end{algorithm} \vspace{-6pt} \vspace{-6pt} \subsection{Algorithm description} Algorithm \ref{algo:fgan} first prepares the graph adjacency matrix as the circuit connection representation (line 2). To fit the machine learning schema, the whole dataset is split into training and testing sets, and each set is then split into small batches to improve learning efficiency (lines 3-4). ICNet training is an iterative process that updates the model until the residues are small enough or converged (lines 6-13). First, the model parameters are initialized from a Gaussian or uniform distribution. In each iteration, a batch of the training set is selected randomly.
Using Equation \ref{eq:icnet}, the model computes the predicted runtime (line 10) and then calculates the residues between the real runtime and the prediction (line 11). Following the normal deep learning schema, the model updates the parameters via the derivatives with respect to the parameters themselves, scaled by the learning rate (line 12). \vspace{-10pt} \section{Evaluation}\label{sec:evaluation} This section elaborates the evaluation of the proposed method ICNet against competitive baselines, including the graph deep learning methods GCN \cite{kipf2017semi} and ChebNet \cite{defferrard2016convolutional}. The input of these models is exactly the same as that of our model. We also compare against several state-of-the-art regression models\footnote{https://scikit-learn.org/stable/modules/linear\_model.html}: Linear Regression (LR), LASSO \cite{tibshirani1996regression}, Epsilon-Support Vector Regression (SVR) \cite{smola2004tutorial} (two kernels were applied: polynomial (P) and RBF (R)), Ridge Regression (RR) \cite{ng2004feature}, Elastic Net (EN) \cite{zou2005regularization}, Orthogonal Matching Pursuit (OMP) \cite{mallat1993matching}, SGD Regression, Least Angle Regression (LARS) \cite{efron2004least}, and Theil-Sen Estimators (Theil) \cite{dang2008theil}. These regression models do not model the graph using a Laplacian or adjacency matrix, since they can only accept feature vectors. Therefore, their inputs are encoded as the mean or sum over the concatenation of the Laplacian or adjacency matrix and the gate features. \vspace{-10pt}\subsection{Data processing} The datasets are obtained by running the SAT algorithm \cite{Subramanyan,subramanyan2015evaluating} on the real-world ISCAS-85 benchmark. First, we take a circuit, select a random gate, and replace it with a LUT of fixed size (LUT size 4 in the current work). To deobfuscate, we run the SAT attack algorithm \cite{Subramanyan,subramanyan2015evaluating} with the obfuscated circuit netlist as input. We monitor the time that the SAT attack takes to decode the key, which is the deobfuscation time.
The proposed model is evaluated on two datasets. \textbf{Dataset 1}: the total number of encryption locations ranges from 1 to 350; this tests whether the model is sensitive to the number of encrypted gates. \textbf{Dataset 2}: the total number of encryption locations ranges from 1 to 3; this tests whether the model can handle very small values. The circuit in the experiments is the same, and the total gate number of the circuit is 1529. For the graph deep learning methods, the graph is represented using the Laplacian or adjacency matrix, while for the general regression baselines, the graph Laplacian or adjacency matrix is summed or averaged across gates. Though the evaluations shown here are a mere proof-of-concept of how powerful the proposed GCN-based deobfuscation runtime prediction is, it can be applied to an SAT-hardening solution utilizing any replacement policy, LUT size and other SAT parameters, by retraining the GCN. \vspace{-8pt} \subsection{Experiment configuration} The gate features used in the experiments include: \textbf{gate mask}: 1 if the gate is encrypted, 0 otherwise; and \textbf{gate type}: the gate types include \{AND, NOR, NOT, NAND, OR, XOR\}, encoded using one-hot coding, e.g., [1,0,0,0,0,0] for an AND gate and [0,1,0,0,0,0] for a NOR gate. For the graph deep learning models (ChebNet and ICNet), the graph structure is represented using the graph Laplacian or adjacency matrix. These models employ the ADAM \cite{le2011optimization} optimizer and stop learning when the training loss has converged. The implementation of our model will be available online. All the baselines and the proposed model are tested on two different feature sets, since it is unknown whether the gate type is useful: \textbf{Location}: only the gate mask is included. \textbf{All features}: besides the gate mask, the gate type is also included. For node aggregation, we apply \textit{sum} and \textit{mean}, since they are popular choices.
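For reference, the flattening applied to the baseline inputs can be sketched as follows (hypothetical sizes; each gate's adjacency row is concatenated with its feature vector, then collapsed across gates by sum or mean):

```python
import numpy as np

rng = np.random.default_rng(3)
n, F = 6, 7                                # gates and per-gate features
A = np.triu((rng.random((n, n)) < 0.3).astype(float), 1)
A = A + A.T                                # symmetric hypothetical adjacency
X = rng.random((n, F))                     # per-gate features

# Concatenate adjacency rows with features, then collapse across gates,
# yielding a fixed-length vector a plain regressor (LR, SVR, ...) accepts.
per_gate = np.hstack([A, X])               # (n, n + F)
x_sum = per_gate.sum(axis=0)
x_mean = per_gate.mean(axis=0)
```

Either `x_sum` or `x_mean` then serves as the single input vector for the non-graph regression baselines.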
Deep learning models can employ another node aggregation method, i.e., one learned automatically by a neural network. Therefore, in the results, ChebNet-NN and ICNet-NN denote these automatic versions. It is expected that a deep neural network can learn an optimal aggregation that is no worse than our assumed ones, i.e., sum or mean. \vspace{-5pt} \begin{table}[hbt] \centering \caption{Regression Performance (MSE) on Dataset 1} \label{data1} \scriptsize \begin{tabular}{l|cc|cc} \toprule \hline & \multicolumn{2}{|c|}{Location} &\multicolumn{2}{c}{All feat}\\ \hline Method & Sum & Mean & Sum & Mean \\ \hline SVR RBF & 1.6791 & 0.6784 & 1.6675 & 0.6739\\ SVR Poly & 0.1913 & 2.1890 & 0.1696 & 2.2091\\ SGD & 2.1450e+25 & 2.1823 & 1.0430e+26 & 2.2072 \\ LR & 0.2839 & 0.2284 & 0.2449 & 0.2253\\ RR & 0.2309 & 2.1508 & 0.2058 & 2.1738\\ LASSO & 0.9213 & 2.1843 & 1.0127 & 2.2083\\ EN & 0.5763 & 2.1843 & 0.6409 & 2.2083 \\ OMP & 1.8182 & 1.9192 & 1.8651 & 2.0337 \\ LARS & 1.9968 & 2.1277 & 2.0434 & 2.1833\\ Theil & 0.2948 & 0.2238 & 0.2385 & 0.2277\\ \hline ChebNet & 0.1484 & 8.8370e+33 & 0.1761 & 0.1760 \\ ChebNet-NN & \multicolumn{2}{|c|}{0.17858} & \multicolumn{2}{|c}{3.8549e+27} \\ GCN & 0.3364 & 0.4149 & 0.2496 & 0.3290 \\ GCN-NN & \multicolumn{2}{|c|}{0.1811} & \multicolumn{2}{|c}{0.1606} \\ ICNet & 0.1534 & 0.1256 & 0.2390 & 0.1902 \\ ICNet-NN & \multicolumn{2}{|c|}{\textbf{0.0843}} & \multicolumn{2}{|c}{\textbf{0.1367}}\\ \hline \bottomrule \end{tabular} \end{table} \vspace{-10pt} \vspace{-10pt} \subsection{Regression Results} In the dataset 1 experiment (Table \ref{data1}), all methods achieved an acceptable mean square error (MSE) except SGD (sum), which did not learn a reasonable model to predict the runtime, since its error is tremendous (at the e+25/e+26 scale). Most regression methods are sensitive to the aggregation method. For example, using only the location feature, the MSE of RR is 0.2309 with sum but 2.1508 with the mean function.
Sensitive models include SVR, LASSO, and EN. The best of the regression baselines is SVR (Poly), which achieved an MSE of 0.1913. On the other hand, ChebNet is slightly better than the best regression model. However, ChebNet is not stable and is sensitive to the aggregation method and feature set, since it may yield a substantial error. Our proposed ICNet-NN is stable across the feature and aggregation settings and outperformed all the other methods, with an MSE of 0.0843. Note that ICNet-NN is better than ICNet with the sum or mean function, which demonstrates that there exists a better aggregation method and that the graph neural network can learn it automatically. ICNet is always better than GCN under any setting, which shows that our improvements to GCN work in the circuit scenario. \begin{table}[hbt] \centering \scriptsize \caption{Regression Performance (MSE) on Dataset 2} \label{data2} \begin{tabular}{l|cc|cc} \toprule \hline & \multicolumn{2}{|c|}{Location} &\multicolumn{2}{c}{All feat}\\ \hline Method & Sum & Mean & Sum & Mean \\ \hline SVR RBF & 0.0051 & 0.0048 & 0.0050 & 0.0051\\ SVR Poly & 0.0048 & 0.0048 & 0.0048 & 0.0051 \\ SGD & 7.6301e+25 & 0.0045 & 2.0675e+26 & 0.0049 \\ LR & 6.9063e+23 & 4.6521e+20 & 7.2916e+25 & 5.8600e+23\\ RR & 0.0070 & 0.0045 & 0.0065 & 0.0049\\ LASSO & 0.0047 & 0.0045 & 0.0046 & 0.0049\\ EN & 0.0047 & 0.0045 & 0.0046 & 0.0049 \\ OMP & 0.0047 & 0.0045 & 0.0045 & 0.0049 \\ PAR & 0.0054 & 0.1918 & 0.0051 & 0.3143\\ LARS & 0.0047 & 0.0045 & 0.0046 & 0.0049\\ Theil & N/A & N/A & N/A & N/A \\ \hline ChebNet & 0.0047 & 0.0045 & 4.3570e+28 & 0.0048 \\ ChebNet-NN & \multicolumn{2}{|c|}{\textbf{0.0043}} & \multicolumn{2}{|c}{0.0047} \\ GCN & 0.0061 & 0.0046 & 0.0048 & 0.0050 \\ GCN-NN & \multicolumn{2}{|c|}{0.0050} & \multicolumn{2}{|c}{0.1606} \\ ICNet & 0.0049 & 0.0047 & \textbf{0.0040} & 0.0043 \\ ICNet-NN & \multicolumn{2}{|c|}{0.0051} & \multicolumn{2}{|c}{0.0048}\\ \hline \bottomrule \end{tabular} \end{table} Turning to dataset 2 (Table \ref{data2}),
it is more challenging, since all the runtimes are small and the model has to be very precise to achieve a low MSE. All methods are at almost the same MSE level. Once again, some of the regression models, such as SGD and LR, are not stable. The graph deep learning methods, ChebNet and ICNet, are still at the best error level. ChebNet can achieve the best level but is sensitive to the settings (i.e., location or all features), while ICNet is insensitive to this choice. ICNet is therefore more stable than GCN and ChebNet, since its difference in MSE between the location and all-feature settings is smaller than that of ChebNet or GCN. Under the all-feature setting, ICNet with sum aggregation is the best method (MSE 0.0040). The proposed method, ICNet, not only predicted the values very precisely but also with small variance. The runtime of ICNet is 1.1336 seconds on average, ranging from 1 to 2 seconds on datasets 1 and 2, because the runtime of ICNet depends only on its number of parameters. The instance with the largest runtime on datasets 1 and 2 takes 2411.11 seconds with the actual solver. Therefore, ICNet can save 99.95\% of the solver's time while obtaining an accurate runtime. \begin{figure*}[t!] \centering \subfigure[EN]{\includegraphics[width=1.36in]{RST_EN}} \subfigure[LASSO]{\includegraphics[width=1.36in]{RST_LASSO}} \subfigure[Linear]{\includegraphics[width=1.36in]{RST_Linear}} \subfigure[OMP]{\includegraphics[width=1.36in]{RST_OMP}} \subfigure[RR]{\includegraphics[width=1.36in]{RST_RR}} \subfigure[SGD]{\includegraphics[width=1.36in]{RST_SGD}} \subfigure[SVR(Poly)]{\includegraphics[width=1.36in]{RST_SVR_P}} \subfigure[SVR(RBF)]{\includegraphics[width=1.36in]{RST_SVR_R}} \subfigure[Theil]{\includegraphics[width=1.36in]{RST_Theil}} \subfigure[ICNet-NN]{\includegraphics[width=1.36in]{RST_ICNet}} \caption{Comparison between predictions and real values: pink dots are real values, blue lines are the predictions. The x-axis is the data index in the testing data, while the y-axis is the runtime value in log scale.
Note that only SGD has a different y-axis scale, i.e., 1e+13.} \vspace{-20pt} \label{prediction_behave} \end{figure*} Next, Fig. \ref{prediction_behave} illustrates several predicted values along with the real values to analyze the prediction characteristics. Since there is little difference on dataset 2, we choose several competitive baselines from the dataset 1 experiments under the all-feature setting. Several baselines, such as OMP and SGD, performed very badly, outputting values around a constant level. SVR (RBF) is also bad and yields a constant value when the real runtime exceeds a threshold. The results of EN and LASSO are positively correlated with the real values, but the correlation parameters differ significantly from the truth. Linear, RR, SVR (Poly) and Theil predicted values relatively closer than the other baselines, but with high variance. \vspace{-15pt} \subsection{Case Study: Attentions on Attributes} This subsection studies the attention mechanism quantitatively. Several circuits are evaluated, as shown in Table \ref{case_study}. The gate number consistently attracted greater attention than the gate type, by 9.64\% on average. This motivated us to study the correlation between the actual runtime and the gate number. The Pearson (P) and Spearman (S) correlations are 0.8238 and 0.9722 on average in Table \ref{case_study}. Taking circuit c7553 as an example, the runtime grows linearly with the gate number with a slope of 0.0237. Different circuits show different linear parameters, which conveniently allows us to predict the deobfuscation time accurately and could serve the circuit obfuscation task.
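The corr(P/S) columns in Table \ref{case_study} can be computed in a few lines; the measurements below are invented for illustration, with runtime growing roughly linearly in the obfuscated gate count (slope 0.0237, as reported for c7553) plus small noise:

```python
import numpy as np

def pearson(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

def spearman(x, y):
    # Spearman = Pearson correlation of the ranks (no ties assumed here)
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))

# Invented measurements: obfuscated-gate counts vs. deobfuscation runtime
gates = np.array([10., 20., 40., 80., 160.])
runtime = 0.0237 * gates + np.array([0.01, -0.02, 0.03, -0.04, 0.05])

p, s = pearson(gates, runtime), spearman(gates, runtime)
```

A near-perfect Spearman value with a lower Pearson value, as in the table, indicates a monotonic but not strictly linear relationship.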
\begin{table}[hbt] \centering \caption{Case study: attributes and extracted rules.} \label{case_study} \begin{tabular}{l|cc|c|c} \toprule \hline circuit & gate \# & gate type & corr(P/S) & linear param \\ \hline c7553 & 56.40\% & 43.59\% & 0.8754 / 0.9345 & 0.0237 \\ c499 & 54.39\% & 47.05\% & 0.8149 / 0.9965 & 0.1300 \\ c2670 & 52.94\% & 47.05\% & 0.7769 / 0.9753 & 0.0559 \\ c1335 & 56.27\% & 43.72\% & 0.8282 / 0.9846 & 0.0599 \\ \bottomrule \end{tabular} \end{table} \section{Conclusion} \label{sec:conclusion} In this work, we have introduced a neural network model for predicting SAT runtime on ICs, which expedites the evaluation of the hardness of obfuscated instances and therefore boosts the efficiency of developing obfuscation policies. To properly fuse the graph structure and gate features, an enhanced graph convolutional operator is introduced. The proposed ICNet avoids the attribute propagation that is present in the original GCN but unsuitable for ICs. ICNet automatically extracts determinant features and aggregates gate representations with respect to the runtime. Experiments on real-world datasets suggest that the proposed model is capable of modelling the runtime of a circuit graph accurately and stably, improving over the baselines by a significant margin.
\section{Introduction and background} There is a large number of classes of operators in the literature; see, for example, \cite{BBDP07, C14, D03, M03, PR12}. Most of these previous works follow a very similar script, proving similar properties, among which we highlight the following: characterize the elements of the space by inequalities, build a suitable norm on the space, and then show that the normed space just constructed is a Banach ideal of multilinear operators. Some works have also explored $n$-homogeneous polynomials, seeking the same properties found for the space of multilinear mappings. Faced with so many coincidences, it became natural to create an abstract class of operators generalizing as many as possible of those already existing in the literature. In this direction, D. Serrano-Rodríguez in \cite{S13} introduced the abstract class of $\gamma$-summing multilinear operators and showed that this class is a Banach ideal of multilinear mappings. However, it should be noted that such abstraction is not an easy task; for example, Serrano-Rodríguez's work \cite{S13} contained small gaps, which were filled by the work of G. Botelho and J. Campos in \cite{BC17}. Thus, following the natural script, the proposal of this work is, first, to construct the abstract class of absolutely $\gamma$-summing $n$-homogeneous polynomials. Once the ideal of multilinear mappings and the ideal of homogeneous polynomials are constructed, we must consider the following well-known fact: although several common multi-ideals and polynomial ideals are usually associated with the same operator ideal, the extension of an operator ideal to polynomials and multilinear mappings is not always a simple task.
For example, the ideal of absolutely summing operators has at least eight possible extensions to higher degrees (see, for example, \cite{BPR07, CP07, D03, M03CM, M03, PS11, PG05}, and references therein). In this way, several authors have started to create tools to evaluate how good a polynomial/multilinear extension of a given operator ideal is. For example, in \cite{BP05}, the authors present the concepts of ideals closed under differentiation (CUD) and ideals closed for scalar multiplication (CSM). One should also highlight \cite{CDM09}, where the concepts of coherence and compatibility arose. These concepts were improved by Pellegrino and Ribeiro in \cite{PR14}, where the authors started to work with pairs composed of an ideal of polynomials and an ideal of multilinear operators. This approach has lately been widely used in the literature, and here we investigate whether the pair formed by the abstract ideal of $\gamma$-summing multilinear operators and the abstract ideal of $\gamma$-summing $n$-homogeneous polynomials is coherent and compatible, in the sense introduced by Pellegrino and Ribeiro in \cite{PR14}. We will use the letters $E,E_{1},\dots,E_{n},F,G,H$ to represent Banach spaces over the same scalar field $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$. The closed unit ball of $E$ is denoted by $B_E$ and its topological dual by $E'$. We use BAN to denote the class of all Banach spaces over $\mathbb{K}$. Given Banach spaces $E$ and $F$, the symbol $E\overset{1}\hookrightarrow F$ means that $E$ is a linear subspace of $F$ and $\Vert x\Vert_{F}\leq \Vert x\Vert_{E}$ for every $x \in E$. By $c_{00}(E)$ we denote the set of all $E$-valued finite sequences, which, as usual, can be regarded as infinite sequences by completing with zeros. For every $j\in\mathbb{N}$, $e_j = (0,\dots, 0, 1, 0, 0,\dots)$, where $1$ appears at the $j$-th coordinate.
For each positive integer $n$, let $\mathcal{L}_{n}$ denote the class of all continuous $n$-linear operators between Banach spaces. An ideal of multilinear mappings (or multi-ideal) $\mathcal{M}$ is a subclass of the class $\mathcal{L}={\textstyle\bigcup\limits_{n=1}^{\infty}} \mathcal{L}_{n}$ of all continuous multilinear operators between Banach spaces such that, for every positive integer $n$ and all Banach spaces $E_{1},\ldots,E_{n}$ and $F$, the components \[ \mathcal{M}_{n}(E_{1},\ldots,E_{n};F):=\mathcal{L}_{n}(E_{1},\ldots ,E_{n};F)\cap\mathcal{M}% \] satisfy: (Ma) $\mathcal{M}_{n}(E_{1},\ldots,E_{n};F)$ is a linear subspace of $\mathcal{L}_{n}(E_{1},\ldots,E_{n};F)$, which contains the $n$-linear mappings of finite type. (Mb) If $T\in\mathcal{M}_{n}(E_{1},\ldots,E_{n};F)$, $u_{j}\in\mathcal{L}_{1}(G_{j};E_{j})$ for $j=1,\ldots,n$ and $v\in\mathcal{L}_{1}(F;H)$, then \[ v\circ T\circ(u_{1},\ldots,u_{n})\in\mathcal{M}_{n}(G_{1},\ldots,G_{n};H). \] Moreover, $\mathcal{M}$ is a (quasi-) normed multi-ideal if there is a function $\Vert\cdot\Vert_{\mathcal{M}}\colon\mathcal{M}\longrightarrow\lbrack0,\infty)$ satisfying (M1) $\Vert\cdot\Vert_{\mathcal{M}}$ restricted to $\mathcal{M}_{n}(E_{1},\ldots,E_{n};F)$ is a (quasi-) norm, for all Banach spaces $E_{1},\ldots,E_{n}$ and $F.$ (M2) $\Vert T_{n}\colon\mathbb{K}^{n}\longrightarrow\mathbb{K}:T_{n}(\lambda_{1},\ldots,\lambda_{n})=\lambda_{1}\cdots\lambda_{n}\Vert_{\mathcal{M}}=1$ for all $n$, (M3) If $T\in\mathcal{M}_{n}(E_{1},\ldots,E_{n};F)$, $u_{j}\in\mathcal{L}_{1}(G_{j};E_{j})$ for $j=1,\ldots,n$ and $v\in\mathcal{L}_{1}(F;H)$, then \[ \Vert v\circ T\circ(u_{1},\ldots,u_{n})\Vert_{\mathcal{M}}\leq\Vert v\Vert\Vert T\Vert_{\mathcal{M}}\Vert u_{1}\Vert\cdots\Vert u_{n}\Vert. \] When all of the components $\mathcal{M}_{n}(E_{1},\ldots,E_{n};F)$ are complete under this (quasi-) norm, $\mathcal{M}$ is called a (quasi-) Banach multi-ideal.
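To illustrate these axioms with a simple and standard instance (the verification below is routine and is included only for convenience), note that the whole class $\mathcal{L}$ is itself a Banach multi-ideal. \begin{example} The class $\mathcal{L}$ of all continuous multilinear operators, endowed with the usual norm \[ \Vert T\Vert=\sup\left\{ \Vert T(x_{1},\ldots,x_{n})\Vert : x_{j}\in B_{E_{j}},\ j=1,\ldots,n\right\}, \] is a Banach multi-ideal: (Ma) and (M1) are clear; (M2) holds, with the max norm on $\mathbb{K}^{n}$, because $\vert\lambda_{1}\cdots\lambda_{n}\vert\leq1$ whenever each $\vert\lambda_{i}\vert\leq1$, with equality at $(1,\ldots,1)$; and (Mb) and (M3) follow from the elementary estimate $\Vert v\circ T\circ(u_{1},\ldots,u_{n})\Vert\leq\Vert v\Vert\,\Vert T\Vert\,\Vert u_{1}\Vert\cdots\Vert u_{n}\Vert$. \end{example}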
For a fixed multi-ideal $\mathcal{M}$ and a positive integer $n$, the class \[ \mathcal{M}_{n}:=\cup_{E_{1},...,E_{n},F}\mathcal{M}_{n}\left( E_{1}% ,...,E_{n};F\right) \] is called an ideal of $n$-linear mappings. Analogously, for each positive integer $n$, we can define the polynomial ideal $\mathcal{Q}$. For more details, see \cite{PR14}. We will also use the definitions of {\it finitely determined} and {\it linearly stable} sequence classes, which were recently introduced in the literature by Botelho and Campos in \cite{BC17}, as follows. \begin{definition} A class of vector-valued sequences $\gamma_s$, or simply a sequence class $\gamma_s$, is a rule that assigns to each $E \in BAN$ a Banach space $\gamma_s(E)$ of $E$-valued sequences; that is, $\gamma_s(E)$ is a vector subspace of $E^{\mathbb{N}}$ with the coordinatewise operations, such that: $$c_{00}(E)\subseteq \gamma_s(E)\overset{1}\hookrightarrow\ell_{\infty}(E)\mbox{ and } \Vert e_j\Vert_{\gamma_s(\mathbb{K})}=1\mbox{ for every } j.$$ \end{definition} A sequence class $\gamma_s$ is {\it finitely determined} if for every sequence $(x_j)_{j=1}^{\infty}\in E^{\mathbb{N}}$, $(x_j)_{j=1}^{\infty}\in \gamma_s(E)$ if, and only if, $\sup_{k}\Vert (x_j)_{j=1}^{k}\Vert_{\gamma_s(E)}<+{\infty}$ and, in this case, $$\Vert (x_j)_{j=1}^{\infty}\Vert_{\gamma_s(E)}=\sup_{k}\Vert (x_j)_{j=1}^{k}\Vert_{\gamma_s(E)}.$$ \begin{definition} A sequence class $\gamma_s$ is said to be linearly stable if, for every $u \in \mathcal{L}(E; F)$, \begin{equation*} \left(u\left(x_j \right)\right)_{j=1}^{\infty} \in \gamma_s(F) \end{equation*} whenever $\left(x_j \right)_{j=1}^{\infty} \in \gamma_s(E)$, and $\|\hat{u} : \gamma_s(E) \rightarrow \gamma_s(F)\| = \|u\|$. \end{definition} Throughout the text, we will also use the following definition that was introduced in \cite{BC17}.
\begin{definition} Given sequence classes $\gamma_{s_1},\dots,\gamma_{s_n},\gamma_{s}$, we say that $\gamma_{s_1}(\mathbb{K})\cdots\gamma_{s_n}(\mathbb{K})\overset{1}\hookrightarrow\gamma_{s}(\mathbb{K})$ if $\left(\lambda_{j}^{1}\cdots\lambda_{j}^{n}\right)_{j=1}^{\infty}\in\gamma_{s}(\mathbb{K})$ and $$\left\Vert \left(\lambda_{j}^{1}\cdots\lambda_{j}^{n}\right)_{j=1}^{\infty}\right\Vert_{\gamma_{s}(\mathbb{K})}\le\prod_{m=1}^{n}\left\Vert\left(\lambda_{j}^{m}\right)_{j=1}^{\infty}\right\Vert_{\gamma_{s_m}(\mathbb{K})}$$ whenever $\left(\lambda_{j}^{m}\right)_{j=1}^{\infty}\in\gamma_{s_m}(\mathbb{K})$, $m=1,\dots,n$. \end{definition} So, the main goal of this note is to construct an abstract space of $n$-homogeneous polynomials and show that it is a Banach ideal of polynomials. Since the abstract ideal of multilinear mappings \cite{S13} and the abstract ideal of polynomials will then both be known, we proceed to study the coherence and compatibility of the pair $\left(\mathcal{Q},\mathcal{M}\right)$ in the sense of Pellegrino and Ribeiro \cite{PR14}, where $\mathcal{Q}$ is the ideal of polynomials and $\mathcal{M}$ is the ideal of multilinear mappings. \section{Absolutely $\gamma$-summing polynomials}\label{PGS} Throughout this study, we consider the sequence classes $\gamma_s, \gamma_{s_1},\dots, \gamma_{s_m}$ to be finitely determined and linearly stable, as defined in \cite{BC17}. \begin{definition} Let $E$ and $F$ be Banach spaces. An $m$-homogeneous polynomial $P : E \longrightarrow F$ is said to be $\gamma_{s, s_1}$-summing at $a\in E$ if \begin{equation*} \left(P(a + x_j) - P(a) \right)_{j=1}^{\infty} \in \gamma_s(F) \end{equation*} whenever $\left(x_j \right)_{j=1}^{\infty} \in \gamma_{s_1}(E)$. \end{definition} \begin{remark} This definition is inspired by the definition of the absolutely $(p,q)$-summing $m$-homogeneous polynomials.
\end{remark} The space of all $m$-homogeneous polynomials $\gamma_{s, s_1}$-summing at $a$, denoted by $\mathcal{P}_{\gamma_{s, s_1}}^{(a)}\left(^mE; F \right)$, is a linear subspace of $\mathcal{P}\left(^mE; F \right)$. When $a=0$, we write only $\mathcal{P}_{\gamma_{s, s_1}}\left(^mE; F \right)$. The space of all $m$-homogeneous polynomials $\gamma_{s, s_1}$-summing at every point will be denoted by $\mathcal{P}_{\gamma_{s, s_1}}^{(ev)}\left(^mE; F \right)$. By using the polarization formula \cite[Corollary 1.6]{DJT95}, we can easily prove the following result: \begin{proposition}\label{P1.1.} $P \in \mathcal{P}_{\gamma_{s, s_1}}^{ev}\left(^mE; F \right)$ if, and only if, $\check{P}$ is $\gamma_{s, s_1}$-summing at every point $(a_1,\dots, a_m) \in E\times \overset{m}{\cdots} \times E$. \end{proposition} The next result will be used to construct a standard norm on the space of $m$-homogeneous polynomials $\gamma_{s, s_1}$-summing at the origin, $\mathcal{P}_{\gamma_{s, s_1}}\left(^mE; F \right)$. \begin{proposition}\label{CaracterizacaoOrigem} $P \in \mathcal{P}_{\gamma_{s, s_1}}(^mE; F)$ if, and only if, there is a constant $C > 0$ such that \begin{equation}\label{E1.4} \left\Vert\left(P( x_j) \right)_{j=1}^{\infty} \right\Vert_{\gamma_s(F)} \le C\left\Vert \left(x_j \right)_{j=1}^{\infty} \right\Vert_{\gamma_{s_1}(E)}^m \end{equation} whenever $\left(x_j \right)_{j=1}^{\infty} \in \gamma_{s_1}(E)$. In addition, the infimum of the constants $C > 0$ satisfying inequality \eqref{E1.4} defines a norm in $ \mathcal{P}_{\gamma_{s, s_1}}(^mE; F)$, denoted by $\pi(\cdot )$. \end{proposition} \begin{proof} Suppose $P \in \mathcal{P}_{\gamma_{s, s_1}}(^mE; F)$. From Proposition \ref{P1.1.}, it follows that $\check{P}$ is absolutely $\gamma_{s, s_1}$-summing at the origin.
By \cite[Proposition $2$]{S13}, there exists $C > 0$ such that \begin{equation*} \left\|\left(\check{P}\left(x_j \right)^m \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)} \le C\left\|\left(x_j \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E)}^m. \end{equation*} So, \begin{equation*} \left\|\left(P(x_j) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)} \le C\left\|\left(x_j \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E)}^m. \end{equation*} Given that $\gamma_{s}$ and $\gamma_{s_1}$ are finitely determined, the converse is immediate. It is easy to see that $\pi(\cdot )$ defines a norm in $ \mathcal{P}_{\gamma_{s, s_1}}(^mE; F)$. \end{proof} The following lemma, whose proof can be obtained following Proposition \ref{P1.1.} and \cite[Lemma 2]{BBJP06}, is crucial for the proof of the main result of this section: \begin{lemma}\label{L1.1.} If $P \in \mathcal{P}_{\gamma_{s, s_1}}(^mE; F)$ and $a \in E$, then there is a constant $C_a > 0$ such that \begin{equation*} \left\Vert\left(P(a + x_j) - P(a) \right)_{j=1}^{\infty} \right\Vert_{\gamma_s(F)} \le C_a \end{equation*} for all $\left(x_j \right)_{j=1}^{\infty} \in \gamma_{s_1}(E)$ with $\left\Vert\left(x_j \right)_{j=1}^{\infty} \right\Vert_{\gamma_{s_1}(E)} \le 1$. \end{lemma} The next result, as in the case of Proposition \ref{CaracterizacaoOrigem}, characterizes the operators in $\mathcal{P}_{\gamma_{s, s_1}}^{ev}(^mE; F)$ by an inequality. It is important because from it we extract a norm that makes $\mathcal{P}_{\gamma_{s, s_1}}^{ev}(^mE; F)$ a Banach space. The proof is inspired by \cite{BBDP07} and \cite{M03}. \begin{theorem}\label{T1.2.} Let $P \in \mathcal{P}(^mE; F)$.
The following assertions are equivalent: \begin{enumerate}[$(a)$] \item $P \in \mathcal{P}_{\gamma_{s, s_1}}^{(ev)}(^mE; F)$; \item There is $C > 0$ satisfying \begin{equation*} \left\Vert \left(P(b + x_j) - P(b) \right)_{j=1}^{n} \right\Vert_{\gamma_s(F)} \le C \left( \|b\| + \left\Vert\left(x_j \right)_{j=1}^{n} \right\Vert_{\gamma_{s_1}(E)} \right)^m \end{equation*} for all $n \in \mathbb{N}$ and $b, x_1,\dots, x_n \in E$. \item There is $C > 0$ satisfying \begin{equation}\label{EE1.30} \left\Vert \left(P(b + x_j) - P(b) \right)_{j=1}^{\infty} \right\Vert_{\gamma_s(F)} \le C \left( \|b\| + \left\Vert\left(x_j \right)_{j=1}^{\infty} \right\Vert_{\gamma_{s_1}(E)} \right)^m \end{equation} for all $b \in E$ and $\left(x_j \right)_{j=1}^{\infty} \in \gamma_{s_1}(E)$. \end{enumerate} \end{theorem} \begin{proof} $(c) \Rightarrow (a)$ and $(c) \Rightarrow (b)$ are immediate. Using the fact that the sequence classes considered are finitely determined, it immediately follows that $(b) \Rightarrow (c)$. Therefore, it remains to prove that $(a) \Rightarrow (c)$. Let $G = E \times \gamma_{s_1}(E)$. For each $P \in \mathcal{P}_{\gamma_{s, s_1}}^{(ev)}(^mE; F)$, consider the map \begin{equation*} \eta_{\gamma_{s, s_1}}(P) : G \longrightarrow \gamma_s(F) \end{equation*} given by \begin{equation*} \eta_{\gamma_{s, s_1}}(P)\left(\left(b, \left(x_j \right)_{j=1}^{\infty} \right) \right) = \left(P(b + x_j) - P(b) \right)_{j=1}^{\infty}. \end{equation*} It is not difficult to see that $\eta_{\gamma_{s, s_1}}(P)$ is an $m$-homogeneous polynomial. To show that $\eta_{\gamma_{s, s_1}}(P)$ is continuous, we will consider, for all $k \in \mathbb{N}$ and $\left(x_j \right)_{j=1}^{\infty} \in \gamma_{s_1}(E)$, the set \begin{equation*} F_{k, \left(x_j \right)_{j=1}^{\infty}} = \left\{b \in E : \left\Vert\eta_{\gamma_{s, s_1}}(P)\left(\left(b, \left(x_j \right)_{j=1}^{\infty} \right) \right) \right\Vert_{\gamma_s(F)} \le k \right\}.
\end{equation*} Note that the set $F_{k, \left(x_j \right)_{j=1}^{\infty}}$ is closed for every $k \in \mathbb{N}$ and $\left(x_j \right)_{j=1}^{\infty} \in B_{\gamma_{s_1}(E)}$. Indeed, for each $n \in \mathbb{N}$, let \begin{equation*} F_{k, \left(x_j \right)_{j=1}^{n}} = \left\{b \in E : \left\Vert\eta_{\gamma_{s, s_1}}(P)\left(\left(b, \left(x_j \right)_{j=1}^{n} \right) \right) \right\Vert_{\gamma_s(F)} \le k \right\}. \end{equation*} So, \begin{equation}\label{E1.33} F_{k, \left(x_j \right)_{j=1}^{\infty}} = \bigcap_{n \in \mathbb{N}}F_{k, \left(x_j \right)_{j=1}^{n}}. \end{equation} For each $\left(x_j \right)_{j=1}^{\infty} \in B_{\gamma_{s_1}(E)}$ and each $n \in \mathbb{N}$, we can define \begin{equation*} D_n : E \longrightarrow [0, \infty) \end{equation*} given by \begin{equation*} D_n(b) = \left\Vert\left(P(b + x_j) - P(b) \right)_{j=1}^{n} \right\Vert_{\gamma_s(F)}. \end{equation*} It is clear that $D_n$ is a continuous map. So, each $F_{k, \left(x_j \right)_{j=1}^{n}}$ is closed because \begin{equation*} F_{k, \left(x_j \right)_{j=1}^{n}} = D_n^{-1}([0, k]). \end{equation*} Therefore, from \eqref{E1.33} it follows that $F_{k, \left(x_j \right)_{j=1}^{\infty}}$ is closed because it is an intersection of closed sets. Let \begin{equation*} F_k = \bigcap_{\left(x_j \right)_{j=1}^{\infty} \in B_{\gamma_{s_1}(E)}}F_{k, \left(x_j \right)_{j=1}^{\infty}}. \end{equation*} By Lemma \ref{L1.1.} it follows that \begin{equation*} E = \bigcup_{k \in \mathbb{N}}F_k. \end{equation*} Using the Baire Category Theorem, we know that there is some $k_0 \in \mathbb{N}$ such that $F_{k_0}$ has an interior point. The continuity of the map $\eta_{\gamma_{s, s_1}}(P)$ is obtained by repeating the proof of \cite[Proposition 9.3]{BBJP06} (or \cite[Theorem 4.1]{BBDP07}).
Therefore, \begin{align}\label{EE1.38} \left\Vert \left(P(b + x_j) - P(b) \right)_{j=1}^{\infty} \right\Vert_{\gamma_s(F)} &= \left\Vert\eta_{\gamma_{s, s_1}}(P)\left(\left(b, \left(x_j \right)_{j=1}^{\infty} \right) \right) \right\Vert_{\gamma_s(F)}\\ &\le \left\Vert\eta_{\gamma_{s, s_1}}(P)\right\Vert\left( \Vert b\Vert + \left\Vert\left(x_j \right)_{j=1}^{\infty} \right\Vert_{\gamma_{s_1}(E)} \right)^m .\nonumber \end{align} \end{proof} By straightforward computations, we can get the following result. \begin{corollary}\label{C1.1.} The infimum of the constants $C > 0$ that satisfy the inequality \eqref{EE1.30} defines a norm in $\mathcal{P}_{\gamma_{s, s_1}}^{(ev)}(^mE; F)$, which will be denoted by $\pi^{(ev)}(\cdot)$. \end{corollary} It is not difficult to see the following: \begin{remark}\label{O1.4.} $\pi^{(ev)}(P) = \left\Vert \eta_{\gamma_{s, s_1}}(P) \right\Vert$. \end{remark} An alternative way of constructing a normed space of polynomials associated with $\prod_{\gamma_{s, s_1}}^{(ev)}$, introduced in \cite{S13} and denoted by $\mathcal{P}_{\prod_{\gamma_{s, s_1}}^{ev}}$, is to invoke Proposition \ref{P1.1.} and consider the set \begin{equation*} \mathcal{P}_{\prod_{\gamma_{s, s_1}}^{ev}} := \left\{P \in \mathcal{P}\text{; } \check{P} \text{ is $\gamma_{s, s_1}$-summing at every point} \right\} \end{equation*} endowed with the norm inherited from the ideal of multilinear mappings $\prod_{\gamma_{s, s_1}}^{ev}$, that is, \begin{equation*} \left\Vert P\right\Vert_{\mathcal{P}_{\prod_{\gamma_{s, s_1}}^{ev}}}:=\|\check{P}\|_{\prod_{\gamma_{s, s_1}}^{ev}} = \pi_{\gamma_{s, s_1}}^{(ev)}(\check{P}). \end{equation*} The advantage of this approach is that it is already established in the literature (see, for example, \cite[page 46]{BBJP06}) that this set, with this norm, is a Banach ideal of $n$-homogeneous polynomials.
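For concreteness, we record the model instance behind the abstract definition (a standard specialization, obtained by simply unwinding the definitions): \begin{example} Taking $\gamma_s(\cdot) = \ell_p(\cdot)$ and $\gamma_{s_1}(\cdot) = \ell_q^w(\cdot)$, an $m$-homogeneous polynomial $P$ is $\gamma_{s, s_1}$-summing at $a$ precisely when $\left(P(a+x_j)-P(a)\right)_{j=1}^{\infty}\in\ell_p(F)$ whenever $\left(x_j\right)_{j=1}^{\infty}\in\ell_q^w(E)$; that is, $\mathcal{P}_{\gamma_{s, s_1}}^{(ev)}(^mE;F)$ recovers the everywhere absolutely $(p;q)$-summing $m$-homogeneous polynomials that inspired the definition. \end{example}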
But then, one question arises: what is the relationship between the norms $\pi^{(ev)}(P)$ and $\left\Vert P\right\Vert_{\mathcal{P}_{\prod_{\gamma_{s, s_1}}^{ev}}}$? The answer to this question is given in the next proposition. \begin{proposition}\label{P.1.3.} The norm $\pi^{(ev)}(\cdot)$, defined in Corollary \ref{C1.1.}, satisfies the relation \begin{equation*} \pi^{(ev)}(P) \le \pi_{\gamma_{s, s_1}}^{(ev)}(\check{P}) \le \displaystyle\frac{m^m}{m!}\pi^{(ev)}(P) \end{equation*} for any $P \in \mathcal{P}_{\gamma_{s, s_1}}^{ev}(^mE; F)$. \end{proposition} \begin{proof} If $P \in \mathcal{P}_{\gamma_{s, s_1}}^{ev}(^mE; F)$, then, by Proposition \ref{P1.1.}, $\check{P}$ is $\gamma_{s, s_1}$-summing at every point. In this way, for any $\left(x_j \right)_{j=1}^{\infty} \in \gamma_{s_1}(E)$ and $a \in E$, we have \begin{align*} \left\|\left(P\left(a + x_j \right) - P(a) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)} &= \left\|\left(\check{P}\left(a + x_j \right)^m - \check{P}(a)^m \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)}\\ &\le \pi_{\gamma_{s, s_1}}^{(ev)}(\check{P})\left(\|a\| + \left\|\left(x_j \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E)} \right)^m, \end{align*} from which it follows that $\pi^{(ev)}(P) \le \pi_{\gamma_{s, s_1}}^{(ev)}(\check{P})$. For the other inequality, we will use the same tools that appear in the proof of \cite[Theorem $2$]{S13}. Let $G = E \times \gamma_{s_1}(E) $ be endowed with the sum norm, and let $\Phi : \prod_{\gamma_{s, s_1}}^{ev}(E^m; F) \rightarrow \mathcal{L}(G,\overset{m}{\dots}, G; \gamma_s(F))$ be defined by $$\Phi(T)\left(\left(a_1,\left(x_j^{(1)}\right)_{j=1}^{\infty}\right),\dots,\left(a_m,\left(x_j^{(m)}\right)_{j=1}^{\infty}\right)\right)=\left(T\left(a_1+x_j^{(1)},\dots,a_m+x_j^{(m)}\right)-T(a_1,\dots,a_m)\right)_{j=1}^{\infty}.$$ In \cite{S13}, we find that $\pi_{\gamma_{s, s_1}}^{ev}(\check{P}) = \|\Phi(\check{P})\|$.
Note that, for any $\left(x_j^{(i)}\right)_{j=1}^{\infty} \in \gamma_{s_1}(E)$ and $\epsilon_i = \pm 1$, $i=1,\dots, m$, we have that $\left(\epsilon_i x_j^{(i)} \right)_{j=1}^{\infty} \in \gamma_{s_1}(E)$. Then, $\left(\epsilon_1 x_j^{(1)} +\cdots + \epsilon_m x_j^{(m)} \right)_{j=1}^{\infty} \in \gamma_{s_1}(E)$ and \begin{equation*} \left\|\left(\epsilon_1 x_j^{(1)} +\cdots + \epsilon_m x_j^{(m)} \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E)} \le \left\|\left( x_j^{(1)} \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E)} +\cdots + \left\|\left( x_j^{(m)} \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E)}. \end{equation*} Therefore, \begin{align*} &\|\Phi(\check{P})\|\\ &= \sup_{\left\|\left(a_i,\left(x_j^{(i)}\right)_{j=1}^{\infty}\right)\right\|_{G} \le 1} \left\|\Phi(\check{P})\left((a_1, (x_j^{(1)})_{j=1}^{\infty}),\cdots, (a_m, (x_j^{(m)})_{j=1}^{\infty})\right)\right\|_{\gamma_s(F)}\\ &= \sup_{\left\|\left(a_i,\left(x_j^{(i)}\right)_{j=1}^{\infty}\right)\right\|_{G} \le 1} \left\|(\check{P}(a_1 + x_j^{(1)},\dots, a_m + x_j^{(m)})-\check{P}(a_1,\dots, a_m))_{j=1}^{\infty}\right\|_{\gamma_s(F)}\\ &\le \frac{1}{2^mm!} \sup_{\left\|\left(a_i,\left(x_j^{(i)}\right)_{j=1}^{\infty}\right)\right\|_{G} \le 1}\sum_{\epsilon_i = \pm 1}\left\|(P(\epsilon_1(a_1 + x_j^{(1)})+\cdots+ \epsilon_m(a_m + x_j^{(m)}) ) - P\left(\epsilon_1a_1+\cdots + \epsilon_ma_m\right) )_{j=1}^{\infty} \right\|_{\gamma_s(F)}\\ &= \frac{1}{2^mm!} \sup_{\left\|\left(a_i,\left(x_j^{(i)}\right)_{j=1}^{\infty}\right)\right\|_{G} \le 1}\sum_{\epsilon_i = \pm 1}\left\|\eta_{\gamma_{s, s_1}}(P)\left(\left(\epsilon_1a_1+\cdots + \epsilon_ma_m, \left(\epsilon_1x_j^{(1)}+\cdots+ \epsilon_mx_j^{(m)}\right)_{j=1}^{\infty} \right) \right) \right\|_{\gamma_s(F)}\\ &\le \frac{1}{2^mm!} \sup_{\left\|\left(a_i,\left(x_j^{(i)}\right)_{j=1}^{\infty}\right)\right\|_{G} \le 1}\sum_{\epsilon_i = \pm 1} \|\eta_{\gamma_{s, s_1}}(P)\|\left\|\left(\epsilon_1a_1+\cdots + \epsilon_ma_m, \left(\epsilon_1x_j^{(1)}+\cdots+ 
\epsilon_mx_j^{(m)}\right)_{j=1}^{\infty} \right)\right\|_{G}^m\\ &\le \frac{\|\eta_{\gamma_{s, s_1}}(P)\|}{2^mm!} \sup_{\left\|\left(a_i,\left(x_j^{(i)}\right)_{j=1}^{\infty}\right)\right\|_{G} \le 1}\sum_{\epsilon_i = \pm 1}\left(\|a_1\| + \left\|\left(x_j^{(1)}\right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E)} +\cdots + \|a_m\| + \left\|\left(x_j^{(m)}\right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E)} \right)^m\\ &= \frac{\|\eta_{\gamma_{s, s_1}}(P)\|}{m!} \sup_{\left\|\left(a_i,\left(x_j^{(i)}\right)_{j=1}^{\infty}\right)\right\|_{G} \le 1}\left(\left\|\left(a_1,\left(x_j^{(1)}\right)_{j=1}^{\infty}\right)\right\|_G +\cdots + \left\|\left(a_m,\left(x_j^{(m)}\right)_{j=1}^{\infty}\right)\right\|_G \right)^m\\ &= \frac{m^m}{m!}\|\eta_{\gamma_{s, s_1}}(P)\|, \end{align*} where the function $\eta_{\gamma_{s, s_1}}(P)$ was defined in the proof of Theorem \ref{T1.2.}. Therefore, it follows from Remark \ref{O1.4.} that \begin{equation*} \pi^{ev}(P) \le \pi_{\gamma_{s, s_1}}^{ev}(\check{P}) \le \frac{m^m}{m!}\pi^{ev}(P). \end{equation*} \end{proof} This proposition gives a satisfactory relationship between $\pi^{ev}(\cdot)$ and $\pi_{\gamma_{s, s_1}}^{ev}(\cdot)$. However, it is important to note that this result was already expected, because there is a well-known inequality in the literature that establishes a relationship between the norm of an $m$-homogeneous polynomial $P$ and that of the symmetric $m$-linear mapping associated with $P$, namely \begin{equation*} \left\Vert P\right\Vert\le\left\Vert\check{P}\right\Vert\le\frac{m^m}{m!}\left\Vert P\right\Vert, \end{equation*} which was shown in \cite[Theorem 2.2]{mujica}. The same reference shows that the constant $m^m/m!$ is the best possible; for more details, see \cite[Example 2I]{mujica}. We also emphasize that the result presented in Proposition \ref{P1.1.} is of great importance because through it we obtain that $\mathcal{P}_{\gamma_{s, s_1}}^{ev}$ is an ideal of homogeneous polynomials.
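The sharpness of the constant $m^m/m!$ can be illustrated by the following classical computation (essentially \cite[Example 2I]{mujica}; we sketch it here for convenience). \begin{example} Let $E=\ell_1^m$ and $P(x)=x_1x_2\cdots x_m$. By the arithmetic-geometric mean inequality, $\vert x_1\cdots x_m\vert\le\left(\Vert x\Vert_1/m\right)^m$, with equality at $x=(1/m,\dots,1/m)$, so $\Vert P\Vert=m^{-m}$. On the other hand, the symmetric $m$-linear mapping associated with $P$ is \[ \check{P}\left(y^{(1)},\dots,y^{(m)}\right)=\frac{1}{m!}\sum_{\sigma\in S_m}y^{(1)}_{\sigma(1)}\cdots y^{(m)}_{\sigma(m)}, \] and evaluating at $(e_1,\dots,e_m)$ only the identity permutation contributes, giving $\check{P}(e_1,\dots,e_m)=1/m!$. Since $\Vert e_i\Vert_1=1$, it follows that $\Vert\check{P}\Vert\ge 1/m!=\frac{m^m}{m!}\Vert P\Vert$, so the upper estimate is attained. \end{example}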
We now need to prove that it is a normed, and indeed complete (Banach), ideal of homogeneous polynomials under the norm $\pi^{ev}(\cdot)$. The next proposition shows that $\pi^{(ev)}(id_{\mathbb{K}}) = 1$. The proof can be obtained by following \cite[Proposition 4.3]{BBJP06} with the necessary adaptations. \begin{proposition} Let $id_{\mathbb{K}} : \mathbb{K} \longrightarrow \mathbb{K}$ be given by $id_{\mathbb{K}}(x) = x^m$ and suppose that $\gamma_{s_1}(\mathbb{K})\overset{m}{\cdots} \gamma_{s_1}(\mathbb{K}) \overset{1}{\hookrightarrow} \gamma_s(\mathbb{K})$. Then, $id_{\mathbb{K}} \in \mathcal{P}_{\gamma_{s, s_1}}^{ev}(^m\mathbb{K}; \mathbb{K})$ and \begin{equation*} \pi^{(ev)}(id_{\mathbb{K}}) = 1. \end{equation*} \end{proposition} The proof of the next proposition follows the similar result from \cite{BBDP07}. \begin{proposition} The linear map \begin{equation*} \eta_{\gamma_{s, s_1}} : \mathcal{P}_{\gamma_{s, s_1}}^{(ev)}\left(^mE; F \right) \longrightarrow \mathcal{P}\left(^mG; \gamma_s(F) \right), \end{equation*} where $G = E \times \gamma_{s_1}(E)$, given by \begin{equation} \eta_{\gamma_{s, s_1}}(P)\left(\left(b, \left(x_j \right)_{j=1}^{\infty} \right) \right) = \left(P(b + x_j) - P(b) \right)_{j=1}^{\infty}, \end{equation} is injective and its range is closed in $\mathcal{P}\left(^mG; \gamma_s(F) \right)$. \end{proposition} So, we easily get the following result: \begin{proposition} The space $\mathcal{P}_{\gamma_{s, s_1}}^{(ev)}\left(^mE; F \right)$ is complete under the norm $\pi^{(ev)}(\cdot )$. \end{proposition} \begin{theorem}\label{T1.4.} $\left(\mathcal{P}_{\gamma_{s, s_1}}^{(ev)}, \pi^{(ev)}(\cdot ) \right)$ is a Banach ideal of homogeneous polynomials between Banach spaces. \end{theorem} \begin{proof} Let $u \in \mathcal{L}(G; E)$, $P \in \mathcal{P}_{\gamma_{s, s_1}}^{(ev)}\left(^mE; F \right)$, $t \in \mathcal{L}(F; H)$ and $a \in G$.
Given that $\mathcal{P}_{\gamma_{s, s_1}}^{(ev)}$ is an ideal of homogeneous polynomials, we have $t \circ P \circ u \in \mathcal{P}_{\gamma_{s, s_1}}^{(ev)}(^mG; H)$. Now, if $\left(x_j \right)_{j=1}^{\infty} \in \gamma_{s_1}(G)$, then it follows from the linear stability of $\gamma_s$ and $\gamma_{s_1}$ that \begin{align*} \left\|\left(t \circ P \circ u(a + x_j) - t \circ P \circ u(a) \right)_{j=1}^{\infty} \right\|_{\gamma_s(H)} &\le \|t\|\left\|\left(P \left(u(a + x_j) \right) - P \left(u(a) \right) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)} \nonumber \\ &= \|t\|\left\|\left(P \left(u(a) + u(x_j) \right) - P \left(u(a) \right) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)} \nonumber \\ &\le \|t\| \pi^{(ev)}(P) \left(\|u(a)\| + \left\|\left(u(x_j) \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E)} \right)^m \nonumber \\ &\le \|t\| \pi^{(ev)}(P)\|u\|^m \left(\|a\| + \left\|\left(x_j \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(G)} \right)^m \nonumber. \end{align*} So, $\mathcal{P}_{\gamma_{s, s_1}}^{(ev)}$ satisfies the ideal property and \begin{equation*} \pi^{(ev)}(t \circ P \circ u) \le \|t\| \pi^{(ev)}(P)\|u\|^m . \end{equation*} Therefore, $\left(\mathcal{P}_{\gamma_{s, s_1}}^{(ev)}, \pi^{(ev)}(\cdot) \right)$ is a Banach ideal of homogeneous polynomials between Banach spaces. \end{proof} \section{Coherence and compatibility} In this section, we will study the coherence and compatibility of the pairs formed by the ideals of $\gamma$-summing multilinear mappings and $\gamma$-summing homogeneous polynomials. This concept was introduced in the literature by Pellegrino and Ribeiro in \cite{PR14}; the definitions are presented below. We will consider the sequence $\left(\mathcal{U}_k, \mathcal{M}_k \right)_{k=1}^N$, where each $\mathcal{U}_k$ is a (quasi-) normed ideal of $k$-homogeneous polynomials and each $\mathcal{M}_k$ is a (quasi-) normed ideal of $k$-linear mappings. The parameter $N$ may be infinite.
\begin{definition}[Compatible pair of ideals] Let $\mathcal{U}$ be a normed operator ideal and $N \in \left(\mathbb{N} \setminus \{1\} \right)\cup \{\infty \}$. A sequence $\left(\mathcal{U}_n, \mathcal{M}_n \right)_{n=1}^N$, with $\mathcal{U}_1 = \mathcal{M}_1 = \mathcal{U}$, is compatible with $\mathcal{U}$ if there exist positive constants $\alpha_1, \alpha_2, \alpha_3$ such that, for all Banach spaces $E$ and $F$, the following conditions hold for all $n \in \{2,\cdots, N\}:$ \begin{description} \item $(CP1)$ If $k \in \{1,\dots, n\}$, $T \in \mathcal{M}_n(E_1,\dots, E_n;F)$ and $a_j \in E_j$ for all $j \in \{1,\dots, n\}\setminus\{k\}$, then $ T_{a_1,\dots, a_{k-1},a_{k+1},\dots, a_n} \in \mathcal{U}(E_k; F)$ and \begin{equation*} \left\Vert T_{a_1,\dots, a_{k-1},a_{k+1},\dots, a_n} \right\Vert_{\mathcal{U}} \le \alpha_1 \left\Vert T\right\Vert_{\mathcal{M}_n}\|a_1\|\cdots \|a_{k-1}\| \ \|a_{k+1}\|\cdots \|a_n\|. \end{equation*} \item $(CP2)$ If $P \in \mathcal{U}_n(^nE; F)$ and $a \in E$, then $P_{a^{n-1}} \in \mathcal{U}(E; F)$ and \begin{equation*} \left\Vert P_{a^{n-1}}\right\Vert_{\mathcal{U}} \le \alpha_2 \max{\left\{\left\Vert\overset{\vee}{P}\right\Vert_{\mathcal{M}_n}, \left\Vert P \right\Vert_{\mathcal{U}_n} \right\}}\Vert a\Vert^{n-1}. \end{equation*} \item $(CP3)$ If $u \in \mathcal{U}(E_n; F)$ and $\gamma_j \in E'_j$ for all $j = 1,\dots, n-1$, then $\gamma_1 \cdots \gamma_{n-1}u \in \mathcal{M}_n(E_1,\dots, E_n; F)$ and \begin{equation*} \left\Vert\gamma_1 \cdots\gamma_{n-1}u \right\Vert_{\mathcal{M}_n} \le \alpha_3 \Vert\gamma_1\Vert\cdots\Vert\gamma_{n-1}\Vert\left\Vert u \right\Vert_{\mathcal{U}}. \end{equation*} \item $(CP4)$ If $u \in \mathcal{U}(E; F)$ and $\gamma \in E'$, then $\gamma^{(n-1)}u \in \mathcal{U}_n(^{n}E; F)$. \item $(CP5)$ $P$ belongs to $\mathcal{U}_n(^nE; F)$ if, and only if, $\overset{\vee}{P}$ belongs to $\mathcal{M}_n(^nE; F)$.
\end{description} \end{definition} \begin{definition}[Coherent pair of ideals]\label{D2.2.} Let $\mathcal{U}$ be a normed operator ideal and let $N \in \mathbb{N} \cup \{\infty \}$. A sequence $\left(\mathcal{U}_k, \mathcal{M}_k \right)_{k=1}^{N}$, with $\mathcal{U}_1 = \mathcal{M}_1 = \mathcal{U}$, is coherent if there exist positive constants $\beta_1, \beta_2, \beta_3$ such that for all Banach spaces $E$ and $F$ the following conditions hold for $k = 1,\dots,N-1:$ \begin{description} \item $(CH1)$ If $T \in \mathcal{M}_{k + 1}\left(E_1,\dots, E_{k+1}; F \right)$ and $a_j \in E_j$ for $j=1,\dots, k+1$, then \begin{equation*} T_{a_j} \in \mathcal{M}_k\left(E_1,\dots, E_{j-1}, E_{j+1},\dots, E_{k+1}; F \right) \end{equation*} and \begin{equation*} \left\|T_{a_j} \right\|_{\mathcal{M}_k} \le \beta_1 \left\| T \right\|_{\mathcal{M}_{k + 1}}\|a_j\|. \end{equation*} \item $(CH2)$ If $P \in \mathcal{U}_{k+1}\left(^{k+1}E; F \right)$, $a \in E$, then $P_a$ belongs to $\mathcal{U}_k\left(^kE; F \right)$ and \begin{equation*} \left\| P_a \right\|_{\mathcal{U}_k} \le \beta_2 \max{\left\{\left\| \overset{\vee}{P} \right\|_{\mathcal{M}_{k+1}}, \left\| P \right\|_{\mathcal{U}_{k+1}} \right\}}\Vert a \Vert. \end{equation*} \item $(CH3)$ If $T \in \mathcal{M}_k(E_1,\dots,E_k; F)$, $\gamma \in E'_{k+1}$, then \begin{equation*} \gamma T \in \mathcal{M}_{k + 1}(E_1,\dots,E_{k + 1}; F) \end{equation*} and \begin{equation*} \left\|\gamma T\right\|_{\mathcal{M}_{k + 1}} \le \beta_3\Vert\gamma\Vert \left\|T \right\|_{ \mathcal{M}_{k}}. \end{equation*} \item $(CH4)$ If $P \in \mathcal{U}_{k}\left(^kE; F \right)$ and $\gamma \in E'$, then $\gamma P \in \mathcal{U}_{k+1}\left(^{k + 1}E; F \right).$ \item $(CH5)$ For all $k=1,\dots,N$, $P$ belongs to $\mathcal{U}_k(^kE; F)$ if, and only if, $\overset{\vee}{P}$ belongs to $\mathcal{M}_k(^kE; F)$. 
\end{description} \end{definition} In this section, we will denote the Banach $\gamma_{s, s_1}$-summing $m$-linear operator ideal and the Banach $\gamma_{s, s_1}$-summing $m$-homogeneous polynomial ideal by $\left(\prod_{\gamma_{s, s_1}}^{m, (ev)}; \pi_{\gamma_{s, s_1}}^{m, ev}(\cdot)\right)$ and $\left(\mathcal{P}_{\gamma_{s, s_1}}^{m, (ev)}; \pi^{m, ev}(\cdot)\right)$, respectively. The reason for this is to make explicit the degree of linearity/homogeneity of the components of the ideal. We will study the coherence and compatibility of the pair $\left(\mathcal{P}_{\gamma_{s, s_1}}^{m, (ev)}, \prod_{\gamma_{s, s_1}}^{m, (ev)} \right)_{m=1}^{N}$ with the ideal $\prod_{\gamma_{s, s_1}}^{ev}$. \begin{remark}\label{O2.1.} For any Banach spaces $E$ and $F$, $ \prod_{\gamma_{s, s_1}}^{1, ev}(E; F) = \mathcal{P}_{\gamma_{s, s_1}}^{1, ev}(E; F)= \prod_{\gamma_{s, s_1}}^{ev}(E; F) $. \end{remark} In the next two propositions, we will check conditions (CH1) and (CH2) of Definition \ref{D2.2.} \begin{proposition}\label{P2.1.} For each $T \in \prod_{\gamma_{s, s_1}}^{m+1, ev}(E_1,\dots, E_{m+1}; F)$ and $(a_1,\dots, a_{m+1}) \in E_1 \times \cdots \times E_{m+1}$, \begin{equation*} T_{a_k}(x_1, \dots , x_{k-1}, x_{k+1}, \dots , x_{m+1}) := T(x_1, \dots , x_{k-1}, a_k , x_{k+1}, \dots , x_{m+1}) \end{equation*} belongs to $\prod_{\gamma_{s, s_1}}^{m, ev}(E_1,\dots , E_{k-1}, E_{k+1}, \dots, E_{m+1}; F)$ and \begin{equation*} \pi_{\gamma_{s, s_1}}^{m, ev}(T_{a_k}) \le \pi_{\gamma_{s, s_1}}^{m+1, ev}(T) \|a_k\|. \end{equation*} \end{proposition} \begin{proof} Let $T \in \prod_{\gamma_{s, s_1}}^{m+1, ev}(E_1,\dots, E_{m+1}; F)$, $(a_1,\dots, a_{m+1}) \in E_1 \times \cdots \times E_{m+1}$ and $\left(x_j^{(n)} \right)_{j=1}^{\infty} \in \gamma_{s_1}(E_n)$ for $n=1,\dots, k-1, k+1,\dots, m+1$. We will do the computations only for $k = 1$; the remaining cases are similar.
Thus, for each $b_i \in E_i$ and $\left(x_j^{(i)}\right)_{j=1}^{\infty}\in\gamma_{s_1}(E_i)$, $i=2,\dots,m+1$, consider the null sequence $\left(x_{j}^{(1)} \right)_{j=1}^{\infty} \in \gamma_{s_1}(E_1)$; that is, $x_{j}^{(1)}=0$ for every $j \in \mathbb{N}$. Then, \begin{align*} &\left(T_{a_1}(b_2 + x_j^{(2)}, \dots ,b_{m+1} + x_j^{(m+1)}) - T_{a_1}(b_2,\dots, b_{m+1}) \right)_{j=1}^{\infty}\\ &= \left(T(a_1 + x_j^{(1)}, b_2 + x_j^{(2)}, \dots ,b_{m+1} + x_j^{(m+1)}) - T(a_1, b_2,\dots, b_{m+1}) \right)_{j=1}^{\infty} \in \gamma_s(F). \end{align*} Thus, $T_{a_1} \in \prod_{\gamma_{s, s_1}}^{m, ev}(E_2,\dots ,E_{m+1}; F)$ and \begin{align*} &\left\|\left(T_{a_1}(b_2 + x_j^{(2)}, \dots ,b_{m+1} + x_j^{(m+1)}) - T_{a_1}(b_2,\dots, b_{m+1}) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)} \le \\ &\le \pi_{\gamma_{s, s_1}}^{m+1, ev}(T)\|a_1\| \left(\|b_2\| + \left\|\left(x_j^{(2)} \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E_2)} \right) \cdots \left(\|b_{m+1}\| + \left\|\left(x_j^{(m+1)} \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E_{m+1})} \right). \end{align*} Therefore, \begin{equation*} \pi_{\gamma_{s, s_1}}^{m, ev}(T_{a_1}) \le \pi_{\gamma_{s, s_1}}^{m+1, ev}(T) \|a_1\|. \end{equation*} \end{proof} \begin{proposition}\label{P2.2.} For each $P \in \mathcal{P}_{\gamma_{s, s_1}}^{m+1, ev}(^{m+1}E; F)$ and $a \in E$, \begin{equation*} P_a(x) := \check{P}(a, x, \overset{m}{\dots} , x) \end{equation*} belongs to $\mathcal{P}_{\gamma_{s, s_1}}^{m, ev}(^{m}E; F)$ and \begin{equation*} \pi^{m, ev}(P_a) \le \pi_{\gamma_{s, s_1}}^{m+1, ev}(\check{P}) \|a\|. \end{equation*} \end{proposition} \begin{proof} Let $P \in \mathcal{P}_{\gamma_{s, s_1}}^{m+1, ev}(^{m+1}E; F)$ and $a \in E$. For any $b \in E$ and $\left(x_j \right)_{j=1}^{\infty} \in \gamma_{s_1}(E)$, it follows from Proposition \ref{P1.1.} that $\check{P}$ is absolutely $\gamma_{s, s_1}$-summing at every point $(a_1,\dots, a_{m+1}) \in E\times \overset{m+1}{\cdots} \times E$.
Again, consider the sequence $\left(y_{j} \right)_{j=1}^{\infty} \in \gamma_{s_1}(E)$ with $y_{j}=0$ for every $j \in \mathbb{N}$. Thus, \begin{align*} \left(P_a(b + x_j) - P_a(b) \right)_{j=1}^{\infty} &= \left(\check{P}(a + y_j, b + x_j, \overset{m}{\dots}, b + x_j) - \check{P}(a, b,\overset{m}{\dots}, b ) \right)_{j=1}^{\infty} \in \gamma_s(F). \end{align*} Thus, $P_a \in \mathcal{P}_{\gamma_{s, s_1}}^{m, ev}(^{m}E; F)$. Furthermore, \begin{align*} \left\|\left(P_a(b + x_j) - P_a(b) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)} \le \pi_{\gamma_{s, s_1}}^{m+1, ev}(\check{P}) \|a\| \left(\|b\| + \left\|\left(x_j \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E)} \right)^m. \end{align*} Therefore, \begin{equation*} \pi^{m, ev}(P_a) \le \pi_{\gamma_{s, s_1}}^{m+1, ev}(\check{P}) \|a\|. \end{equation*} \end{proof} The next definition contains an important property that will be used to prove (CH3) and (CH4) of Definition \ref{D2.2.}. \begin{definition} Let $E$ be a Banach space and let $\gamma_{s}$ be a sequence class. We say that $\gamma_{s}$ is $\mathbb{K}$-closed when, for any $\left(x_j\right)_{j=1}^{\infty}\in\gamma_{s}\left(\mathbb{K}\right)$ and $\left(y_j\right)_{j=1}^{\infty}\in\gamma_{s}\left(E\right)$, the sequence $\left(z_j\right)_{j=1}^{\infty}$, where $z_{j}=x_{j}y_{j}$, belongs to $\gamma_{s}\left(E\right)$ and $$ \left\Vert\left(z_{j}\right)_{j=1}^{\infty}\right\Vert_{\gamma_{s}\left(E\right)}\le \left\Vert\left(x_{j}\right)_{j=1}^{\infty}\right\Vert_{\gamma_{s}\left(\mathbb{K}\right)} \left\Vert\left(y_{j}\right)_{j=1}^{\infty}\right\Vert_{\gamma_{s}\left(E\right)}. $$ \end{definition} \begin{example} The sequence classes $\ell_p\langle \cdot\rangle$, $\ell_p(\cdot)$, $\ell_p^{mid}(\cdot)$ and $\ell_p^w(\cdot)$ are $\mathbb{K}$-closed. \end{example} \begin{definition}\label{DFC} Let $\gamma_s$ and $\gamma_{s_1}$ be sequence classes.
We say that $\gamma_s$ and $\gamma_{s_1}$ are finitely coincident when $\gamma_{s}(E)=\gamma_{s_1}(E)$ and \begin{equation*} \left\|\left(x_j \right)_{j=1}^{\infty} \right\|_{\gamma_{s}(E)} = \left\|\left(x_j \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E)} \end{equation*} for every finite-dimensional linear space $E$. \end{definition} \begin{remark} In the next two propositions we assume that the sequence class $\gamma_{s}$ is $\mathbb{K}$-closed and that $\gamma_s$ and $\gamma_{s_1}$ are finitely coincident. \end{remark} \begin{proposition}\label{P2.3.} Let $T \in \prod_{\gamma_{s, s_1}}^{m, ev}(E_1,\dots, E_m; F)$ and $\varphi \in E_{m+1}'$. Then $\varphi T \in \prod_{\gamma_{s, s_1}}^{m+1,ev}(E_1,\dots, E_{m+1}; F)$ and \begin{equation*} \pi_{\gamma_{s, s_1}}^{m+1, ev}(\varphi T) \le \|\varphi \| \pi_{\gamma_{s, s_1}}^{m, ev}(T). \end{equation*} \end{proposition} \begin{proof} We present only the case $m = 2$; the other cases are analogous. Let $T \in \prod_{\gamma_{s, s_1}}^{2, ev}(E_1, E_2; F)$, $\varphi \in E_{3}'$, $\left(x_j^{(i)} \right)_{j=1}^{\infty} \in \gamma_{s_1}(E_i)$ and $a_i \in E_i$, $i=1, 2, 3$.
Thus, because $\gamma_s$ is linearly stable, finitely determined and $\mathbb{K}$-closed, and because $\gamma_s$ and $\gamma_{s_1}$ are finitely coincident, it follows immediately that $$\left(\varphi T\left(a_1 + x_j^{(1)}, a_2 + x_j^{(2)}, a_3 + x_j^{(3)}\right) - \varphi T(a_1, a_2, a_3) \right)_{j=1}^{\infty}\in \gamma_s(F)$$ and that \begin{align*} &\left\|\left(\varphi T\left(a_1 + x_j^{(1)}, a_2 + x_j^{(2)}, a_3 + x_j^{(3)}\right) - \varphi T(a_1, a_2, a_3) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)}\\ \le& \left\|\left(\varphi(a_3)T\left(a_1, x_j^{(2)}\right)\right)_{j=1}^{\infty}\right\|_{\gamma_s(F)} + \left\|\left(\varphi(a_3)T\left(x_j^{(1)}, a_2 \right)\right)_{j=1}^{\infty}\right\|_{\gamma_s(F)} + \left\| \left(\varphi(a_3)T\left(x_j^{(1)}, x_j^{(2)} \right)\right)_{j=1}^{\infty}\right\|_{\gamma_s(F)} +\\ +& \left\|\left(\varphi\left(x_j^{(3)}\right)T\left(a_1, x_j^{(2)} \right) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)} + \left\|\left(\varphi\left(x_j^{(3)}\right)T\left(x_j^{(1)}, a_2 \right) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)} +\\ +& \left\|\left(\varphi\left(x_j^{(3)}\right)T\left(x_j^{(1)}, x_j^{(2)} \right) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)} + \left\|\left(\varphi\left(x_j^{(3)}\right)T\left(a_1, a_2 \right) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)}\\ \le& \left\|\left(T\left(a_1, x_j^{(2)} \right) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)} \left(|\varphi(a_3)| + \left\|\left(\varphi\left(x_j^{(3)} \right) \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(\mathbb{K})} \right) +\\ +& \left\|\left(T\left(x_j^{(1)}, a_2 \right) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)}\left(|\varphi(a_3)| + \left\|\left(\varphi\left(x_j^{(3)} \right) \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(\mathbb{K})} \right) +\\ +& \left\|\left(T\left(x_j^{(1)}, x_j^{(2)} \right) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)} \left(|\varphi(a_3)| + \left\|\left(\varphi\left(x_j^{(3)} \right) \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(\mathbb{K})} \right) + \left\|T\left(a_1, a_2 \right) \right\|\left\|\left(\varphi\left(x_j^{(3)} \right) \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(\mathbb{K})}\\ \le& \pi_{\gamma_{s, s_1}}^{ev}(T)\|a_1\|\left\|\left(x_j^{(2)} \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E_2)}\|\varphi\| \left(\|a_3\| + \left\|\left(x_j^{(3)} \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E_3)} \right) +\\ +& \pi_{\gamma_{s, s_1}}^{ev}(T)\|a_2\|\left\|\left(x_j^{(1)} \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E_1)}\|\varphi\|\left(\|a_3\| + \left\|\left(x_j^{(3)} \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E_3)} \right) +\\ +& \pi_{\gamma_{s, s_1}}^{ev}(T)\left\|\left(x_j^{(1)} \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E_1)}\left\|\left(x_j^{(2)} \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E_2)}\|\varphi\|\left(\|a_3\| + \left\|\left(x_j^{(3)} \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E_3)} \right) +\\ +& \pi_{\gamma_{s, s_1}}^{ev}(T)\|a_1\|\|a_2\|\|\varphi\|\left\|\left(x_j^{(3)} \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E_3)}\\ &= \|\varphi \|\pi_{\gamma_{s, s_1}}^{ev}(T)\left( \prod_{i=1}^3\left(\|a_i\| + \left\|\left(x_j^{(i)} \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E_i)} \right) - \|a_1\| \|a_2\| \|a_3\|\right).\\ \end{align*} Consequently, \begin{align*} &\left\|\left(\varphi T\left(a_1 + x_j^{(1)}, a_2 + x_j^{(2)}, a_3 + x_j^{(3)}\right) - \varphi T(a_1, a_2, a_3) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)}\\ &\le \|\varphi \|\pi_{\gamma_{s, s_1}}^{ev}(T) \prod_{i=1}^3\left(\|a_i\| + \left\|\left(x_j^{(i)} \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E_i)} \right). \end{align*} Hence $\varphi T \in \prod_{\gamma_{s, s_1}}^{m+1, ev}(E_1,\dots, E_{m+1}; F)$ and \begin{equation*} \pi_{\gamma_{s, s_1}}^{m+1, ev}(\varphi T) \le \|\varphi \| \pi_{\gamma_{s, s_1}}^{m, ev}(T). \end{equation*} \end{proof} \begin{proposition}\label{P2.4.} Let $P \in \mathcal{P}_{\gamma_{s, s_1}}^{m, ev}(^{m}E; F)$ and $\varphi \in E'$.
Then $ \varphi P \in \mathcal{P}_{\gamma_{s, s_1}}^{m+1, ev}(^{m+1}E; F) $ and \begin{equation*} \pi^{m+1, ev}(\varphi P) \le \|\varphi \| \pi_{\gamma_{s, s_1}}^{m, ev}(P). \end{equation*} \end{proposition} \begin{proof} Let $P \in \mathcal{P}_{\gamma_{s, s_1}}^{m, ev}(^{m}E; F)$, $\varphi \in E'$ and $(x_j)_{j=1}^{\infty} \in \gamma_{s_1}(E)$. We present the case $m = 2$; the general case is analogous. Let $a \in E$. Since $\gamma_s$ is linearly stable, finitely determined and $\mathbb{K}$-closed, since $\gamma_s$ and $\gamma_{s_1}$ are finitely coincident, and since, by Proposition \ref{P1.1.}, $\check{P} \in \prod_{\gamma_{s, s_1}}^{ev}(E^2; F)$, we have \begin{align*} &\left( \varphi P(a + x_j) - \varphi P(a)\right)_{j=1}^{\infty}\\ &= \left(\varphi(a + x_j)P\left(a + x_j \right) - \varphi(a)P(a) \right)_{j=1}^{\infty}\\ &= \left(\varphi(a + x_j)\check{P}\left(a + x_j, a + x_j \right) - \varphi(a)\check{P}(a, a) \right)_{j=1}^{\infty}\\ &= 2\left(\varphi(a)\check{P}(a, x_j) \right)_{j=1}^{\infty} + \left(\varphi(a)\check{P}(x_j, x_j) \right)_{j=1}^{\infty} + \left(\varphi(x_j)\check{P}(a, a) \right)_{j=1}^{\infty} + 2\left(\varphi(x_j)\check{P}(a, x_j) \right)_{j=1}^{\infty} + \\ &+ \left(\varphi(x_j)\check{P}(x_j, x_j) \right)_{j=1}^{\infty} \in \gamma_s(F). \end{align*} Then, $ \varphi P \in \mathcal{P}_{\gamma_{s, s_1}}^{3, ev}(^3E; F) $.
Furthermore, \begin{align*} &\left\|\left(\varphi P\left(a + x_j \right) - \varphi P(a) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)} \\ &\le 2\left\|\left(\varphi(a)\check{P}(a, x_j) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)} + \left\| \left(\varphi(a)\check{P}(x_j, x_j) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)} + \left\|\left(\varphi(x_j)\check{P}(a, a) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)} +\\ &+ 2\left\|\left(\varphi(x_j)\check{P}(a, x_j) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)} + \left\|\left(\varphi(x_j)\check{P}(x_j, x_j) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)}\\ &\le 2\left\|\left(\check{P}(a, x_j) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)}\left(|\varphi(a)| + \left\|\left(\varphi(x_j) \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(\mathbb{K})} \right)\\ &+ \left\|\left(\check{P}(x_j, x_j) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)}\left(|\varphi(a)| + \left\|\left(\varphi(x_j) \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(\mathbb{K})} \right) + \|\check{P}(a, a)\|\left\|\left(\varphi(x_j) \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(\mathbb{K})}\\ \end{align*} Since $\|\check{P}\| \le \pi_{\gamma_{s, s_1}}^{2, ev}(\check{P})$ and, by linear stability, $\left\|\left(\varphi(x_j) \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(\mathbb{K})} \le \|\varphi\| \left\|\left(x_j \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E)}$, we have \begin{align*} &\left\|\left(\varphi P\left(a + x_j \right) - \varphi P(a) \right)_{j=1}^{\infty} \right\|_{\gamma_s(F)} \\ &\le \pi_{\gamma_{s, s_1}}^{2, ev}(\check{P})\|\varphi\| \left(3\|a\|^2\left\|\left(x_j \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E)} + 3 \|a\|\left\|\left(x_j \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E)}^2 + \left\|\left(x_j \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E)}^3 \right)\\ &\le \pi_{\gamma_{s, s_1}}^{2, ev}(\check{P})\|\varphi\|\left(\|a\| + \left\|\left(x_j \right)_{j=1}^{\infty} \right\|_{\gamma_{s_1}(E)} \right)^3. \end{align*} Therefore, \begin{equation*} \pi^{3, ev}(\varphi P) \le \|\varphi \| \pi_{\gamma_{s, s_1}}^{2, ev}(\check{P}).
\end{equation*} \end{proof} By Propositions \ref{P2.1.}, \ref{P2.2.}, \ref{P2.3.}, \ref{P2.4.} and \ref{P1.1.}, the pair $$\left(\left(\mathcal{P}_{\gamma_{s, s_1}}^{m, ev}, \pi^{m, ev}(\cdot ) \right), \left( \prod_{\gamma_{s, s_1}}^{m, ev}, \pi_{\gamma_{s, s_1}}^{m, ev}(\cdot ) \right)\right)_{m=1}^{\infty}$$ is coherent. Since $\beta_1 = \beta_2 = \beta_3 = 1$, it follows by \cite[Remark 3.3]{PR14} that the pair $\left(\left(\mathcal{P}_{\gamma_{s, s_1}}^{m, ev}, \pi^{m, ev}(\cdot ) \right), \left( \prod_{\gamma_{s, s_1}}^{m, ev}, \pi_{\gamma_{s, s_1}}^{m, ev}(\cdot ) \right)\right)_{m=1}^{\infty}$ is compatible with $\prod_{\gamma_{s, s_1}}$. So, we have the following result. \begin{theorem} The sequence $\left(\left(\mathcal{P}_{\gamma_{s, s_1}}^{m, ev}, \pi^{m, ev}(\cdot ) \right), \left( \prod_{\gamma_{s, s_1}}^{m, ev}, \pi_{\gamma_{s, s_1}}^{m, ev}(\cdot ) \right)\right)_{m=1}^{\infty}$ is coherent and compatible with $\prod_{\gamma_{s, s_1}}$. \end{theorem} It is worth pointing out that the proofs of Propositions \ref{P2.1.}, \ref{P2.2.} and \ref{P1.1.} only require the sequence classes to be linearly stable and finitely determined. However, to prove Propositions \ref{P2.3.} and \ref{P2.4.}, extra properties were required of the classes involved; more specifically, the sequence classes must be finitely coincident and the target sequence class must be $\mathbb{K}$-closed. These conditions do not appear to be very restrictive, since the main classes of summing ideals in the literature are recovered by our approach. The next section illustrates this point. \section{Applications} For any Banach space $E$, we denote by $\ell_p\langle E \rangle, \ell_p(E)$ and $\ell_p^{w}(E)$ the spaces of Cohen strongly $p$-summing, absolutely $p$-summing and weakly $p$-summable $E$-valued sequences, respectively. In 2014, A. K. Karn and D.
Sinha \cite{KS14} introduced the space $\ell_p^{mid}(E)$, which was studied in more detail by G. Botelho, J. Campos and J. Santos in \cite{BCS17}. These papers established the inclusions \begin{equation}\label{Inclusoes} \ell_p\langle E \rangle \subset \ell_p(E) \subset \ell_p^{mid}(E) \subset \ell_p^w(E). \end{equation} Many operators in the literature are designed to ``improve'' the convergence of sequences. For example, the absolutely summing operators transform weakly $p$-summable sequences into absolutely $p$-summable sequences. Following this idea, we can define several classes of operators that improve the convergence of sequences. In the next two examples, we present classes that are already known and which are particular cases of our work. In the subsequent examples, we present a few classes of operators that are not yet available in the literature, although they can easily be obtained through the construction presented in this work. \begin{example} Let $\mathcal{P}_{(p, q)}^{m, ev}$ be the space of everywhere absolutely summing $m$-homogeneous polynomials and $\prod_{(p, q)}^{m, ev}$ be the space of everywhere absolutely summing multilinear operators. For more details about this class, see \cite{BBDP07}. Given that the sequence classes involved, $\ell_p^{w}(\cdot)$ and $\ell_p(\cdot)$, are linearly stable, finitely determined and finitely coincident, and that $\ell_p(\cdot)$ is $\mathbb{K}$-closed, it immediately follows that the pair $\left(\left(\mathcal{P}_{(p, q)}^{m, ev}, \|\cdot \|_{ev^2} \right), \left(\prod_{(p, q)}^{m, ev}, \|\cdot \|_{ev^2(p, q)} \right) \right)_{m=1}^{\infty}$ is coherent and compatible with $\prod_{(p, q)}$. \end{example} \begin{example} Let $\mathcal{P}_{Coh, p}^{m, ev}$ be the space of everywhere Cohen strongly $p$-summing $m$-homogeneous polynomials and $\mathcal{L}_{Coh, p}^{m, ev}$ be the space of everywhere Cohen strongly $p$-summing multilinear operators.
For more details about this class, see \cite{C13-tese,S13}. Given that the sequence classes involved, $\ell_p\langle\cdot\rangle$ and $\ell_p(\cdot)$, are linearly stable, finitely determined and finitely coincident, and that $\ell_p\langle\cdot\rangle$ is $\mathbb{K}$-closed, it immediately follows that the pair $\left(\left(\mathcal{P}_{Coh, p}^{m, ev}, \pi^{m, ev} \right), \left(\mathcal{L}_{Coh, p}^{m, ev}, \pi_{Coh, p}^{m, ev} \right) \right)_{m=1}^{\infty}$ is coherent and compatible with $\mathcal{D}_{p}$. \end{example} Note that the classes of multilinear operators and homogeneous polynomials defined in these two examples consist of maps that transform weakly $p$-summable sequences into absolutely $p$-summable sequences, and absolutely $p$-summable sequences into Cohen strongly $p$-summable sequences. However, in view of the inclusions (\ref{Inclusoes}), we can present several classes of multilinear operators and homogeneous polynomials that are not yet found in the literature. In this way, our approach establishes an interesting result for these classes. We can consider the following examples: \begin{example} The classes of multilinear operators and homogeneous polynomials that transform mid $p$-summable sequences into absolutely $p$-summable sequences; in other words, we take $\gamma_s = \ell_p$ and $\gamma_{s_1} = \ell_p^{mid}$. We call these classes mid strongly $p$-summing multilinear operators and homogeneous polynomials. \end{example} \begin{example} The classes of multilinear operators and homogeneous polynomials that transform weakly $p$-summable sequences into mid $p$-summable sequences; in other words, we take $\gamma_s = \ell_p^{mid}$ and $\gamma_{s_1} = \ell_p^{w}$. We call these classes weakly mid $p$-summing multilinear operators and homogeneous polynomials.
\end{example} \begin{example} The classes of multilinear operators and homogeneous polynomials that transform weakly $p$-summable and mid $p$-summable sequences into Cohen strongly $p$-summable sequences; in other words, we take $\gamma_s = \ell_p\langle \cdot \rangle$ and $\gamma_{s_1} = \ell_p^w, \ell_p^{mid}$, respectively. We call these classes weakly Cohen $p$-summing and mid Cohen $p$-summing multilinear operators and homogeneous polynomials, respectively. \end{example} \section*{Acknowledgments} The authors thank Geraldo Botelho for several helpful conversations and suggestions.
\section{Introduction} A supersymmetric grand unified description\cite{Dimopoulos:1981yj, Dimopoulos:1981zb, Ibanez:1981yh, Sakai:1981gr, Einhorn:1981sx, Marciano:1981un} of the fundamental forces of nature has been the holy grail of particle physics for many years now. Such a unified description would bring some order into the chaotic world of particle representations. In addition, the many different parameters of the Standard Model can be tied down using a grand unified symmetry. Unfortunately, most such unified descriptions in 4 dimensions are plagued by serious issues. Two notable problems with 4-dimensional supersymmetric grand unified theories (SUSY GUTs) are the Higgs doublet-triplet splitting problem and the complicated potentials required to break the grand unified symmetry down to the Standard Model gauge group. Apart from these theoretical hindrances, the major setback to 4D SUSY GUTs came with the experimental non-observation of proton decay at the lifetimes predicted by the models\cite{Dermisek:2000hr,Murayama:2001ur}. SuperK places the current lower bound on the proton lifetime ($p \rightarrow e^+ \pi^0$) at $1 \times 10^{34}$ years\cite{ichep10}. Also, SUSY GUTs, with the standard CMSSM scenario of SUSY breaking, require GUT-scale threshold corrections of about $-$3\% in order to fit the low energy value of the strong coupling. An elegant and definitive solution to the theoretical issues stated above was proposed in models of orbifold GUTs. Grand unified theories constructed in higher dimensional spaces could be reduced to 4-dimensional GUTs by compactifying the extra dimensions on specific manifolds\cite{Dienes:1998vh,Dienes:1998vg}. By doing so, it was found that the Higgs doublet-triplet problem could be solved in a simple manner by choosing the correct parities along the strong and the weak directions\cite{Kawamura:2000ev,Hall:2001pg}.
Many orbifold GUTs have been constructed since then\cite{Contino:2001si, Asaka:2001eh, Hebecker:2001wq, Hall:2001zb,Dermisek:2001hp,Hall:2001xb,Kim:2002im}, with interesting phenomenology and a realistic supersymmetric spectrum. The Kaluza-Klein tower of states that arises in these extra-dimensional GUTs can also account for the GUT scale threshold corrections\cite{Hall:2001pg,Hall:2001xb,Kim:2002im,Dundee:2008ts, Anandakrishnan:2011zn}. It must be pointed out that orbifold GUT model-building in field theory mirrored the earlier work on heterotic string theory constructions\cite{Dixon:1985jw, Dixon:1986jc, Breit:1985ud, Ibanez:1987sn}. In recent years, many different features of orbifold compactifications have been studied both from a phenomenological bottom-up approach and top-down from string theory. Within the context of string theory, gauge coupling unification occurs at the string scale. This may occur with or without an intermediate GUT. However, such theories have the problem that the string scale is typically about 20 times larger than the 4D GUT scale \cite{Kaplunovsky:1987rp,Kaplunovsky:1993rd,Dixon:1990pc}. One might hope that gauge coupling unification can be reconciled with string unification by lowering the string unification scale. It has been argued that non-local breaking of the GUT symmetry via Wilson lines on an anisotropic orbifold can solve the problem of string unification \cite{Ross:2004mi,Hebecker:2004ce,Trapletti:2006xv}. In this paper we provide a self-consistent test of this hypothesis on a particular 6D orbifold. It is possible that this orbifold GUT is an effective low energy theory of some string compactification, but we have not come across any compactification that would lead to an orbifold with the topology discussed here. In this work, we present a 6D model with SU(6) gauge symmetry and N=2 supersymmetry.
In terms of 4D language, such a 6D theory with N=2 SUSY contains one vector adjoint and three chiral adjoints. The model has gauge-Higgs unification, with the Higgs doublets coming from one of the chiral adjoints. The group SU(6) is broken to SU(5) $\times$ U(1)$_X$ via orbifold boundary conditions. Then SU(5) is broken to the Standard Model gauge group and, at the same time, Higgs doublet-triplet splitting is accomplished by a non-local Wilson line. The two extra-dimensions are compactified on an orbifold that can be characterized as a sphere with a cross-cap, as described in \cite{Hebecker:2003we,Hebecker:2004ce,Trapletti:2006xv}. Quarks and leptons, and their respective Yukawa couplings to the Higgs, are localized at the orbifold fixed points, which only retain an N=1 SUSY in 4D with SU(5) $\times$ U(1)$_X$ gauge invariance (see for example Refs. \cite{Hall:2001pg, Hall:2001zb} where this phenomenon has been discussed). The details of the orbifold and the symmetry breaking are discussed in Section \ref{orbifold}. We break SU(6) $\rightarrow$ SU(5) $\times$ U(1)$_X$ using one of the orbifold projections, locally at the fixed points. We then break SU(5) $\rightarrow$ SU(3) $\times$ SU(2) $\times$ U(1)$_Y$ using a Wilson line along the fifth and sixth directions. In Section \ref{unification}, we analyze gauge coupling unification in the SU(6) GUT model constructed on such an orbifold and calculate the GUT-scale threshold corrections in this scenario. We find that, unlike in most popular models of orbifold GUTs, the couplings do not receive any power law corrections above the compactification scale, due to the effective N=4 SUSY in 4D. We analyze the GUT-scale threshold corrections to determine if they are at the required level to match low energy physics. We point out that an example of an orbifold GUT based on a 6D SU(6) theory was considered in \cite{Hall:2001zb}, with a similar feature of gauge-Higgs unification.
The extra-dimensions were compactified on $T^2/(Z_2 \times Z'_2)$ and the authors obtain realistic phenomenology with local GUT breaking. The 6D GUT theory also had an N=2 supersymmetry. As a consequence, the coefficient of the effective 6D quadratic power law dependence of the gauge couplings vanished, but due to the existence of fixed lines the effective 5D linear dependence remained. \section{GUT breaking} \label{orbifold} \subsection{Real Projective Plane} An N=2 supersymmetric SU(6) gauge theory in 6 dimensions is compactified on an orbifold, shown in Fig.~\ref{hebecker}, as described in Hebecker \cite{Hebecker:2003we}. The extra dimensions are compactified on a torus $T^2$ parametrized by (\ensuremath{x_{5}}, \ensuremath{x_{6}}). The two dimensions are identified under the periodicity $x_{(5,6)} = x_{(5,6)} + 2 \pi R_{(5,6)}$, where \ensuremath{R_{5}}\ and \ensuremath{R_{6}}\ are the radii of the torus along the two directions. Two discrete symmetries, the rotation ${\cal Z}$ and a freely acting roto-translation ${\cal Z}'$, defined in Eqs.~(\ref{P}) and (\ref{Pp}), are modded out. Once the first symmetry is modded out, the topology of the compact space is that of a 2-sphere with curvature concentrated at the four conical singularities. The space resembles a pillow with trivial fundamental group, $\pi_1 = \{1\}$. Once the second parity is modded out, the resulting compact space is equivalent to a real projective plane, $RP^2$. It is non-orientable with no boundaries, the curvature is concentrated at the two fixed points denoted by $F_1$ and $F_2$, and $\pi_1 = {\mathbb Z}_2$. The non-orientability of the space can be ascribed to the cross-cap, where opposite points on the circle are identified.
\begin{align} \label{P} {\cal Z} &\qquad& \ensuremath{x_{5}} \rightarrow -\ensuremath{x_{5}}, &\quad& \ensuremath{x_{6}} \rightarrow -\ensuremath{x_{6}} \\ \label{Pp} {\cal Z}'&\qquad& \ensuremath{x_{5}} \rightarrow -\ensuremath{x_{5}} + \pi \ensuremath{R_{5}}, &\quad& \ensuremath{x_{6}} \rightarrow \ensuremath{x_{6}} + \pi \ensuremath{R_{6}}. \end{align} \begin{figure}[ht!] \centering \includegraphics[width=14cm]{manifold.eps} \vspace{-35pt} \caption{The figure shows the manifold at each step of the compactification. After the first step of orbifolding, the space looks like a pillow with four fixed points, denoted by red dots in the center figure. After the second step of orbifolding, as described in \cite{Hebecker:2003we}, this space is equivalent to a real projective plane.} \label{hebecker} \end{figure} We choose to write the particle content of the theory in 4D language. There is one vector superfield, $V$, and three chiral superfields, \ensuremath{\Sigma_{5}}, \ensuremath{\Sigma_{6}}, and $\Phi$.
Using the notation in \cite{Hall:2001zb}, the bulk action in the Wess-Zumino gauge is given by: \begin{small} \begin{eqnarray} S &=& \int d^6 x \Biggl\{ {\rm Tr} \Biggl[ \int d^2\theta \Biggl( \frac{1}{4 k g^2} {\cal W}^\alpha {\cal W}_\alpha \nonumber \\ && \qquad + \frac{1}{k g^2} \Bigl( \Phi \partial_5 \Sigma_6 - \Phi \partial_6 \Sigma_5 - \frac{1}{\sqrt{2}} \Phi [\Sigma_5, \Sigma_6] \Bigr) \Biggr) + {\rm h.c.} \Biggr] \nonumber \\ && \qquad +\int d^4\theta \frac{1}{k g^2} {\rm Tr} \Biggl[(\sqrt{2} \partial_5 + \Sigma_5^\dagger) e^{-V} (-\sqrt{2} \partial_5 + \Sigma_5) e^{V} + \nonumber \\ && \qquad (\sqrt{2} \partial_6 + \Sigma_6^\dagger) e^{-V} (-\sqrt{2} \partial_6 + \Sigma_6) e^{V} \nonumber \\ && \qquad + \Phi^\dagger e^{-V} \Phi e^{V} + \partial_5 e^{-V} \partial_5 e^{V} + \partial_6 e^{-V} \partial_6 e^{V} \Biggr] \Biggr\} \label{eq:5daction} \end{eqnarray} \end{small} \subsection{SU(6) $\rightarrow$ SU(5) $\times$ U(1)$_X$} The 6D N=2 supersymmetric theory that we start with has an effective N=4 SUSY in 4 dimensions. The action of the above discussed parities can be used to break the gauge group SU(6) down to SU(5) $\times$ U(1)$_X$, and at the same time break N = 4 SUSY to N = 1 SUSY (in 4D) \cite{Mirabelli:1997aj}. 
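As a consistency check (ours, not part of the original text), one can see at the block-matrix level how a parity matrix of the form $P = {\rm diag}(i,i,i,i,i,-i)$ accomplishes this breaking. Writing an adjoint-valued field in $5+1$ block form,

```latex
% Conjugation of an su(6) adjoint by P = diag(i I_5, -i):
\begin{equation*}
P \begin{pmatrix} A & B \\ C & D \end{pmatrix} P^{-1}
= \begin{pmatrix} i\,I_5 & 0 \\ 0 & -i \end{pmatrix}
  \begin{pmatrix} A & B \\ C & D \end{pmatrix}
  \begin{pmatrix} -i\,I_5 & 0 \\ 0 & i \end{pmatrix}
= \begin{pmatrix} A & -B \\ -C & D \end{pmatrix}.
\end{equation*}
```

The diagonal blocks $A$ and $D$, which generate SU(5) $\times$ U(1)$_X$, are even under this conjugation, while the off-diagonal blocks $B$ and $C$ (the $\mathbf{5}$ and $\overline{\mathbf{5}}$ components of the adjoint) are odd. This is the origin of the relative signs between diagonal and off-diagonal entries in the parity assignments displayed below, and it is why only SU(5) $\times$ U(1)$_X$ gauge fields can retain zero modes.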
We can break SU(6) to SU(5) $\times$ U(1)$_X$ by requiring the fields to transform as illustrated below under the two parities.\\[10pt] \begin{small} Under the parity ${\cal Z}$: \begin{eqnarray} V(-\ensuremath{x_{5}}, -\ensuremath{x_{6}}) & =& P V(\ensuremath{x_{5}}, \ensuremath{x_{6}}) P^{-1}, \nonumber \\ \ensuremath{\Sigma_{5}}(-\ensuremath{x_{5}}, -\ensuremath{x_{6}}) & =& -P \ensuremath{\Sigma_{5}}(\ensuremath{x_{5}}, \ensuremath{x_{6}}) P^{-1}, \nonumber \\ \ensuremath{\Sigma_{6}}(-\ensuremath{x_{5}}, -\ensuremath{x_{6}}) &=& -P \ensuremath{\Sigma_{6}}(\ensuremath{x_{5}}, \ensuremath{x_{6}}) P^{-1},\nonumber \\ \Phi(-\ensuremath{x_{5}}, -\ensuremath{x_{6}}) &=& P \Phi(\ensuremath{x_{5}}, \ensuremath{x_{6}}) P^{-1}, \label{Ptransform} \end{eqnarray} Under the parity ${\cal Z'}$: \begin{eqnarray} V(-\ensuremath{x_{5}} + \pi \ensuremath{R_{5}}, \ensuremath{x_{6}} + \pi \ensuremath{R_{6}}) &=& V(\ensuremath{x_{5}}, \ensuremath{x_{6}}), \nonumber \\ \ensuremath{\Sigma_{5}}(-\ensuremath{x_{5}} + \pi \ensuremath{R_{5}}, \ensuremath{x_{6}} + \pi \ensuremath{R_{6}}) &=& - \ensuremath{\Sigma_{5}}(\ensuremath{x_{5}}, \ensuremath{x_{6}}), \nonumber \\ \ensuremath{\Sigma_{6}}(-\ensuremath{x_{5}} + \pi \ensuremath{R_{5}}, \ensuremath{x_{6}} + \pi \ensuremath{R_{6}}) &=& \ensuremath{\Sigma_{6}}(\ensuremath{x_{5}}, \ensuremath{x_{6}}), \nonumber \\ \Phi(-\ensuremath{x_{5}} + \pi \ensuremath{R_{5}}, \ensuremath{x_{6}} + \pi \ensuremath{R_{6}}) &=& - \Phi(\ensuremath{x_{5}}, \ensuremath{x_{6}}). \label{Pptransform} \end{eqnarray} \end{small} where P = diag$(i,i,i,i,i,-i)$ breaks SU(6) $\rightarrow$ SU(5) $\times$ U(1)$_X$. The projection ${\cal Z}$ has four fixed points (as shown in Fig.~\ref{hebecker}), and hence the SU(6) symmetry is broken down to SU(5) $\times$ U(1)$_X$ only at those fixed points. The symmetry breaking in this case is said to be localized. On the other hand, the second parity is freely acting (without any fixed points).
Therefore, breaking the gauge symmetry using the second orbifold projection would have led to non-local breaking of the SU(6). It can be shown that the gauge symmetry breaking by this orbifold action can be rewritten as symmetry breaking by a Wilson line. However, as we shall see in the next section, we require an additional Wilson line to further break the SU(5) down to SU(3) $\times$ SU(2) $\times$ U(1)$_Y$. The conditions on the Wilson lines on this orbifold (to be discussed in the next section) do not allow for a minimal execution of the gauge symmetry breaking from SU(6) $\rightarrow$ SU(5) $\times$ U(1)$_X$ $\rightarrow$ SU(3) $\times$ SU(2) $\times$ U(1)$_Y$ $\times$ U(1)$_X$ in a completely non-local way. Hence we choose to break the SU(6) $\rightarrow$ SU(5) $\times$ U(1)$_X$ locally and the SU(5) non-locally using a Wilson line. Under the combined operation (${\cal Z}, {\cal Z}'$) the components of the fields transform as follows:\\[15pt] \begin{footnotesize} \begin{minipage}[l]{0.5\linewidth} \begin{align} V = \left( \begin{array}{c|c|c} (++) (++) (++) & (++) (++) & (-+) \\ (++) (++) (++) & (++) (++) & (-+) \\ (++) (++) (++) & (++) (++) & (-+) \\ \hline (++) (++) (++) & (++) (++) & (-+) \\ (++) (++) (++) & (++) (++) & (-+) \\ \hline (-+) (-+) (-+) & (-+) (-+) & (++) \\ \end{array} \right) \nonumber \\ \Sigma_5 = \left( \begin{array}{c|c|c} (--) (--) (--) & (--) (--) & (+-) \\ (--) (--) (--) & (--) (--) & (+-) \\ (--) (--) (--) & (--) (--) & (+-) \\ \hline (--) (--) (--) & (--) (--) & (+-) \\ (--) (--) (--) & (--) (--) & (+-) \\ \hline (+-) (+-) (+-) & (+-) (+-) & (--) \\ \end{array} \right) \nonumber \end{align} \end{minipage} \begin{minipage}[r]{0.5\linewidth} \begin{align} \Sigma_6 = \left( \begin{array}{c|c|c} (-+) (-+) (-+) & (-+) (-+) & (++) \\ (-+) (-+) (-+) & (-+) (-+) & (++) \\ (-+) (-+) (-+) & (-+) (-+) & (++) \\ \hline (-+) (-+) (-+) & (-+) (-+) & (++) \\ (-+) (-+) (-+) & (-+) (-+) & (++) \\ \hline (++) (++) (++) & (++) (++) & (-+) \\ 
\end{array} \right) \nonumber \\ \Phi = \left( \begin{array}{c|c|c} (+-) (+-) (+-) & (+-) (+-) & (--) \\ (+-) (+-) (+-) & (+-) (+-) & (--) \\ (+-) (+-) (+-) & (+-) (+-) & (--) \\ \hline (+-) (+-) (+-) & (+-) (+-) & (--) \\ (+-) (+-) (+-) & (+-) (+-) & (--) \\ \hline (--) (--) (--) & (--) (--) & (+-) \\ \end{array} \right) \nonumber \\ \end{align} \end{minipage} \end{footnotesize} \vspace{10pt} The parity operations (${\cal Z}, {\cal Z}'$) performed on the coordinate space are symmetries of the Lagrangian, hence the fields in the Lagrangian must be eigenstates of the parity operations. A general field $\varphi \in \{V, \ensuremath{\Sigma_{5}}, \ensuremath{\Sigma_{6}}, \Phi\}$ is, by the definition of the manifold, a periodic function of \ensuremath{x_{5}}\ and \ensuremath{x_{6}}: \begin{eqnarray} \varphi (x, \ensuremath{x_{5}} + 2 \pi \ensuremath{R_{5}},\ensuremath{x_{6}}) = \varphi (x, \ensuremath{x_{5}},\ensuremath{x_{6}}) \nonumber \\ \varphi (x, \ensuremath{x_{5}},\ensuremath{x_{6}} + 2 \pi \ensuremath{R_{6}}) = \varphi (x, \ensuremath{x_{5}},\ensuremath{x_{6}}) \label{periodicity} \end{eqnarray} This allows us to expand the fields as: \begin{equation} \varphi (x, \ensuremath{x_{5}}, \ensuremath{x_{6}}) = \frac{1}{\sqrt{2 \pi \ensuremath{R_{5}} \ensuremath{R_{6}}}} \sum_{m, n = -\infty}^{+ \infty} \ensuremath{\varphi^{(m,n)}} \text{exp} \left[ i \left( \frac{m\ensuremath{x_{5}}}{\ensuremath{R_{5}}} + \frac{n\ensuremath{x_{6}}}{\ensuremath{R_{6}}} \right) \right] \\ \label{expansion} \end{equation} The eigenstates of the parity operations are required to obey: \begin{eqnarray} \varphi_{\pm \widehat \pm} (x_\mu, -\ensuremath{x_{5}}, -\ensuremath{x_{6}}) &=& \pm \varphi_{\pm \widehat \pm} (x_\mu, \ensuremath{x_{5}}, \ensuremath{x_{6}}) \nonumber \\ \varphi_{\pm \widehat \pm} (x_\mu, -\ensuremath{x_{5}}+ \pi \ensuremath{R_{5}}, \ensuremath{x_{6}} + \pi \ensuremath{R_{6}}) &=& \widehat \pm \varphi_{\pm \widehat \pm} (x_\mu, \ensuremath{x_{5}}, \ensuremath{x_{6}}) \end{eqnarray}
which project out even and odd modes that can be written out as: \begin{eqnarray} && \varphi_{\pm \widehat{\pm}}(x, \ensuremath{x_{5}}, \ensuremath{x_{6}}) = \frac{1}{4\sqrt{2 \pi \ensuremath{R_{5}} \ensuremath{R_{6}}}} \nonumber \\ &&\times \sum_{m, n} \left[(\varphi^{(m,n)} \pm \varphi^{(-m,-n)}) \widehat{\pm} (-1)^{m-n}(\varphi^{(-m,n)} \pm \varphi^{(m,-n)}) \right]\text{exp} \left[ i \left( \frac{m\ensuremath{x_{5}}}{\ensuremath{R_{5}}} + \frac{n \ensuremath{x_{6}}}{\ensuremath{R_{6}}} \right) \right] \nonumber\\ \label{modeexp} \end{eqnarray} In the above three expressions, $\pm$ denotes states that are even/odd under the first parity operation, and $\widehat{\pm}$ denotes states that are even/odd under the second parity. The massless modes come only from the $+\widehat{+}$ (hereafter denoted as ++) parity modes. The above spectrum is illustrated in Fig. \ref{states}. \begin{figure}[ht] \begin{minipage}[c]{0.5\linewidth} \centering \includegraphics[width=5.8cm]{statespp.eps} \vspace{15pt} \end{minipage} \begin{minipage}[c]{0.5\linewidth} \centering \includegraphics[width=5.8cm]{statespm.eps} \vspace{15pt} \end{minipage} \begin{minipage}[c]{0.5\linewidth} \centering \includegraphics[width=5.8cm]{statesmp.eps} \end{minipage} \begin{minipage}[c]{0.5\linewidth} \centering \includegraphics[width=5.8cm]{statesmm.eps} \end{minipage} \caption{\label{states}\footnotesize The mode expansion in Eq.(\ref{modeexp}) shows where the various parity eigenstates exist. Notice that this figure depicts only the positive parts of the (m,n) values while for the calculations they should be summed over both positive and negative integers. It is clear from the figure that only the $(++)$ fields have zero modes.} \end{figure} \subsection{SU(5) $\rightarrow$ SU(3) $\times$ SU(2) $\times$ U(1)$_Y$} We now introduce a Wilson line to break the symmetry down to the Standard Model.
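Before doing so, the parity content of Fig. \ref{states} can be cross-checked directly from the mode expansion in Eq. (\ref{modeexp}). The following sketch is our own (the function names are not from any library): it tests whether a given $(m,n)$ level supports a state of given $({\cal Z}, {\cal Z}')$ parity by checking whether the linear combination of Fourier modes is non-vanishing.

```python
from collections import defaultdict

def mode_coefficients(m, n, s1, s2):
    """Coefficients of the Fourier modes phi^(m,n) entering the parity
    eigenstate of the mode expansion:
      phi^(m,n) + s1*phi^(-m,-n) + s2*(-1)**(m-n)*(phi^(-m,n) + s1*phi^(m,-n)),
    with s1, s2 = +1/-1 the Z and Z' parities."""
    c = defaultdict(int)
    sign = s2 * (-1) ** (m - n)
    c[(m, n)] += 1
    c[(-m, -n)] += s1
    c[(-m, n)] += sign
    c[(m, -n)] += sign * s1
    return c

def mode_exists(m, n, s1, s2):
    """True if the (m,n) level contributes a state of parity (s1, s2)."""
    return any(v != 0 for v in mode_coefficients(m, n, s1, s2).values())

# only the (++) fields have a zero mode at (m,n) = (0,0)
assert mode_exists(0, 0, +1, +1)
assert not any(mode_exists(0, 0, s1, s2)
               for s1, s2 in [(+1, -1), (-1, +1), (-1, -1)])
```

Running the same check along the axes reproduces the pattern of Fig. \ref{states}: on the $m$ axis the $(++)$ and $(--)$ states sit at even $m$ and the mixed parities at odd $m$, with the analogous statement on the $n$ axis, while off the axes every parity assignment appears.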
A gauge field, $A_M \equiv \sum_a A_M^a T^a$, transforms under a gauge transformation as follows: \begin{equation} A_M(x_\mu, \ensuremath{x_{5}}, \ensuremath{x_{6}}) \rightarrow U A_M(x_\mu, \ensuremath{x_{5}}, \ensuremath{x_{6}}) U^\dagger - i U \partial_M U^\dagger \end{equation} where $T^a$ correspond to the generators of the gauge group.\footnote{This is the remaining gauge symmetry of the supersymmetric theory in the Wess-Zumino gauge.} Now consider a constant background gauge field along the fifth and sixth directions: \begin{equation} A_5 = \frac{1}{4\ensuremath{R_{5}}} T \quad \text{and} \quad A_6 = \frac{1}{4\ensuremath{R_{6}}} T \end{equation} where $T$ is the generator (up to a constant) that breaks SU(6) down to SU(3) $\times$ SU(3) $\times$ U(1) given by:\footnote{This constant background field is consistent with the parity operation $A_5 \rightarrow - A_5$ with the additional periodic gauge transformation, such that $A_5' = U(\ensuremath{x_{5}}) (- A_5) U(\ensuremath{x_{5}})^\dagger -i U(\ensuremath{x_{5}}) \partial_{\ensuremath{x_{5}}} U(\ensuremath{x_{5}})^\dagger \equiv A_5$ and $U(\ensuremath{x_{5}}) = \text{exp} \left( -i \frac{\ensuremath{x_{5}}}{\ensuremath{R_{5}}} \frac{T}{2} \right)$ is periodic under $\ensuremath{x_{5}} \rightarrow \ensuremath{x_{5}} + 2 \pi \ensuremath{R_{5}}$ up to an element of the center of the group SU(6) \cite{Dermisek:2002ri}.} \begin{align} T = \left( \begin{array}{cccccc} 1& & & & & \\ & 1 & & & & \\ & & 1 & & & \\ & & & -1 & & \\ & & & & -1& \\ & & & & & -1 \\ \end{array} \right) \end{align} Note that the choice of the background gauge fields must obey some strict constraints. For example, the space group generators obey: \begin{align} {\cal Z}^2 = \mathds{1}, \qquad {\cal Z'}^2 = T_6 \end{align} The second condition implies that the action of the parity ${\cal Z'}$ is equivalent to the holonomy coming from the gauge field along the sixth direction.
In addition, \begin{equation} {\cal Z Z' Z Z'} = T_5^{-1} \end{equation} Rewriting the above relation of the space group generators as holonomies, we get: \begin{equation} G({\cal Z}^2) G({\cal Z'}^2) = G(T_5^{-1}) \end{equation} where we have used the fact that U(1) holonomies commute. Noting that $G(T_5^{-1})$ = $G(T_5)$, we find that the holonomies should obey the condition: \begin{equation} G(T_5) = G(T_6) \label{holonomies} \end{equation} This statement tells us that the Wilson lines cannot be independent along the two extra dimensions\footnote{We are thankful to the referee for pointing this out.}. The presence of such a background gauge field breaks the gauge symmetry.\footnote{This mechanism is popularly known as the \textit{Hosotani} mechanism or \textit{Wilson-line} symmetry breaking \cite{Hosotani:1983xw, Candelas:1985en, Witten:1985xc}.} The constant background fields introduce a holonomy equal to $W = \text{exp} \left( i \oint A_{5} d \ensuremath{x_{5}} + i \oint A_{6} d \ensuremath{x_{6}} \right)$. This non-trivial holonomy affects the spectrum of Kaluza-Klein states. In an equivalent picture \cite{Hall:2001tn, Dermisek:2002ri}, the background gauge field can be gauged away completely; it vanishes when the gauge transformation is chosen as \begin{equation} U(\ensuremath{x_{5}}, \ensuremath{x_{6}}) = \text{exp} \left[i \left( \frac{\ensuremath{x_{5}}}{\ensuremath{R_{5}}} + \frac{\ensuremath{x_{6}}}{\ensuremath{R_{6}}} \right) \frac{T}{4} \right] \label{eqn:gauge} \end{equation} Nevertheless, the physics remains unchanged, and we determine the change in the KK spectrum due to the non-trivial holonomy (or Wilson line).
Under the gauge transformation operator, Eq.(\ref{eqn:gauge}), a generic adjoint field $\varphi$ transforms as: \begin{equation} \varphi'(x_\mu, \ensuremath{x_{5}}, \ensuremath{x_{6}}) = U(\ensuremath{x_{5}}, \ensuremath{x_{6}}) \varphi(x_\mu, \ensuremath{x_{5}}, \ensuremath{x_{6}}) U^\dagger(\ensuremath{x_{5}}, \ensuremath{x_{6}}) \end{equation} which allows us to rewrite the gauge transformed wave function as \begin{equation} \varphi'(x_\mu, \ensuremath{x_{5}}, \ensuremath{x_{6}}) = \text{e}^{ i \left( \frac{\ensuremath{x_{5}}}{\ensuremath{R_{5}}} + \frac{\ensuremath{x_{6}}}{\ensuremath{R_{6}}} \right) \frac{\ensuremath{I_\rho}}{4}} \varphi(x_\mu, \ensuremath{x_{5}}, \ensuremath{x_{6}}) \label{transformed} \end{equation} where $I_\rho$ is the eigenvalue of the generator $T$ and $ \varphi(x_\mu, \ensuremath{x_{5}},\ensuremath{x_{6}})$ is the untransformed wave function as defined in Eq. (\ref{expansion}). The periodicity condition Eq.(\ref{periodicity}) of the fields then becomes: \begin{eqnarray} \varphi'(x_\mu, \ensuremath{x_{5}}+ 2\pi \ensuremath{R_{5}}, \ensuremath{x_{6}}) &=& P' \varphi'(x_\mu, \ensuremath{x_{5}}, \ensuremath{x_{6}}) P'^\dagger \equiv e^{i \frac{\pi}{2} I_\rho} \varphi'(x_\mu, \ensuremath{x_{5}}, \ensuremath{x_{6}}) \nonumber \\ \varphi'(x_\mu, \ensuremath{x_{5}}, \ensuremath{x_{6}} + 2\pi \ensuremath{R_{6}}) &=& P' \varphi'(x_\mu, \ensuremath{x_{5}}, \ensuremath{x_{6}}) P'^\dagger = e^{i \frac{\pi}{2} I_\rho} \varphi'(x_\mu, \ensuremath{x_{5}}, \ensuremath{x_{6}}) \end{eqnarray} where $P' \equiv \text{exp} \left(i \frac{\pi}{2} T \right) = \text{diag}(i, i, i, -i, -i, -i)$. The above equation reflects the constraints on the Wilson lines that were demonstrated in Eq. (\ref{holonomies}). In addition, we have now re-expressed the Wilson line as a parity operation that breaks SU(6) down to SU(3) $\times$ SU(3) $\times$ U(1).
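As a cross-check of the group theory, one can count the SU(6) generators that survive both the orbifold parity $P$ and the Wilson-line parity $P'$. The sketch below is our own (not code from the text); it uses the fact that conjugating an elementary generator $E_{ab}$ by a diagonal phase matrix gives $P E_{ab} P^{-1} = p_a p_b^* E_{ab}$, so $E_{ab}$ survives exactly when the two phases agree.

```python
P_Z = [1j, 1j, 1j, 1j, 1j, -1j]    # orbifold parity: SU(6) -> SU(5) x U(1)_X
P_W = [1j, 1j, 1j, -1j, -1j, -1j]  # Wilson-line parity: SU(6) -> SU(3) x SU(3) x U(1)

def invariant(parities, a, b):
    """True if E_ab is left invariant by every diagonal parity in the list:
    P E_ab P^-1 = p_a * conj(p_b) * E_ab."""
    return all(abs(p[a] * p[b].conjugate() - 1) < 1e-12 for p in parities)

def count_unbroken(*parities):
    """Dimension of the subalgebra of su(6) commuting with all given parities:
    invariant off-diagonal E_ab plus the 5 diagonal (Cartan) generators,
    which always commute with diagonal parity matrices."""
    n = 6
    off = sum(1 for a in range(n) for b in range(n)
              if a != b and invariant(parities, a, b))
    return off + (n - 1)

assert count_unbroken(P_Z) == 25        # dim SU(5) x U(1) = 24 + 1
assert count_unbroken(P_W) == 17        # dim SU(3) x SU(3) x U(1) = 8 + 8 + 1
assert count_unbroken(P_Z, P_W) == 13   # dim SU(3) x SU(2) x U(1) x U(1)
```

The 13 generators surviving both parities are those of SU(3) $\times$ SU(2) $\times$ U(1)$_Y$ $\times$ U(1)$_X$, in agreement with the combined breaking pattern described in the text.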
Under the combined parity operations on the manifold and the non-vanishing background fields along the fifth and sixth directions, we have achieved gauge symmetry breaking of the SU(6) group to [SU(3) $\times$ SU(2) $\times$ U(1)$_Y$] $\times$ U(1)$_X$. The only choice we had here was a combination of local and non-local GUT breaking. It is possible to have a purely non-local GUT breaking if one started with an SU(5) gauge theory on the same orbifold and chose the second parity to break the SU(5) down to SU(3) $\times$ SU(2) $\times$ U(1) \cite{Hebecker:2003we}. We still have to calculate how the mass spectrum changes as a result of the holonomy due to the gauge field. This can be easily done by looking at the transformed wave function in Eq.(\ref{transformed}) and calculating the eigenvalues \ensuremath{I_\rho}\ of the generator $T$. The eigenvalues $I_\rho$ can be determined by calculating the commutator $\left[T, \varphi \right] $ since $\varphi$ is in the adjoint representation, of the form: \begin{align} \varphi = &\left( \begin{array}{ccccc|ccc|c} &&&&&&&& \\ &&(8,1)_0&&&&(3, \bar 2)_{-5/3}&&(3,1)_{-2/3} \\ &&&&&&&& \\ \hline &&&&&&&& \\ &&(\bar{3}, 2)_{5/3}&&&&(1,3)_0&&(1,2)_1 \\ \hline &&(\bar{3},1)_{2/3}&&&&(1, \bar 2)_{-1} &&(1,1)_0 \\ \end{array} \right)& \nonumber \\ = &\left( \begin{array}{ccccc|ccc|c} &&&&&&&& \\ &&g&&&&X&&T \\ &&&&&&&& \\ \hline &&&&&&&& \\ &&\bar{X}&&&&w&&H_u \\ \hline &&\bar{T}&&&&H_d&&b \\ \end{array} \right)& \end{align} The first line in the above expression shows the quantum numbers of the different blocks that the adjoint field gets broken into after the orbifold projection and holonomy.
We name them appropriately, so that they can be associated with the fields that remain massless in the low energy theory, like the gauge bosons, $g, w, b$ and the Higgs doublets, $H_u, H_d$; and the fields that obtain mass and do not appear in the low energy spectrum like the Higgs triplets $T, \bar{T}$ and states with exotic quantum numbers $X, \bar{X}$. The commutator of the generator $T$ with this quantity is calculated and the eigenvalues are summarized in Table \ref{Irho}. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & $g$ & $w$ & $b$ & $X$ & $\bar{X}$ & $T$ & $\bar{T}$& $H_u$ & $H_d$ \\ \hline $I_\rho$ & 0& 0&0&2&-2&2&-2&0&0\\ \hline \end{tabular} \end{center} \caption{Eigenvalues $I_\rho$ of the generator T acting on the various fields (labelled by $\rho$) in the model.} \label{Irho} \end{table} We then find that the masses of the states in the KK tower are given by \begin{equation} M_{(m,n), \rho}^2 = \frac{(m+\frac{I_\rho}{4})^2}{\ensuremath{R_{5}}^2} + \frac{(n+\frac{I_\rho}{4})^2}{\ensuremath{R_{6}}^2} \end{equation} The massless states are those which are even under both parities and have zero eigenvalue under the holonomy. These turn out to be only the standard model gauge bosons and the Higgs doublets, $H_u, H_d$ coming from the chiral adjoint $\Sigma_6$. Finally, we also note that at the two fixed points, $F_1$ and $F_2$, which are located at $(0,0)$ and $(0, \pi \ensuremath{R_{6}})$, there is only an SU(5) symmetry, whereas the bulk has the full SU(6). The three families of quarks and leptons are also assumed to sit at these singularities coming in three ($\bf 10_F + \bar 5_F$) representations. The Yukawa couplings are also assumed to be localized at these fixed points. They require superpotential terms of the form $\bf 10_F \ 10_F \ 5_{\Sigma_6} + 10_F \ \bar 5_F \ \bar 5_{\Sigma_6}$ where the indices are contracted in an obvious way.
The SU(5) relation $\lambda_b = \lambda_\tau$ works for the third family but not for the first two. It is possible that interactions with matter in the bulk could help with this issue, but this is beyond the scope of this paper. \subsection{Proton decay} Dimension 6 operators for proton decay are suppressed by the square of the inverse of the smallest compactification scale. We will see that this is near the 4D GUT scale and thus the proton lifetime is completely consistent with the experimental bounds. On the other hand, dimension 5 operators for proton decay are only suppressed by a single inverse power of the compactification scale. However, if we assume that quarks and leptons only couple to the chiral adjoints containing the Higgs fields, there are no dimension 5 operators for proton decay generated when integrating out the color triplet Higgs fields. This can be attributed to an unbroken $\mathbb{Z}_4^R$ symmetry \cite{Lee:2010gv} where the superpotential has charge 2, families have charge 1, $\{ \Sigma_{5, 6}, \ 6, \ \bar 6 \}$ have charge 0, and $\{ S, \ \Phi \}$ have charge 2. \section{Threshold Corrections} \label{unification} 4D SUSY GUTs require extra states to contribute a small amount of threshold corrections at the GUT scale in order to agree with low-energy measurements. Conventionally, the GUT-scale threshold correction (defined at the 4D GUT scale) is: \begin{equation} \epsilon_3 = \frac{\alpha_{3} -\alpha_{GUT}}{\alpha_{GUT}}. \label{4d} \end{equation} The running coupling constants in the 4D MSSM can be summarized by: \begin{equation} \alpha_{i}^{-1} (Q) = \alpha_{GUT}^{-1} + \frac{b_{i}}{2\pi} \log \frac{M_{GUT}}{Q} - \alpha_{GUT}^{-1} \frac{\epsilon_3}{(1 + \epsilon_3)} \delta_{i3} \label{4drg} \end{equation} where $\delta_{i3}$ denotes that the term appears only for $i=3$ (the coupling $\alpha_3$).
The exact amount of threshold corrections required from the extra states is usually model-dependent, but they must be at the level of a few percent. For the most popular scenarios of the MSSM with unified gaugino masses, this number turns out to be about -3\%. We would like to calculate the effect of the infinite tower of Kaluza-Klein (KK) states on the running of the coupling constants in the orbifold model that we have just constructed. These additional contributions to the running of the coupling constants from KK modes can be written as:\footnote{We have followed the analysis of Ref. \cite{Ghilencea:2003kt} in what follows. The details can be found in Appendix \ref{kki}.} \begin{equation} \frac{4 \pi}{g_{i}^{2}(\mu)} = \frac{4\pi}{g^{2}(\Lambda)} + \sum_{\rho} \Omega_{i, \rho} (\mu) \label{org} \end{equation} where \begin{equation} \Omega_{i, \rho} (\mu) \equiv \frac{1}{4 \pi} \sum_{(m,n) \in Z} \beta_{i, \rho} \int_{\xi}^{\infty} \frac{dt}{t} e^{- \pi t \frac{M_{(m,n), \rho}^2 }{\mu^2}} e^{-\pi \chi t} \label{KKintegral} \end{equation} includes one-loop corrections from both massive and massless states in the theory. $\xi$ is the ultraviolet (UV) regulator introduced since the integral is UV-divergent. $\chi$ is an infrared (IR) regulator introduced since the above quantity diverges for the special case when there are massless states in the KK tower. The corrections come from each state $\rho$ that appears in the spectrum, with an associated beta-function coefficient, $\beta_{i,\rho}$, summarized in Table \ref{beta} and mass, $M_{(m,n), \rho}^2 $, as calculated in the previous section: \begin{equation} M_{(m,n), \rho}^2 = \frac{(m+\frac{I_\rho}{4})^2}{\ensuremath{R_{5}}^2} + \frac{(n+\frac{I_\rho}{4})^2}{\ensuremath{R_{6}}^2} \label{mass} \end{equation} We evaluate the expression in Eq. (\ref{KKintegral}) in three different regions in the $m$-$n$ plane shown in Fig. \ref{states} and then sum up the contributions to find the total corrections to the couplings.
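For concreteness, the mass formula of Eq. (\ref{mass}) is easy to tabulate. The sketch below uses our own helper functions (the parity restrictions on which $(m,n)$ actually occur are ignored here) and verifies that towers with $\ensuremath{I_\rho} = 2$ have no massless mode, while towers with $\ensuremath{I_\rho} = 0$ do:

```python
def kk_mass_sq(m, n, I_rho, R5, R6):
    """KK mass-squared of Eq. (mass), with the Wilson-line shift I_rho/4."""
    return ((m + I_rho / 4.0) / R5) ** 2 + ((n + I_rho / 4.0) / R6) ** 2

def lightest(I_rho, R5, R6, N=5):
    """Lightest state in the tower (scanning |m|, |n| <= N)."""
    return min(kk_mass_sq(m, n, I_rho, R5, R6)
               for m in range(-N, N + 1) for n in range(-N, N + 1))

# g, w, b, H_u, H_d (I_rho = 0) keep a zero mode; X, T (I_rho = 2) do not:
assert abs(lightest(0, 1.0, 1.0)) < 1e-15
assert abs(lightest(2, 1.0, 1.0) - 0.5) < 1e-12   # (1/2R5)^2 + (1/2R6)^2 at R5=R6=1
```

Together with the parity projections of Fig. \ref{states}, this is what removes the Higgs triplets and exotics from the massless spectrum.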
\subsection{States at m = 0 and n = 0} In this case, the contribution to the threshold corrections is: \begin{equation} \Omega_{i, \rho}^{00} (\mu) = \frac{1}{4 \pi} \beta_{i, \rho} \int_{\xi}^{\infty} \frac{dt}{t} e^{- \pi t \frac{M_{(0,0), \rho}^2 }{\mu^2}} e^{-\pi \chi t} \end{equation} We saw earlier that the only states at the m=0, n=0 point are the (++) modes. The (++) modes come from the N=1 SUSY vector fields $g,w,b, X, \bar{X},$ and chiral adjoint fields $T, \bar{T}, H_u, H_d$. The beta-function coefficients for these states are summarized in Table \ref{beta}. Using the results from the Appendix, we find: \begin{small} \begin{eqnarray} \Omega_i^{00} = \frac{b^{++}_i (\ensuremath{I_\rho} = 0)}{4 \pi} \Gamma \left[0, \pi \xi \chi \right] + \frac{b^{++}_i (\ensuremath{I_\rho} = 2)}{4 \pi} \Gamma \left[0, \pi \xi \left( \frac{1}{4 \mu^2 \ensuremath{R_{5}}^2} + \frac{1}{4 \mu^2 \ensuremath{R_{6}}^2} + \chi \right) \right] \end{eqnarray} \end{small} \begin{table}[ht] \begin{center} \begin{small} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Quantum Number& Name & Type & $b_1$ & $b_2$ & $b_3$ & Type & $b_1$ & $b_2$ & $b_3$\\ \hline \hline (\textbf{8},1)$_0$ & $g$ & C & 0 & 0 & 3 & V & 0 & 0 & -9 \\ (1,\textbf{3})$_0$ & $ w $ & C & 0 & 2 & 0 & V & 0 & -6 & 0 \\ \hline (\textbf{3},\textbf{2})$_{\pm 5/3}$ & $X, \bar{X}$ & C & 5/2 & 3/2 & 1 & V & -15/2 & -9/2 & -3 \\ \hline (\textbf{3},1)$_{\pm 2/3}$ & $T, \bar{T}$ & C & 1/5 & 0 & 1/2 & V & -3/5 & 0 & -3/2 \\ \hline (1,\textbf{2})$_{\pm 1}$ & $H_u, H_d$&C & 3/10 & 1/2 & 0 & V & -9/10 & -3/2 & 0 \\ \hline \end{tabular} \end{small} \caption{Nomenclature, Quantum numbers, and beta-function coefficients for the various states in the spectrum. \label{beta}} \end{center} \end{table} \subsection{m axis, n = 0} Figure \ref{states} shows that the $(++)$ and $(--)$ states live only at even m whereas $(+-)$ and $(-+)$ states live at odd m. The absence of states at certain m has to be accounted for while evaluating the integral.
The details of evaluating the odd and even integrals are explicitly presented in Appendix \ref{kki} and the result is: \begin{small} \begin{eqnarray} \Omega_i^{m0} &=& \frac{b_i^{(++)} (\ensuremath{I_\rho} = 0)}{4 \pi} {\cal R}_1^E \left[ \xi \nu_1, 0, \frac{\delta_1}{\nu_1} \right] + \frac{b_i^{(++)} (\ensuremath{I_\rho} = 2)}{4 \pi} {\cal R}_1^E \left[ \xi \nu_1, 1/2, \frac{\delta_1}{\nu_1} \right] \nonumber \\ &+& \frac{b_i^{(+-)} (\ensuremath{I_\rho} = 0)}{4 \pi} {\cal R}_1^O \left[ \xi \nu_1, 0, \frac{\delta_1}{\nu_1} \right] + \frac{b_i^{(+-)} (\ensuremath{I_\rho} = 2)}{4 \pi} {\cal R}_1^O \left[ \xi \nu_1, 1/2, \frac{\delta_1}{\nu_1} \right] \nonumber \\ &+& \frac{b_i^{(-+)} (\ensuremath{I_\rho} = 0)}{4 \pi} {\cal R}_1^O \left[ \xi \nu_1, 0, \frac{\delta_1}{\nu_1} \right] + \frac{b_i^{(-+)} (\ensuremath{I_\rho} = 2)}{4 \pi} {\cal R}_1^O \left[ \xi \nu_1, 1/2, \frac{\delta_1}{\nu_1} \right] \nonumber \\ &+& \frac{b_i^{(--)} (\ensuremath{I_\rho} = 0)}{4 \pi} {\cal R}_1^E \left[ \xi \nu_1, 0, \frac{\delta_1}{\nu_1} \right] + \frac{b_i^{(--)} (\ensuremath{I_\rho} = 2)}{4 \pi} {\cal R}_1^E \left[ \xi \nu_1, 1/2, \frac{\delta_1}{\nu_1} \right] \nonumber \\ \Omega_i^{m0} &=& \left( \frac{b_i^{(++)} (\ensuremath{I_\rho} = 0)}{4 \pi} +\frac{b_i^{(--)} (\ensuremath{I_\rho} = 0)}{4 \pi} \right) {\cal R}_1 \left[4 \xi \nu_1, 0, \frac{\chi}{4 \nu_1} \right] \nonumber \\ &+& \left(\frac{b_i^{(+-)} (\ensuremath{I_\rho} = 0)}{4 \pi} + \frac{b_i^{(-+)} (\ensuremath{I_\rho} = 0)}{4 \pi} \right) \left( {\cal R}_1 \left[ 4 \xi \nu_1, \frac{1}{2}, \frac{\chi}{4 \nu_1} \right] + \Gamma \left[0, \pi \xi \left(\nu_1+ \chi \right) \right] \right) \nonumber \\ &+& \left( \frac{b_i^{(+-)} (\ensuremath{I_\rho} = 2)}{4 \pi} + \frac{b_i^{(-+)} (\ensuremath{I_\rho} = 2)}{4 \pi}\right) \Gamma \left[0, \pi \xi \left(\frac{\nu_1}{4} +\frac{\nu_2}{4} + \chi \right) \right] \end{eqnarray} \end{small} where, $\nu_1 = \frac{1}{\mu^2 \ensuremath{R_{5}}^2}$, $\nu_2 = \frac{1}{\mu^2 
\ensuremath{R_{6}}^2}$, and $\delta_1 = \frac{\rho_2}{\mu^2 \ensuremath{R_{6}}^2} + \chi$. The function ${\cal R}_1$ is also defined in Appendix \ref{kki}. In simplifying the above expression, we have also used the fact that when we have complete N=4 SUSY in 4D, the beta-function coefficients sum up to zero. \begin{equation} b_i^{(++)} + b_i^{(+-)} + b_i^{(-+)} + b_i^{(--)} = 0 \end{equation} for all $i$\footnote{We have complete N=4 SUSY in 4D when we have one vector multiplet and three chiral multiplets. In terms of the N=1 fields in 4D, the beta-function coefficients are given by: \begin{equation} b_G = 3 C_2 (G) - N_{\text{chiral}} T(R) \end{equation}}. \subsection{n axis, m = 0} Along this axis, the calculation is similar to the previous case in the sense that the states exist only at certain n. The $(++)$ and $(-+)$ states live only at even n whereas $(+-)$ and $(--)$ states live at odd n. Again, using the relations in Appendix \ref{kki} and evaluating the integrals, we get: \begin{small} \begin{eqnarray} \Omega_i^{0n} &=& \frac{b_i^{(++)} (\ensuremath{I_\rho} = 0)}{4 \pi} {\cal R}_1^E \left[ \xi \nu_2, 0, \frac{\delta_2}{\nu_2} \right] + \frac{b_i^{(++)} (\ensuremath{I_\rho} = 2)}{4 \pi} {\cal R}_1^E \left[ \xi \nu_2, 1/2, \frac{\delta_2}{\nu_2} \right] \nonumber \\ &+& \frac{b_i^{(+-)} (\ensuremath{I_\rho} = 0)}{4 \pi} {\cal R}_1^O \left[ \xi \nu_2, 0, \frac{\delta_2}{\nu_2} \right] + \frac{b_i^{(+-)} (\ensuremath{I_\rho} = 2)}{4 \pi} {\cal R}_1^O \left[ \xi \nu_2, 1/2, \frac{\delta_2}{\nu_2} \right] \nonumber \\ &+& \frac{b_i^{(-+)} (\ensuremath{I_\rho} = 0)}{4 \pi} {\cal R}_1^E \left[ \xi \nu_2, 0, \frac{\delta_2}{\nu_2} \right] + \frac{b_i^{(-+)} (\ensuremath{I_\rho} = 2)}{4 \pi} {\cal R}_1^E \left[ \xi \nu_2, 1/2, \frac{\delta_2}{\nu_2} \right] \nonumber \\ &+& \frac{b_i^{(--)} (\ensuremath{I_\rho} = 0)}{4 \pi} {\cal R}_1^O \left[ \xi \nu_2, 0, \frac{\delta_2}{\nu_2} \right] + \frac{b_i^{(--)} (\ensuremath{I_\rho} = 2)}{4 \pi} {\cal R}_1^O \left[ \xi
\nu_2, 1/2, \frac{\delta_2}{\nu_2} \right] \nonumber \\ \Omega_i^{0n} &=& \left( \frac{b_i^{(++)} (\ensuremath{I_\rho} = 0)}{4 \pi} +\frac{b_i^{(-+)} (\ensuremath{I_\rho} = 0)}{4 \pi} \right) {\cal R}_1 \left[4 \xi \nu_2, 0, \frac{\chi}{4 \nu_2} \right] \nonumber \\ &+& \left(\frac{b_i^{(+-)} (\ensuremath{I_\rho} = 0)}{4 \pi} + \frac{b_i^{(--)} (\ensuremath{I_\rho} = 0)}{4 \pi} \right) \left( {\cal R}_1 \left[ 4 \xi \nu_2, \frac{1}{2}, \frac{\chi}{4 \nu_2} \right] + \Gamma \left[0, \pi \xi \left(\nu_2+ \chi \right) \right] \right) \nonumber \\ &+& \left( \frac{b_i^{(+-)} (\ensuremath{I_\rho} = 2)}{4 \pi} + \frac{b_i^{(--)} (\ensuremath{I_\rho} = 2)}{4 \pi}\right) \Gamma \left[0, \pi \xi \left(\frac{\nu_2}{4}+ \frac{\nu_1}{4} + \chi \right) \right] \end{eqnarray} \end{small} where, $\nu_1 = \frac{1}{\mu^2 \ensuremath{R_{5}}^2}$, $\nu_2 = \frac{1}{\mu^2 \ensuremath{R_{6}}^2}$, and $\delta_2 = \frac{\rho_1}{\mu^2 \ensuremath{R_{5}}^2} + \chi$, as defined in Appendix \ref{kki}. \subsection{Off the axes} \label{mnne0} This case turns out to be rather simple since all the parity eigenstates live at all $(m,n)$ with $m \ne 0$ and $n \ne 0$. This includes one vector and three chiral adjoint multiplets at every such level, which together form a complete N=4 supermultiplet. Thus these excited KK modes do not contribute anything to the running of the coupling constants. \subsection{Putting it all together} The contribution from the four individual cases can be put together with the appropriate beta-function coefficients. In the limit that the regulators can be set to zero, they can be combined with the mass scale $\mu$ and replaced by their relevant UV and IR scales. \begin{align} Q^2 \equiv \pi e^\gamma \chi \mu^2 \Big|_{\chi \rightarrow 0} \qquad \Lambda^2 \equiv \frac{\mu^2}{\xi}\Big|_{\xi \rightarrow 0} \end{align} The functions $\Gamma$ and ${\cal R}_1$ in these limits simplify and these simplified expressions are summarized in Appendix \ref{function_limits}.
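These limits rely on the small-argument behaviour $\Gamma[0, x] \rightarrow -\gamma - \ln x$, which is what trades the regulators $\xi$ and $\chi$ for the physical scales $\Lambda$ and $Q$. A quick numerical check (plain trapezoidal quadrature, not code from the text):

```python
import math

def gamma0(x, upper=50.0, steps=20000):
    """Incomplete gamma Gamma(0, x) = int_x^inf e^-t / t dt,
    computed after the substitution t = e^u (trapezoidal rule)."""
    lo, hi = math.log(x), math.log(upper)
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps + 1):
        u = lo + k * h
        w = 0.5 if k in (0, steps) else 1.0
        total += w * math.exp(-math.exp(u))  # e^{-t} dt/t = e^{-e^u} du
    return h * total

EULER_GAMMA = 0.5772156649015329
x = 1e-4
# Gamma(0, x) ~ -gamma - ln(x) + O(x) for small x
assert abs(gamma0(x) - (-EULER_GAMMA - math.log(x))) < 1e-3
```

With $x = \pi \xi \chi$, this is exactly how $\Gamma[0, \pi\xi\chi]$ turns into $\ln(\Lambda^2/Q^2)$ under the definitions $Q^2 \equiv \pi e^\gamma \chi \mu^2$ and $\Lambda^2 \equiv \mu^2/\xi$.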
The final expression for the threshold corrections at the scale Q coming from all the KK states that exist in the system is given by: \begin{small} \begin{eqnarray} \Omega_{i}(Q) &=& \frac{b_i^{++} (\ensuremath{I_\rho} = 0)}{4 \pi} \text{ln} \frac{\Lambda^2}{Q^2} + \left(\frac{b_i^{+-} (\ensuremath{I_\rho} = 0) + b_i^{-+} (\ensuremath{I_\rho} = 0)}{4 \pi}\right) \text{ln} \left[\frac{\pi \Lambda}{2 M_5} \right]^2 \nonumber \\ &+& \left(\frac{b_i^{+-} (\ensuremath{I_\rho} = 0) + b_i^{--} (\ensuremath{I_\rho} = 0)}{4 \pi}\right) \text{ln} \left[\frac{\pi \Lambda}{2 M_6} \right]^2 \nonumber \\ &+& \frac{b_i^{+-} (\ensuremath{I_\rho} = 2)} {4 \pi} \text{ln} \left[ \frac{4 \Lambda^2}{ M_5^2+ M_6^2}\right] \label{corrections} \end{eqnarray} \end{small} where the scales $M_i, i = 5,6$ are rescaled compactification scales, i.e. $M_i = \frac{\sqrt{\pi e^{\gamma}}}{R_i}$. Note that to arrive at this result, we have used the spectrum in Fig. \ref{states} with mass eigenvalues as shown in Eq.(\ref{mass}). The important feature of this expression is that it tells us that there are no power-law corrections to the couplings at any scale. This is unlike generic scenarios of a (4+$\delta$)D model with $\delta$ compactified dimensions, where the couplings receive power-law corrections proportional to $\left(\frac{\Lambda}{M_C}\right)^{\delta}$ where $M_C$ is the smallest compactification scale. Naively, therefore, one would have expected quadratic corrections to the couplings in the 6D model considered here. It turns out that the quadratic corrections vanish due to the initial N=4 SUSY. This feature was also observed in Ref. \cite{Hall:2001zb} where an SU(6) theory was studied with N=2 SUSY in 6D. The model discussed in \cite{Hall:2001zb}, however, had an effective 5D limit. Hence there were additional linear corrections to the couplings. In the model discussed here, the compactification takes the 6D theory directly down to 4D and hence we find only logarithmic corrections to the couplings.
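Eq. (\ref{corrections}) is straightforward to evaluate numerically. The sketch below is our own code, with the coefficient values taken from Table \ref{finalbeta}; it also checks that only the first term carries the $Q$-dependence, so that below the compactification scales the couplings run with the usual MSSM coefficients $(33/5, 1, -3)$:

```python
import math

# coefficients of the four logarithms in Eq. (corrections), from Table (finalbeta)
B_PP_I0    = (33/5, 1.0, -3.0)   # b_i^{++}(I_rho = 0)
B_PM_MP_I0 = (-6/5, 2.0, 6.0)    # b_i^{+-} + b_i^{-+} (I_rho = 0)
B_PM_MM_I0 = (6/5, 6.0, 6.0)     # b_i^{+-} + b_i^{--} (I_rho = 0)
B_PM_I2    = (27/5, 3.0, 3.0)    # b_i^{+-}(I_rho = 2)

def omega(i, Q, Lam, M5, M6):
    """KK threshold correction Omega_i(Q) of Eq. (corrections)."""
    return (B_PP_I0[i]    / (4*math.pi) * math.log(Lam**2 / Q**2)
          + B_PM_MP_I0[i] / (4*math.pi) * math.log((math.pi*Lam / (2*M5))**2)
          + B_PM_MM_I0[i] / (4*math.pi) * math.log((math.pi*Lam / (2*M6))**2)
          + B_PM_I2[i]    / (4*math.pi) * math.log(4*Lam**2 / (M5**2 + M6**2)))

# below M_5, M_6 the Q-dependence is pure MSSM running (illustrative scales)
Q1, Q2, Lam, M5, M6 = 1e3, 1e16, 6e17, 2e16, 3e16
for i, b in enumerate((33/5, 1.0, -3.0)):
    diff = omega(i, Q1, Lam, M5, M6) - omega(i, Q2, Lam, M5, M6)
    assert abs(diff - b / (2*math.pi) * math.log(Q2 / Q1)) < 1e-9
```

The scales used above are merely illustrative inputs, not the fitted values of the next section.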
\begin{table} \begin{center} \begin{tabular}{|c|c|c|} \hline Coefficients & $(b_1, b_2, b_3)$ \\ \hline $b_i^{++} (\ensuremath{I_\rho} = 0)$ & $(\frac{33}{5}, 1, -3)$ \\ $ b_i^{+-} (\ensuremath{I_\rho} = 0) + b_i^{-+} (\ensuremath{I_\rho} = 0)$ &$(-\frac{6}{5}, 2, 6)$\\ $b_i^{+-} (\ensuremath{I_\rho} = 0) + b_i^{--} (\ensuremath{I_\rho} = 0) $ & $(\frac{6}{5}, 6, 6)$ \\ $b_i^{+-} (\ensuremath{I_\rho} = 2)$ &$(\frac{27}{5},3,3)$ \\ \hline \end{tabular} \caption{Beta-function coefficients relevant for Eq. (\ref{corrections}).} \label{finalbeta} \end{center} \end{table} \section{Results \& Discussion} \label{results} We now compare the result we obtained in Eq. (\ref{corrections}) from the 6D orbifold to the gauge couplings of the low-energy 4D MSSM and determine if the spectrum obtained can account for the correct amount of GUT-scale threshold corrections as required by the standard scenarios of the MSSM, about $-3\%$, when the 4D GUT scale, \ensuremath{M_{GUT}}, is around $3 \times 10^{16}$ GeV. If the low-energy limit of the orbifold construction is the same as the MSSM, then at energies below the smallest of the compactification scales, $M_C$, the couplings should be the same for both theories. Above $M_C$ new states appear in the orbifold GUT and the running of the couplings differs in the two theories. In the 4D MSSM, it is believed that the couplings unify at a grand unification scale, with small corrections from states near that scale that spoil precision unification. If $M_C$ happens to be close to the 4D GUT scale and we obtain the appropriate threshold corrections, then we have an alternate understanding of \ensuremath{M_{GUT}}. The 4D GUT scale in this case is just a fictitious scale obtained by running the couplings from the weak scale up. However, it can now be identified with the compactification scale, where all the new physics arises. At the same time, the real unification naturally happens at the cut-off scale.
This scale would be identified with the string scale, assuming the underlying theory of an orbifold GUT is string theory. At the lowest compactification scale (largest compactification radius), the couplings in the 6D orbifold theory and in the 4D MSSM are given, respectively, by: \begin{eqnarray} \alpha_{i}^{-1} (Q) &=& \alpha^{-1} (\Lambda) + \sum_{\rho} \Omega_{i, \rho} (Q) \nonumber \\ \alpha_{i}^{-1} (Q) &=& \alpha_{GUT}^{-1} + \frac{b_{i}}{2\pi} \log \frac{M_{GUT}}{Q} - \alpha_{GUT}^{-1} \frac{\epsilon_3}{(1 + \epsilon_3)} \delta_{i3} \nonumber \end{eqnarray} We have three equations, one for each coupling of SU(3) $\times$ SU(2) $\times$ U(1)$_Y$, and four unknowns: $\Lambda$, $M_5$, $M_6$, and $\alpha(\Lambda)$, the unified coupling constant of the orbifold theory, given \ensuremath{M_{GUT}}\ and $\epsilon_3$ from the 4D MSSM. We find that we can uniquely solve for $M_5$ and $M_6$ in terms of \ensuremath{M_{GUT}}\ and $\epsilon_3$ and we obtain a curve in the $\alpha - \Lambda$ plane. The details of the solution are elaborated in Appendix \ref{analyticalsolution} and we summarize the solutions obtained: \begin{eqnarray} M_5 &=& \left( m(\epsilon_3)^{({\cal G- H})/2} (m(\epsilon_3)+1)^{{\cal H}/2} e^{{\cal I}/2}\right) \ensuremath{M_{GUT}} \nonumber \\ M_6 &=& \left( m(\epsilon_3)^{({\cal G- H}-1)/2} (m(\epsilon_3)+1)^{{\cal H}/2} e^{{\cal I}/2}\right) \ensuremath{M_{GUT}} \nonumber \\ \alpha^{-1}(\Lambda) &=& - \frac{3}{\pi} \text{ln} \frac{\Lambda^2}{M_{GUT}^2} + \frac{3}{\pi} \text{ln} \left(m(\epsilon_3)^{({\cal G- H})} (m(\epsilon_3)+1)^{{\cal H}} e^{{\cal I}} \right) \nonumber \\ &+& \text{ln} \left(m(\epsilon_3)^{({\cal L- M})} (m(\epsilon_3)+1)^{{\cal M}} e^{{\cal N}} \right) \label{al} \end{eqnarray} \begin{figure}[ht!] \begin{center} \includegraphics[width=8cm]{me3.eps}\end{center} \caption{\label{me3}The figure shows the dependence of $m = \left(\frac{M_5}{M_6}\right)^2$ on $\epsilon_3$.
The statement that the MSSM requires small threshold corrections at the GUT scale translates to anisotropic compactification.} \end{figure} The coefficients ${\cal G, H, I}$ and ${\cal N}$ are given in Table \ref{cal} in Appendix \ref{analyticalsolution}. To analyze the GUT scale threshold corrections, we fix \ensuremath{\alpha_{GUT}^{-1}}\ to be 24 in all further calculations. Benchmark points are shown in Table \ref{benchmark}. The ratio $m = \left(M_5/M_6\right)^2$ depends only on $\epsilon_3$ and is shown in Fig. \ref{me3}. The value of $m$ sets the hierarchy between the two compactification scales, $M_5$ and $M_6$. We analyzed the particle spectrum at intermediate energies in the cases when (i) $M_5 \ \ll \ M_6$, (ii) $M_6 \ \ll \ M_5$, and (iii) $M_5 \ = \ M_6$, to determine the scale associated with the unification of the SU(3) $\times$ SU(2) $\times$ U(1)$_Y$ gauge groups, and to determine whether the SU(6) was broken down to a subgroup at these intermediate scales, reflecting the two-step GUT breaking procedure that we employed. We find two unification scales: the SM gauge groups unify into SU(3) $\times$ SU(3) $\times$ U(1) at the scale $M_5$ in all three cases. Then, further up at the scale $\sqrt{M_5^2 + M_6^2}$, there is a second unification, of SU(3) $\times$ SU(3) into the SU(6) GUT. On the other hand, we do not find a scale associated with the breaking to SU(5) $\times$ U(1)$_X$, which is a typical feature of local GUT breaking, as noted in \cite{Trapletti:2006xv}. \begin{figure}[H] \begin{center} \vspace{1cm} \includegraphics[height=7.4cm, width=9.5cm]{alphastr1.eps}\\ \caption{\label{alpha}Once $M_5$ and $M_6$ are solved for uniquely, we are left with a curve in the $\alpha^{-1}-\Lambda$ plane, as expressed in Eq. (\ref{al}).
The unified coupling at the cut-off scale is in the perturbative regime.} \end{center} \end{figure} It is also interesting to note that the standard scenarios of the MSSM can be embedded in an isotropic or anisotropic orbifold. We find that in the anisotropic as well as isotropic ($M_5 \sim M_6$) cases, the lowest compactification scale is around the 4D GUT scale, making it possible to connect the compactification scale and the 4D GUT scale. For three benchmark points, the curve in the $\alpha^{-1} (\Lambda) - \Lambda$ plane, from Eq. (\ref{al}), is shown in Fig.~\ref{alpha}. Finally, we note that the values of $\alpha^{-1} (\Lambda)$ and $\Lambda$ are not consistent with perturbative heterotic string boundary conditions. In particular, since $\alpha$ depends only on the logarithm of $\Lambda$, it is not possible to embed this orbifold GUT into the weakly coupled regime of the heterotic string, where the value of the GUT coupling constant at the string scale is given by \cite{Dundee:2008tr}: \begin{equation} \alpha^{-1}(\Lambda = M_{string}) = \frac{1}{8} \left(\frac{M_{PL}}{M_{string}} \right)^2 \end{equation} \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline & $\epsilon_3$ & $M_5$ & $M_6$ & $\Lambda$ & $\alpha^{-1} (\Lambda)$ \\ \hline Point 1 & -3.0\%& $0.174 \times 10^{16}$& $2.08 \times 10^{16}$ & $6.0 \times 10^{17}$ & 13.57 \\ \hline Point 2 & 0.0 \% &$3.39 \times 10^{16}$& $ 3.64 \times 10^{16}$ & $6.0 \times 10^{17}$ & 17.47 \\ \hline Point 3 & +3.0 \% &$1.37 \times 10^{17}$& $ 3.44 \times 10^{16}$ & $6.0 \times 10^{17}$ & 18.70 \\ \hline \end{tabular} \caption{\label{benchmark} The table shows three benchmark points. We fix \ensuremath{\alpha_{GUT}^{-1}}\ to be 24 and \ensuremath{M_{GUT}}\ to be $3 \times 10^{16}$ GeV for all points. The smallest compactification scale is naturally of the order of the 4D GUT scale.
All scales are in GeV.} \end{table} \section{Summary} In this work, we discussed a supersymmetric SU(6) gauge theory on an orbifold with the topology of a real projective plane. The compact space was obtained in two steps by orbifolding a rotation and a freely-acting roto-translation. In the process, the gauge symmetry was broken down from SU(6) to SU(5) $\times$ U(1)$_X$ and the N=4 SUSY was reduced to N=2. To further break SU(5) down to the Standard Model, we introduced a non-zero Wilson line along the fifth and sixth directions. This helped to eliminate unwanted light states like the Higgs triplets and to break N=2 to N=1 SUSY. We calculated the Kaluza-Klein spectrum of states coming from this orbifolding and also calculated the threshold corrections coming from these states at the 4D grand unification scale. We find that the threshold corrections coming from the KK states due to compactification on an orbifold with the topology of $RP^2$ are at the percent level, allowing for a realistic 4D MSSM. The solutions allow for threshold corrections between $[-3\%, +2\%]$, accommodating the standard universal gaugino mass scenario like CMSSM as well as non-universal gaugino mass scenarios (especially lighter gluinos, as discussed in \cite{Raby:2009sf, Anandakrishnan:2011zn}). There have been previous calculations of threshold corrections in orbifold GUT models on various orbifolds with local and non-local GUT breaking. We have already pointed out that, unlike in other scenarios, we do not get power-law running of couplings above the compactification scale, owing to the extended N=2 supersymmetry in 6D. The advantage of not having such large power-law corrections is that we do not lose any predictability due to UV scale physics. We should point out that in the work of Trapletti \cite{Trapletti:2006xv}, the author considered non-local GUT breaking and concluded that the running of couplings stops precisely above the compactification scale.
We however find that there are small finite threshold corrections at all scales. Our analysis was a bottom-up approach studying the phenomenology of models on an orbifold with the topology of a projective plane. It would be interesting to explore the possibility of embedding these orbifold GUTs into a more fundamental theory, like string theory. On the other hand, it would be equally interesting to study low energy features like SUSY breaking and spectra. Finally, since the compactification scale is naturally around the 4D GUT scale or larger, one does not have to worry about proton decay from dimension 6 operators. Moreover, proton decay from dimension 5 operators vanishes due to a discrete R symmetry. \section{Acknowledgments} We would like to thank Michael Ratz for useful discussions. The authors acknowledge partial support from DOE grant DOE/ER/01545-893. AA would also like to thank Konstantin Bobkov and Ben Dundee for their helpful insights. \newpage
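The incompatibility with the weakly coupled heterotic string quoted above can be checked with a few lines of arithmetic. The sketch below is our own illustration, not code from the paper: it evaluates the boundary condition $\alpha^{-1}(M_{string}) = \frac{1}{8}(M_{PL}/M_{string})^2$ at the cut-off $\Lambda = 6\times 10^{17}$ GeV of the benchmark points, assuming the full (non-reduced) Planck mass $M_{PL} \approx 1.22\times 10^{19}$ GeV, and compares it with the fitted values $\alpha^{-1}(\Lambda) \approx 13.6$--$18.7$ from Table \ref{benchmark}.

```python
# Cross-check (illustration only): weakly coupled heterotic boundary condition
# alpha^{-1}(M_string) = (M_PL / M_string)^2 / 8, evaluated at the cut-off of
# the benchmark points.  M_PL ~ 1.22e19 GeV is our assumption (full Planck mass).
M_PL = 1.22e19          # GeV
Lambda = 6.0e17         # GeV, cut-off scale of the benchmark points

alpha_inv_heterotic = (M_PL / Lambda) ** 2 / 8.0   # ~ 52

# Fitted alpha^{-1}(Lambda) for the three benchmark points in the table:
alpha_inv_fit = [13.57, 17.47, 18.70]
mismatch = [alpha_inv_heterotic / a for a in alpha_inv_fit]
print(round(alpha_inv_heterotic, 1), [round(m, 1) for m in mismatch])
```

The heterotic prediction comes out near 52, roughly a factor of three above the orbifold-GUT values, in line with the statement that these solutions cannot be embedded in the weakly coupled heterotic string.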
\section{Introduction} The LHC will reach up to ten times the design luminosity $(5-10\times 10^{34}cm^{-2}s^{-1})$, resulting in unprecedented radiation conditions in a collider experiment \cite{Collaboration2011}. The CMS Endcap calorimeters, covering pseudorapidity between 1.6 and 3, will need to be replaced with a high granularity calorimeter and a backing hadron calorimeter during the Phase II upgrades in order to maintain and improve the superior performance of CMS. The backing hadron calorimeter will have scintillator tiles as active media, with either direct coupling of the photodetectors to the tiles or readout through wavelength-shifting (WLS) fibers. In this context, R\&D on radiation-hard, high light yield scintillators and radiation-hard WLS fibers will be crucial. Here, we attempt to identify the most suitable active media and readout options for this upgrade. \section{Scintillator R\&D} Quartz is considered as a radiation-hard material. Quartz plates are extremely radiation-hard \cite{Akgun2006}, but the signal generation is based on measuring \v{C}erenkov light and hence the light level is low. The solution developed to overcome this issue is coating quartz plates with organic and inorganic scintillators such as para-Terphenyl (pTp), Anthracene (AN) and Gallium-doped Zinc Oxide (ZnO:Ga) in order to enhance the light yield. The organic scintillators pTp and AN are aromatic hydrocarbons built from three benzene $(C_{6}H_{6})$ rings. They exhibit blue fluorescence under UV. The evaporation technique is used for coating pTp and the RF-sputtering technique for coating AN on the quartz plates, Fig. \ref{coatings}. \begin{figure}[h] \makebox[\textwidth]{% \includegraphics[scale=0.4]{Evaporation.png}% \\ \includegraphics[scale=0.4]{Sputtering.png}% }% \caption{pTp evaporation (left) and AN RF-sputtering (right) on quartz plates.} \label{coatings} \end{figure} The radiation hardness of pTp was tested with proton beams at CERN and at the Indiana University Cyclotron Facility (IUCF).
The light yield of the pTp sample dropped to 84\% of the initial light yield after 20 MRad proton irradiation, and 80\% of the initial light yield after 40 MRad proton irradiation \cite{Bilki2010}. Inorganic scintillators such as ZnO:Ga are also coated on quartz plates to increase the light yield. ZnO:Ga has a short de-excitation time of 0.7 ns and a very high light yield of 15,000 photons/MeV \cite{Derenzo2002}. Also studied are intrinsically radiation-hard scintillators such as Polyethylene Naphthalate (PEN), Polyethylene Terephthalate (PET) and High Efficiency Mirror (HEM). PEN and PET are bright and inexpensive plastic scintillators. PEN was created by the Japanese company Teijin Chemicals \cite{Teijin2015}. The company initially produced a sample of size 5 mm $\times$ 35 mm $\times$ 35 mm and measured its light yield as 10,500 photons/MeV. PEN scintillates intrinsically in the blue, as can be seen in Fig. \ref{PENscintillation}, with an emission spectrum peaking at 425 nm \cite{nakamura2011evidence}. \begin{figure}[h] \centering \includegraphics[scale=0.5]{PEN.png} \caption{The intrinsic blue scintillation of PEN} \label{PENscintillation} \end{figure} PET is a common type of polyester and it is widely used to make plastic bottles and as a substrate in thin film solar cells. The emission spectrum of PET peaks at 385 nm \cite{Nakamura2013}. Another radiation-hard material is HEM, which is structurally a multilayer of polymer mirrors. We have made a stack of alternating slices of HEM sheet and quartz plates and tested the scintillating properties of the stack. \section{Test Beam Activities and Results} Various tiles were prepared and their timing characteristics, scintillation and transmission properties were studied at the University of Iowa Test Station. The tiles measured 10 cm $\times$ 10 cm with thicknesses of 1 mm and 2 mm. WLS fibers were coupled to the tiles with either sigma- or bar-shaped grooves. Figure \ref{fig:sigmabars} shows the different tile groove geometries.
\begin{figure}[h] \makebox[\textwidth]{% \includegraphics[scale=0.75]{Sigma.png}% \\ \includegraphics[scale=0.204]{BarReal.png}% }% \caption{Picture of a sigma-shape grooved tile with one WLS fiber (left) and a bar-shape grooved tile with several WLS fibers (right).} \label{fig:sigmabars} \end{figure} The test setup consisted of a light-tight box, a 334 nm wavelength UV laser, a Hamamatsu R7525 photomultiplier tube (PMT) \cite{R7525} and a Tektronix TDS 5034 digital oscilloscope. \begin{figure}[h] \centering \includegraphics[scale=0.6]{HETiming.png}% \caption{Signal timing of Kuraray SCSN-81 HE Tile.} \label{fig:HETiming} \end{figure} \begin{figure}[h] \makebox[\textwidth]{% \includegraphics[scale=0.55]{PETTiming.png}% \\ \includegraphics[scale=0.58]{PENTiming.png}% }% \caption{Signal timing of PET (left) and PEN (right).} \label{fig:PENPETTiming} \end{figure} Figures \ref{fig:HETiming} and \ref{fig:PENPETTiming} show the timing characteristics of HE, PET and PEN. The signal timing was characterized by the time it takes the signal to fall from its peak to half (1/2), to 1/e and to 1/10 of the peak value. The peak-to-1/e values for the HE, PET and PEN tiles are 10.56 ns, 6.884 ns and 27.12 ns, respectively. PET has a much faster response than the HE baseline tile, whereas PEN is slower. \begin{figure}[!h] \centering \includegraphics[scale=0.52]{CERNTestBeamResult.png}% \caption{The MIP (muon) response of various tiles, tested at CERN H2 Test Beam Area.} \label{fig:CERNResults} \end{figure} The assembled tiles were also tested at the Fermilab Test Beam Facility (FTBF) and the CERN H2 Test Beam Area for minimum ionizing particle (MIP) response. Figure \ref{fig:CERNResults} shows the MIP (muon) response of various tiles. For this test, the same PMT was used \cite{R7600PMT}. The most probable values (MPVs) of the Landau fits to the charge spectra above 15 fC for PEN, PET, HEM, and quartz coated with pTp and AN are compared with the baseline HE tile.
The measured MPV of HE is 36.15 fC, and the other tiles yield 29.22 fC (PEN), 19.83 fC (PET), 23.86 fC (HEM), 20.82 fC (pTp-coated quartz) and 22.8 fC (AN-coated quartz), respectively. PEN has the closest response to HE. The systematic effects associated with WLS fiber coupling have not been studied in detail. \section{Conclusion} Table \ref{tab:tiles} shows the MIP and timing response summary of the HE, PEN and PET tiles. Although the light yield of PEN is much higher than that of PET, PET has a faster time response than PEN and SCSN-81, which is currently used as the active medium in the Hadron Endcap Calorimeters at CMS. A blended sample of PEN and PET was produced and tested by H. Nakamura, et al., and the light yield of the blended substrate was measured to be 0.85 times that of PEN and much higher than that of PET \cite{Nakamura2013}. The blended sample is yet to be investigated for signal timing properties. \begin{table}[h] \caption{Summary of HE, PEN, and PET comparison.} \begin{center} \begin{tabular}{l|ccc} Tiles & SCSN-81 HE & PEN & PET \\ \hline MIP Response (MPV, fC) & 36.15 & 29.22 & 19.83 \\ Timing Response (Peak to 1/e, ns) & 10.56 & 27.12 & 6.884 \\ \hline \end{tabular} \label{tab:tiles} \end{center} \end{table} The other tiles, quartz with pTp and AN coatings and HEM, also have light yields comparable to HE. The R\&D is still underway in order to find the best radiation-hard and high light yield materials. The extent of the R\&D is not limited to the future CMS upgrades but can find implementation areas in future collider detector experiments and in facilities where measurements in high radiation areas are crucial. \bigskip \bigskip \begin{center}{\large Acknowledgments}\end{center} The author would like to thank Eileen Han at Fermi National Accelerator Laboratory for her assistance with coating quartz plates.
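The fall-time figures of merit used in the test-beam studies (peak to 1/2, peak to 1/e, peak to 1/10 of the signal) are straightforward to extract from a digitized waveform. The sketch below is our own illustration on a synthetic pulse, not the analysis code used for the measurements; the function name, the 0.2 ns sampling and the 10 ns decay constant are ours.

```python
import math

def fall_times(t, v, fractions=(0.5, 1.0 / math.e, 0.1)):
    """Time for the signal to fall from its peak to each given fraction of
    the peak amplitude, using linear interpolation between samples."""
    i_peak = max(range(len(v)), key=lambda i: v[i])
    peak = v[i_peak]
    out = {}
    for frac in fractions:
        thr = frac * peak
        for i in range(i_peak, len(v) - 1):
            if v[i] >= thr >= v[i + 1]:
                # interpolate the threshold-crossing time within the sample step
                dt = (v[i] - thr) / (v[i] - v[i + 1]) * (t[i + 1] - t[i])
                out[frac] = t[i] + dt - t[i_peak]
                break
    return out

# Synthetic pulse: linear rise to a peak at 5 ns, then a 10 ns exponential decay
tau = 10.0
t = [0.2 * i for i in range(400)]                     # 0.2 ns sampling
v = [x / 5.0 if x < 5.0 else math.exp(-(x - 5.0) / tau) for x in t]
ft = fall_times(t, v)
# For a pure exponential decay: peak-to-1/e = tau, peak-to-1/2 = tau*ln 2,
# peak-to-1/10 = tau*ln 10
```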
\section{Introduction}\label{sec:intro} The importance of inputs to a function is commonly measured via Sobol' indices. Those are defined in terms of the functional analysis of variance (ANOVA) decomposition, which is conventionally defined with respect to statistically independent inputs. In applications to computer experiments, it is common that the input space is constrained to a non-rectangular region, or that the input variables have some other known form of dependence, such as a general Gaussian distribution. When the inputs are described by an empirical distribution on observational data it is extremely rare that the variables are statistically independent. Even designed experiments avoid having independent inputs (i.e., a Cartesian product of input levels) when the dimension is moderately large \citep{wu2011experiments}. A common way to address dependence is to build on work by \cite{ston:1994} and \cite{hook:2007} who define an ANOVA for dependent inputs and then define variable importance through that generalization of ANOVA. This is the method taken by \cite{chas:gamb:prie:2012} for computer experiments. The dependent-variable ANOVA leads to importance measures with two conceptual problems: \begin{compactenum}[\quad1)] \item the needed ANOVA is only defined when the random $\boldsymbol{x}$ has a distribution with a density (or mass function) uniformly bounded below by a positive constant times another density/mass function that has independent margins, and \item the resulting importance of a variable can be negative \citep{chas:gamb:prie:2015}. \end{compactenum} The first condition is very problematic. It fails even for Gaussian $\boldsymbol{x}$ with nonzero correlation. It fails for inputs constrained to a simplex. It fails when the empirical distribution of say $(x_{i1}, x_{i2})$ is such that some input combinations are never observed or, by definition, cannot possibly be observed. The second condition is also conceptually problematic. 
A variable on which the function does not depend at all will get importance zero, and would thus rank as more important than a variable on which the function truly does depend but which received negative importance. The Shapley value, from economics, provides an alternative way to define variable importance. As we describe below, Shapley value provides a way to attribute the value created by a team to its individual members. In our context the members are individual input variables. \cite{sobolshapley} derived Shapley value importance for independent inputs where the value is variance explained. The Shapley value of a variable turns out to be bracketed between two different Sobol' indices. \cite{song:nels:staum:2016} recently advocated the use of Shapley value for the case of dependent inputs. They report that it is more suitable than Sobol' indices for such problems. They use the term ``Shapley effects'' to describe variance-based Shapley values. The Shapley value provides an importance measure that avoids the two problems mentioned above: it is available for any function in $L^2$ of the appropriate domain and it never gives negative importance. Although Shapley value solves the conceptual problems, computational problems remain a serious challenge \citep{castro2009polynomial}. The Shapley value is defined in terms of $2^d-1$ models where $d$ is the dimension of $\boldsymbol{x}$. \cite{song:nels:staum:2016} presented a Monte Carlo algorithm to estimate Shapley importance and they apply it to detailed real-world problems. We address only the conceptual appropriateness of Shapley value to variable importance, not computational issues. The outline of this paper is as follows. Section~\ref{sec:notation} introduces our notation, defines the functional ANOVA and the Sobol' indices and presents the dependent-variable ANOVA. Section~\ref{sec:shapley} presents the Shapley value and its use for variable importance.
From the definition there it is clear that Shapley value for variance explained will never be negative. Section~\ref{sec:examples} gives several examples of simple cases and exceptional corner cases where we can derive the Shapley value of variable importance and verify that it is reasonable. Section~\ref{sec:conc} has brief conclusions. Section~\ref{sec:proofs} contains the longer proofs. \section{Notation}\label{sec:notation} We consider real valued functions $f$ defined on a space $\mathcal{X}$. The point $\boldsymbol{x}\in\mathcal{X}$ has $d$ components, and we write $\boldsymbol{x} =(x_1,\dots,x_d)$ where $x_j\in\mathcal{X}_j$. The individual $\mathcal{X}_j$ are ordinarily interval subsets of $\mathbb{R}$ but each of them may be much more general (regions in Euclidean space, functions on $[0,1]$, or even images, sounds, and video). What we must assume is that $\boldsymbol{x}$ follows a distribution $P$ chosen by the user, and that $f(\boldsymbol{x})$ is then a random variable with $\mathbb{E}(f(\boldsymbol{x})^2)<\infty$. When the components of $\boldsymbol{x}$ are independent, then Sobol' indices \citep{sobo:1990,sobo:1993} provide ways to measure the importance of individual components of $\boldsymbol{x}$ as well as sets of them. They are based on a functional ANOVA decomposition. For details and references on the functional ANOVA, see~\cite{sobomat}. \subsection{ANOVA for independent variables} Here is a brief summary of the ANOVA to introduce our notation. For simplicity we will take $f\in L^2[0,1]^d$ with the argument $\boldsymbol{x}=(x_1,\dots,x_d)$ of $f$ uniformly distributed on $[0,1]^d$, but the approach extends straightforwardly to $L^2(\prod_{j=1}^d\mathcal{X}_j)$ with independent not necessarily uniform $x_j\in\mathcal{X}_j$. The set $\{1,2,\dots,d\}$ is written $1{:}d$. For $u\subseteq 1{:}d$, $|u|$ denotes cardinality and $-u$ is the complement $\{1\le j\le d\mid j\not\in u\}$. 
If $u=(j_1,j_2,\dots,j_{|u|})$ then $\boldsymbol{x}_u = (x_{j_1},x_{j_2},\dots,x_{j_{|u|}})\in[0,1]^{|u|}$ and $\mathrm{d}\boldsymbol{x}_u = \prod_{j\in u}\mathrm{d} x_j$. We use $u+v$ as a shortcut for $u\cup v$ when $u\cap v=\emptyset$, especially in subscripts. The ANOVA is defined via functions $f_u\in L^2[0,1]^d$. These functions satisfy $f(\boldsymbol{x}) = \sum_{u\subseteq 1{:}d}f_u(\boldsymbol{x})$. They are defined as follows. First, $f_\emptyset = \int f(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}$ and then \begin{align}\label{eq:deffu} f_u(\boldsymbol{x}) = \int\bigl( f(\boldsymbol{x})-\sum_{v\subsetneq u}f_v(\boldsymbol{x}) \bigr)\,\mathrm{d}\boldsymbol{x}_{-u} \end{align} for $|u|>0$. The integral in~\eqref{eq:deffu} is over $[0,1]^{d-|u|}$ and it yields a function $f_u$ that depends on $\boldsymbol{x}$ only through $\boldsymbol{x}_u$. The effects $f_u$ are orthogonal: $\int f_u(\boldsymbol{x})f_v(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}=0$ when $u\ne v$. The variance component for the set $u$ is $\sigma^2_u=\int f_u(\boldsymbol{x})^2\,\mathrm{d}\boldsymbol{x}$ for $|u|>0$ and $\sigma^2_\emptyset=0$. The variance of $f$ for $\boldsymbol{x}\sim\mathbf{U}[0,1]^d$ is $\sigma^2 = \sum_{u\subseteq1{:}d}\sigma_u^2$. We can define the importance of a set of variables by how much of the variance of $f$ is explained by those variables. The best prediction of $f(\boldsymbol{x})$ given $\boldsymbol{x}_u$ is $$ f_{[u]}(\boldsymbol{x}) \equiv \mathbb{E}( f(\boldsymbol{x})\mid \boldsymbol{x}_u) = \sum_{v\subseteq u}f_v(\boldsymbol{x}).$$ This prediction explains \begin{align}\label{eq:varexpl} \underline{\tau}^2_u \equiv \sum_{v\subseteq u}\sigma^2_v, \end{align} of the variance in $f$. This is one of Sobol's global sensitivity indices. 
His other index is $$\overline{\tau}^2_u \equiv \sum_{v\cap u\ne \emptyset}\sigma^2_v = \sigma^2 - \underline{\tau}^2_{-u}.$$ It is more conventional to use normalized versions $\underline{\tau}^2_u/\sigma^2$ and $\overline{\tau}^2_u/\sigma^2$ but unnormalized ones are simpler for our purposes. The importance of an individual variable $x_j$ is sometimes defined through $\underline{\tau}^2_{\{j\}}$ or $\overline{\tau}^2_{\{j\}}$. If $\underline{\tau}^2_{\{j\}}$ is large then $x_j$ is important and if $\overline{\tau}^2_{\{j\}}$ is small then $x_j$ is unimportant. \subsection{ANOVA for dependent variables} Now suppose that $f$ is defined on $\mathbb{R}^d$ but the argument $\boldsymbol{x}$ does not have independent components. Instead $\boldsymbol{x}$ has distribution $P$. We could generalize~\eqref{eq:deffu} to the Stone-Hooker ANOVA \begin{align}\label{eq:deffug} f_u(\boldsymbol{x}) = \int\bigl( f(\boldsymbol{x})-\sum_{v\subsetneq u}f_v(\boldsymbol{x}) \bigr)\,\mathrm{d} P(\boldsymbol{x}_{-u}) \end{align} but the result would not generally have orthogonal effects. To take a basic example, suppose that $P$ is the $\mathcal{N} \left( \left(\begin{smallmatrix} 0\\0 \end{smallmatrix}\right), \left(\begin{smallmatrix} 1 &\rho\\ \rho &1 \end{smallmatrix}\right) \right) $ distribution for $0<\rho<1$ and let $f(\boldsymbol{x}) = \beta_1x_1+\beta_2x_2$. Then~\eqref{eq:deffug} yields $$ f_\emptyset(\boldsymbol{x})=0,\quad f_{\{1\}}(\boldsymbol{x}) = (\beta_1+\beta_2\rho)x_1,\quad f_{\{2\}}(\boldsymbol{x}) = (\beta_2+\beta_1\rho)x_2 $$ and $f_{\{1,2\}}(\boldsymbol{x})=-\beta_2\rho x_1 -\beta_1\rho x_2. $ These effects are not orthogonal under $P$ and their mean squares do not sum to the variance of $f(\boldsymbol{x})$ for $\boldsymbol{x}\sim P$. 
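A quick Monte Carlo check (our own illustration, with the arbitrary choices $\beta_1=1$, $\beta_2=2$, $\rho=0.5$) confirms that the effects $f_{\{1\}}$ and $f_{\{1,2\}}$ above are correlated under $P$:

```python
import math, random

# Monte Carlo illustration (not from the paper): for the bivariate Gaussian
# example, the effects f_{1}(x) = (b1 + b2*rho)*x1 and
# f_{12}(x) = -b2*rho*x1 - b1*rho*x2 have nonzero covariance under P.
random.seed(0)
b1, b2, rho = 1.0, 2.0, 0.5
n = 200_000
acc = 0.0
for _ in range(n):
    x1 = random.gauss(0.0, 1.0)
    # conditional law: x2 | x1 ~ N(rho*x1, 1 - rho^2)
    x2 = rho * x1 + math.sqrt(1 - rho ** 2) * random.gauss(0.0, 1.0)
    f1 = (b1 + b2 * rho) * x1
    f12 = -b2 * rho * x1 - b1 * rho * x2
    acc += f1 * f12
cov_est = acc / n   # both effects have mean zero
# Closed form using E[x1^2] = 1 and E[x1*x2] = rho:
cov_exact = (b1 + b2 * rho) * (-b2 * rho - b1 * rho ** 2)
print(cov_est, cov_exact)
```

Here `cov_exact` $= -\rho(\beta_1+\beta_2\rho)(\beta_2+\beta_1\rho) = -2.5$, which is nonzero for generic $\beta$ and $\rho$, so the mean squares cannot sum to the variance.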
It is however possible to get a decomposition $f(\boldsymbol{x}) = \sum_{u\subseteq1{:}d}f_u(\boldsymbol{x})$ with a hierarchical orthogonality property \begin{align}\label{eq:hop} \int f_u(\boldsymbol{x}) f_v(\boldsymbol{x}) \,\mathrm{d} P(\boldsymbol{x}) = 0,\quad \forall v\subsetneq u. \end{align} \cite{chas:gamb:prie:2012} give conditions under which a decomposition of $f$ satisfying~\eqref{eq:hop} exists and they use it to define variable importance. They assume that the joint distribution $P$ is absolutely continuous with respect to a product probability measure $\nu$. That is $P(\mathrm{d} \boldsymbol{x}) = p(\boldsymbol{x})\prod_{j\in1{:}d}\nu_j(\mathrm{d} x_j)$ for a density function $p$. They require also that this density satisfies \begin{align}\label{eq:noholes} \exists\, 0<M\le 1,\quad\forall u\subseteq1{:}d,\quad p(\mathrm{d}\boldsymbol{x}) \ge M p(\mathrm{d}\boldsymbol{x}_u)p(\mathrm{d}\boldsymbol{x}_{-u}),\quad\nu-\text{a.e.} \end{align} The joint density is bounded below by a product of two marginal densities. Among other things, this criterion forbids `holes' in the support of $P$. There cannot be regions $R_u\subseteq\mathbb{R}^u$ and $R_{-u}\subseteq\mathbb{R}^{-u}$ with $P(R_u\times R_{-u})=0$ while $\min( P(R_u\times\mathbb{R}^{-u}),P(\mathbb{R}^u\times R_{-u}))>0$. \subsection{Challenges with dependent variable ANOVA} The no holes condition~\eqref{eq:noholes} is problematic in many applications. For example, when $\boldsymbol{x}$ is uniformly distributed on the triangle $$ \{ (x_1,x_2)\in[0,1]^2\mid x_1\le x_2\} $$ then~\eqref{eq:noholes} is violated. More generally, \cite{gilquin2015sobol} and \cite{kucherenko2016sobol} consider functions on non-rectangular regions defined by linear inequality constraints. These and similar regions arise in many engineering problems where safety or costs impose constraints on design parameters.
The simplest distribution with a hole is one with positive probability on the points $$ \{ (0,0), (0,1), (1,0)\} $$ and no others. Sobol's `pick-freeze' methods \citep{sobo:1990,sobo:1993} estimate variable importance by freezing the level of some inputs and then picking new values for the others. For the example here, setting $x_1=1$ implies that $x_2$ cannot be changed at all, which is a severe problem for a pick-freeze approach with dependent inputs. It is not just probability zero holes that cause a problem for dependent variable ANOVA. When $\boldsymbol{x}$ is normally distributed with some nonzero correlations, then~\eqref{eq:noholes} does not hold, and then as we mentioned in the introduction, the dependent-variable ANOVA is unavailable. The second problem we mentioned there is that the dependent variable ANOVA can yield negative estimates of importance. \section{Shapley value}\label{sec:shapley} Shapley value is a way to attribute the economic output of a team to the individual members of that team. In our case, the team will be the set of variables $x_1,x_2,\dots,x_d$. Given any subset $u\subseteq1{:}d$ of variables, the value that subset creates on its own is its explanatory power. A convenient way to measure explanatory power is via \begin{align}\label{eq:uval} \mathrm{val}(u) = \underline{\tau}^2_u \equiv \mathrm{var}( \mathbb{E}( f(\boldsymbol{x})\mid \boldsymbol{x}_u)). \end{align} Here, the empty set creates no value and the entire team contributes $\sigma^2$, which we must now partition among the $x_j$. There are four very compelling properties that an attribution method should have. The following list is based on the account in \cite{wint:2002}. Let $\mathrm{val}(u)\in\mathbb{R}$ be the value attained by the subset $u\subseteq\{1,\dots,d\}\equiv1{:}d$. It is always assumed that $\mathrm{val}(\emptyset)=0$, which holds in our variance explained setting.
The values $\phi_j=\phi_j(\mathrm{val})$ should satisfy these properties: \begin{compactenum}[\quad1)] \item (Efficiency) $\sum_{j=1}^d\phi_j = \mathrm{val}(1{:}d)$. \item (Symmetry) If $\mathrm{val}(u\cup\{i\})=\mathrm{val}(u\cup\{j\})$ for all $u\subseteq 1{:}d-\{i,j\}$, then $\phi_i=\phi_j$. \item (Dummy) If $\mathrm{val}(u\cup\{i\})=\mathrm{val}(u)$ for all $u\subseteq1{:}d$, then $\phi_i=0$. \item (Additivity) If $\mathrm{val}$ and $\mathrm{val}'$ have Shapley values $\phi$ and $\phi'$ respectively then the game with value $\mathrm{val} +\mathrm{val}' $ has Shapley value $\phi_j+\phi'_j$ for $j\in1{:}d$. \end{compactenum} \medskip \cite{shap:1953} showed that the unique valuation $\phi$ that satisfies these axioms attributes value \begin{align*} \phi_j &= \frac1d\sum_{u\subseteq -\{j\}} {d-1\choose |u|}^{-1} \bigl( \mathrm{val}(u\cup\{j\})-\mathrm{val}(u)\bigr) \end{align*} to variable $j$. Defining the value via~\eqref{eq:uval} we get \begin{align} \phi_j &= \frac1d\sum_{u\subseteq -\{j\}} {d-1\choose |u|}^{-1} (\underline{\tau}^2_{u+\{j\}}-\underline{\tau}^2_u).\label{eq:shapfromult} \end{align} From~\eqref{eq:shapfromult} we see that the Shapley value is defined for any function for which $\mathrm{var}(\mathbb{E}(f(\boldsymbol{x})\mid \boldsymbol{x}_u))$ is always defined. The components $\boldsymbol{x}_j$ do not have to be real valued, though $f(\boldsymbol{x})$ must be. Holes in the domain $\mathcal{X}$ do not make it impossible to define a Shapley value. Next, because $\boldsymbol{x}_{u+\{j\}}$ always has at least as much explanatory power as $\boldsymbol{x}_u$ has, we see that $\phi_j\ge0$. That is, no variable has a negative Shapley value. As a result, the Shapley value addresses the two conceptual problems mentioned in the introduction. \cite{song:nels:staum:2016} show that the same Shapley value arises if we use $\mathrm{val}(u) = \mathbb{E}( \mathrm{var}(f(\boldsymbol{x})\mid \boldsymbol{x}_{-u}))$. 
That provides an alternative way to compute Shapley value. The Shapley value simplifies for independent inputs. \begin{theorem}\label{thm:shapleyshare} Let the ANOVA decomposition of a function with $d$ independent inputs have variance components $\sigma^2_u$ for $u\subseteq1{:}d$. If the value of a subset $u$ of variables is $\mathrm{val}(u)=\underline{\tau}^2_u$, then the Shapley value of variable $j$ is $$ \phi_j = \sum_{u\subseteq 1{:}d,\; j\in u} {\sigma^2_u}/{|u|}. $$ \end{theorem} \begin{proof} \cite{sobolshapley}. \end{proof} It follows from Theorem~\ref{thm:shapleyshare} that $\underline{\tau}^2_{\{j\}} \le \phi_j \le \overline{\tau}^2_{\{j\}}$. This is how the Sobol' indices bracket the Shapley value. \section{Special cases}\label{sec:examples} Here we consider some special case distributions and toy functions where we can work out the Shapley value in a closed or nearly closed form. The point of these examples is to show that Shapley gives sensible answers in both regular cases and corner cases. Because $\sigma^2 = \mathrm{var}(\mathbb{E}(f(\boldsymbol{x})\mid \boldsymbol{x}_u)) + \mathbb{E}( \mathrm{var}( f(\boldsymbol{x})\mid \boldsymbol{x}_u))$ we may use \begin{align}\label{eq:othertau} \underline{\tau}_u^2 = \sigma^2-\mathbb{E}( \mathrm{var}(f(\boldsymbol{x})\mid \boldsymbol{x}_u)). \end{align} \subsection{Linear functions} Let $f(\boldsymbol{x}) = \beta_0+\sum_{j=1}^d \beta_jx_j$ where $x_j$ are independent with variances $\sigma^2_j$. It is then easy to find that $\phi_j = \beta_j^2\sigma^2_j$. If we reparameterize $x_j$ to $cx_j$ for $c\ne 0$ then $\beta_j$ becomes $\beta_j/c$ and the importance of this variable remains unchanged as it should. Dependence among the $x_j$ complicates the expression for Shapley effects in linear settings. Shapley value for linear functions has historically been used to partition the $R^2$ quantity (proportion of sample variance explained) from a regression on $d$ variables among those $d$ variables. 
Taking the value of a subset $u$ of variables to be $R^2_u$, the $R^2$ value when regressing a response on predictors $x_j$ for $j\in u$, yields Shapley value \begin{align}\label{eq:lmg} \phi_j = \frac1d\sum_{u\subseteq -\{j\}}{d-1\choose |u|}^{-1}(R^2_{u+\{j\}}-R^2_u). \end{align} This is the LMG measure of variable importance, named after the authors of~\cite{lind:mere:gold:1980}. If we rearrange the $d$ variables into all $d!$ orders, find the improvement in $R^2$ that comes at the moment the $j$'th variable is added to the regression, then \eqref{eq:lmg} is the average of all those improvements. The LMG reference is difficult to obtain. \cite{geni:1993} is another reference, having~\eqref{eq:lmg} as equation (1). \cite{grom:2007} cites several more references on partitioning $R^2$ in regression and discusses alternative measures and criteria for choosing. It is clear that~\eqref{eq:lmg} is expensive for large $d$. Here we consider a population/distribution version of partitioning variance explained among a set of variables acting linearly. We suppose that $\boldsymbol{x} \sim\mathcal{N}( \mu,\Sigma)$ where $\Sigma\in\mathbb{R}^{d\times d}$ is a positive semi-definite symmetric matrix. The function of interest is $f(\boldsymbol{x}) = \beta_0+\boldsymbol{x}^\mathsf{T}\beta$ where $\beta=(\beta_1,\dots,\beta_d)\in\mathbb{R}^d$. If there is an error term as in a linear regression on noisy data, then we can let $x_d$ be that error variable with a corresponding $\beta_d=1$. If $\Sigma$ is not diagonal then the Stone-Hooker ANOVA is not available because~\eqref{eq:noholes} does not hold. Shapley value gives an interpretable expression for general $d$. 
\begin{theorem}\label{thm:gauslin} If $f(\boldsymbol{x})=\beta_0+\beta^\mathsf{T}\boldsymbol{x}$ for $\boldsymbol{x}\sim\mathcal{N}(\mu,\Sigma)$ where $\Sigma\in\mathbb{R}^{d\times d}$ has full rank, then the Shapley effect for variable $j$ is \begin{align*}\phi_j = \frac1d\sum_{u\subseteq-j} {d-1\choose |u|}^{-1} \dfrac{ \mathrm{cov}\bigl(x_j,\boldsymbol{x}_{-u}^\mathsf{T}\beta_{-u}\mid\boldsymbol{x}_u\bigr)^2} {\mathrm{var}(x_j\mid\boldsymbol{x}_u)}. \end{align*} \end{theorem} \begin{proof} See Section~\ref{sec:proofgauslin}. \end{proof} A variable with $\beta_j=0$ can still have $\phi_j>0$. For instance if $\Sigma=\bigl(\begin{smallmatrix}1&\rho\\\rho&1\end{smallmatrix}\bigr)$ and $f(\boldsymbol{x})=x_1$, then we can find directly from~\eqref{eq:shapfromult} that $\phi_2=\rho^2/2$ and $\phi_1=1-\rho^2/2$. For $\rho=\pm1$ we already know this by bijection. The Shapley value works with conditional variances and the Gaussian distribution makes these very convenient. For non-Gaussian distributions the conditional covariance of $\boldsymbol{x}_v$ and $\boldsymbol{x}_w$ given $\boldsymbol{x}_u$ may depend on the specific value of $\boldsymbol{x}_u$, while in the Gaussian case it is simply $\Sigma_{vw}-\Sigma_{vu}\Sigma_{uu}^{-1}\Sigma_{uw}$ for all $\boldsymbol{x}_u$. In a related problem, if we define $\mathrm{val}(u)$ to be $\mathrm{var}(\sum_{j\in u}x_j)$, instead of $\mathrm{var}( \mathbb{E}(\sum_jx_j\mid \boldsymbol{x}_u))$, then the Shapley value of variable $j$ is $\phi_j = \mathrm{cov}(x_j,S)$, where $S=\sum_{j\in1{:}d}x_j$. See \cite{coli:scar:vacc:2016}. This quantity can be negative. For instance, if $d=2$, then $\phi_1=\mathrm{var}(x_1)+\mathrm{cov}(x_1,x_2)$ which is negative when $x_1$ and $x_2$ are negatively correlated and $x_2$ has much greater variance than $x_1$.
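The bivariate Gaussian example with $f(\boldsymbol{x})=x_1$ and correlated standard normal inputs can be verified by evaluating~\eqref{eq:shapfromult} with the closed-form values $\mathrm{val}(u)=\mathrm{var}(\mathbb{E}(f(\boldsymbol{x})\mid\boldsymbol{x}_u))$. The sketch below is our own check; the function name and the choice $\rho=0.6$ are ours.

```python
# Check of the d = 2 Gaussian example: x1, x2 standard normal with
# correlation rho and f(x) = x1, so sigma^2 = 1 and
#   val({})    = 0
#   val({1})   = var(x1)          = 1
#   val({2})   = var(E[x1 | x2])  = var(rho * x2) = rho^2
#   val({1,2}) = var(x1)          = 1
def shapley_d2(val):
    # eq. (shapfromult) written out for d = 2
    phi1 = 0.5 * ((val((1,)) - val(())) + (val((1, 2)) - val((2,))))
    phi2 = 0.5 * ((val((2,)) - val(())) + (val((1, 2)) - val((1,))))
    return phi1, phi2

rho = 0.6
vals = {(): 0.0, (1,): 1.0, (2,): rho ** 2, (1, 2): 1.0}
phi1, phi2 = shapley_d2(lambda u: vals[u])
# Expect phi1 = 1 - rho^2/2 = 0.82 and phi2 = rho^2/2 = 0.18
```

The two values sum to $\sigma^2=1$, as the efficiency axiom requires, and $x_2$ earns importance $\rho^2/2$ despite $\beta_2=0$.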
\subsection{Transformations, bijections and invariance}\label{sec:transbij} We can generalize the linear example to independent random variables that contribute additively: $f(\boldsymbol{x}) = \sum_{j=1}^dg_j(x_j)$. Then $\phi_j = \mathrm{var}( g_j(x_j))$. Replacing $x_j$ by a bijection $\tau_j(x_j)$ and adjusting $g_j$ to $g_j\circ \tau_j^{-1}$ leaves $\phi_j$ unchanged. More generally, suppose that $y = f(\boldsymbol{x})$ and we transform the variables $x_j$ into $z_j$ by bijections: $z_j = \tau_j(x_j)$, $x_j = \tau_j^{-1}(z_j)$, for $j=1,\dots,d$. Now define $f'(\boldsymbol{z})= f( \tau_1^{-1}(z_1),\dots,\tau_d^{-1}(z_d))$ and let $\phi_j'$ be the Shapley importance of $z_j$ as a predictor of $y'=f'(\boldsymbol{z})$. Because $\mathrm{var}( \mathbb{E}( f'(\boldsymbol{z})\mid\boldsymbol{z}_u)) = \mathrm{var}( \mathbb{E}( f(\boldsymbol{x})\mid\boldsymbol{x}_u))$, we find that $\phi_j'=\phi_j$ for $j=1,\dots,d$, where $\phi_j$ is the Shapley importance of $x_j$ as a predictor of $y$. As a result we can apply invertible transformations to any or all of the $x_j$ without changing the Shapley values. Now let's revisit the linear setting with an extreme example: $f(x_1,x_2) = 10^6x_1 +x_2$ with $x_1=10^6x_2$ where $x_2$ (and hence $x_1$) has a finite positive variance. Because $\partial f/\partial x_1 \gg \partial f/\partial x_2>0$ and $\mathrm{var}(x_1)\gg\mathrm{var}(x_2)$ one might expect $x_1$ to be the more important variable. However, the Shapley formula easily yields $\phi_1=\phi_2$; these variables are equally important. This is quite reasonable because $f$ is a function of $x_1$ alone and equally a function of $x_2$ alone. More generally, for $d\ge2$, if there is a bijection between any two of the $x_j$ then those two variables have the same Shapley value.
To see this, let $x_1 = g_1(x_2)$ and $x_2=g_2(x_1)$, both with probability one. Then for any $u\subset 1{:}d$ with $u\cap\{1,2\}=\emptyset$ we have $$\mathbb{E}( f(\boldsymbol{x})\mid \boldsymbol{x}_{u+\{1\}})=\mathbb{E}( f(\boldsymbol{x})\mid \boldsymbol{x}_{u+\{2\}}).$$ It follows that $\underline{\tau}^2_{u+\{1\}} - \underline{\tau}^2_u=\underline{\tau}^2_{u+\{2\}} - \underline{\tau}^2_u$ and therefore $\phi_1=\phi_2$ by the symmetry property of Shapley value. To summarize: \begin{compactenum}[\quad1)] \item Shapley value is preserved under invertible transformations, and \item a bijection between variables implies that they have the same Shapley value. \end{compactenum} \subsection{Bivariate settings}\label{sec:any2} When $d=2$ we can get some simpler formulas for the importance of the two variables. \begin{proposition}\label{prop:anyd2} Let $f(\boldsymbol{x})$ have finite variance $\sigma^2>0$ for random $\boldsymbol{x}=(x_1,x_2)$. Then from~\eqref{eq:shapfromult}, \begin{align} \frac{\phi_1}{\sigma^2} &= \frac12\Bigl( 1 + \frac{\mathrm{var}(\mathbb{E}(Y\mid x_1)) - \mathrm{var}(\mathbb{E}(Y\mid x_2))}{\sigma^2}\Bigr) \label{eq:dis2VCE} \\ &= \frac12\Bigl( 1 + \frac{\mathbb{E}( \mathrm{var}(Y\mid x_2))-\mathbb{E}( \mathrm{var}(Y\mid x_1))}{\sigma^2}\Bigr),\quad\text{and} \label{eq:dis2ECV}\\ \frac{\phi_1}{\phi_2} & = \frac{ \mathrm{var}(\mathbb{E}(Y\mid x_1)) + \mathbb{E}(\mathrm{var}(Y\mid x_2))}{ \mathrm{var}(\mathbb{E}(Y\mid x_2)) + \mathbb{E}(\mathrm{var}(Y\mid x_1))}. \label{eq:dis2sym} \end{align} \end{proposition} \begin{proof} Using $\underline{\tau}^2_{\{1,2\}} = \sigma^2$ and $\underline{\tau}^2_\emptyset=0$, we find that $$ \phi_1 = \frac12\bigl( \underline{\tau}^2_{\{1\}} + \sigma^2-\underline{\tau}^2_{\{2\}}\bigr) = \frac12\bigl( \sigma^2 + \mathrm{var}(\mathbb{E}(Y\mid x_1)) - \mathrm{var}(\mathbb{E}(Y\mid x_2))\bigr), $$ which gives us~\eqref{eq:dis2VCE}. The others are algebraic rearrangements.
\end{proof} We can use Proposition~\ref{prop:anyd2} to get analogous expressions for $\phi_2/\sigma^2$ and $\phi_2/\phi_1$ by exchanging indices. \subsubsection{Farlie-Gumbel-Morgenstern copula for $d=2$} Here we focus on the case where the dependence between both components $x_1$ and $x_2$ is explicitly described by some copula. There exist simple conditional expectation formulas when considering some classical classes of copulas (see e.g., \cite{crane2008conditional} and references therein). Starting from such formulas, it is possible to derive explicit computations for Shapley values in a linear model. In this section, we state explicit results for the Farlie-Gumbel-Morgenstern family of copulas. The Farlie-Gumbel-Morgenstern copula describes a random vector $\boldsymbol{x}\in[0,1]^2$ with each component $x_j\sim \mathbf{U}[0,1]$ and joint probability density function \begin{align}\label{eq:fgmdensity} c_\theta(x_1,x_2) = 1 + \theta(1-2x_1)(1-2x_2),\quad -1\le\theta\le 1. \end{align} One can show that $\mathrm{cor}(x_1,x_2)=\theta/3$. \cite{lai78} proved that, for $0 \leq \theta \leq 1$, $x_1$ and $x_2$ are positively quadrant dependent and positively regression dependent. Moreover, \begin{align}\label{eq:copulalinreg} \mathbb{E}(x_2\mid x_1)= \frac\theta3x_1+\Bigl(\frac12-\frac\theta6\Bigr).\end{align} The linearity above is very useful for our purpose, as it will allow an explicit computation for Shapley values in that model. \begin{proposition}\label{prop:fgmlin} Let $f(\boldsymbol{x}) = \boldsymbol{x}^\mathsf{T}\beta$ for $\boldsymbol{x},\beta\in\mathbb{R}^2$ and $\boldsymbol{x}\sim c_{\theta}(x_1,x_2)$, with $-1 \leq \theta \leq 1$. Then $$\frac{\phi_1}{\sigma^2} = \frac12 \left(1+\Bigl(1-\frac{\theta^2}{9}\Bigr) \frac{\beta_1^2-\beta_2^2}{12 \sigma^2}\right), $$ with $\sigma^2= ({\beta_1^2+ \beta_2^2})/{12}+ {\beta_1\beta_2 \theta}/{18}$. 
\end{proposition} \begin{proof} From the linearity of the regression function~\eqref{eq:copulalinreg}, $$\mathbb{E}(f({\bf x})\mid x_1)=x_1 \Bigl(\beta_1+\frac{\theta}{3}\beta_2\Bigr)+ \beta_2 \Bigl(\frac12 - \frac{\theta}{6}\Bigr),$$ thus $$\mathrm{var}( \mathbb{E}(f(\boldsymbol{x})\mid x_1))=\frac{1}{12} \Bigl(\beta_1+ \frac{\theta}{3} \beta_2\Bigr)^2.$$ Symmetry gets us the corresponding expression for $\mathrm{var}(\mathbb{E}(f(\boldsymbol{x})\mid x_2))$. Then Proposition~\ref{prop:anyd2} establishes the expression for $\phi_1/\sigma^2$. Finally, because $\mathrm{var}(x_j)=1/12$ and $\mathrm{cor}(x_1,x_2)=\theta/3$, we get $\sigma^2=(\beta_1^2 + \beta_2^2)/12+ \beta_1 \beta_2 \theta/18$. \end{proof} Now we consider the Farlie-Gumbel-Morgenstern copula, but we assume $x_j$ has as cumulative distribution function $F_j$, and probability density function $F_j'$, not necessarily from the uniform distribution. \begin{lemma}\label{lem:crane} Let $\boldsymbol{x}\in\mathbb{R}^2$ have probability density $F'_1(x_1)F'_2(x_2)c_{\theta}(F_1(x_1),F_2(x_2))$, with $-1 \leq \theta \leq 1$. Then $$\mathbb{E}(x_2\mid x_1)=\mathbb{E}(x_2)+\theta (1-2F_1(x_1))\int_{\mathbb{R}} y(1-2F_2(y))F'_2(y)\,\mathrm{d} y.$$ For exponential $x_j$ with $F_j(x_j)=1-\exp(-\lambda_jx_j)$ for $\lambda_j>0$, we get \begin{equation}\label{expmargfgm} \mathbb{E}(x_2\mid x_1)= \frac{1}{\lambda_2}+ \frac{\theta}{2 \lambda_2}(1-2e^{-\lambda_1x_1}). \end{equation} \end{lemma} \begin{proof} \cite{crane2008conditional}. \end{proof} Next we assume that $\boldsymbol{x}$ has exponential margins and we transform these margins to be unit exponential by making a corresponding scale adjustment to $\beta$. From Section~\ref{sec:transbij}, we know that such transformations do not change the Shapley value. 
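The regression function \eqref{eq:copulalinreg} and the conditional-mean variance used in the proof of Proposition~\ref{prop:fgmlin} can be checked by quadrature against the density \eqref{eq:fgmdensity}. A sketch of ours (the values of $\theta$, $\beta_1$, $\beta_2$ are arbitrary):

```python
import numpy as np

theta, b1, b2 = 0.8, 2.0, 1.0      # arbitrary copula parameter and coefficients
n = 1000
x1 = (np.arange(n) + 0.5) / n      # midpoint grid on [0, 1]
x2 = x1.copy()

# E(x2 | x1): with uniform margins the conditional density of x2 given x1
# is the copula density c_theta itself
c = 1.0 + theta * np.outer(1.0 - 2.0 * x1, 1.0 - 2.0 * x2)
m = (c * x2).mean(axis=1)          # midpoint-rule integral over x2

# matches the linear regression function, eq. (copulalinreg)
assert np.allclose(m, theta / 3 * x1 + 0.5 - theta / 6, atol=1e-6)

# var(E(f | x1)) for f = b1*x1 + b2*x2 agrees with the proof above
g = b1 * x1 + b2 * m
var_g = (g**2).mean() - g.mean() ** 2
assert np.isclose(var_g, (b1 + theta * b2 / 3) ** 2 / 12, atol=1e-5)
```

Because the integrands are low-degree polynomials, the midpoint rule here is accurate to $O(n^{-2})$.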
\begin{proposition}\label{propexpfgm} Let $f(\boldsymbol{x}) = \boldsymbol{x}^\mathsf{T}\beta$ for $\boldsymbol{x},\beta\in\mathbb{R}^2$ where $\boldsymbol{x}$ has probability density function $e^{-x_1-x_2} c_{\theta}(1- e^{-x_1},1- e^{-x_2})$, where $-1 \leq \theta \leq 1.$ Then \begin{align}\label{eq:expfgm} \frac{\phi_1}{\sigma^2} = \frac12 \Bigl(1+\Bigl(1-\frac{\theta^2}{12}\Bigr)\frac{\beta_1^2-\beta_2^2}{ \sigma^2}\Bigr) \end{align} with $\sigma^2= \beta_1^2+\beta_2^2+\theta\beta_1\beta_2/2$. \end{proposition} \begin{proof} From Lemma~\ref{lem:crane}, $\mathbb{E}(x_2\mid x_1)=1+\theta/2-\theta e^{-x_1}$ so $$\mathbb{E}(f(\boldsymbol{x})\mid x_1) = \beta_1x_1+\beta_2(1+\theta/2-\theta e^{-x_1}).$$ Therefore $$\mathrm{var}(\mathbb{E}(f(\boldsymbol{x})\mid x_1)) = \beta_1^2 +\beta_2^2\theta^2\mathrm{var}(e^{-x_1}) -2\beta_1\beta_2\theta\mathrm{cov}(x_1,e^{-x_1}). $$ Now $\mathrm{var}(e^{-x_1})=\mathbb{E}(e^{-2x_1})-\mathbb{E}(e^{-x_1})^2=1/12$ and $$ \mathrm{cov}(x_1,e^{-x_1})=\int_0^\infty xe^{-2x}\,\mathrm{d} x -\frac12=-\frac14, $$ so $\mathrm{var}(\mathbb{E}(f(\boldsymbol{x})\mid x_1)) = \beta_1^2+\beta_2^2\theta^2/12+\beta_1\beta_2\theta/2$. This establishes~\eqref{eq:expfgm} by Proposition~\ref{prop:anyd2}. \end{proof} Suppose that $\beta_1>\beta_2>0$. Then of course $\phi_1/\sigma^2>1/2$. Equation~\eqref{eq:expfgm} shows that $\phi_1/\sigma^2$ decreases as $\theta$ increases from $0$ to $1$. It does not approach $1/2$ because even at $\theta=1$, $x_2$ is not a deterministic function of $x_1$. \subsubsection{Gaussian variables, exponential $f$, $d=2$}\label{gausframenl} Let $\boldsymbol{x}\sim\mathcal{N}(\mu,\Sigma)$ and take $Y=e^{\beta_0+\sum_{j=1}^dx_j\beta_j}$. The effect of $\beta_0$ and $\mu_j$ is simply to scale $Y$ and so we can take $\beta_0=0$ and $\mu=0$ without affecting $\phi_j/\sigma^2$. Next we suppose that the diagonal elements of $\Sigma$ are nonzero. 
By the transformation result in Section~\ref{sec:transbij} we can replace each $x_j$ by $x_j/\Sigma_{jj}^{1/2}$ if need be without changing $\phi_j$ and so we suppose that each $x_j\sim\mathcal{N}(0,1)$. Here we find variable importances for $d=2$. \begin{proposition}\label{prop:gauslinexp} Let $f(\boldsymbol{x}) = \exp\bigl(\boldsymbol{x}^\mathsf{T}\beta\bigr)$ for $\boldsymbol{x},\beta\in\mathbb{R}^2$ and $\boldsymbol{x}\sim\mathcal{N}({\bf 0},\Sigma)$, for $\Sigma = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}$. Then \begin{align}\label{eq:gauslinexp} \frac{\phi_1}{\sigma^2} = \frac12 \biggl(1+ \frac{e^{(\beta_1+ \beta_2\rho)^2}-e^{(\beta_2+ \beta_1\rho)^2}} {e^{\beta_1^2+\beta_2^2+2\rho\beta_1\beta_2}-1}\biggr), \end{align} where the variance of $f(\boldsymbol{x})$ is \begin{align}\label{eq:varexpo} \sigma^2=e^{\beta_1^2+\beta_2^2+2 \rho \beta_1\beta_2}(e^{\beta_1^2+\beta_2^2+2 \rho \beta_1\beta_2} -1). \end{align} \end{proposition} \begin{proof} Recall the lognormal moments: if $Z\sim\mathcal{N}(\mu,\sigma^2)$ then $\mathbb{E}(e^Z)= e^{\mu+\sigma^2/2}$ and $\mathrm{var}(e^Z) = (e^{\sigma^2}-1)e^{2\mu+\sigma^2}$. Taking $Z=\boldsymbol{x}^\mathsf{T}\beta$ we find that $Y=e^Z$ has variance $\sigma^2$ given by~\eqref{eq:varexpo}. The distribution of $x_2\beta_2$ given $x_1$ is $\mathcal{N}(\rho x_1\beta_2,(1-\rho^2)\beta^2_2)$. Therefore \begin{align*} \mathbb{E}(Y\mid x_1) &= e^{(\beta_1+\rho\beta_2)x_1+\beta_2^2(1-\rho^2)/2},\quad\text{and so}\\ \mathrm{var}(\mathbb{E}(Y\mid x_1)) &= e^{\beta_2^2(1-\rho^2)} e^{(\beta_1+\rho\beta_2)^2} (e^{(\beta_1+\rho\beta_2)^2}-1)\\ & = e^{\beta^\mathsf{T}\Sigma\beta}(e^{(\beta_1+\rho\beta_2)^2}-1). \end{align*} Similarly, $\mathrm{var}(\mathbb{E}(Y\mid x_2)) = e^{\beta^\mathsf{T}\Sigma\beta}(e^{(\beta_2+\rho\beta_1)^2}-1)$. Then applying Proposition~\ref{prop:anyd2} and noticing that the lead factor $e^{\beta^\mathsf{T}\Sigma\beta}$ appears also in $\sigma^2$, yields the result.
\end{proof} If $\rho=\pm1$ then $\phi_1/\sigma^2=1/2$ as it must because there is then a bijection between the variables. The value of $\phi_1/\sigma^2$ in~\eqref{eq:gauslinexp} is unchanged if we replace $\rho$ by $-\rho$. The formula is not obviously symmetric, but the fraction within parentheses there can be divided by the corresponding one for $-\rho$ and the ratio reduces to $1$. More directly, we know from Section~\ref{sec:transbij} that making the transformation $x_2\to-x_2$ and $\beta_2\to-\beta_2$ would leave the variable importances unchanged while switching $\rho\to-\rho$. It is clear that for $\beta_1>\beta_2$ we must have $\phi_1/\sigma^2\ge1/2$. Even with the closed form~\eqref{eq:gauslinexp}, it is not obvious how $\phi_1/\sigma^2$ should depend on $\rho$ or on $\beta$. Figure~\ref{fig:lognormalphi} shows that increasing $|\rho|$ from zero generally raises the importance of $x_1$ until at some high correlation level the relative importance quickly drops down to $1/2$. Also, for $\rho=0$ the effect of $\beta_1$ over the range $2\le\beta_1\le 8$ is quite small when $\beta_2=1$. \begin{figure} \centering \includegraphics[width=\hsize]{figlognormalphi} \caption{\label{fig:lognormalphi} Relative importance $\phi_1/\sigma^2$ versus correlation $|\rho|$ from Proposition~\ref{prop:gauslinexp}. From top to bottom, $\beta^\mathsf{T}$ is $(8,1)$, $(4,1)$, and $(2,1)$. } \end{figure} The lognormal case is different from the bivariate normal case. There, the value of $\phi_1$ converges monotonically towards $1/2$ as $|\rho|$ increases from $0$ to $1$. \subsection{Holes} Here we consider the simplest setting where there is an unreachable part of the $\boldsymbol{x}$ space. We consider two binary variables $x_1$ and $x_2$ but $x_1=x_2=1$ never occurs. For instance $f$ could be the weight of a sea turtle, $x_1$ could be $1$ iff the turtle is bearing eggs and $x_2$ could be $1$ iff the turtle is male.
It may seem unreasonable to even attempt to compare the importance of these variables (male/female versus eggs/none) but Shapley value does provide such a comparison based on compelling axioms in the event that we do seek a comparison. \begin{table}[t] \centering \begin{tabular}{llll} \toprule $p$ & $x_1$ & $x_2$ & $y$\\ \midrule $p_0$& $0$ & $0$ & $y_0$\\ $p_1$ & $1$ & $0$ &$y_1$\\ $p_2$& $0$ & $1$ &$y_2$\\ \bottomrule \end{tabular} \caption{\label{tab:3pt} The random variable $y=f(\boldsymbol{x})$ is the given function of $\boldsymbol{x}=(x_1,x_2)$. That vector takes three values with the probabilities in this table. For example, $\Pr( \boldsymbol{x}=(1,0))=p_1$ and then $y=y_1$. } \end{table} This simplest setting is depicted in Table~\ref{tab:3pt} where $p_0+p_1+p_2=1$. We assume that $p_1>0$ and $p_2>0$ for otherwise the function does not have two input variables. \begin{theorem}\label{thm:turtle} Let $y$ be a function of the random vector $\boldsymbol{x}$ as given in Table~\ref{tab:3pt}. Assume that $\sigma^2=\mathrm{var}(y)>0$, and $\min(p_1,p_2)>0$. Then the Shapley relative importance of variable $x_1$ is \begin{align}\label{eq:thmturtle} \frac12\Bigl( 1 + \frac{p_0}{\sigma^2} \times \frac{ p_1(1-p_1)\bar y_1^2-p_2(1-p_2) \bar y_2^2 } {(1-p_1)(1-p_2)} \Bigr) \end{align} where $\bar y_j=y_j-y_0$ for $j=1,2$. \end{theorem} \begin{proof} See section~\ref{sec:turtleproof}. \end{proof} We see that when $p_0=0$, then the Shapley relative importance of $x_1$ is $1/2$. That is what it must be because there is then a bijection between $x_1$ and $x_2$ via $x_1+x_2=1$. Now suppose that $\bar y_1=\bar y_2$. For instance $y_1=y_2=1$ while $y_0=0$. Then the more important variable is the one with the larger variance. That is $x_1$ is more important if $p_1(1-p_1)> p_2(1-p_2)$. This can only happen if $p_1>p_2$. So the more probable input is the more important one in this case. 
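Theorem~\ref{thm:turtle} can be checked directly on the discrete distribution of Table~\ref{tab:3pt}, computing $\phi_1$ via the bivariate formula of Proposition~\ref{prop:anyd2} and comparing it with the closed form \eqref{eq:thmturtle}. A sketch of ours, with arbitrary probabilities and $y$ values:

```python
import numpy as np

# three-point distribution from the table; the numbers are arbitrary
p = np.array([0.5, 0.3, 0.2])            # p0, p1, p2
x = np.array([[0, 0], [1, 0], [0, 1]])   # rows: (x1, x2)
y = np.array([1.0, 4.0, 2.5])            # y0, y1, y2

Ey = p @ y
sigma2 = p @ (y - Ey) ** 2

def var_cond_mean(j):
    """var(E(y | x_j)) over the discrete distribution."""
    out = 0.0
    for v in (0, 1):
        mask = x[:, j] == v
        pv = p[mask].sum()
        out += pv * (p[mask] @ y[mask] / pv - Ey) ** 2
    return out

# phi_1 via the bivariate formula (eq. dis2VCE), multiplied by sigma^2
phi1 = 0.5 * (sigma2 + var_cond_mean(0) - var_cond_mean(1))

# closed form of the theorem, times sigma^2
p0, p1, p2 = p
yb1, yb2 = y[1] - y[0], y[2] - y[0]
phi1_thm = 0.5 * sigma2 * (1 + (p0 / sigma2) *
    (p1 * (1 - p1) * yb1**2 - p2 * (1 - p2) * yb2**2) / ((1 - p1) * (1 - p2)))

assert np.isclose(phi1, phi1_thm)
```

Setting $p_0=0$ in the sketch reproduces the relative importance $1/2$ noted above.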
\subsection{Maximum of exponential random variables} \cite{kein:hilg:jeil:rupp:2004} considered a network of neurons $e_1,\dots,e_d$ where the $e_j$ have independent lifetimes $x_j$ that are exponentially distributed with mean $1/\lambda_j$. In their setting the value of a set of neurons is $\mathrm{val}(u) = \mathbb{E}( \max_{j\in u}x_j)$, that is, the expected amount of time that at least part of that subset survives. For $d=3$, they give the Shapley value $$ \phi_1=\frac1{\lambda_1} -\frac12\frac1{\lambda_1+\lambda_2} -\frac12\frac1{\lambda_1+\lambda_3} +\frac13\frac1{\lambda_1+\lambda_2+\lambda_3}, $$ but they do not give a proof. While value in this example is not based on prediction error, we include it because it is another example of a closed form for Shapley value based on random variables. We prove their formula here and generalize it to any $d\ge1$. \begin{theorem}\label{thm:exprv} Let the value of a set $u\subseteq1{:}d$ be $\mathrm{val}(u) = \mathbb{E}(\max_{j\in u}x_j)$ where $x_1,\dots,x_d$ are independent exponential random variables with $\mathbb{E}(x_j) = 1/\lambda_j$. Then $$ \phi_j = \sum_{r=1}^d\frac{(-1)^{r-1}}r \sum_{w\subseteq1{:}d,j\in w,|w|=r} \frac1{\sum_{\ell\in w}\lambda_\ell}. $$ \end{theorem} \begin{proof} See section~\ref{sec:proveexprv}. \end{proof} \section{Conclusions}\label{sec:conc} The Shapley value from economics remedies the conceptual difficulties in measuring importance of dependent variables via ANOVA. Like ANOVA it uses variances, but unlike the dependent data ANOVA, Shapley value never goes negative and it can be defined without onerous assumptions on the input distribution. We find that Shapley value has useful properties. When two variables are functionally equivalent, they get equal Shapley value. When an invertible transformation is made to a variable, it retains its Shapley value. We thus conclude that \cite{song:nels:staum:2016} had the right idea proposing Shapley value for dependent inputs.
Computation of Shapley values remains a challenge outside of special cases like the ones we discuss here. A potential application that we find interesting is measuring the importance of parameters in a Bayesian context. When the parameter vector $\beta$ has an approximate Gaussian posterior distribution, as the central limit theorem often provides, then Theorem~\ref{thm:gauslin} yields a measure $\phi_j(\boldsymbol{x}_0)$ for the importance of parameter $\beta_j$ for the posterior uncertainty of the prediction $\boldsymbol{x}_0^\mathsf{T}\beta$. We hasten to add that parameter importance is quite different from variable importance, which is a more common goal. By this measure an important parameter is one whose uncertainty dominates uncertainty in $\boldsymbol{x}_0^\mathsf{T}\beta$. The corresponding variable may or may not be important. Another potential application is in modeling the importance of order statistics. They naturally belong to a non-rectangular set~\cite{lebr:dutf:2014}. \section*{Acknowledgments} This work was supported by grant DMS-1521145 from the U.S.\ National Science Foundation. We thank Marco Scarsini, Jiangming Xiang, Bertrand Iooss, two anonymous referees and an associate editor for valuable comments. \bibliographystyle{apalike}
\section{Introduction} \label{} Heavy quarkonium, the bound state of a heavy quark-antiquark pair, is a prime example of a strongly interacting system whose properties are well described by perturbative QCD. With the advent of new theoretical frameworks, such as effective field theory (EFT) and the threshold expansion technique, as well as a proper treatment for decoupling infrared degrees of freedom, the heavy quarkonium system has become an ideal laboratory for precision tests of perturbative QCD predictions against various experimental data and lattice QCD predictions. The state-of-the-art computational results in this field comprise the next-to-next-to-next-to-leading order (NNNLO) energy levels of heavy quarkonium~\cite{Beneke:2005hg,Penin:2005eu,Kiyo:2014uca}, the NNNLO pair-production cross section of heavy quarks near threshold~\cite{Beneke:2015kwa,Beneke:2016kkb}, and the leptonic decay width of the $\Upsilon (1S)$ state~\cite{Beneke:2014qea}. These calculations utilize the modern EFT, potential-nonrelativistic QCD (pNRQCD)~\cite{Pineda:1997bj,Brambilla:1999xf}, for systematically organizing the perturbative expansions in $\alpha_s$ and $v$ (the velocity of the heavy quarks) in a sophisticated manner. This EFT describes interactions of a non-relativistic quantum mechanical system (dictated by the Schr\"odinger equation) with ultrasoft gluons, organized in a multipole expansion. We can benefit from methods and knowledge of perturbation theory of quantum mechanics therein. It is widely known that quasi-degenerate systems need special care in perturbation theory of quantum mechanics~\cite{Weinberg:2015ww}; however, thus far the relevant consideration seems to be missing in the computation of the aforementioned NNNLO heavy quarkonium observables.\footnote{ In the computation of the NNLO energy levels the corrections from quasi-degenerate states for the $n\leq 3$ states were explicitly considered and found to be absent \cite{Titard:1994id}.
} In perturbative expansion of the heavy quarkonium system, the leading-order Hamiltonian is that of the Coulomb system whose energy eigenvalues are labeled only by the principal quantum number $n$. The first-order correction resolves the degeneracy in the orbital angular momentum $l$, while the second-order correction resolves the degeneracy in the total spin $s$ and total angular momentum $j$. Once these features are properly taken into account in perturbative calculations there are enhanced contributions which rearrange the order counting. These are the mixing effects between different $l$ states for the same $n$. One finds that naively these start from the third-order corrections to the heavy quarkonium energy levels and from the first-order corrections to the wave functions.\footnote{ As a simple example, consider a matrix $$ \left( \begin{array}{cc} 0& x^2 V_2\\ x^2 V_2^* & x V_1 \end{array} \right) . $$ Its energy eigenvalues are given by $-x^3|V_2|^2/V_1$ and $xV_1+x^3|V_2|^2/V_1$ up to ${\cal O}(x^3)$, and the corresponding eigenvectors are given by $(1,-xV_2^*/V_1)$ and $(xV_2/V_1,1)$ up to ${\cal O}(x)$. The appearance of $V_1$ in the denominator signals enhanced contributions. } The latter would induce second-order corrections to the heavy quark threshold production cross section (or the quarkonium leptonic decay width). By explicit computation the relevant lowest order off-diagonal matrix elements for these corrections vanish. Hence, these enhanced corrections are pushed to higher orders. We present a necessary formulation, an explicit computation at the lowest order, and discuss further higher-order effects. It is not our purpose to claim originality of the present work but rather to recollect relevant information and to clarify the basis for systematic computation. 
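The rearranged order counting in the footnote's $2\times2$ example is easy to confirm numerically; a short check of ours (the values of $V_1$, $V_2$ are arbitrary):

```python
import numpy as np

# the 2x2 example above: eigenvalues -x^3 |V_2|^2/V_1 + O(x^4)
# and x V_1 + x^3 |V_2|^2/V_1 + O(x^4); V1, V2 are arbitrary real numbers
V1, V2 = 1.5, 0.7
for x in (1e-1, 1e-2):
    H = np.array([[0.0, x**2 * V2], [x**2 * V2, x * V1]])
    lo, hi = np.linalg.eigvalsh(H)       # eigenvalues in ascending order
    assert abs(lo + x**3 * V2**2 / V1) < 5 * x**4
    assert abs(hi - x * V1 - x**3 * V2**2 / V1) < 5 * x**4
```

The $O(x)$ mixing of the eigenvectors, and hence the second-order effect on wave-function-dependent observables, can be checked in the same way.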
A closely related subject is the inclusion of transitions (mixings) between two quasi-degenerate states given by off-diagonal matrix elements of interaction operators, considered in many potential-model calculations~\cite{Voloshin:2007dx,Segovia:2008zz,Segovia:2013wma,Segovia:2016xqb}. However, (somewhat to our surprise) systematic order counting in light of pNRQCD, in the expansion in $\alpha_s$ and $v$, has not been addressed so far. In the case of QED, it was already pointed out in the late 1940s and 1950s that contributions from quasi-degenerate states to the positronium energy levels do not appear at and below order $\alpha ^6$ (order $\alpha^4$ relative to the LO energy levels); see Ref.\cite{Adkins:1999hf} and references therein. However, the situations of positronium and heavy quarkonium systems differ in some aspects and it is worth clarifying the latter case explicitly. The crucial difference stems from the fact that the degeneracy of the heavy quarkonium energy level is lifted at $\alpha _s^3$ whereas the degeneracy is lifted at $\alpha ^4$ in the case of positronium. \section{Perturbation theory for quasi-degenerate system} \label{sec:2} Consider the Schr\"odinger equation of heavy quarkonium \begin{align} \left(H^{(0)}+\sum _{i=1}^\infty \varepsilon ^i V^{(i)} \right) \ket{\Psi _{nlsj}} =E_{nlsj} \ket{\Psi _{nlsj}}, \label{eq:schrodinger} \end{align} which dictates the quantum mechanical subsystem in pNRQCD. An expansion parameter $\varepsilon$ (corresponding to $\alpha_s$ or $v$) is introduced\footnote{ For simplicity we neglect the electromagnetic interaction of quarks. In the case of the bottom quark, numerically its effects are small even compared to the NNNLO corrections in $\alpha_s$. The electric charge of the bottom quark $Q_b=-1/3$ plays the role of an extra suppression factor in addition to the small QED coupling constant $\alpha \simeq 1/137$, as compared to, e.g., $\alpha _s(m_b)\simeq 0.23$.
}, and a unique order in $\varepsilon$ is assigned to each potential operator $V^{(i)}$. The definitions of $H^{(0)}, V^{(1)},\dots$ can be found, for instance, in \cite{Kniehl:2002br,Kiyo:2014uca}, but we do not need their explicit forms in this section. The energy level and the wave function are labeled with $(n,l,s,j)$. The operators $H^{(0)}$ and $V^{(1)}$ preserve $l$, $s$ and $j$. Furthermore, $H^{(0)}$ and all the $V^{(i)}$ preserve $s$ and $j$ (see Sec.~\ref{s4}), hence we suppress these two labels in the following. ($\ket{nl}=\ket{nlsj}$ represents an eigenstate of $H^{(0)}$.) The perturbative expansion of the energy level is given by \begin{align} &E_{nlsj} =E^{(0)}_n +\varepsilon E^{(1)}_{nl} +\varepsilon^2 \left[ \bra{nl}V^{(2)}\ket{nl} +\sum_{n'\neq n}^{}\frac{|\bra{nl}V^{(1)}\ket{n'l}|^2}{E_n^{(0)}-E_{n'}^{(0)}}\right]\nonumber\\ &~~~~~~+\varepsilon^3 \left[ \bra{nl}V^{(3)}\ket{nl} +\sum_{l'\neq l}^{}\frac{|\bra{nl}V^{(2)}\ket{nl'}|^2}{E_{nl}^{(1)}-E_{nl'}^{(1)}} +\sum_{n'\neq n}^{}\frac{\bra{nl}V^{(2)}\ket{n'l} \bra{n'l}V^{(1)}\ket{nl}}{E_n^{(0)}-E_{n'}^{(0)}}\right.\nonumber\\ &~~~~~~~~~~~~~~+\left. \sum_{n'\neq n}^{}\frac{ \bra{nl}V^{(1)}\ket{n'l}}{E^{(0)}_n-E^{(0)}_{n'}} \left\{ \bra{n'l}V^{(2)}\ket{nl} -E^{(1)}_{nl}\frac{\bra{n'l}V^{(1)}\ket{nl}}{E^{(0)}_n-E^{(0)}_{n'}} \right. \right. \nonumber\\ & \left. \left. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +\sum_{n''\neq n}^{} \frac{\bra{n'l}V^{(1)}\ket{n''l}\bra{n''l}V^{(1)}\ket{nl}}{E^{(0)}_n-E^{(0)}_{n''}} \right\}\right] +\mathcal{O}(\varepsilon^4), \label{eq:energy} \end{align} where we use short-hand notations $E^{(0)}_n\equiv \bra{nl}H^{(0)}\ket{nl}$, $E^{(1)}_{nl}\equiv \bra{nl}V^{(1)}\ket{nl}$. The fourth- and fifth-order corrections will be given in eqs.\eqref{eq:energy4},\eqref{eq:energy5}. The subscript of $E^{(0)}_n$ indicates that the leading energy eigenvalue depends only on $n$, and that of $E^{(1)}_{nl}$ indicates that the degeneracy in $l$ is resolved at first order.
The degeneracy is fully resolved at the second order. The $\varepsilon^3$-term proportional to $|\bra{nl}V^{(2)}\ket{nl'}|^2$ in eq.\eqref{eq:energy} is the main focus of this paper. This correction has not been considered explicitly in the previous studies \cite{Beneke:2005hg,Penin:2005eu,Kiyo:2014uca}. Since the operator $V^{(2)}$ is accompanied by $\varepsilon^2$, naive order counting indicates that the $|\bra{nl}V^{(2)}\ket{nl'}|^2$ term may be order $\varepsilon^4$. Due to the quasi-degeneracy of the states $\ket{nl}$ and $\ket{nl'}$, however, the denominator $(E_{nl}^{(1)}-E_{nl'}^{(1)})$ compensates one $\varepsilon$, rendering the term to be order $\varepsilon^3$. The perturbative expansion of the wave function is given by \begin{align} \ket{\Psi_{nlsj}}&= \ket{nlsj} +\sum _{i=1}^\infty \varepsilon^i \left[ \sum_{l'\neq l}^{}\ket{nl'sj} \frac{c^{(i)}_{nl';nl}}{E^{(1)}_{nl}-E^{(1)}_{nl'}} +\sum_{n'\neq n,\ l'}^{}\ket{n'l'sj} \frac{d^{(i)}_{n'l';nl}}{E^{(0)}_{n}-E^{(0)}_{n'}} \right] , \label{eq:wavefunc} \end{align} where $\ket{\Psi_{nlsj}}$ is normalized as $\braket{nlsj|\Psi_{nlsj}}=1$. 
The coefficients are given by \begin{align} c^{(1)}_{nl';nl} =&\bra{nl'}V^{(2)}\ket{nl} ,\qquad \qquad d^{(1)}_{n'l';nl} =\bra{n'l'}V^{(1)}\ket{nl}, \label{eq:c1d1}\\ c^{(2)}_{nl';nl} =&\bra{nl'}V^{(3)}\ket{nl} +\bra{nl'}V^{(2)}\ket{nl'} \frac{c^{(1)}_{nl';nl}} {E^{(1)}_{nl}-E^{(1)}_{nl'}}\nonumber\\ &+\sum_{i=1}^{2} \sum_{n''\neq n,\ l''}^{} \bra{nl'}V^{(3-i)}\ket{n''l''} \frac{d^{(i)}_{n''l'';nl}} {E^{(0)}_{n}-E^{(0)}_{n''}} -E^{(2)}_{nl} \frac{c^{(1)}_{nl';nl}} {E^{(1)}_{nl}-E^{(1)}_{nl'}}, \label{eq:c2}\\ d^{(2)}_{n'l';nl} =&\bra{n'l'}V^{(2)}\ket{nl} +\bra{n'l'}V^{(1)}\ket{nl'} \frac{c^{(1)}_{nl';nl}} {E^{(1)}_{nl}-E^{(1)}_{nl'}}\nonumber\\ &+\sum_{n''\neq n,\ l''}^{} \bra{n'l'}V^{(1)}\ket{n''l''} \frac{d^{(1)}_{n''l'';nl}} {E^{(0)}_{n}-E^{(0)}_{n''}} -E^{(1)}_{nl} \frac{d^{(1)}_{n'l';nl}} {E^{(0)}_{n}-E^{(0)}_{n'}}, \\ c^{(3)}_{nl';nl} =&\bra{nl'}V^{(4)}\ket{nl} +\sum_{i=1}^{2} \sum_{l''\neq l}^{} \bra{nl'}V^{(4-i)}\ket{nl''} \frac{c^{(i)}_{nl'';nl}} {E^{(1)}_{nl}-E^{(1)}_{nl''}}\nonumber\\ &+\sum_{i=1}^{3} \sum_{n''\neq n,\ l''}^{} \bra{nl'}V^{(4-i)}\ket{n''l''} \frac{d^{(i)}_{n''l'';nl}} {E^{(0)}_{n}-E^{(0)}_{n''}} -\sum_{i=1}^{2} E^{(4-i)}_{nl} \frac{c^{(i)}_{nl';nl}} {E^{(1)}_{nl}-E^{(1)}_{nl'}},\\ d^{(3)}_{n'l';nl} =&\bra{n'l'}V^{(3)}\ket{nl} +\sum_{i=1}^{2} \sum_{l''\neq l}^{} \bra{n'l'}V^{(3-i)}\ket{nl''} \frac{c^{(i)}_{nl'';nl}} {E^{(1)}_{nl}-E^{(1)}_{nl''}}\nonumber\\ &+\sum_{i=1}^{2} \sum_{n''\neq n,\ l''}^{} \bra{n'l'}V^{(3-i)}\ket{n''l''} \frac{d^{(i)}_{n''l'';nl}} {E^{(0)}_{n}-E^{(0)}_{n''}} -\sum_{i=1}^{2} E^{(3-i)}_{nl} \frac{d^{(i)}_{n'l';nl}} {E^{(0)}_{n}-E^{(0)}_{n'}}. \label{eq:d3} \end{align} Here, $E_{nl}^{(k)}$ denotes the coefficient of $\varepsilon^k$ of $E_{nlsj}$ [c.f., eq.(\ref{eq:energy})], and it is understood that $c^{(i)}_{nl';nl}=0$ if $l'= l$. As can be seen, an enhanced contribution proportional to $c^{(1)}_{nl';nl}/ \Bigl[ E^{(1)}_{nl}-E^{(1)}_{nl'}\Bigr]$ appears already at the first order for the wave function. 
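The structure of the enhanced $\varepsilon^3$ correction in eq.\eqref{eq:energy} can be illustrated on a two-level toy model in which $H^{(0)}$ is exactly degenerate, $V^{(1)}$ is diagonal and splits the levels at order $\varepsilon$, and $V^{(2)}$ couples the two states. A numerical sketch of ours (all numbers are arbitrary, chosen only for illustration):

```python
import numpy as np

# toy two-level model: H0 degenerate, V1 = diag(a, b) splits the levels at
# order eps, V2 couples them; all numbers are illustrative
E0, a, b, v, w1, w2 = 1.0, 0.4, -0.3, 0.5, 0.2, -0.1
eps = 1e-2

H = (E0 * np.eye(2)
     + eps * np.diag([a, b])
     + eps**2 * np.array([[w1, v], [v, w2]]))
E_exact = np.linalg.eigvalsh(H).max()      # level connected to "a" (a > b)

E_naive = E0 + eps * a + eps**2 * w1       # without the quasi-degenerate term
E_enh = E_naive + eps**3 * v**2 / (a - b)  # enhanced eps^3 term of eq. (energy)

# the quasi-degenerate term reduces the error from O(eps^3) to O(eps^4)
assert abs(E_exact - E_enh) < abs(E_exact - E_naive) / 10
assert abs(E_exact - E_enh) < 10 * eps**4
```

Here $v^2\varepsilon^4$ divided by the first-order splitting $\varepsilon(a-b)$ produces exactly the order-$\varepsilon^3$ enhancement discussed above.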
A derivation of the perturbative expansions of $E_{nlsj}$ and $\ket{\Psi_{nlsj}}$ is given in the appendix. Physical quantities, such as the quark pair production cross section near threshold in $e^+e^-$ collisions\footnote{ These corrections can make sense only in the close vicinity of distinct quasi-degenerate resonance peaks. Otherwise the enhancement from the small denominators is lost, for instance, by smearing due to the resonance widths. } and the quarkonium leptonic decay width, are proportional to the absolute square of the wave function at the origin, and the enhanced corrections to these observables arise from the second order. This is because only the $S$-wave ($l=0$) wave functions have non-vanishing values at the origin, while the enhanced corrections must connect different $l$'s, so they must be proportional to $|\bra{n,l=0}V^{(2)}\ket{n,l'\neq 0}|^2$. So far we have implicitly assumed that $\bra{nl}V^{(2)}\ket{nl'}\neq 0$ for $l'\neq l$, but if this matrix element vanishes for some reason, the enhanced corrections from quasi-degenerate states do not appear at least up to the fourth order in the energy level, as well as up to the third order in the quark pair production cross section (or quarkonium leptonic decay width).\footnote{ There is also a contribution from the $D$-wave production (or decay) via the higher-dimensional local current operator. In the case $\bra{nl}V^{(2)}\ket{nl'}= 0$, the contribution of enhanced corrections through such an operator also starts from the fourth-order correction. } \section{Vanishing off-diagonal matrix elements of \boldmath{$V^{(2)}$}} In this section we evaluate the matrix element $\bra{nl}V^{(2)}\ket{nl'}$ explicitly and show that it vanishes if $l'\neq l$.\footnote{ The vanishing of the matrix elements relies on the properties of the radial wave functions of the Coulomb system $H_0=p^2/m -C_F \alpha_s/r$.
If one takes another wave function, for instance one from a phenomenological potential model, the matrix elements can have non-zero values; however, this violates the rigid order counting of pNRQCD and brings the higher-order effects considered here into the computation of the matrix elements. } The operator in $V^{(2)}$ which can have non-vanishing off-diagonal matrix elements for different $l$'s is the so-called tensor operator \begin{align} V^{(2)}_T= \frac{C_F\alpha _s}{4m^2r^3} S_{12}, \quad\quad S_{12} =2\left( 3\frac{(\vec{r}\cdot \vec{S})^2}{r^2}-\vec{S}^2 \right) . \label{} \end{align} Its matrix element can be factorized into two parts: \begin{align} \bra{nlsj}V^{(2)}_T\ket{nl'sj} =\bra{nl}\frac{C_F \alpha_s}{4m^2r^3}\ket{nl'} \bra{lsj}S_{12}\ket{l'sj}, \label{} \end{align} where the first factor represents the radial part and the second factor represents the angular and spin part. The radial matrix has the form \begin{align} \frac{C_F\alpha _s}{4m^2} \bra{nl}\frac{1}{r^3}\ket{nl'} = \begin{blockarray}{c@{}cccccc@{\hspace{4pt}}cr} \mLabel{l'=} & \mLabel{0} & \mLabel{1} & \mLabel{2} & \mLabel{3} & \mLabel{\cdots} & \mLabel{n-1} & & \\ \begin{block}{(c@{\hspace{5pt}}cccccc@{\hspace{5pt}}c)r} &\star&\star&0& 0& \cdots & 0&& \mLabel{l=0~~~} \\ &\star&\star&\star& 0& \cdots & 0&& \mLabel{1~~~} \\ &0&\star&\star& \star & \cdots & 0&& \mLabel{2~~~} \\ &0&0&\star&\star& \cdots &0&& \mLabel{3~~~} \\ &\vdots &\vdots &\vdots &\vdots &\ddots &\vdots &&\\ &0&0&0&0& \cdots &\star&& \mLabel{n-1} \\ \end{block} \end{blockarray}, \label{eq:radial} \end{align} where the star ($\star$) denotes a non-zero value. (This can easily be shown using the generating function for the Laguerre polynomials.) Namely, the radial matrix element vanishes in the case $| l-l' |\geq 2$. On the other hand, because of the parity of $S_{12}$ and the orbital wave function, $S_{12}$ can mix the state of $l=j\pm 1, s=1$ with $l'=j\pm 1,s=1$, and of $l=j, s=1$ with $l'=j, s=1$. Other matrix elements vanish.
By explicit computation the angular matrix elements are given by \cite{Ulehla1969, Kwong:1988gm, Cordon:2009pj} \begin{align} &\bra{lsj}S_{12}\ket{l'sj}= \begin{blockarray}{c@{}ccc@{\hspace{4pt}}cl} & \mLabel{l'=j-1} & \mLabel{j} & \mLabel{j+1} & & \\ \begin{block}{(c@{\hspace{5pt}}ccc@{\hspace{5pt}}c)l} &-\frac{2(j-1)}{2j+1}&0&\frac{6\sqrt{j(j+1)}}{2j+1} & & \mLabel{l=j-1} \\ & 0 & 2 & 0 & & \mLabel{l=j} \\ &\frac{6\sqrt{j(j+1)}}{2j+1} &0&-\frac{2(j+2)}{2j+1}& & \mLabel{l=j+1} \\ \end{block} \end{blockarray} ~~~~~~ \mbox{for $s=1,j\geq 1$}, \label{eq:angular} \\ & \bra{lsj}S_{12}\ket{l'sj}=-4 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \mbox{for $l=l'=1,s=1,j=0$}, \end{align} and $\bra{lsj}S_{12}\ket{l'sj}=0$ otherwise. Thus, the only non-vanishing off-diagonal matrix elements are the ones with $| l-l' |=2$. Combining eqs.\eqref{eq:radial}, \eqref{eq:angular}, we obtain \begin{align} \bra{nlsj}V^{(2)}_T\ket{nl'sj} =\bra{nl}\frac{C_F\alpha_s}{4m^2r^3}\ket{nl'} \bra{lsj}S_{12}\ket{l'sj}=0 \label{} \end{align} for all $n,l,l'(\neq l),s,j$. \section{Enhanced corrections at higher orders} \label{s4} The analysis of the previous section does not apply to the tensor operator in the third-order potential, \begin{align} V^{(3)}_T=& \frac{C_F\alpha_s}{2m^2r^3} \frac{\alpha_s}{\pi} \left[ \frac{1}{72}\Bigl\{ C_A(97+18 L_m-18 L_r) +4(9C_F-5T_Fn_l)\Bigr\} +\frac{1}{24}\beta _0(3L_r-8) \right] S_{12}, \label{} \end{align} where $L_m=\log (\mu ^2/m^2)$, $L_r=\log (e^{2\gamma _E} \mu ^2r^2)$. The essential difference between $V^{(2)}_T$ and $V^{(3)}_T$ originates from the $\log r$ terms, resulting in non-zero radial matrix elements for $| l-l' |\geq 2$. Indeed the off-diagonal matrix elements $\bra{nlsj}V^{(3)}_T\ket{nl'sj}$ have non-zero values. It follows that the second-order correction to the wave function, $c^{(2)}$ in eq.\eqref{eq:c2}, is non-zero, and that the cross-term of $\braket{V^{(3-i)}}$ and $d^{(i)}$ in eq.\eqref{eq:d3} is non-zero as well. 
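How the $\log r$ terms spoil the cancellation can be illustrated with the same kind of numerical sketch as before (ours; unnormalized hydrogen-like wave functions, Bohr radius set to unity), replacing the weight $1/r^3$ by $\log r/r^3$ in the radial integral:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import genlaguerre

def R(n, l, r):
    # unnormalized hydrogen-like radial wave function, Bohr radius = 1
    return r**l * np.exp(-r / n) * genlaguerre(n - l - 1, 2 * l + 1)(2 * r / n)

def radial_me(n, l, lp, weight):
    # unnormalized <n l| weight(r)/r^3 |n l'>: integrand R R' weight(r) / r
    val, _ = quad(lambda r: R(n, l, r) * R(n, lp, r) * weight(r) / r,
                  0, np.inf)
    return val

print(radial_me(3, 0, 2, lambda r: 1.0))  # vanishes for |l-l'| = 2
print(radial_me(3, 0, 2, np.log))         # non-zero once log r is inserted
```

For $n=3$, $l=0$, $l'=2$ the plain $1/r^3$ element vanishes, while the $\log r/r^3$ element is finite ($-9/8$ in these unnormalized units), consistent with the statement that the $\log r$ terms in $V^{(3)}_T$ generate non-vanishing $|l-l'|=2$ matrix elements.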
Up to this point we considered enhanced contributions from the intermediate states whose degeneracy is lifted by the order $\varepsilon$ perturbation. Let us comment on contributions from the states whose degeneracy is lifted first at order $\varepsilon^2$, namely, from the multiplets of fine and hyperfine splittings [with the same $(n,l)$ but different $(s,j)$]. Such contributions, if they exist, would give rise to more pronounced enhancement effects than those considered so far, since the level splittings which enter the denominator are order $\varepsilon^2$. In fact such contributions are absent to all orders in $\varepsilon$, since the transition matrix elements of $V^{(i)}$ between the states with the same $(n,l)$ but different $(s,j)$ vanish by parity and charge-parity conservation of QCD.\footnote{ The parity and charge-parity of the heavy quarkonium system of the same flavor are given, respectively, by $P=(-1)^{l+1}$ and $C=(-1)^{l+s}$ \cite{Lucha:1991vn}. Hence, variations of $l$ and $s$, respectively, are allowed only by even numbers. Since $s=0,1$, this means $s$ cannot change and $l$ can change only by 0 or 2. } Thus, the order $\varepsilon^2$ quasi-degeneracy of the multiplets with the same $(n,l)$ does not give enhanced contributions to either the energy level or the wave function. Taking into account the fact that $c^{(1)}=0,\ c^{(2)}\neq 0$ in eq.\eqref{eq:wavefunc}, the fourth- and fifth-order corrections to the energy level are given by \begin{align} E^{(4)}_{nlsj} &=\bra{nl}V^{(4)}\ket{nl} +\sum _{i=1}^3 \sum _{n'\neq n,\ l'} \frac{\bra{nl}V^{(4-i)}\ket{n'l'}d^{(i)}_{n'l';nl}}{E^{(0)}_{n}-E^{(0)}_{n'}}\,, \label{eq:energy4}\\ E^{(5)}_{nlsj} &=\bra{nl}V^{(5)}\ket{nl} +\sum _{i=1}^4 \sum _{n'\neq n,\ l'} \frac{\bra{nl}V^{(5-i)}\ket{n'l'}d^{(i)}_{n'l';nl}}{E^{(0)}_{n}-E^{(0)}_{n'}} +\sum_{l'\neq l} \frac{\bra{nl}V^{(3)}\ket{nl'}c^{(2)}_{nl';nl}}{E^{(1)}_{nl}-E^{(1)}_{nl'}}\,. 
\label{eq:energy5} \end{align} Note that the fourth-order correction does not contain the enhanced corrections from quasi-degeneracy, which can be seen from the absence of the $c$-term in eq.\eqref{eq:energy4}. \section{Conclusions} We have reconsidered the perturbation theory for the Schr\"{o}dinger equation of the heavy quarkonium system in pNRQCD, taking into account contributions of quasi-degenerate states. As expected, there are enhanced contributions which rearrange the order counting of the expansion. (In other words, the issue can be regarded as the question of the proper order counting of the $l$-changing mixing effects.) At the (naive) lowest order, the effect from the quasi-degenerate states is induced only by one type of off-diagonal matrix elements in the $l$-space, $\bra{nlsj}V^{(2)}_T\ket{nl'sj}$. This matrix element vanishes, hence the quasi-degenerate correction vanishes at the naive lowest order. As a result, this type of correction is expected to appear first at the fifth order in the energy level and at the second order in the wave function. The contribution to the heavy quark threshold cross section or leptonic decay width is expected to start at the fourth order. Thus, these specific corrections turn out to be irrelevant for the current highest-level perturbative QCD calculations (energy levels and leptonic decay width at NNNLO). We think that this fact itself should be stated clearly. It should also be noted that, since the enhanced corrections to the wave function are expected to appear already at the second order, they may be important for other physical observables, such as level transition rates in certain channels. 
We also note that, even if the flavors of the quark and antiquark are different (such as in the $B_c$ system), enhanced quasi-degenerate contributions to the NNNLO spectrum \cite{Peset:2015vvi} ($l$-changing mixing) vanish similarly to the equal-flavor case.\footnote{ In this case there are well-known mixing effects between different $s$ states from the second-order energy levels, which correspond to the different diagonalizing bases for $V^{(2)}$ for the same $(n,l)$. } These subjects will be discussed separately. We remark that, with a sufficiently wide knowledge of the literature, one could reach the same conclusion without any computation, since all the necessary results were already available. In this sense there is hardly any truly new ingredient in the present work. We find, however, that it is not easy to collect these pieces of information together, in particular since the old results on perturbative computations of bound states before the advent of modern EFT are scattered throughout the literature in an unorganized way. At least it would be meaningful to bring these results to the attention of experts at the forefront. \section*{Appendix: General terms of perturbative series} In this appendix, we derive the general expressions for the energy level correction and the wave function correction at the $N$-th order, in the case that the degeneracy is lifted by the first-order and second-order corrections stepwise. The expressions shown in Secs.~\ref{sec:2} and \ref{s4} are special cases of the general expressions eqs.\eqref{app:energy}, \eqref{app:d2} and \eqref{app:c2}. We expand the Schr\"{o}dinger equation as \begin{align} \left( H^{(0)}+\sum_{i=1}^{\infty}\varepsilon^i V^{(i)} \right) \left( \sum_{i'=0}^{\infty} \varepsilon^{i'} \Ket{\Psi_{nlsj}^{(i')}} \right) =\left( \sum_{i=0}^{\infty} \varepsilon^i E_{nlsj}^{(i)} \right) \left( \sum_{i'=0}^{\infty} \varepsilon^{i'} \Ket{\Psi_{nlsj}^{(i')}} \right). 
\label{app:sch} \end{align} The labels $s$ and $j$ are suppressed in the rest of this appendix. For notational simplicity, we redefine the wave function corrections as $\tilde{c}^{(i)}_{nl';nl}=c^{(i)}_{nl';nl}/(E^{(1)}_{nl}-E^{(1)}_{nl'})$ and $\tilde{d}^{(i)}_{n'l';nl}=d^{(i)}_{n'l';nl}/(E^{(0)}_{n}-E^{(0)}_{n'})$. Then the full wave function is given by \begin{align} \Ket{\Psi_{nl}}&= \sum_{i=0}^{\infty} \varepsilon^{i} \Ket{\Psi_{nl}^{(i)}} = \ket{nl} +\sum _{i=1} ^\infty \sum_{l'\neq l}^{} \varepsilon^i \ket{nl'} \tilde{c}^{(i)}_{nl';nl} +\sum _{i=1}^\infty \sum_{n'\neq n,\ l'}^{} \varepsilon^i \ket{n'l'} \tilde{d}^{(i)}_{n'l';nl}. \label{app:wavefunc} \end{align} In addition, we write $V^{(i)}_{n'l';nl}=\bra{n'l'}V^{(i)}\ket{nl}$ in the following. The coefficient of $\varepsilon^N$ of the Schr\"{o}dinger equation~\eqref{app:sch} reads \begin{align} V^{(N)}\ket{nl} +\sum_{i=1}^{N-1} V^{(N-i)}\Ket{\Psi^{(i)}_{nl}} +H^{(0)}\Ket{\Psi^{(N)}_{nl}} =E^{(N)}_{nl}\ket{nl} +\sum_{i=1}^{N-1} E^{(N-i)}_{nl}\Ket{\Psi^{(i)}_{nl}} +E^{(0)}_{n}\Ket{\Psi^{(N)}_{nl}}. \label{app:eq} \end{align} The correction to the energy level is obtained by multiplying eq.\eqref{app:eq} by $\bra{nl}$ from the left. Since $\Braket{nl|\Psi ^{(i)}_{nl}}=0$ for $i\geq 1$, the only remaining term on the right-hand side is $E^{(N)}_{nl}$. Then we obtain \begin{align} E^{(N)}_{nl} =V^{(N)}_{nl;nl} +\sum_{i=1}^{N-1} \sum_{l''\neq l}^{} V^{(N-i)}_{nl;nl''} \tilde{c}^{(i)}_{nl'';nl} +\sum_{i=1}^{N-1} \sum_{n''\neq n,\ l''}^{} V^{(N-i)}_{nl;n''l''} \tilde{d}^{(i)}_{n''l'';nl}. \label{app:energy} \end{align} Next we consider the correction to the wave function. 
Multiplying eq.\eqref{app:eq} by $\bra{n'l'}$ from the left, we obtain \begin{align} V^{(N)}_{n'l';nl} +\sum_{i=1}^{N-1} \sum_{l''\neq l}^{} V^{(N-i)}_{n'l';nl''} \tilde{c}^{(i)}_{nl'';nl} +\sum_{i=1}^{N-1} \sum_{n''\neq n,\ l''}^{} V^{(N-i)}_{n'l';n''l''} \tilde{d}^{(i)}_{n''l'';nl} +E^{(0)}_{n'} \tilde{d}^{(N)}_{n'l';nl} ~~~~~~~~ \nonumber\\ =\sum_{i=1}^{N-1} E^{(N-i)}_{nl} \tilde{d}^{(i)}_{n'l';nl} +E^{(0)}_{n} \tilde{d}^{(N)}_{n'l';nl}, \label{app:d} \end{align} where $\tilde{d}^{(N)}_{n'l';nl}$ has been separated out of the summation. Solving eq.\eqref{app:d} for $d^{(N)}_{n'l';nl}$, we obtain \begin{align} d^{(N)}_{n'l';nl} = V^{(N)}_{n'l';nl} +\sum_{i=1}^{N-1} \sum_{l''\neq l}^{} V^{(N-i)}_{n'l';nl''} \tilde{c}^{(i)}_{nl'';nl} +\sum_{i=1}^{N-1} \sum_{n''\neq n,\ l''}^{} V^{(N-i)}_{n'l';n''l''} \tilde{d}^{(i)}_{n''l'';nl} -\sum_{i=1}^{N-1} E^{(N-i)}_{nl} \tilde{d}^{(i)}_{n'l';nl}. \label{app:d2} \end{align} Note that the left-hand side of eq.~\eqref{app:d2} is not $\tilde{d}^{(N)}_{n'l';nl}$ but $d^{(N)}_{n'l';nl}$. The case with $\tilde{c}^{(N)}_{nl';nl}$ is similar to that of $\tilde{d}^{(N)}_{n'l';nl}$ except that we first need to replace $N\to N+1$ in eq.\eqref{app:eq}. Then multiplying it by $\bra{nl'}$ from the left, we obtain \begin{align} V^{(N+1)}_{nl';nl} +\sum_{i=1}^{N-1} \sum_{l''\neq l}^{} V^{(N-i+1)}_{nl';nl''} \tilde{c}^{(i)}_{nl'';nl} +E^{(1)}_{nl'} \tilde{c}^{(N)}_{nl';nl} +\sum_{i=1}^{N} \sum_{n''\neq n,\ l''}^{} V^{(N-i+1)}_{nl';n''l''} \tilde{d}^{(i)}_{n''l'';nl} ~~~~~~~~ \nonumber\\ =\sum_{i=1}^{N-1} E^{(N-i+1)}_{nl} \tilde{c}^{(i)}_{nl';nl} + E^{(1)}_{nl} \tilde{c}^{(N)}_{nl';nl}. \label{app:c} \end{align} Solving for $c^{(N)}_{nl';nl}$, we obtain \begin{align} c^{(N)}_{nl';nl} =V^{(N+1)}_{nl';nl} +\sum_{i=1}^{N-1} \sum_{l''\neq l}^{} V^{(N-i+1)}_{nl';nl''} \tilde{c}^{(i)}_{nl'';nl} +\sum_{i=1}^{N} \sum_{n''\neq n,\ l''}^{} V^{(N-i+1)}_{nl';n''l''} \tilde{d}^{(i)}_{n''l'';nl} -\sum_{i=1}^{N-1} E^{(N-i+1)}_{nl} \tilde{c}^{(i)}_{nl';nl}. 
\label{app:c2} \end{align} Again we note the difference between $c$ and $\tilde{c},\tilde{d}$ on both sides. Thus, we obtain the energy level correction~\eqref{app:energy} and the wave function corrections~\eqref{app:d2}, \eqref{app:c2} at the $N$-th order in a recursive form. \section*{Acknowledgements} The authors are grateful to Jorge Segovia for drawing their attention to the corrections from quasi-degenerate states. The works of Y.K.\ and Y.S., respectively, were supported in part by Grant-in-Aid for scientific research Nos.\ 26400255 and 26400238 from MEXT, Japan.
\section{Introduction} Scaling symmetries of space and time shape the modern theory of developed turbulence~\cite{frisch1999turbulence}, which assumes that the equations of motion for a velocity field $\mathbf{u}(\mathbf{r},t)$ are invariant with respect to the scaling transformations \begin{equation} \label{eq1} t,\mathbf{r},\mathbf{u} \ \mapsto\ \lambda^{1-h} t,\lambda\mathbf{r},\lambda^h\mathbf{u} \end{equation} for arbitrary $\lambda > 0$ and $h \in \mathbb{R}$. Notice that this property refers to a wide (so-called inertial) interval of scales, at which both the forcing and viscous terms are negligible. Multi-scale systems of this kind may possess a fascinating property of \textit{spontaneous stochasticity}: a small-scale initial uncertainty develops into a randomly chosen large-scale state in a finite time, and this behavior is not sensitive to the nature and magnitude of the uncertainty~\cite{lorenz1969predictability,leith1972predictability,ruelle1979microscopic,eyink1996turbulence,falkovich2001particles,boffetta2001predictability,palmer2014real,thalabard2020butterfly}. A simpler form of this phenomenon is the \textit{Lagrangian spontaneous stochasticity} (LSS) of particle trajectories in a turbulent (non-differentiable) velocity field, also known as the Richardson super-diffusion~\cite{frisch1999turbulence,falkovich2001particles}: two particles diverge to distant random states in finite time independently of their initial separation. Another intriguing form is the \textit{Eulerian spontaneous stochasticity} (ESS) of the velocity field itself: an infinitesimal small-scale noise triggers stochastic evolution of the velocity field at finite scales and times. The consequences are both theoretical, revising the role of stochasticity in multi-scale classical systems, and practical, e.g., for weather prediction~\cite{palmer2019stochastic,palmer2000predicting}. 
The ESS suggests a potentially new path for understanding the inviscid limit in the developed (Navier--Stokes) turbulence, which must cope with a number of paradoxes, such as the recently discovered wild and non-unique dissipative weak solutions; see e.g.~\cite{buckmaster2021convex,de2021weak}. Unlike the LSS, which can be studied in various models~\cite{bernard1998slow,eijnden2000generalized,kupiainen2003nondeterministic,eyink2013flux,drivas2017lagrangian,drivas2020statistical,eyink2020renormalization}, the current knowledge on the ESS is mostly limited to numerical simulations~\cite{palmer2014real,fjordholm2016computation,mailybaev2016spontaneously,biferale2018rayleigh,mailybaev2017toward,thalabard2020butterfly}. A rigorous theory of ESS remains elusive due to its sophisticated (infinite-dimensional) character. In this paper, we propose an artificial model, which is constructed as an infinite-dimensional extension of the (hyperbolic) Arnold's cat map~\cite{arnold1968ergodic} and yields a rigorously solvable example of ESS. This model is a formally deterministic system with a scaling symmetry, which possesses non-unique (uncountably many) solutions, including analogues of wild solutions known for the Euler equations of incompressible ideal fluid~\cite{buckmaster2021convex}. However, solutions are made unique by introducing a viscous-like regularization. By mimicking the Navier--Stokes turbulence~\cite{bardos2013mathematics}, we study the inviscid limit and prove that it exists for subsequences, but yields uncountably many limiting solutions depending on the chosen subsequence. Then, we prove that adding a random perturbation as a part of the regularization yields a unique inviscid limit in the stochastic sense, i.e., it yields a unique and universal probability measure solving the original formally deterministic system with deterministic initial conditions. 
This probability measure defines a stochastic process with Markovian properties, and its universality means that it does not depend on a specific form of a random perturbation. The counterintuitive property of this spontaneously stochastic solution is that it assigns equal probability (uniform probability density) to all non-unique solutions. The rigorous answers produced by our model shed light on new ways of understanding the problem of non-uniqueness in the developing mathematical theory of turbulence~\cite{de2021weak}. The paper has the following structure. Section~\ref{sec2} introduces the model and describes basic properties of non-unique solutions. Section~\ref{sec3} defines regularized solutions and studies non-unique inviscid (subsequence) limits. Section~\ref{sec4} introduces random regularization and formulates our main result on the existence and uniqueness of a spontaneously stochastic solution, which is proved in Section~\ref{sec5}. Section~\ref{sec6} investigates the convergence issues and presents results of numerical simulations. Further applications of obtained results are discussed in Section~\ref{sec7}. \section{Model}\label{sec2} We consider variables $u_n(t)$ depending on time $t$ and integer indices $n \in \mathbb{Z}^+ = \{0,1,2,\ldots\}$. One can see these variables as describing a multi-scale system with a geometric sequence of spatial scales $\ell_n = \lambda^{-n}$ for some $\lambda > 0$. In this case, the discrete analogue of scaling symmetry (\ref{eq1}) with $h = 0$ becomes \begin{equation} \label{eq2} t,u_n \ \mapsto \ \lambda t,u_{n+1}, \end{equation} where the index shift $n \mapsto n+1$ reflects the spatial scaling relation $\ell_n = \lambda\ell_{n+1}$. 
Notice that (\ref{eq2}) is the symmetry of the Euler equations for incompressible ideal fluid, in which case the variable $u_n$ can be introduced by low/high-pass filters or wavelet transforms of the velocity field in the range of scales between $\ell_n$ and $\ell_{n+1}$~\cite{frisch1999turbulence}. We construct an artificial model with symmetry (\ref{eq2}) by setting $\lambda = 2$ and defining variables $u_n(t)$ on the two-dimensional torus $\mathbb{T}^2 = \mathbb{R}^2/\mathbb{Z}^2$ at discrete times \begin{equation} \label{eq3} t \in \tau_n \mathbb{Z}^+ = \{0,\tau_n,2\tau_n,\ldots \}, \quad \tau_n = 2^{-n}, \end{equation} where $\tau_n$ is interpreted as the ``turn-over'' time at scale $\ell_n$. As shown in Fig.~\ref{fig1}, all scales and corresponding times define the self-similar lattice \begin{equation} \label{eq3b} \mathcal{L} = \{(n,t): n \in \mathbb{Z}^+,\ t \in \tau_n \mathbb{Z}^+\}. \end{equation} Our model is defined by the deterministic relation \begin{equation} \label{eq4} u_n(t+\tau_n) = Au_n(t)+Au_{n+1}(t) \ \mathrm{mod}\ 1, \end{equation} where the symmetric $2 \times 2$ matrix $A$ defines the Arnold's cat map~\cite{arnold1968ergodic} \begin{equation} \label{eq6} A: (x,y) \mapsto (2x+y,x+y) \ \mathrm{mod}\ 1,\quad (x,y) \in \mathbb{T}^2. \end{equation} Relation (\ref{eq4}) defines evolution at scale $\ell_n$ over a single turn-over time $\tau_n$. Here we limited the inter-scale couplings to the same and smaller scales, $\ell_n$ and $\ell_{n+1}$, and took advantage of the fact that the map $A$ is linear, hyperbolic, invertible and area-preserving. These properties greatly facilitate the analysis of the model, and we discuss further generalizations later. Relation (\ref{eq4}) is invariant with respect to the scaling symmetry (\ref{eq2}). The resulting structure of the whole system is presented schematically in Fig.~\ref{fig1}. 
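The properties of $A$ invoked above are immediate to verify numerically; the following minimal sketch (ours, for illustration) checks area preservation, integer invertibility and hyperbolicity:

```python
import numpy as np

A = np.array([[2, 1], [1, 1]])

# area-preserving: det A = 1, hence A has an inverse with integer entries
assert round(np.linalg.det(A)) == 1
A_inv = np.array([[1, -1], [-1, 2]])          # integer inverse of A
assert np.array_equal(A @ A_inv, np.eye(2, dtype=int))

# hyperbolic: eigenvalues (3 -+ sqrt(5))/2, one strictly greater than 1
lam = np.linalg.eigvalsh(A)                    # A is symmetric
assert lam.max() > 1 > lam.min() > 0
print(lam)
```

The expanding eigenvalue $(3+\sqrt{5})/2 \approx 2.618$ is what later stretches small-scale perturbations across the torus.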
\begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{fig1.pdf} \caption{Structure of the multi-scale map for the variables $u_n(t)$ corresponding to scales $\ell_n$ and discrete times $t \in \tau_n \mathbb{Z}^+$. Gray arrows represent the Arnold's cat map (shown on the top of the figure), which appear in the coupling relation (\ref{eq4}) and correspond to one turn-over time $\tau_n$. White circles correspond to initial conditions. Green ($\mathcal{G}$), red ($\mathcal{R}$) and small black ($\mathcal{B}$) circles denote, respectively, the next-time variables, the variables taking arbitrary values in Proposition~\ref{theorem1}, and the remaining variables.} \label{fig1} \end{figure} We assume an arbitrary deterministic initial condition \begin{equation} \label{eq7} u_n(0) = u_n^0,\quad n \in \mathbb{Z}^+. \end{equation} We say that the infinite sequence $\left(u_n(t)\right)_{(n,t) \in \mathcal{L}}$ is a solution of the initial value problem, if it satisfies relations (\ref{eq4})--(\ref{eq7}) for all $(n,t) \in \mathcal{L}$. For describing all solutions, we split the lattice, $\mathcal{L} = \mathcal{I} \cup \mathcal{G} \cup \mathcal{R} \cup \mathcal{B}$, as shown in Fig.~\ref{fig1}. Here $\mathcal{I} = \left\{(n,0): n \in \mathbb{Z}^+\right\}$ are indices of initial conditions and $\mathcal{G} = \left\{(n,\tau_n): n \in \mathbb{Z}^+\right\}$ of the next-time variables. The remaining sets of indices are defined as \begin{align} \label{eq8b} \mathcal{B} = & \{(n+1,(2j+2)\tau_{n+1}): n,j \in \mathbb{Z}^+ \}, \\[3pt] \mathcal{R} = & \{(0,j+2): j \in \mathbb{Z}^+\} \cup \{(n+1,(2j+3)\tau_{n+1}): n,j \in \mathbb{Z}^+\}. \label{eq8c} \end{align} \begin{proposition} \label{theorem1} For any given initial condition (\ref{eq7}), there is an uncountable number of solutions of system (\ref{eq4}). 
Each solution is determined by initial conditions $u_n(t) \in \mathbb{T}^2$ for $(n,t) \in \mathcal{I}$ and arbitrary values $u_n(t) \in \mathbb{T}^2$ for $(n,t) \in \mathcal{R}$, in which case the remaining variables with $(n,t) \in \mathcal{G} \cup \mathcal{B}$ are defined uniquely. \end{proposition} \begin{proof} Let us write equation (\ref{eq4}) as \begin{equation} \label{eq9} u_{n+1}(t) = A^{-1}u_n(t+\tau_n)-u_n(t) \ \mathrm{mod}\ 1. \end{equation} Then, given arbitrary $N \in \mathbb{Z}^+$ and inspecting Fig.~\ref{fig1}, one can verify that all variables $u_n(t)$ with $n \le N$ and $(n,t) \in \mathcal{G}\cup\mathcal{B}$ are uniquely defined by the initial conditions at $(n,t) \in \mathcal{I}$ and the variables with $n < N$ and $(n,t) \in \mathcal{R}$. Hence, all equations (\ref{eq4}) and initial conditions (\ref{eq7}) are satisfied for arbitrary $u_n(t) \in \mathbb{T}^2$ at $(n,t) \in \mathcal{R}$ and uniquely defined variables at $(n,t) \in \mathcal{G}\cup\mathcal{B}$. \end{proof} We notice that solutions of Proposition~\ref{theorem1} include analogues of the so-called wild weak solutions for the Euler equations in fluid dynamics~\cite{buckmaster2021convex}. These are unphysical solutions with finite support in time, i.e., nonzero for $t \in (t_1,t_2)$ but vanishing both for $t \le t_1$ and $t \ge t_2$. Such solutions are constructed in our model by choosing the variables with $(n,t) \in \mathcal{R}$ to be zero for times $t \notin (t_1+1,t_2)$ and nonzero for $t \in (t_1+1,t_2)$, where $t_1$ and $t_2$ are arbitrary positive integers. One can show using Proposition~\ref{theorem1} and relation (\ref{eq9}) that this yields uncountably many wild solutions; see Fig.~\ref{fig_wild}. \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{fig_wild.pdf} \caption{Examples of wild solutions with compact support in time: all variables vanish for $t \le 1$ and $t \ge 4$. 
Here all white circles correspond to zero variables and red circles denote arbitrary nonzero variables, which uniquely define the variables denoted by small black circles.} \label{fig_wild} \end{figure} \section{Regularized solutions} \label{sec3} Introducing a regularized system is a conventional way of dealing with non-uniqueness. For any integer $N \ge 1$, we define the $N$-regularized system for the finite number of variables $u_0(t),\ldots,u_N(t)$ by setting the remaining variables to zero: $u_n(t) \equiv (0,0)$ for $n > N$. Thus, the set of variables reduces to $(u_n(t))_{(n,t) \in \mathcal{L}_N}$, where $\mathcal{L}_N = \left\{(n,t): n \in \{0,\ldots,N\},\ t \in \tau_n \mathbb{Z}^+\right\}$ is the truncated lattice. Equations of the $N$-regularized system are given by (\ref{eq4}) for $n < N$ with the equation for $n = N$ reduced to the form $u_n(t+\tau_n) = Au_n(t)$. The initial conditions are defined by relations (\ref{eq7}) limited to the scales $n \le N$. This truncation resembles the viscous regularization in fluid dynamics, where the viscous term of the Navier--Stokes equations suppresses the turbulent motion below a certain (so-called Kolmogorov) microscale $\eta \sim \ell_N$~\cite{frisch1999turbulence}. One can easily see from Fig.~\ref{fig1} that $N$-regularized solutions, which we denote by $u_n^{(N)}(t)$, are uniquely defined by the initial conditions. One can always choose a subsequence $N_1 < N_2 < \cdots$ such that the $N$-regularized solutions converge for all $n$ and $t$ (see~\cite[Theorem 3.10.35]{engelking1989general}): \begin{equation} \label{eq10} u_n(t) = \lim_{i \to \infty} u_n^{(N_i)}(t), \end{equation} where $u_n(t)$ is some solution from Proposition~\ref{theorem1}. For example, for vanishing initial conditions, this limit yields the vanishing solution at all times $t \ge 0$, therefore ruling out all wild solutions mentioned above. 
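Since the $N$-regularized system is uniquely solvable, it is straightforward to simulate. The sketch below (ours; the dyadic-grid implementation and all names are for illustration) evolves the truncated system on the finest grid $\tau_N$ and, as a consistency check, compares $u_0^{(N)}(2)$ with the closed-form expression (\ref{eq11}) used in the proof of Proposition~\ref{prop1} below:

```python
import numpy as np

A = np.array([[2, 1], [1, 1]])

def simulate_u0(u0, N, t_final):
    """Evolve the N-regularized system and return u_0(t_final).

    u0: (N+1, 2) array of initial values on the torus [0,1)^2;
    scales n > N are frozen at zero, as in the truncation."""
    u = {(n, 0): np.asarray(u0[n], dtype=float) for n in range(N + 1)}
    for k in range(t_final * 2**N):            # time in units of tau_N
        for n in range(N + 1):
            s = 2**(N - n)                      # tau_n / tau_N
            if k % s == 0:                      # u_n updates every tau_n
                nxt = u[(n + 1, k)] if n < N else np.zeros(2)
                u[(n, k + s)] = (A @ (u[(n, k)] + nxt)) % 1.0
    return u[(0, t_final * 2**N)]

def u0_at_2_closed_form(u0, N):
    # closed form for u_0^{(N)}(2), cf. eq. (11) in the text
    s = sum(np.linalg.matrix_power(A, n) @ u0[n] for n in range(1, N + 1))
    return (np.linalg.matrix_power(A, 2) @ u0[0] + (A + A @ A) @ s) % 1.0

rng = np.random.default_rng(1)
u0 = rng.random((6, 2))                         # N = 5
print(simulate_u0(u0, 5, 2))
print(u0_at_2_closed_form(u0, 5))
```

The two outputs agree modulo 1 to machine accuracy, confirming that the lattice recursion and the closed form describe the same regularized solution.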
Similarly to turbulence models~\cite{mailybaev2016spontaneous}, solutions obtained in the regularization limit (\ref{eq10}) are non-unique in general, because different subsequences yield different solutions: \begin{proposition} \label{prop1} Consider the initial condition (\ref{eq7}) with all variables equal to the same value $u_n^0 = a$. Then, for almost every choice of $a \in \mathbb{T}^2$, there exists an infinite (uncountable) number of different solutions obtained as subsequence limits (\ref{eq10}) of $N$-regularized systems. \end{proposition} \begin{proof} Let us focus on the specific variable $u_0^{(N)}(2)$. By induction with relation (\ref{eq4}) represented by gray arrows in Fig.~\ref{fig1}, one can verify the formula \begin{equation} \label{eq11} u_0^{(N)}(2) = A^2 u_0^0 + (A+A^2)\sum_{n = 1}^{N} A^nu_n^0 \ \ \mathrm{mod}\ 1. \end{equation} Taking into account that all initial values are equal to $a$, and $\sum_{n = 1}^N A^n = (A-I)^{-1}(A^{N+1}-A)$ with the identity map $I$, one reduces (\ref{eq11}) to the form \begin{equation} \label{eq12} u_0^{(N)}(2) = (A^2-B)a+BA^N a \ \ \mathrm{mod}\ 1, \end{equation} where $B = A^2(I+A)(A-I)^{-1}$ is a nonsingular matrix with integer components. The ergodicity of the Arnold's cat map implies that the sequence $A^Na$ with $N \in \mathbb{Z}^+$ is dense on the torus for almost every $a \in \mathbb{T}^2$. Let us consider such $a$ and an arbitrary $c\in \mathbb{T}^2$, and define $b \in \mathbb{T}^2$ such that $(A^2-B)a+Bb=c$. Since $A^Na$ is a dense orbit, we can choose an infinite subsequence $N_i$ such that $A^{N_i}a \to b$ as $N_i \to \infty$. Then, expression (\ref{eq12}) yields \begin{equation} \label{eq12bb} \lim_{i \to \infty} u_0^{(N_i)}(2)=c. \end{equation} Similarly to (\ref{eq10}) we can take a subsequence $N_{i_k}$ within the sequence $N_i$, so that $u_n^{(N_{i_k})}(t)$ converges to a solution $u_n(t)$ for all $n$ and $t$. 
In particular, this implies that $u_0(2)=c$ for arbitrary $c\in \mathbb{T}^2$, providing an uncountable number of limits for the regularized system. \end{proof} Proposition~\ref{prop1} shows that the regularization does not serve as a proper selection criterion among infinitely many solutions given by Proposition~\ref{theorem1}. As we show in the next section, there is a deep reason for this failure of the regularization strategy. Contrary to the common intuition, all solutions of Proposition~\ref{theorem1} become equally relevant when the stochastic form of regularization is considered. \section{Spontaneously stochastic solution}\label{sec4} Let us modify the definition of the $N$-regularized solution by adding a random small-scale perturbation. For simplicity, we consider a single random number $\xi \in \mathbb{T}^2$ added to the initial value at the cut-off scale $n = N$ as \begin{equation} \label{eq13} u_N^{(N)}(0) = u_N^0+\xi, \end{equation} with $\xi$ having a Lebesgue integrable probability density $\rho(\xi)$. This formulation is not only technically convenient, but also highlights an exceptional role of even a single source of randomness at small scales. Generalization to multiple random sources is rather straightforward. Let us consider the mapping \begin{equation} \label{eq13IC} \left(u_0^{0},\ldots,u_N^{0}\right) \mapsto \left(u_n^{(N)}(t)\right)_{(n,t) \in \mathcal{L}} \end{equation} relating deterministic initial conditions with deterministic $N$-regularized solutions; recall that $u_n^{(N)}(t) \equiv 0$ for $n > N$. 
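Before the measure-theoretic formulation, the mechanism behind (\ref{eq13}) can be illustrated numerically. In the sketch below (ours; exact rational arithmetic is used to avoid round-off in the large matrix powers) we take vanishing initial data, so that by formula (\ref{eq11}) the perturbation propagates as $u_0^{(N)}(2) = (A+A^2)A^N\xi \ \mathrm{mod}\ 1$, and sample $\xi$ uniformly from a square of size $10^{-6}$:

```python
import random
from fractions import Fraction

A = ((2, 1), (1, 1))

def mat_mul(M, K):
    return tuple(tuple(sum(M[i][k] * K[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def mat_pow(M, n):
    R = ((1, 0), (0, 1))
    for _ in range(n):
        R = mat_mul(R, M)
    return R

# coupling matrix (A + A^2) A^N transporting xi from scale N to u_0(2)
N = 60
A2 = mat_mul(A, A)
ApA2 = tuple(tuple(A[i][j] + A2[i][j] for j in range(2)) for i in range(2))
C = mat_mul(ApA2, mat_pow(A, N))

random.seed(0)
samples = []
for _ in range(2000):
    # xi uniform on [0, 1e-6)^2, represented exactly as rationals
    xi = (Fraction(random.randrange(10**6), 10**12),
          Fraction(random.randrange(10**6), 10**12))
    u0_first = (C[0][0] * xi[0] + C[0][1] * xi[1]) % 1   # first component
    samples.append(float(u0_first))

print(min(samples), sum(samples) / len(samples), max(samples))
```

For $N = 60$ the huge entries of $(A+A^2)A^N$ wrap the tiny perturbation around the torus many times, and the samples of $u_0^{(N)}(2)$ spread over essentially all of $[0,1)$ with mean close to $1/2$; for small $N$ the same experiment leaves the output concentrated near zero. This is the spontaneous amplification of small-scale noise that the limiting measure $\mu$ captures.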
For the new random initial condition (\ref{eq13}), we introduce the full vector of initial states as \begin{equation} \label{eq13ICa} (\zeta_0,\ldots,\zeta_N) = \left(u_0^{0},\ldots,u_{N-1}^{0},u_N^{0}+\xi\right) \in \mathbb{T}^{2(N+1)}, \end{equation} and define the corresponding probability measure as \begin{equation} \label{eq13ICc} d\mu_{\mathrm{ini}}^{(N)} = \left(\prod_{n = 0}^{N-1} \delta(\zeta_n-u_n^{0})d\zeta_n\right) \rho(\zeta_N-u_N^{0})d\zeta_N. \end{equation} This measure is a product of Dirac delta functions on the torus $\mathbb{T}^2$ for the first $N$ components and the shifted density $\rho(\xi)$ for the last component. We denote by $\mu^{(N)}$ the corresponding probability measure of $N$-regularized solutions, which is naturally obtained as the image (push-forward) of $\mu_{\mathrm{ini}}^{(N)}$ by the mapping (\ref{eq13IC}). Let us consider the standard product topology on the lattice $\mathcal{L}$ and Borel probability measures endowed with the weak-convergence topology; see e.g. \cite{tao2011introduction}. We say that the original problem (\ref{eq4})--(\ref{eq7}) has a \textit{spontaneously stochastic solution} described by a non-trivial measure $\mu$, if it is obtained as the limit \begin{equation} \label{eq_sslim} \mu = \lim_{N \to \infty} \mu^{(N)}, \end{equation} in which the regularization is removed. Now we can formulate our main result as \begin{theorem} \label{theorem2} Problem (\ref{eq4})--(\ref{eq7}) has a spontaneously stochastic solution given by the probability measure $\mu$ specified below by Eqs.~(\ref{eq15a})--(\ref{eq14}). This measure is universal, i.e., independent of the small-scale perturbation $\xi$. \end{theorem} We postpone the proof to the next section, and now describe the measure $\mu$. This measure is composed as a product of four pieces. 
The first two are the probability measures $\mu_{\mathcal{I}}$ and $\mu_{\mathcal{G}}$ corresponding, respectively, to deterministic initial conditions (\ref{eq7}) and the next-time variables $u_n(\tau_n)$ given uniquely by relation (\ref{eq4}): \begin{align} \label{eq15a} d\mu_{\mathcal{I}} = & \prod_{n \in \mathbb{Z}^+} \delta\left(u_n(0)-u_n^0\right)du_n(0), \\[5pt] \label{eq15b} d\mu_{\mathcal{G}} = &\ \prod_{n \in \mathbb{Z}^+} \delta\left(u_n(\tau_n)-Au_n^0-Au_{n+1}^0\right)\, du_n(\tau_n). \end{align} Here $du_n(t)$ defines the Lebesgue (uniform) probability measure on $\mathbb{T}^2$ corresponding to a specific variable $u_n(t)$. The third piece is given by the measure \begin{equation} \label{eq15c} d\mu_{\mathcal{R}} = \prod_{(n,t) \in \mathcal{R}} du_n(t), \end{equation} which describes a random uniform choice of variables $u_n(t)$ from the red set $\mathcal{R}$; see Fig.~\ref{fig1}. The last piece ensures that all relations (\ref{eq4}) are satisfied for $t > \tau_n$. These relations are verified at points of the black set $\mathcal{B}$ using Eq.~(\ref{eq4}) transformed to the form $u_n(t) = A^{-1}u_{n-1}(t+\tau_{n-1})-u_{n-1}(t)$; see Fig.~\ref{fig1}. Therefore, we define \begin{equation} \label{eq15d} d\mu_{\mathcal{B}} = \prod_{(n,t) \in \mathcal{B}} \delta\left( u_n(t)+u_{n-1}(t) -\,A^{-1}u_{n-1}(t+\tau_{n-1}) \right) du_n(t). \end{equation} The probability measure $\mu$ is given by the product \begin{equation} \label{eq14} d\mu = d\mu_{\mathcal{I}} \, d\mu_{\mathcal{G}} \, d\mu_{\mathcal{R}}\, d\mu_{\mathcal{B}}. \end{equation} It is remarkable that the spontaneously stochastic solution $\mu$ assigns equal probability (uniform distribution) to all solutions of Proposition~\ref{theorem1} independently of the random perturbation $\xi$. One can see, however, that the probability measure corresponding to the set of wild solutions discussed above is zero. Let us consider the evolution of the spontaneously stochastic solution $\mu$ by focusing on integer times. 
At each $t \in \mathbb{Z}^+$, the solution defines a probability measure $\mu_t$ on the infinite-dimensional space of variables $\mathbf{u}(t) = \left(u_0(t),u_1(t),u_2(t),\ldots\right)$. For example, projecting the measure (\ref{eq15a})--(\ref{eq14}) at $t = 1$, we have \begin{equation} \label{eq18b} d\mu_1 = \delta\left(u_0(1)-Au_0^0-Au_1^0\right)\, \prod_{n \in \mathbb{Z}^+}du_n(1), \end{equation} where $u_0(1)$ is deterministic and $u_n(1)$ with $n \ge 1$ are random (independent and uniformly distributed). Measure (\ref{eq18b}) defines a Markov kernel: given a specific initial state $\mathbf{u}(0)$ it yields the probability distribution for $\mathbf{u}(1)$. Hence, the dynamics of our model at integer times represents a Markov process. In our example, $\mu_t$ attains for $t \ge 2$ the equilibrium state $\mu_t \equiv \mu_{\mathrm{eq}}$, which is the uniform (Haar) measure on $\mathbb{T}^\infty$. As discussed in a different example later on, the convergence of $\mu_t$ as $t \to \infty$ does not always occur in a finite time. By inspecting the proofs of Propositions~\ref{theorem1} and \ref{prop1} and of Theorem~\ref{theorem2} one can generalize our results as follows. \begin{corollary} Let us consider a larger class of models given by relation (\ref{eq4}), where $A$ is an arbitrary $m \times m$ matrix with integer elements and $\det A =1$, thus defining an automorphism of the $m$-dimensional torus $\mathbb{T}^m$. Proposition~\ref{theorem1} remains valid with no additional hypothesis. Proposition \ref{prop1} is valid if we assume that $A$ does not possess eigenvalues which are roots of unity, in which case the induced automorphism of $\mathbb{T}^m$ is ergodic~\cite{Mane}. Finally, Theorem \ref{theorem2} remains valid under the two additional assumptions: \begin{itemize} \item[(i)] The dominant (maximum absolute value) eigenvalue $\lambda$ of $A$ is simple and greater than $1$. 
\item[(ii)] Let $v = (v_1,\ldots,v_m)$ be the eigenvector corresponding to $\lambda$ for the transposed matrix $A^T$. Then the numbers $v_1,\ldots,v_m$ and $1$ are rationally independent. \end{itemize} \end{corollary} \section{Proof of Theorem~\ref{theorem2}}\label{sec5} The weak convergence of measures $\mu^{(N)} \to \mu$ in the product topology~\cite{tao2011introduction} follows from the following property, which describes the convergence for all finite-dimensional projections. \begin{lemma} \label{theorem2b} Let $\left(u_n(t)\right)_{(n,t) \in \mathcal{S}} \in \mathbb{T}^{2d}$ be any finite set of variables indexed by $\mathcal{S} = \{(n_i,t_i):i = 1,\ldots,d\} \subset \mathcal{L}$. Let $\mu_{\mathcal{S}}$ and $\mu_{\mathcal{S}}^{(N)}$ be the corresponding probability measures obtained by projecting the measure $\mu$ from (\ref{eq15a})--(\ref{eq14}) and the stochastically regularized measure $\mu^{(N)}$. Then, \begin{equation} \label{eq18} \lim_{N \to \infty}\int \varphi \,d\mu^{(N)}_{\mathcal{S}} = \int \varphi \,d\mu_{\mathcal{S}} \end{equation} for any continuous observable $\varphi:\mathbb{T}^{2d} \mapsto \mathbb{R}$. \end{lemma} \begin{proof} First, we express explicitly an arbitrary variable $u_n(t)$ in terms of initial conditions $u_n^0$ and the random quantity $\xi$ of the stochastically $N$-regularized problem. For this purpose, we use the polynomials $P_{n,t}^{(N)}$ defined as \begin{equation} \label{eq19A} P_{n,t}^{(N)} (x) = \sum_{\textrm{all paths }p\textrm{ from }\atop (N,0)\textrm{ to }(n,t)} x^{|p|}, \end{equation} where the sum is taken over all paths following grey (right or up-right diagonal) arrows in Fig.~\ref{fig1}, which connect $(N,0)$ to $(n,t)$, and $|p|$ denotes the number of arrows in the path. 
Using iteratively the linear relation (\ref{eq4}) with the truncation property $u_n(t) = 0$ for $n > N$ and the initial conditions (\ref{eq7}) and (\ref{eq13}), one can check that \begin{equation} \label{eq20} u_n^{(N)}(t) = P_{n,t}^{(N)}(A) \xi+a_{n,t}^{(N)}\ \mathrm{mod}\ 1, \quad a_{n,t}^{(N)} = \sum_{k= n}^N P_{n,t}^{(k)}(A) u_{k}^0, \end{equation} where $a_{n,t}^{(N)} \in \mathbb{T}^2$ denotes the contribution from deterministic initial conditions. The probability measure $\mu_{\mathcal{S}}^{(N)}$ corresponding to a finite set of random variables $\left(u_n^{(N)}(t)\right)_{(n,t) \in \mathcal{S}} \in \mathbb{T}^{2d}$ is obtained using relation (\ref{eq20}) as \begin{equation} \label{eq20b} d\mu_{\mathcal{S}}^{(N)} = \int\left( \prod_{(n,t) \in \mathcal{S}} \delta\left(u_n^{(N)}(t)-P_{n,t}^{(N)}(A) \xi-a_{n,t}^{(N)}\right)du_n^{(N)}(t)\right) \rho(\xi)\,d\xi, \end{equation} where $du_n^{(N)}(t)$ denotes the Lebesgue (uniform) probability measure on $\mathbb{T}^2$ corresponding to a specific variable $u_n^{(N)}(t)$, and $\rho: \mathbb{T}^2 \mapsto \mathbb{R}^+$ is a measurable probability density for the random number $\xi$. Now let us analyse an arbitrary set $\mathcal{S}$. It is enough to consider $\mathcal{S} = \mathcal{L}_{n,t}$ in the rectangular region \begin{equation} \label{eq24L} \mathcal{L}_ {n,t}= \left\{(n',t'): n' \le n,\ t' \le t\right\} \end{equation} for any integer $n$ and $t$. Both measures $\mu$ and $\mu^{(N)}$ are supported on the linear subspace determined by relations (\ref{eq4}). Specifically, according to Proposition~\ref{theorem1} and Fig.~\ref{fig1}, variables at white nodes correspond to initial conditions, variables at green nodes are determined by initial conditions only, and variables at black nodes are given by initial conditions and by variables at red nodes of the set $\mathcal{R}$. These relations do not depend on $N$. 
Hence, for both projected measures $\mu_\mathcal{S}$ and $\mu_\mathcal{S}^{(N)}$ with $\mathcal{S} = \mathcal{L}_ {n,t}$, relations (\ref{eq4}) define variables $u_{n'}(t')$ from $\mathcal{S}$ in terms of initial conditions and variables from $\mathcal{L}_ {n,t} \cap \mathcal{R}$. From this property, one can infer that the relation (\ref{eq18}) can be verified for smaller sets of the form \begin{equation} \label{eq24S} \mathcal{S} = \mathcal{L}_ {n,t} \cap \mathcal{R}, \end{equation} in which we ignored the remaining deterministic variables. Projecting the measure $\mu$ from (\ref{eq15a})--(\ref{eq14}) on the subspace given by (\ref{eq24S}), one obtains that $\mu_\mathcal{S}$ is the Lebesgue measure in $\mathbb{T}^{2d}$. Then, the integral on the right-hand side of (\ref{eq18}) reduces to the mean value of the observable: \begin{equation} \label{eq18mod} \lim_{N \to \infty}\int \varphi \,d\mu^{(N)}_{\mathcal{S}} = \int \varphi(\mathbf{w}) \, d^{2d} \mathbf{w}, \end{equation} where $\mathbf{w} = \left(w_{n,t}\right)_{(n,t) \in \mathcal{S}} \in \mathbb{T}^{2d}$ denotes the vector of variables indexed by $\mathcal{S}$, with $w_{n,t} = u_n^{(N)}(t)$ in the integral on the left-hand side. Let us consider the Fourier expansion \begin{equation} \label{eq25F} \varphi(\mathbf{w}) = \sum_{\mathbf{k} \in (2\pi\mathbb{Z})^{2d}} \varphi_{\mathbf{k}} \exp(i\mathbf{k} \cdot \mathbf{w}), \end{equation} where we introduced the wavevector $\mathbf{k} = \left(k_{n,t}\right)_{(n,t) \in \mathcal{S}} \in (2\pi\mathbb{Z})^{2d}$; the dot denotes the scalar product. Using (\ref{eq25F}) in relation (\ref{eq18mod}), the constant term $\varphi_{\mathbf{0}}$ reproduces the integral on the right-hand side, since $\varphi_{\mathbf{0}} = \int \varphi \, d^{2d} \mathbf{w}$. Therefore, it remains to show that \begin{equation} \label{eq25} \lim_{N \to \infty} \int \exp(i\mathbf{k}\cdot \mathbf{w}) \, d\mu^{(N)}_{\mathcal{S}} = 0 \end{equation} for any nonzero wavevector $\mathbf{k}$. 
Using (\ref{eq20b}) with the property $k_{n,t} \in (2\pi\mathbb{Z})^2$ and the symmetry of the matrix $A$, we have \begin{equation} \label{eq26} \int \exp(i\mathbf{k}\cdot \mathbf{w}) \, d\mu^{(N)}_{\mathcal{S}} = \exp\left(i a_{\mathbf{k}}^{(N)}\right) \int \exp\left(i A_{\mathbf{k}}^{(N)} \cdot \xi\right)\rho(\xi)d\xi, \end{equation} where we introduced the scalar $a_{\mathbf{k}}^{(N)} \in \mathbb{R}$ and the vector $A_{\mathbf{k}}^{(N)} \in \mathbb{R}^2$ as \begin{equation} \label{eq27} a_{\mathbf{k}}^{(N)} = \sum_{(n,t) \in \mathcal{S}} k_{n,t} \cdot a_{n,t}^{(N)},\quad A_{\mathbf{k}}^{(N)} = \sum_{(n,t) \in \mathcal{S}} P_{n,t}^{(N)}(A)k_{n,t}. \end{equation} Notice that the integral on the right-hand side of (\ref{eq26}) represents the Fourier coefficient of $\rho(\xi)$ of order $-A_{\mathbf{k}}^{(N)}$. By the Riemann--Lebesgue lemma, the high-order Fourier coefficients of the function $\rho(\xi)$ converge to zero. Therefore, to conclude the proof it is enough to show that $\|A_{\mathbf{k}}^{(N)}\| \rightarrow \infty$ as $N\rightarrow\infty$ for any fixed nonzero wavevector $\mathbf{k} \in (2\pi\mathbb{Z})^{2d}$. Using the eigenvalue decomposition of Arnold's cat map (\ref{eq6}), we can write~\cite{arnold1968ergodic} \begin{equation} \label{eq28} P_{n,t}^{(N)}(A) = P_{n,t}^{(N)}(\alpha) A_1+P_{n,t}^{(N)}(\alpha^{-1}) A_2, \end{equation} where $\alpha = (3+\sqrt{5})/2$ and $\alpha^{-1}$ are the eigenvalues of $A$, and the symmetric matrices $A_1$ and $A_2$ are given by the linear maps \begin{equation} \label{eq28A} \textstyle A_1:(x,y) \mapsto \left(\frac{\alpha x+(\alpha-1)y}{\alpha+1},\frac{(\alpha-1)x+y}{\alpha+1}\right), \quad A_2:(x,y) \mapsto \left(\frac{x+(1-\alpha)y}{\alpha+1},\frac{(1-\alpha)x+\alpha y}{\alpha+1}\right). 
\end{equation} Substituting (\ref{eq28}) into the second expression of (\ref{eq27}) yields \begin{equation} \label{eq27_new} A_{\mathbf{k}}^{(N)} = \sum_{(n,t) \in \mathcal{S}} \left[P_{n,t}^{(N)}(\alpha) A_1k_{n,t} +P_{n,t}^{(N)}(\alpha^{-1}) A_2k_{n,t}\right]. \end{equation} Since $P_{n,t}^{(N)}$ defined in (\ref{eq19A}) is a polynomial with positive coefficients and $\alpha > 1$, we have \begin{equation} \label{eq29} \lim_{N \to \infty} \frac{P_{n,t}^{(N)}(\alpha)}{P_{n,t}^{(N)}(\alpha^{-1})} = \infty. \end{equation} Using Lemma~\ref{pol_conv} formulated and proved below, we can order the elements in $\mathcal{S} = \{(n_i,t_i): i =1,\ldots,d\}$ such that \begin{equation} \label{eq30} \lim_{N \to \infty} \frac{P_{n_{i+1},t_{i+1}}^{(N)}(\alpha)}{P_{n_i,t_i}^{(N)}(\alpha)} = \infty, \quad i = 1,\ldots,d-1. \end{equation} Notice that $A_1k_{n,t} = \frac{1}{\alpha+1}\left(\alpha \ \ \alpha-1 \atop \alpha-1 \ \ 1 \right) k_{n,t}$ following from (\ref{eq28A}), where $\alpha = (3+\sqrt{5})/2$ is an irrational number. Since the wavevector $k_{n,t}/(2\pi) \in \mathbb{Z}^2$ has integer components, $A_1k_{n,t}$ is nonzero if $k_{n,t}$ is nonzero. Therefore, using properties (\ref{eq29}) and (\ref{eq30}) in expression (\ref{eq27_new}), one can see that the magnitude of $A_{\mathbf{k}}^{(N)}$ is dominated by the polynomial $P_{n_i,t_i}^{(N)}(\alpha)$ with the largest $i$ such that $k_{n_i,t_i}$ is nonzero. Since $P_{n_i,t_i}^{(N)}(\alpha) \to \infty$ as $N \to \infty$, we obtain the desired property that $\|A_{\mathbf{k}}^{(N)}\| \rightarrow \infty$ as $N\rightarrow\infty$. \end{proof} \begin{lemma} \label{pol_conv} Elements $(n_j, t_j)$, $j = 1,\ldots,d$ of any finite subset $\mathcal{S} \subset \mathcal{R}$ can be ordered such that (\ref{eq30}) holds. 
\end{lemma} \begin{figure} \centering \includegraphics[width=0.65\textwidth]{figA.pdf} \caption{(a) Every path connecting $(N,0)$ to $(n,t)$ defines a path connecting $(N,0)$ to $(n+1,t)$ through the following surgery procedure. The upper (red) part of the path is removed and the remaining (green) part is shifted to the left. Then, the lower (red) part is added to complete the new path. (b) Black lines connect nodes $(n',t')$ related by Eq.~(\ref{eq23b}), where $(n,t)$ are taken at red points. Polynomials on the same line have finite (nonzero and non-infinite) ratios in the limit $N \to \infty$. (c) The path connecting $(N,0)$ to $(n,t)$ with the largest number of segments.} \label{figA} \end{figure} \begin{proof} Observe that the condition $(n,t) \in \mathcal{R}$ with $n \le N$ ensures that $P_{n,t}^{(N)} (x)$ from (\ref{eq19A}) is nonzero for any $x > 0$; see Fig.~\ref{fig1}. For any path $p$ from $(N,0)$ to $(n,t)$ in (\ref{eq19A}), one constructs a new path $p'$ from $(N,0)$ to $(n+1,t)$ as shown in Fig.~\ref{figA}(a): removing the final segments at scale $n$, shifting the remaining part to the right, and adding extra segments at scale $N$. In this procedure, each removed segment yields $\tau_n/\tau_N = 2^{N-n}$ added segments. This means that \begin{equation} \label{eq21} \lim_{N \to \infty} \frac{P_{n+1,t}^{(N)}(x)}{P_{n,t}^{(N)}(x)} = \infty, \end{equation} where $x > 1$ is an arbitrary fixed number. Notice that the definition (\ref{eq19A}) implies \begin{equation} \label{eq22} P_{n,t}^{(N)} (x) = xP_{n,t-\tau_n}^{(N)} (x)+xP_{n+1,t-\tau_n}^{(N)} (x), \end{equation} where the last two terms correspond to the paths ending, respectively, with the horizontal and diagonal arrows (Fig.~\ref{fig1}). Using (\ref{eq21}) in (\ref{eq22}), we have \begin{equation} \label{eq23} \lim_{N \to \infty} \frac{P_{n,t}^{(N)}(x)}{P_{n+1,t-\tau_n}^{(N)}(x)} = x. 
\end{equation} Iterating this relation yields \begin{equation} \label{eq23b} \lim_{N \to \infty} \frac{P_{n,t}^{(N)}(x)}{P_{n',t'}^{(N)}(x)} = x^{n'-n} \quad \textrm{for}\quad n' > n,\quad t' = t-\sum_{j = n}^{n'-1}\tau_j. \end{equation} When $(n,t) \in \mathcal{R}$, the points $(n',t')$ from (\ref{eq23b}) belong to a descending diagonal line as shown in Fig.~\ref{figA}(b). Inspecting these diagonal lines and using the property (\ref{eq21}), one can deduce that \begin{equation} \label{eq24extr} \lim_{N \to \infty} \frac{P_{n_2,t_2}^{(N)}(x)}{P_{n_1,t_1}^{(N)}(x)} = \infty \end{equation} for any distinct elements $(n_1,t_1)$ and $(n_2,t_2)$ of the set $\mathcal{R}$. Here the indices are chosen such that the black line starting at $(n_2,t_2)$ is located to the right of the line starting at $(n_1,t_1)$; see Fig.~\ref{figA}(b). In particular, this implies that any finite subset of elements $(n_j,t_j) \in \mathcal{R}$ can be ordered satisfying the properties (\ref{eq30}). \end{proof} \section{Convergence rate}\label{sec6} We now address practical aspects of convergence: how small can the random perturbation $\xi$ be, and how large must the number of scales $N$ be, for observing the spontaneously stochastic solution in a given variable $u_n(t)$? Relations (\ref{eq20}) and (\ref{eq28}) in the proof of Theorem~\ref{theorem2} with the limit (\ref{eq29}) indicate that the convergence to the spontaneously stochastic limit for each variable $u_n^{(N)}(t)$ is controlled by the factor \begin{equation} \label{eq19} P_{n,t}^{(N)} (\alpha) = \sum_{\textrm{all paths }p\textrm{ from }\atop (N,0)\textrm{ to }(n,t)} \alpha^{|p|}. \end{equation} Here $\alpha = \frac{1}{2}(3+\sqrt{5}) \approx 2.618$ and the sum is taken over all paths following grey (right or up-right diagonal) arrows in Fig.~\ref{fig1}, which connect $(N,0)$ to $(n,t)$; $|p|$ denotes the number of arrows in the path. The factor (\ref{eq19}) amplifies the random perturbation induced by $\xi$ in the variable $u_n^{(N)}(t)$. 
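The amplification factor can be evaluated in practice without enumerating paths explicitly. The following sketch (our own; function and variable names are hypothetical) computes $P_{n,t}^{(N)}(x)$ by dynamic programming, using the observation that a path arrives at $(n,t)$ either by a horizontal arrow from $(n,t-\tau_n)$ or by a diagonal arrow from $(n+1,t-\tau_n)$, with time measured in integer units of the smallest turnover time $\tau_N = 2^{-N}$:

```python
# Sketch (ours; names are hypothetical): evaluate the amplification factor
# P_{n,t}^{(N)}(x) of (eq19) by dynamic programming, with time measured in
# integer units m = t / tau_N of the smallest turnover time tau_N = 2^{-N}.
def P(n, m, N, x, memo=None):
    """Weighted sum of x**|p| over lattice paths from (N,0) to (n, m * 2**-N)."""
    if memo is None:
        memo = {}
    if n > N or m < 0:
        return 0.0
    if m == 0:
        return 1.0 if n == N else 0.0  # only the empty path stays at (N, 0)
    if (n, m) not in memo:
        step = 2 ** (N - n)            # tau_n in units of tau_N
        # the last arrow is horizontal (from scale n) or diagonal (from n+1)
        memo[(n, m)] = x * (P(n, m - step, N, x, memo)
                            + P(n + 1, m - step, N, x, memo))
    return memo[(n, m)]

# N = 1: the two paths from (1,0) to (0, t=2) contribute x^2 + x^3
print(P(0, 4, 1, 2.0))  # -> 12.0
alpha = (3 + 5 ** 0.5) / 2
print(P(0, 3 * 2 ** 7, 7, alpha) > 1e50)  # double-exponential growth at N = 7
```

For $N = 1$ the recursion reproduces the hand count $x^2 + x^3$, and for $N = 7$, $t = 3$ the value already exceeds $10^{50}$, consistent with the double-exponential growth discussed below.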
Let us assume that $\xi$ takes small random values of order $\varepsilon$ and has a sufficiently regular probability density (e.g.\ H\"older continuous). Hence, for observing spontaneous stochasticity at node $(n,t)$, the corresponding error must become large: $P_{n,t}^{(N)}(\alpha) \varepsilon \gg 1$. This yields the condition \begin{equation} \label{eq31} P_{n,t}^{(N)}(\alpha) \gg 1/\varepsilon. \end{equation} \begin{figure}[t] \centering \includegraphics[width=0.55\textwidth]{fig2.pdf} \caption{Amplification factor $P_{n,t}^{(N)}(\alpha)$ of the initial error evaluated at each point of the lattice in a system with $N = 7$ scales. The color of each rectangle shows (in logarithmic scale) the value of $P_{n,t}^{(N)}(\alpha)$ corresponding to the node $(n,t)$ located in the upper left corner of the rectangle; zero values are shown by white color.} \label{fig2} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{fig3.pdf} \caption{Each rectangle shows the probability density functions (darker colors correspond to higher probabilities) for the variable $u_n(t) \in \mathbb{T}^2$ corresponding to the upper left corner of the rectangle. Only the scales $n = 0,1,2$ are demonstrated. The results are obtained by simulating numerically $10^8$ samples of the system with the initial conditions $u_n(0) = (0.7,0.5)$ for $n = 0,\ldots,N$ and the random variable $\xi$ uniformly distributed in the interval $[0,\varepsilon]$: (a) $N = 7$ and $\varepsilon = 10^{-10}$, and (b) $N = 9$ and $\varepsilon = 10^{-1}$. Bold red borders designate variables in the transitional region, $t \sim 2\tau_n$, where the convergence is only exponential in $N$ and is not attained for very small $\varepsilon$. The last panel (c) shows analogous results for $N = 9$ and $\varepsilon = 10^{-10}$ in the modified system (\ref{eq4new}). } \label{fig3} \end{figure} We now verify how fast $P_{n,t}^{(N)}(\alpha)$ grows with $N$. 
For this purpose, we compute the longest path (dominant term) in expression (\ref{eq19}). This path contains the maximum number of arrows at the smallest scale $\ell_N$, supplemented with $N-n$ diagonal arrows (one at every scale) in order to reach the node $(n,t)$; see Fig.~\ref{figA}(c). The number of arrows at scale $N$ is evaluated as $(t-\Delta t)/\tau_N$, where $\tau_N = 2^{-N}$ is the turn-over time for each arrow and $\Delta t$ is the time interval occupied by the diagonal arrows at larger scales. This interval is evaluated as \begin{equation} \label{eqS1} \Delta t = \sum_{j = n}^{N-1}\tau_j = \sum_{j = n}^{N-1} 2^{-j} = 2^{1-n}-2^{1-N} = 2\tau_n-2\tau_N. \end{equation} Therefore, the total number of arrows in the path is found as \begin{equation} \label{eqS2} |p| = \frac{t-\Delta t}{\tau_N}+N-n = \frac{t-2\tau_n}{\tau_N}+2+N-n = 2^N(t-2\tau_n)+N-n+2. \end{equation} Using the longest path (\ref{eqS2}) in expression (\ref{eq19}) yields the lower-bound estimate \begin{equation} \label{eq32} P_{n,t}^{(N)}(\alpha) \ge \alpha^b,\quad b = 2^N(t-2\tau_n)+N-n+2. \end{equation} This expression suggests that the time $t = 2\tau_n$ is transitional: the factor $P_{n,t}^{(N)}(\alpha) \propto \alpha^N$ grows exponentially in $N$ at $t \sim 2\tau_n$, while the growth becomes double-exponential with $P_{n,t}^{(N)}(\alpha) \propto \left(\alpha^{t-2\tau_n}\right)^{2^N}$ at larger times. To be more specific, we computed the values of $P_{n,t}^{(N)}(\alpha)$ numerically using formula (\ref{eq19}) and presented the results graphically in Fig.~\ref{fig2}. One observes that, in the model with only $N = 7$ scales and remarkably small noise of amplitude $\varepsilon \sim 10^{-50}$, the spontaneously stochastic behaviour develops for all variables lying to the right of the transitional (red/yellow) region. Therefore, systems with a moderate number of scales $N$ must demonstrate the spontaneously stochastic behaviour even for extremely small random perturbations. 
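This sensitivity can also be seen by simulating the regularized dynamics directly. The sketch below (our own; all helper names are hypothetical) assumes relation (\ref{eq4}) in the explicit form $u_n(t+\tau_n) = A u_n(t) + A u_{n+1}(t) \bmod 1$, as read off from (\ref{eq15b}), with the truncation $u_n \equiv 0$ for $n > N$ and a small random shift $\xi$ applied to the initial state at the finest scale:

```python
import numpy as np

# Minimal simulation sketch (ours) of the stochastically N-regularized model,
# assuming relation (eq4) takes the explicit form
#   u_n(t + tau_n) = A u_n(t) + A u_{n+1}(t)  mod 1,
# with tau_n = 2^{-n} and the truncation u_n = 0 for n > N.
A = np.array([[2, 1], [1, 1]])  # Arnold's cat map, det A = 1

def solve(N, xi, u0=(0.7, 0.5)):
    """Return U with U(n, j) = u_n at time t = j * 2**(-n)."""
    memo = {}
    def U(n, j):
        if n > N:
            return np.zeros(2)              # truncation above the finest scale
        if j == 0:                          # initial data, perturbed at n = N
            return (np.asarray(u0, float) + (xi if n == N else 0.0)) % 1.0
        if (n, j) not in memo:
            # u_{n+1} at time (j-1)*tau_n carries index 2*(j-1) at scale n+1
            memo[(n, j)] = (A @ (U(n, j - 1) + U(n + 1, 2 * (j - 1)))) % 1.0
        return memo[(n, j)]
    return U

# Two runs differing only in the tiny perturbation at the finest scale
UA = solve(25, xi=np.array([1e-8, 0.0]))
UB = solve(25, xi=np.array([2e-8, 0.0]))
print(np.allclose(UA(0, 1), UB(0, 1)))   # True: u_0(1) is deterministic
print(np.allclose(UA(1, 2), UB(1, 2)))   # False: u_1(1) is already scrambled
```

Here $U(1,2)$ is $u_1$ at $t = 1$: the $O(10^{-8})$ change in $\xi$ is amplified by roughly $\alpha^{N}$ along the chain of scales, in line with the lower bound (\ref{eq32}), so the two runs decorrelate already at the first integer time.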
However, larger perturbations are required for convergence in the transitional region. This result is tested numerically in Figs.~\ref{fig3}(a,b). \section{Discussion}\label{sec7} We designed a simple model that demonstrates the Eulerian spontaneous stochasticity (ESS): it is a formally deterministic scale-invariant system with deterministic initial conditions, which has uncountably many non-unique solutions and yields a universal stochastic process when regularized with a small-scale infinitesimal random perturbation. Our work provides a rigorous study of this system, proving the existence of the spontaneously stochastic solution as well as its universality (independence of the vanishing regularization term). The exceptional and counterintuitive property of this solution is that it assigns equal probability (uniform probability density) to all non-unique solutions. At integer times, the solution represents a Markov process converging to the equilibrium (uniform) state. Our results can be extended to other forms of random regularization, e.g., random variables depending on $N$ or random perturbations added to all variables (noise). Also, one can use this idea for designing spontaneously stochastic systems with different behaviors by modifying the couplings or imposing extra conditions like conserved quantities. For example, Fig.~\ref{fig3}(c) shows the numerical results when Eq.~(\ref{eq4}) is replaced by \begin{equation} \label{eq4new} u_n(t+\tau_n) = Au_n(t)+0.1\left(\cos x_{n+1}(t),\cos y_{n+1}(t)\right), \end{equation} where $u_{n+1}(t) = \left(x_{n+1}(t),y_{n+1}(t)\right) \in \mathbb{T}^2$. We see that model (\ref{eq4new}) yields a more sophisticated spontaneously stochastic solution. The rigorous study of such systems is challenging, leaving important theoretical questions for future study: how to analyse the existence, universality and robustness of spontaneously stochastic solutions in more general multi-scale models? 
Our model can be used as a prototype for a (first) experimental observation of the ESS implemented in a physical system, e.g., an optical or electric circuit. In this experiment, arrows in Fig.~\ref{fig1} represent waveguides, and coupling nodes are identical signal-processing gates. The scaling symmetry is maintained by choosing lengths of connecting waveguides proportional to the turn-over times $\tau_n$, exploiting the property that the distance travelled by a signal is proportional to time. The variables $u_n(t)$ can describe phases of propagating signals measured at each node, while the initial conditions are associated with the input signal. A challenge of this setup lies in reproducing the coupling relation (\ref{eq4}) or a similar one that leads to the spontaneous stochasticity. Notice that the intrinsic hyperbolicity of Arnold's cat map can also be recreated in a simple mechanical system~\cite{hunt2003anosov,kuznetsov2005example}. The extremely fast convergence, which is double-exponential in the number of scales, suggests that the spontaneous stochasticity in the described experiment will be triggered by a natural microscopic noise from the environment already in systems of moderate size, e.g.\ $N = 7$ from Fig.~\ref{fig2}. Finally, the proposed model suggests that applications and occurrence of the ESS can be seen in a broader sense. This refers to multi-scale systems defined by deterministic rules but generating complex and genuinely stochastic processes. In real-world systems, this stochasticity may be triggered by a natural microscopic noise. Confirming ESS experimentally would imply that the occurrence of ESS should be studied in a wide range of applications, e.g., hydrodynamic turbulence, random-number generation, neural networks in artificial intelligence or living organisms, etc. \vspace{2mm}\noindent\textbf{Acknowledgments.} The work was supported by CNPq (grants 303047/2018-6, 406431/2018-3). \bibliographystyle{plain}
\section{Introduction} A scalar field is known to be described by the Klein-Gordon theory. Even though this description is part of the extensively verified physical theory of the Standard Model, it encounters certain difficulties. Space-time localization or, in other words, the definition of general space-time operators is unquestionably one of them. The concept of space-time localization in the context of quantum field theory has been a challenging issue since the advent of the latter. And yet, a definite answer is still lacking \cite{N-W,Wightman,Hegerfeldt,Haag,Barat Kimball,Busch,Terno,Toller,C-K-T}. Indeed, there is no unique way to describe the localization of relativistic particles, even when the particles are unambiguously defined. In the standard formalism of quantum mechanics, time is never treated as an operator, and therefore has an entirely different description from that of space position \cite{Salecker et al,Busch et al}. It was Pauli \cite{Pauli} who first pointed out that the construction of a general time operator in quantum mechanics is impossible. Since time translations are generated by the Hamiltonian, any legitimate general time operator, as conjugate to the Hamiltonian, should translate energy arbitrarily. Thus, the definition of a time operator contradicts the fact that any realistic Hamiltonian has a spectrum bounded from below. The same argument was generalised by Wightman in the relativistic case (where space and time are merged into space-time) for position-time operators \cite{Wightman}. The standard textbook answer to the Pauli-Wightman objection is that space-time position in quantum field theory cannot be represented by an operator and is just a parameter, external to the theory \cite{Peres}. Another difficulty is the infinite energy of the ground state, or vacuum energy as it is known. 
Even though the ground state demonstrates that the energy is bounded from below, an interesting, and for the same reason disturbing, phenomenon takes place in the Klein-Gordon theory (and is common to all standard quantum field theories): the ground state has infinite energy. The standard way to deal with this problem is to accept the theory only up to a certain ultraviolet momentum $\tp_{max}$ and cut off the very high momentum modes. In the absence of gravity this finite energy has no effect. The value that it can take is completely arbitrary. Traditionally, this energy is discarded by the process of ``normal-ordering''. However, the vacuum fluctuations are very real, having a measurable impact in standard physics; see the Casimir effect \cite{Casimir}. On top of that, gravity does exist, and the actual value of the vacuum energy has important consequences. It turns out that the energy density of the vacuum measures the cosmological constant. The cosmological constant problem, the discrepancy between the theoretical (from calculations of vacuum energy) and observational values of the cosmological constant, is not related to the fact that quantum theory supplies a system with a huge amount of vacuum energy, since this contribution can be renormalized away; the problem is that there is no reason for the resulting number to be zero \cite{Carroll}. While, in principle, the ground state, as it is, could have solved the problem of space-time localization in quantum field theory, in the sense that the infinite vacuum energy practically renders the Hamiltonian spectrum unbounded from below, the fact that current field theories cannot handle the vacuum energy puts forward the Pauli-Wightman objection, leaving us incapable of defining space-time operators and therefore forcing us to relegate position $x$ and time $\ttt$ to mere parameters. 
From the discussion above, one can conclude that, with regard to the aforementioned issues, the present description of the Klein-Gordon field is not satisfactory, and a new context able to theorize the arbitrariness of the vacuum energy in terms of a new potential, in order to adequately define space-time operators while remaining consistent with the known laws of physics, would be desirable. In this work we show that there exists a way which accomplishes this end. The method consists of attaching to the momentum space of the free Klein-Gordon field an extra degree of freedom, interpreted as potential energy and associated with the vacuum energy, and then introducing, in addition to the standard equation of motion of the field (i.e.\ the Klein-Gordon equation), a second equation in the extended momentum space of the field. As we will show, the physical meaning of the new equation is the energy transfer between the free energy $E_\tp$ and the newly-established potential/vacuum energy. However, in practice, it can be interpreted as a new type of quantum field which provides the space-time where the standard field is localized. In a different context, the same definition of the new field was used to derive the Unruh effect \cite{C-K}. More specifically, we shall show that for every real field of the Klein-Gordon theory, making use of its vacuum energy, a second field can be defined in its extended momentum space (i.e.\ free energy and vacuum energy) which has a set of time and space operators complying with the language of second quantization. Consequently, a general multi-particle state, eigenstate of the field Hamiltonian, is also an eigenstate of the new time and position operators. The corresponding space and time eigenvalues are proportional to the wave numbers of the new field and the eigenvalues of the Klein-Gordon number operator. 
On account of these operators, the space-time localization of the Klein-Gordon field momentum states is no longer external to the theory through parameters, as the standard approach holds \cite{Peres}, but rests on internal degrees of freedom of the field operators. The kinematic study of the space-time localised quantum particles provides further insight into the nature of space-time and the behaviour of particle states on this space-time. In particular, it is found that space-time is relativistic in nature, satisfying the two postulates of special relativity, and that the contribution of the vacuum energy, in the form of an extra potential energy, renders the motion of particle states accelerated. In section \ref{Localization scheme} we introduce the new field and define the general position and time operators. We apply the new localization scheme to Klein-Gordon particle states in section \ref{Application to momentum eigenstates}. By studying the kinematics of the new space-time operators' spectrum, in section \ref{Kinematics on the space-time spectrum} we analyze the space-time on which particle states are localized. A final discussion is then presented in section \ref{In lieu of conclusions}. For simplicity, we consider the quantum theory of a real Klein-Gordon field in two dimensions, with metric signature $(+,-)$, but our results can easily be generalised to four dimensions (see section \ref{In lieu of conclusions}). Furthermore, the units are chosen such that $c = \hbar=1$, unless specified otherwise. \section{Localization scheme}\label{Localization scheme} A massive, scalar field $\Phi$ is known to be described by the Klein-Gordon equation \begin{equation} (\partial_{\ttt}^2 - \partial_{x}^2 + m^2)\Phi(\ttt,x) = 0. \label{Klein-Gordon_equation} \end{equation} $x,\ttt$ are the coordinates of the (1+1) space-time where the field is defined. 
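As a quick numerical aside (ours, not part of the original argument), one can verify that the standard plane waves $e^{-i(E_{\tp}\ttt - \tp x)}$ with $E_{\tp} = \sqrt{\tp^2+m^2}$ solve equation (\ref{Klein-Gordon_equation}); the sketch below approximates the second derivatives by central finite differences, with illustrative parameter values of our choosing:

```python
import cmath, math

# Consistency sketch (ours): the plane waves exp(-i (E t - p x)) with
# E = sqrt(p^2 + m^2) solve the Klein-Gordon equation; second derivatives
# are approximated by central finite differences, so the residual of
# (d^2/dt^2 - d^2/dx^2 + m^2) phi vanishes up to O(h^2).
m, p = 1.0, 0.6
E = math.sqrt(p ** 2 + m ** 2)

def phi(t, x):
    return cmath.exp(-1j * (E * t - p * x))

def kg_residual(t, x, h=1e-3):
    d2t = (phi(t + h, x) - 2 * phi(t, x) + phi(t - h, x)) / h ** 2
    d2x = (phi(t, x + h) - 2 * phi(t, x) + phi(t, x - h)) / h ** 2
    return d2t - d2x + m ** 2 * phi(t, x)

print(abs(kg_residual(0.4, 1.3)))  # ~0 up to O(h^2) discretization error
```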
One can (second-)quantize the field by forming the field operator $\hat{\Phi}(\ttt,x)$ in the usual way and expanding in the normal mode solutions $\phi_{\tp}(\ttt,x) = e^{-i(E_{\tp} \ttt - \tp x)}/\sqrt{4 \pi E_{\tp}}$ of the Klein-Gordon equation (\ref{Klein-Gordon_equation}), \begin{equation} \hat{\Phi}(\ttt,x) = \int d\tp \left(\hat{a}_{\tp} \phi_{\tp}(\ttt,x) + \hat{a}^\dagger_{\tp} \phi_{\tp}^*(\ttt,x) \right). \label{standard_K-G} \end{equation} To each value of the momentum $\tp$ there corresponds the energy eigenvalue $E_{\tp}= \sqrt{\tp^2 + m^2}$. The associated ground state $\ket{0}$ is defined by \begin{equation} \hat{a}_{\tp}\ket{0}=0 \qquad \bra{0}\hat{a}^{\dagger}_{\tp}=0,\quad \forall\tp \end{equation} where $\hat{a}_{\tp}$ and $\hat{a}^{\dagger}_{\tp}$ are the standard annihilation and creation operators of the theory, respectively. The Hamiltonian of the Klein-Gordon field $\Phi$ reads \begin{equation} \hat{H}=\int d\tp \, E_\tp \,\left(\hat{a}_\tp^\dagger\, \hat{a}_\tp +\frac{1}{2} [\hat{a}_\tp, \hat{a}_\tp^\dagger]\right).\label{KG-Hamiltonian} \end{equation} Let us start introducing our localization scheme by considering the momentum space of the Klein-Gordon field $\Phi$ to have two independent degrees of freedom, which can characterize any point in this space. We choose one to be the momentum $\tp$ of the Klein-Gordon particles and the second to be some abstractly, for now, defined energy $\tE$ (remember that $c=1$, so energy and momentum share the same units). We work with a specific Klein-Gordon field, so we consider the field mass to have a definite value and therefore not to be one more degree of freedom in momentum space. Technically speaking, for now, the energy $\tE$ is physically meaningless. The only known energy that the system has is the free energy $E_{\tp}$. But this energy is a function of momentum, $E_{\tp}\equiv E(\tp)$, thus the energy coordinate $\tE$ in the momentum space of the Klein-Gordon field cannot be that energy. 
The sole purpose of $\tE$ is to unambiguously label the potential energy, related to the vacuum energy of the system. Since there is no restriction on the values that the coordinates $\tE,\tp$ can take, the domain of both $\tE$ and $\tp$ is the set of all real numbers. Looking for a way to integrate the energy $\tE$ into the free Klein-Gordon field $\Phi$, we add an interaction between the momentum $\tp$ and $\tE$ in the form of a novel field. Let us assume that the momentum space of the Klein-Gordon field, as described above, has its own fields, in the sense that each point in momentum space is associated with a continuous field variable $\wtPsi(\tE,\tp)$, which satisfies the differential equation: \begin{equation} \left(\hbar^2\partial_{\tE}^2-\hbar^2\partial_{\tp}^2 - \frac{1}{\kappa^2} \right)\wtPsi(\tE,\tp) = 0. \label{accelerated_diff.-equation} \end{equation} Let us explain the terms that constitute this equation. First we should note that we temporarily restore the reduced Planck constant $\hbar$ to avoid confusion in the analysis below. $\kappa\in\Re$ is just a field parameter. Later we will see that it is related to the proper acceleration of localized particle states. The differential operators $\hbar^2\partial_{\tp}^2, \hbar^2\partial_{\tE}^2$ are the squares of the linear differential operators $i\hbar{\partial}_{\tp}\equiv i\hbar\frac{\partial}{\partial \tp} $ and $-i\hbar{\partial}_{\tE} \equiv - i\hbar\frac{\partial}{\partial \tE}$, respectively. To comprehend the physical meaning of these operators we first need to solve equation (\ref{accelerated_diff.-equation}). Mathematically speaking, eq.(\ref{accelerated_diff.-equation}) is recognized as a classical wave equation including an extra term, so its solutions are plane waves of the form \begin{equation} \tu(\tE,\tp) = \frac{e^{i(\tE \, \widetilde{k} - \tp\, \widetilde{\omega})}}{\sqrt{4 \pi \widetilde{\omega}}} \label{planewave1}.
\end{equation} The factor $1/\sqrt{4\pi\widetilde{\omega}}$ was inserted for later convenience. The plane waves $\tu$ are solutions to eq.(\ref{accelerated_diff.-equation}) in the same sense in which the $\phi_\tp$ are solutions to the Klein-Gordon equation (\ref{Klein-Gordon_equation}). But there is an essential difference between the two cases, which reflects the difference between the fields $\Phi$ and $\wtPsi$: while $\phi$ is defined in the position space $x-\ttt$, $\tu$ is defined in the momentum space $\tp - \tE$. Accordingly, $\ttt,x$ are the coordinate variables for $\phi$, whereas $\tp,E_{\tp}$ are the components of the wave vector, with $E_{\tp}$ obeying the dispersion relation $E_{\tp}^2=\tp^2+m^2$. The coordinates of $\tu$, instead, are the energy $\tE$ and momentum $\tp$, while $(\widetilde{k},\widetilde{\omega})$ are the components of a wave vector with $\widetilde{\omega}$ satisfying the dispersion relation \begin{equation} \widetilde{\omega}^2=\widetilde{k}^2+\frac{1}{\hbar^2\kappa^2}\label{dispersion-rel1}. \end{equation} Recalling the standard definition (which applies to the plane waves $\phi$), a plane wave is a function of the space and time coordinates, $x, \ttt$, which is proportional to \begin{equation} \text{plane wave} = A\, e^{i(kx-\omega \ttt)}, \end{equation} (where $k$ is the ordinary angular wavenumber and $\omega$ is the ordinary angular frequency). According to this definition, our interpretation of (\ref{planewave1}) as a plane wave may seem rather opaque. In the case of the matter plane wave $\phi$, the interpretation is clarified by the de Broglie hypothesis \begin{equation} E_{\tp}=\hbar \omega\quad \text{and} \quad \tp=\hbar k. \end{equation} However, the same hypothesis does not apply to $\tu$, since the energy-momentum vector is already present in the argument. What is so far lacking is an analogous hypothesis to associate the new wave vector $(\widetilde{k},\widetilde{\omega})$ with the space-time coordinates.
To this end, acting with the differential operators $i\hbar{\partial}_{\tp}$ and $-i\hbar{\partial}_{\tE}$ on the plane wave $\tu$, we get \begin{equation} i\hbar{\partial}_{\tp} \,\tu = \hbar\widetilde{\omega}\,\tu \quad \text{and} \quad -i\hbar{\partial}_{\tE} \,\tu = \hbar\widetilde{k}\,\tu. \end{equation} We postulate that the momentum derivative $i\hbar{\partial}_{\tp}$ is the position operator\footnote{In the momentum representation of quantum mechanics the position operator has exactly this form.}, and therefore we associate the frequency $\widetilde{\omega}$ with a position $\tx_t$: \begin{equation} \tx_t = \hbar \, \widetilde{\omega}.\label{quantum-position} \end{equation} Similarly, we postulate that the energy derivative $-i\hbar{\partial}_{\tE}$ corresponds to a time operator, and thus $\widetilde{k}$ is associated with a time $\ttt$ \begin{equation} \ttt = \hbar \, \widetilde{k}. \label{quantum-time} \end{equation} Applying eqs (\ref{quantum-position},\ref{quantum-time}) to (\ref{planewave1}) we finally get \begin{equation} \tu(\tE,\tp) = \frac{e^{\frac{i}{\hbar}(\tE \, \ttt - \tp\, \tx_t)}}{\sqrt{4 \pi \tx_t/\hbar}} \label{planewave12}. \end{equation} Evidently, if the matter plane waves $\phi_\tp$ ($\hbar=1$) represent a free particle that carries momentum $\tp$ and energy $E_{\tp}$, then $\tu$ above, according to the de Broglie formulas, represents a particle with energy $-\tE$ and momentum $-\tp$. The potential energy $\tE$ is justified because the particle is no longer free; in section \ref{Kinematics on the space-time spectrum} we will demonstrate that the particle state has proper acceleration. The position $\tx_t$ of the particle is now restricted to the spacetime hyperboloid \begin{equation} \tx_t^2 - \ttt^2 = \frac{1}{\kappa^2} \label{dispersion-rel2}. \end{equation} The hyperboloid (\ref{dispersion-rel2}) is nothing other than the dispersion relation (\ref{dispersion-rel1}) after making use of (\ref{quantum-position},\ref{quantum-time}).
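The chain of identifications above can be checked symbolically (an illustrative sketch with $\hbar=1$; the variable names are our own choices): the plane wave solves the field equation precisely because of the dispersion relation, and it is a simultaneous eigenfunction of the postulated position and time operators.

```python
import sympy as sp

E, p, k = sp.symbols('E p k', real=True)
kappa = sp.symbols('kappa', positive=True)

# Wave-vector component fixed by the dispersion relation (hbar = 1)
w = sp.sqrt(k**2 + 1/kappa**2)
u = sp.exp(sp.I*(E*k - p*w)) / sp.sqrt(4*sp.pi*w)  # plane wave in momentum space

# u solves (d_E^2 - d_p^2 - 1/kappa^2) u = 0 ...
assert sp.simplify(sp.diff(u, E, 2) - sp.diff(u, p, 2) - u/kappa**2) == 0
# ... and is a simultaneous eigenfunction of position and time operators:
assert sp.simplify(sp.I*sp.diff(u, p) - w*u) == 0    # i d_p u = x_t u, x_t = w
assert sp.simplify(-sp.I*sp.diff(u, E) - k*u) == 0   # -i d_E u = t u,  t = k
```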
The analysis above suggests that the determination of position and time of a quantum particle turns into the eigenvalue problems: \begin{equation} i\hbar{\partial}_{\tp} \,\tu = \tx_t\,\tu,\quad \text{and} \quad -i\hbar{\partial}_{\tE} \,\tu = \ttt\,\tu, \label{position-time_eigenequations} \end{equation} where $\tx_t$ are the position eigenvalues and $\ttt$ are the time eigenvalues. Equations (\ref{position-time_eigenequations}) say that the position operator $i\hbar{\partial}_{\tp}$ is the generator of momentum change and the time operator $-i\hbar{\partial}_{\tE}$ is the generator of (potential) energy change. More specifically, if we consider the plane wave $\tu(\tE, \tp+\tq)$ and expand it into a power series, we obtain \begin{eqnarray} \tu(\tE, \tp+\tq) &=& \tu(\tE, \tp) + \tq \partial_{\tp}\tu(\tE, \tp)+\frac{1}{2}\left(\tq \partial_{\tp}\right)^2\tu(\tE, \tp)+\ldots \nonumber\\ &=& e^{-\frac{i\tq}{\hbar}\cdot i\hbar{\partial}_{\tp}} \,\tu(\tE, \tp)\nonumber \\ &=& e^{-\frac{i}{\hbar}\,\tq\cdot \tx_t} \,\tu(\tE, \tp).\label{unitay-momentum} \end{eqnarray} Similarly, the plane wave $\tu(\tE-e, \tp)$ expanded into a power series gives \begin{eqnarray} \tu(\tE-e, \tp) &=& \tu(\tE, \tp) -e \partial_{\tE}\tu(\tE, \tp)+\frac{1}{2}\left(-e \partial_{\tE}\right)^2\tu(\tE, \tp)+\ldots \nonumber\\ &=& e^{-\frac{i e}{\hbar}\cdot (-i\hbar{\partial}_{\tE})} \,\tu(\tE, \tp)\nonumber \\ &=& e^{-\frac{i}{\hbar}\,e\cdot \ttt} \,\tu(\tE, \tp).\label{unitay-energy} \end{eqnarray} The unitary operators $U_{\tx_t}(\tq):=e^{-\frac{i\tq}{\hbar}\cdot i\hbar{\partial}_{\tp}}$ and $U_{\ttt}(e):=e^{-\frac{i e}{\hbar}\cdot (-i\hbar{\partial}_{\tE})}$, built from the position and time operators, respectively, and responsible for the transformation of $\tu(\tE,\tp)$, do not commute, since position $\tx_t$ and time $\ttt$ are not independent degrees of freedom, see eq.(\ref{dispersion-rel2}). Their non-commutativity will be demonstrated later in terms of creation and annihilation operators.
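A small symbolic check of the two shift relations (again with $\hbar=1$, and with variable names of our own choosing): a momentum shift multiplies the plane wave by $e^{-i\tq\,\tx_t}$, while an energy shift multiplies it by $e^{-ie\,\ttt}$, with $\tx_t=\widetilde{\omega}$ and $\ttt=\widetilde{k}$.

```python
import sympy as sp

E, p, q, e, k = sp.symbols('E p q e k', real=True)
kappa = sp.symbols('kappa', positive=True)

w = sp.sqrt(k**2 + 1/kappa**2)   # position eigenvalue x_t (hbar = 1, t = k)

def u(E_, p_):
    """Plane wave at the momentum-space point (E_, p_)."""
    return sp.exp(sp.I*(E_*k - p_*w)) / sp.sqrt(4*sp.pi*w)

# Momentum shift <-> multiplication by exp(-i q x_t)
assert sp.simplify(u(E, p + q) - sp.exp(-sp.I*q*w)*u(E, p)) == 0
# Energy shift <-> multiplication by exp(-i e t)
assert sp.simplify(u(E - e, p) - sp.exp(-sp.I*e*k)*u(E, p)) == 0
```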
For now we present the relation \begin{equation} \tu(\tE,\tp\pm\tq) = \tu(\tE-f(\tq,\kappa),\tp), \label{unitary_momentum-energy} \end{equation} which is easily derived by combining eqs.(\ref{unitay-momentum},\ref{unitay-energy}) with (\ref{dispersion-rel2}), assuming the function \begin{equation} f(\tq,\kappa) = \frac{\pm\tq}{\ttt}\sqrt{\ttt^2 + \frac{1}{\kappa^2}}.\label{fqk} \end{equation} The double sign in front of $\tq$ reflects the fact that solving (\ref{dispersion-rel2}) for $\tx_t$ we get $\tx_t=\pm \sqrt{\ttt^2 + 1/\kappa^2}$. Remember that we justified the introduction of the new field equation (\ref{accelerated_diff.-equation}) by arguing that in this way the actual momentum $\tp$ of the Klein-Gordon particles is integrated with the conjectural potential energy $\tE$. Eq.(\ref{unitary_momentum-energy}) explicitly verifies that this is the case. The plane wave of a particle with potential energy $\tE$ and momentum $\tp$ increased by $\tq$ is the same as the plane wave of a particle with the initial momentum $\tp$ and potential energy decreased by a function of $\tq$ and the parameter $\kappa$. So, one can infer that the physical meaning of $\wtPsi$ is the energy transfer between the potential energy $\tE$ and the free energy $E_{\tp}$ (``kinetic'' in the sense that it depends on momentum, $E_{\tp}=E_{\tp}(\tp)$). It is important to underline that we have identified the eigenvalues of the time operator $-i\hbar\partial_{\tE}$ with the time parameter which appears in the standard Klein-Gordon field equation, by using the same letter, $\ttt$, in both cases. In doing this we assert that the time appearing in the Klein-Gordon equation (\ref{Klein-Gordon_equation}) is no longer a parameter but an operator. We cannot argue the same regarding the position parameter $x$ and the position eigenvalue $\tx_t$, for two reasons, both rooted in relation (\ref{dispersion-rel2}). First, whereas $x$ and $\ttt$ are independent degrees of freedom, $\tx_t$ and $\ttt$ are not.
Secondly, in contrast to $x$, which can take any value on the real line, $\tx_t$ is restricted to the spacetime hyperboloid (\ref{dispersion-rel2}). From now on, natural units with $\hbar=1$ (in addition to $c=1$) are used, allowing time $\ttt$, wavenumber $\widetilde{k}$, length $\tx_t$ and angular frequency $\widetilde{\omega}$ to be used interchangeably. Thus, henceforward we recognize as the dispersion relation the equation (\ref{dispersion-rel2}), and as the plane wave the relation \begin{equation} \tu_{\ttt}(\tE,\tp) = \frac{e^{i(\ttt \,\tE - \tx_t\, \tp)}}{\sqrt{4 \pi \tx_t}} \label{planewave2}. \end{equation} To write down the most general solution of (\ref{accelerated_diff.-equation}), we need to construct a complete, orthonormal set of plane waves (modes) in terms of which any solution may be expressed. But to make sense of orthonormality we have to define an inner product on the space of solutions to equation (\ref{accelerated_diff.-equation}). The appropriate inner product is expressed as an integral over a constant-momentum curve ${C}$, \begin{equation} (\tu,\mathtt{v}) := i \int_{{C}} d{\tE} \left(\tu^* \, \partial_{\tp} \mathtt{v} -\mathtt{v} \, \partial_{\tp}\tu^*\right). \label{eq.inner-prod} \end{equation} The functions $\tu$ and $\mathtt{v}$ are plane waves of the form of Eq.(\ref{planewave2}). It is easily verified that the plane waves form an orthonormal set under this product, \begin{eqnarray} (\tu_{\ttt'},\tu_{\ttt}) &=& \delta(\ttt - \ttt') \nonumber \\ (\tu^*_{\ttt'},\tu^*_{\ttt}) &=& -\delta(\ttt - \ttt') \label{eq.inner-prod2}\\ (\tu_{\ttt'},\tu^*_{\ttt}) &=&0.\nonumber \end{eqnarray} Note that the inner product (\ref{eq.inner-prod}) is not positive definite. From (\ref{eq.inner-prod2}) we can choose the positive-frequency mode to be $\tu_{\ttt}$ and, consequently, the negative-frequency one to be $\tu^*_{\ttt}$.
Based on this inner product, $\wtPsi$ can be expressed as a Fourier expansion in these normal modes \begin{equation} \wtPsi(\tE,\tp) = \int d\ttt \left(\tc_\ttt \, \tu_{\ttt}(\tE,\tp) + \tc^\dagger_\ttt \, \tu_{\ttt}^*(\tE,\tp)\right). \label{Field_expansion-Fourier-K_G-Acc1} \end{equation} with the Fourier coefficients defined by \begin{equation} \tc_\ttt = (\tu_{\ttt},\wtPsi) \qquad \text{and} \qquad \tc^\dagger_\ttt = -(\tu^*_{\ttt},\wtPsi) \label{creat-annih1}. \end{equation} The field $\wtPsi$ can be quantized according to the rules of second quantization by promoting $\tc_\ttt, \tc_\ttt^\dagger$ to annihilation and creation operators, respectively, satisfying the usual commutation relation for raising and lowering operators, \begin{equation} [\hat{\tc}_\ttt, \hat{\tc}_{\ttt'}^\dagger]=\delta(\ttt-\ttt'). \label{commutation -time} \end{equation} Due to Eq.(\ref{Field_expansion-Fourier-K_G-Acc1}), the commutation relations (\ref{commutation -time}) in coordinate space are equivalent to the commutation relations \begin{equation} \left[\hat{\wtPsi}_{\tp}(\tE),\partial_{\tp} \hat{\wtPsi}_{\tp}(\tE')\right] = i \delta(\tE - \tE'),\label{comm.-rel-momentum} \end{equation} in momentum space. Here, the index $\tp$ stands for a curve in momentum space of constant $\tp$. This relation implies that operators at equal momentum commute everywhere except at coincident potential-energy points. Following the general machinery of second quantization, the creation operators $\hat{\tc}_\ttt^\dagger$ and annihilation operators $\hat{\tc}_\ttt$ should be operator-valued distributions. Furthermore, since $\hat{\wtPsi}(\tE,\tp)$ lives in the momentum space of the Klein-Gordon theory, its Fourier coefficients, the quantities $\hat{\tc}_\ttt, \hat{\tc}_\ttt^\dagger$, should live in the coordinate space $\ttt-\tx_t$, or the one-dimensional space $\ttt$ due to (\ref{dispersion-rel2}).
Remember that we have identified this $\ttt$ with the time parameter appearing in the field operator $\hat{\Phi}(\ttt,x)$, which, again according to second quantization, is an operator-valued distribution. Thus, we infer that the time annihilation and creation operators can be represented by the projections of the Klein-Gordon field operator, $\hat{\Phi}(\ttt,x)$ and $\hat{\Phi}^\dagger(\ttt,x)$ respectively, for a fixed value of position: \begin{equation} \hat{\Phi}_{x}(\ttt) :=\left.\hat{\Phi}(\ttt,x_0)\right|_{x_0 \rightarrow x} =\hat{\tc}_\ttt \label{Phi_to_tc} \end{equation} and \begin{equation} \hat{\Phi}_{x}^\dagger(\ttt) :=\left.\hat{\Phi}^\dagger(\ttt,x_0)\right|_{x_0 \rightarrow x} =\hat{\tc}^\dagger_\ttt. \label{Phid_to_tcd} \end{equation} Since $x$ does not appear in the solutions to (\ref{accelerated_diff.-equation}), we assert that (\ref{Phi_to_tc},\ref{Phid_to_tcd}) hold for every $x$. The index notation is used to distinguish the parameter $x$ from the real variable $\ttt$. Another argument that supports the connection between the time creation/annihilation operators and the Klein-Gordon field operator is the following. Using equation (\ref{accelerated_diff.-equation}) for $\hat{\wtPsi}$ and the fact that $\tu_{\ttt}$ is a solution to the same differential equation, one immediately verifies, through an integration by parts, that $\hat{\Phi}_{x}(\ttt)$ and $\hat{\Phi}_{x}^\dagger(\ttt)$ are constant in the momentum $\tp$. This is consistent with the fact that the Klein-Gordon field operator in the standard framework is independent of momentum, since according to (\ref{standard_K-G}) $\hat{\Phi}(\ttt,x)$ is a superposition of all possible momentum eigenstates. The operators $\hat{\Phi}(\ttt,x)$ and $\hat{\Phi}^\dagger(\ttt,x)$ thus acquire a dual role. As Klein-Gordon field operators, from the expansion (\ref{standard_K-G}), they create Klein-Gordon particles as superpositions of momentum eigenstates, at a specific space-time position $(\ttt,x)$.
At the same time, as Fourier coefficients in the expansion of the wave operator $\hat{\wtPsi}(\tE,\tp)$, updated to \begin{equation} \hat{\wtPsi}(\tE,\tp) = \int d\ttt \left( \hat{\Phi}_{x}(\ttt) \, \tu_{\ttt}(\tE,\tp) + \hat{\Phi}_{x}^\dagger(\ttt) \, \tu_{\ttt}^*(\tE,\tp)\right), \label{Field_expansion-Fourier-K_G-Acc} \end{equation} they serve to create and annihilate the time eigenstates $\tu_{\ttt}$ and $\tu^*_{\ttt}$. Mathematically, this dual role is translated into two commutation relations that $\hat{\Phi}$ should satisfy. The first role corresponds to the standard equal-time canonical commutation relation \begin{equation} \left[\hat{\Phi}_{\ttt}(x),\partial_\ttt\hat{\Phi}_{\ttt}(x')\right] =i \delta(x-x'),\label{cons-tim_com_rel} \end{equation} and the second to the commutation relation \begin{equation} \left[\hat{\Phi}_{x}(\ttt),\hat{\Phi}^\dagger_{x}(\ttt') \right] = \delta(\ttt-\ttt'), \label{cons-pos_com_rel} \end{equation} which is typical of any set of creation and annihilation operators. A key physical question is: what are the observable quantities defined by this new field theory? For standard matter fields (like Klein-Gordon, Dirac), the simplest and most important such object is the overall Hamiltonian, which represents the total energy of the system. Passing to the new field $\wtPsi$, the same formalism can be maintained by considering the total ``Hamiltonian'' \begin{equation} \hat{X} = \int d\ttt \, \tx_t\, \hat{\Phi}_{x}^\dagger(\ttt)\,\, \hat{\Phi}_{x}(\ttt), \label{X-position} \end{equation} which has dimensions of length, with $\tx_t = \sqrt{\ttt^2 + 1/\kappa^2}$. In addition, from our theory there also follows the operator \begin{equation} \hat{T} = \int d\ttt\, \ttt\,\,\hat{\Phi}_{x}^\dagger(\ttt)\,\, \hat{\Phi}_{x}(\ttt), \label{T-time} \end{equation} which has dimensions of time, see also \cite{C-K2}. For more details on the derivation see the Appendix.
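The momentum-independence of $\hat{\Phi}_{x}(\ttt)$ invoked above can be sketched explicitly. From (\ref{creat-annih1}), and applying (\ref{accelerated_diff.-equation}) (with $\hbar=1$) to both $\tu_\ttt^*$ and $\hat{\wtPsi}$ so that the $1/\kappa^2$ terms cancel, one finds \[ \partial_{\tp}\hat{\tc}_\ttt = i \int_{C} d\tE \left(\tu_\ttt^*\, \partial_{\tp}^2 \hat{\wtPsi} - \hat{\wtPsi}\, \partial_{\tp}^2 \tu_\ttt^* \right) = i \int_{C} d\tE \,\partial_{\tE} \left(\tu_\ttt^*\, \partial_{\tE} \hat{\wtPsi} - \hat{\wtPsi}\, \partial_{\tE} \tu_\ttt^* \right) = 0, \] the boundary term vanishing for fields that fall off as $\tE\to\pm\infty$.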
\section{Localization of momentum states} \label{Application to momentum eigenstates} We define an $\tN$-particle state vector by \begin{equation} \ket{\tN}:= \left(\hat{\Phi}^\dagger(\ttt,x)\right)^{\tN}\ket{0}, \label{multi-particle states} \end{equation} or, once we apply the expansion (\ref{standard_K-G}), \begin{equation} \ket{\tN} = \int d\tp_1\ldots d\tp_\tN \, \phi_{\tp_1}^*(\ttt,x)\ldots \phi_{\tp_\tN}^*(\ttt,x) \, \ket{\tp_1,\ldots,\tp_\tN}, \label{superposition_momentum_eigenstates} \end{equation} where $\ket{\tp_1,\ldots,\tp_\tN}:=\hat{a}_{\tp_1}^\dagger\ldots \hat{a}_{\tp_\tN}^\dagger\ket{0}$. This state is a superposition of $\tN$-particle momentum eigenstates and, according to the standard Klein-Gordon theory, describes a system of $\tN$ particles that at time $\ttt$ is localized in coordinate space at the point $x$. Since in the standard Klein-Gordon theory there exists no legitimate position operator, the interpretation above can be confirmed by introducing the local particle-number operator \cite{Greiner-R} \begin{equation} \hat{\tN}_V(\ttt):=\int_V dx\, \hat{\Phi}^\dagger(x,\ttt)\,\, \hat{\Phi}(x,\ttt). \end{equation} This operator, in contrast to the standard number operator $\hat{\tN}=\int d\tp \,\hat{a}_\tp^\dagger\, \hat{a}_\tp$, will in general depend on time, due to the possible spreading of the field. The index $V$ on the integral indicates the volume over which the integration extends (which of course can be infinitesimally small). Because of the commutation relation (\ref{cons-tim_com_rel}), the operator $\hat{\tN}_V$ satisfies the relation \begin{equation} \left[\hat{\tN}_V, \hat{\Phi}^\dagger(x,\ttt)\right] =\left\lbrace \begin{aligned} &\hat{\Phi}^\dagger(x,\ttt)&\quad x\in V \\ &0& \quad x\notin V \end{aligned} \right.
\end{equation} From (\ref{multi-particle states}) and the fact that $\hat{\tN}_V\ket{0}=0$, it follows that $\ket{\tN}$ is an eigenstate of the number operator $\hat{\tN}_V$ with two possible eigenvalues: $\tN$ if $x$ is contained in $V$, and $0$ if it is not. This is the picture we have regarding the localization of $\ket{\tN}$ based on standard field theory (results of the same nature are obtained working with position POVMs, see for example \cite{C-K-T}). However, the notion of localized particle states is considerably improved by applying the localization scheme introduced in the previous section. Considering the action of $\hat{T}\equiv \hat{T}^0$ and $\hat{X}\equiv \hat{T}^1$ (denoted together as $\hat{T}^\mu$, $\mu=0,1$) on the state $\ket{\tN}$, we find \begin{equation} \hat{T}^\mu\ket{\tN} = t^\mu\tN \ket{\tN}, \label{space-time_eigenequation} \end{equation} where $t^\mu:=(\ttt,\tx_t)$. So we verify that the multi-particle state $\ket{\tN}$ given by (\ref{multi-particle states}) is an eigenstate of the space-time operator $\hat{T}^\mu$ with eigenvalues $t^\mu\tN$. The components of the space-time vector $t^\mu$ are the time parameter $\ttt$ appearing in the field operator $\hat{\Phi}^\dagger(\ttt,x)$ and the position $\tx_t=\sqrt{\ttt^2+1/\kappa^2}$ which corresponds to this time. $\tN$ is the eigenvalue of the number operator $\hat{\tN}$ in momentum space, or equivalently of the number operator \begin{equation} \hat{\tN}_x =\int d\ttt \, \,\hat{\Phi}_{x}^\dagger(\ttt)\,\, \hat{\Phi}_{x}(\ttt) \label{number_operator-coor_sp} \end{equation} in coordinate space, as is easily concluded from equations (\ref{X-position}) and (\ref{T-time}). Note that, contrary to the operator $\hat{\tN}_V$, $\hat{\tN}_x$ does not depend on time, since the integration is performed over time, but on the position $x$. Identifying the space-time localization of the particle state $\ket{\tN}$ with the product $t^\mu \tN$ leads to a remarkable conclusion.
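The eigenvalue structure of (\ref{space-time_eigenequation}) can be illustrated in a toy discretization (a numerical sketch only; the time bins, Fock truncation and numerical values below are arbitrary choices of ours, not part of the theory). Discretizing time into a few bins $\ttt_j$ with one bosonic mode each, $\hat{T}=\sum_j \ttt_j\,\hat{c}_j^\dagger\hat{c}_j$ acting on $(\hat{c}_j^\dagger)^{\tN}\ket{0}$ returns the eigenvalue $\ttt_j\tN$, and $\hat{T}$ commutes with the total number operator:

```python
import numpy as np

def ladder(d):
    """Truncated bosonic annihilation operator on a d-level Fock space."""
    return np.diag(np.sqrt(np.arange(1, d)), k=1)

# Toy discretization: three time bins t_j, one mode per bin, occupation <= d-1
d, t_bins = 4, np.array([-1.0, 0.0, 2.5])
a, eye = ladder(d), np.eye(d)

def mode_op(op, j):
    """Embed a single-mode operator into the 3-mode tensor product."""
    ops = [eye, eye, eye]
    ops[j] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

# Discretized operators: T = sum_j t_j n_j,  N = sum_j n_j
num = [mode_op(a.conj().T @ a, j) for j in range(3)]
T = sum(t*n for t, n in zip(t_bins, num))
N = sum(num)

# |N> built by acting N times with the creation operator of bin j
vac = np.zeros(d**3); vac[0] = 1.0
j, Npart = 2, 3
state = np.linalg.matrix_power(mode_op(a.conj().T, j), Npart) @ vac
state /= np.linalg.norm(state)

assert np.allclose(T @ state, t_bins[j]*Npart * state)  # T|N> = (t_j N)|N>
assert np.allclose(T @ N, N @ T)                        # [T, N] = 0
```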
If a single-particle state, $\ket{1}$, is localized at the space-time position $(\ttt,\tx_t)$, then a two-particle state, $\ket{2}$, is localized at the position $(2\,\ttt,2\,\tx_t)$, and an $\tN$-particle state, $\ket{\tN}$, is localized at the spacetime position $(\tN\,\ttt,\tN\,\tx_t)$. One can gain a clearer picture of this by calculating the commutators between the space-time operators $\hat{T}^\mu$ and the field operators $\hat{\Phi}$, $\hat{\Phi}^\dagger$: \begin{equation} \begin{aligned} \left[\hat{T}^\mu,\hat{\Phi}_{x}^\dagger(\ttt)\right]&=\ttt^\mu \hat{\Phi}_{x}^\dagger(\ttt) \\ \left[\hat{T}^\mu,\hat{\Phi}_{x}(\ttt)\right]&= -\ttt^\mu \hat{\Phi}_{x}(\ttt). \end{aligned} \end{equation} These relations tell us that it is the field operators $\hat{\Phi}$ and $\hat{\Phi}^\dagger$ that transfer us between space-time eigenstates. Starting from the eigenstate $\ket{\tN}$ located at $t^\mu\tN$, according to equation (\ref{space-time_eigenequation}), we can construct all the other localized eigenstates by acting with $\hat{\Phi}$ and $\hat{\Phi}^\dagger$, \begin{equation} \begin{aligned} \hat{T}^\mu \, \hat{\Phi}_{x}^\dagger(\ttt) \ket{\tN}&=t^\mu\left(\tN+1\right)\,\hat{\Phi}_{x}^\dagger(\ttt)\ket{\tN} \\ \hat{T}^\mu \, \hat{\Phi}_{x}(\ttt) \ket{\tN}&=t^\mu\left(\tN-1\right)\,\hat{\Phi}_{x}(\ttt)\ket{\tN}. \end{aligned} \end{equation} So, the field system we study has not only a ladder of energy states (produced by the action of the creation and annihilation operators $\hat{a}_\tp^\dagger, \hat{a}_\tp$ on the ground state $\ket{0}$) but also a ladder of space-time states, constructed by the action of the field operators $\hat{\Phi}^\dagger,\hat{\Phi}$ on the same ground state $\ket{0}$. In other words, we found that $\hat{\Phi}_{x}^\dagger(\ttt)$ ($\hat{\Phi}_{x}(\ttt)$) acting on a particle state adds (subtracts) a space-time interval $t^\mu$ by adding (subtracting) a particle. Note that quantum states corresponding to different particle numbers cannot be ascribed to the same space-time position.
This conclusion is mathematically ensured by the fact that the number operator $\hat{\tN}_x$ commutes with the space-time operator $\hat{T}^\mu$, \begin{equation} [\hat{T}^\mu,\hat{\tN}_x] = 0. \end{equation} We end this section with two important comments. First, the space-time operators depend on the position $x$, $\hat{T}^{\mu}_{x}$ (we have already mentioned that this is the case for the number operator $\hat{\tN}_x$), since according to their definitions (\ref{X-position},\ref{T-time}) we integrate the field operators over the time $\ttt$ but not over the space $x$. This means that our localization scheme does not localize a particle state in the $x$ dimension. Therefore $x$ remains a parameter, external to our theory. In what follows we omit the subscript $x$ to reduce clutter. However, as we will show in the next section, the space-time constructed merely from the spectrum $t^\mu\tN$ is physical enough, since by construction it is relativistic, in the sense that its structure satisfies both postulates of special relativity. Secondly, due to the nature of quantum mechanics we can also have linear combinations of particle states with different numbers of particles, \begin{equation} \ket{\tL}=\sum_{i} \, \tC_i \ket{\tN_i}, \end{equation} where $\sum_i |\tC_i|^2=1$ and $\ket{\tN_i}$ is the eigenstate of the total number operator $\hat{\tN}$ (or $\hat{\tN}_x$) which describes a field theory with $\tN_i$ particles. Thus, in the case of such superpositions it holds that \begin{eqnarray} \langle \hat{\tN}\rangle_\tL&:=&\bra\tL \hat{\tN} \ket{\tL} = \sum_i p_i \tN_i \\ \langle \hat{T}^\mu\rangle_\tL &:=&\bra\tL \hat{T}^\mu \ket{\tL}= \sum_i p_i \, (t^\mu \tN_i) \end{eqnarray} where $p_i=|\tC_i|^2$. The probabilities appearing in the expectation values represent a classical uncertainty about the particle state and, consequently, about its space-time location.
This uncertainty arises because the calculated expectation values refer to a superposed state, and it should therefore disappear for definite-number states. Indeed, \begin{equation} \langle \hat{\tN} \rangle_\tN = \tN \quad \text{and} \quad \langle \hat{T}^\mu \rangle_\tN = t^\mu \tN. \end{equation} \section{Kinematics on the space-time spectrum}\label{Kinematics on the space-time spectrum} In this section we investigate the kinematics on the space-time $t^\mu\tN$, based merely on the field properties of the space-time operators $\hat{T}^\mu$ as described in section \ref{Localization scheme}. We will see that this kinematics accommodates both postulates of special relativity: the relativity principle and a universal speed limit. More specifically, after we interpret the relativistic notions of inertial observers (or frames of reference) and relative velocities in field terms, we derive the Lorentz transformations as a direct consequence of our framework. Then, using the machinery of special relativity, we proceed further and provide an explicit relation between the field parameter $\kappa$ and the proper acceleration. Let us consider a slightly more complex case, taking two copies of the same Klein-Gordon field operator at two different times: $\hat{\Phi}_x^\dagger(\ttt)$ and $\hat{\Phi}_x^\dagger(\ttt')$ (since the position $x$ does not affect our new localization procedure, we take it to be the same in both cases).
The two quantum states we consider are $\ket{\tN}_{\ttt}:= \hat{\Phi}_x^\dagger(\ttt)^{\tN}\ket{0}$ and $\ket{\tN}_{\ttt'}:= \hat{\Phi}_x^\dagger(\ttt')^{\tN}\ket{0}$. According to our interpretation, the particle state $\ket{\tN}_{\ttt}$ is localized at the space-time position \begin{equation} t^\mu\tN\equiv(X_\tN,T_\tN), \label{space-time position1} \end{equation} while the particle state $\ket{\tN}_{\ttt'}$ is localized at the position \begin{equation} t'^\mu\tN\equiv(X'_\tN,T'_\tN).\label{space-time position2} \end{equation} The fact that both quantum states describe the same number of particles at different times is consistent with our result in section \ref{Application to momentum eigenstates} that the number operator $\hat{\tN}_x$ does not depend on time. Since the field operator $\Phi^\dagger_x$ is the same in both situations, we claim that the two space-time positions are the space-time coordinates of the same event (i.e. particle state) as measured by two observers in relative motion, each using its own coordinate system. Suppose the observer $\mathcal{O}$ uses the coordinates $(X_\tN,T_\tN)$ and that another observer $\mathcal{O}'$ with coordinates $(X'_\tN,T'_\tN)$ is moving with velocity $\tw$ in the positive direction as viewed from $\mathcal{O}$. Evidently, both coordinate systems are quantum observables described by the space-time operators $\hat{T}^\mu$. Since any observer is a coordinate system for space-time, and since all observers measure the same space-time events, it should be possible to draw the coordinate lines of one observer on the coordinate lines of another. So, observer $\mathcal{O}'$ will be a point on the coordinate lines of $\mathcal{O}$; in a time interval $dT_\tN$ this point changes its position by $dX_\tN=\tw\, dT_\tN$.
The physical meaning of the relative velocity $\tw$ between the two observers, in the field context of section \ref{Localization scheme}, can be grasped by recalling that the quantities $\ttt,\tx_t$ were originally the wave numbers of the field $\wtPsi$. From wave mechanics we know that the derivative of the (angular) frequency (i.e. $\tx_t$) with respect to the (angular) wavenumber (i.e. $\ttt$) is equal to the group velocity of the wave. In this regard, the relative velocity $\tw$ between the two observers is nothing other than the group velocity of the field $\wtPsi$. For definite $\tN$, as in the case considered here, $\tw$ reduces to \begin{equation} \tw:=\frac{dX_\tN}{dT_\tN}=\frac{d\tx_t}{d\ttt}. \end{equation} Let us then calculate the squared ``Lorentz distance'' between two space-time points: the particle state location and the origin of each coordinate system. Making use of the dispersion relation (\ref{dispersion-rel2}), which of course is satisfied by both coordinate systems $t^\mu$ and $t'^\mu$, we get \begin{equation} \begin{aligned} \mathcal{O}:\quad S^2&:=X^2_\tN-T^2_\tN=\frac{\tN^2}{\kappa^2},\\ \mathcal{O}':\quad S'^2&:=X'^2_\tN-T'^2_\tN=\frac{\tN^2}{\kappa^2}. \end{aligned}\label{spacetime_interval} \end{equation} This result is really interesting. It demonstrates that, although the position and time of the event differ for measurements made by different observers, the spacetime interval of the event from the origin in each observer's coordinate system is the same, provided of course that the number of particles is the same in each frame, which is the only case we examine here.
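A numerical sketch of this invariance (with arbitrary illustrative parameter values of our own choosing): sampling the hyperbola $\tx_t=\sqrt{\ttt^2+1/\kappa^2}$ and applying a standard Lorentz boost leaves $X_\tN^2-T_\tN^2$ unchanged, while the group velocity $d\tx_t/d\ttt$ never reaches the speed limit.

```python
import numpy as np

# Arbitrary illustrative values for the field parameter, particle number
# and relative velocity of the two observers.
kappa, Npart, w = 0.7, 3, 0.6
gamma = 1/np.sqrt(1 - w**2)

t = np.linspace(-5, 5, 201)
x_t = np.sqrt(t**2 + 1/kappa**2)   # spacetime hyperboloid
T, X = Npart*t, Npart*x_t          # space-time spectrum (T_N, X_N)

# Standard Lorentz boost with relative velocity w
Xp = gamma*(X - w*T)
Tp = gamma*(T - w*X)

# The interval X^2 - T^2 = N^2/kappa^2 is the same in both frames ...
assert np.allclose(X**2 - T**2, (Npart/kappa)**2)
assert np.allclose(Xp**2 - Tp**2, (Npart/kappa)**2)

# ... and the group velocity dx_t/dt = t/x_t stays below the speed limit
assert np.all(np.abs(t/x_t) < 1)
```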
Hence, a new reading of eqs.(\ref{spacetime_interval}) should be the following: \begin{equation} S^2 = S'^2,\qquad \forall\kappa,\tN \label{C6} \end{equation} which implies the identity \begin{equation} X^2_\tN -T^2_\tN = X'^2_\tN -T'^2_\tN.\label{C7} \end{equation} In classical mechanics, identities of the form (\ref{C7}) are of fundamental importance, since they summarize the two postulates of special relativity \cite{Schutz}. In spite of the traditional and conceptually very convenient use of light signals in the derivation of these identities, in our case the derivation is quite independent of the existence of light signals, or indeed of any real-world effect that travels at the speed of light. We could say that it stems from the fact that all spacetime field pairs $(\ttt,\tx_t),(\ttt',\tx_t'),\cdots$ emanate from a single field $\wtPsi$ or, more practically expressed, share the same field parameter $\kappa$. This statement becomes clearer upon rewriting the dispersion relation (\ref{dispersion-rel2}), derived from the field $\wtPsi$, as \begin{equation} \tx_t^2 - \ttt^2 = 1/\kappa^2, \quad \forall(\ttt,\tx_t). \end{equation} It is straightforward from the identity (\ref{C7}) to show that the space-time coordinates $T_\tN,X_\tN$ are related to the space-time coordinates $T'_\tN,X'_\tN$ by the transformations \begin{equation} X'_\tN = \tga (X_\tN - \tw\, T_\tN),\qquad T'_\tN = \tga (T_\tN - \tw\, X_\tN)\label{C-LT} \end{equation} where $\tw$ is the relative velocity between the two observers, $\mathcal{O}$ and $\mathcal{O}'$. To construct the transformations (\ref{C-LT}), we have introduced the quantity $\tga = (1-\tw^2)^{-1/2}$. If we consider two spacetime events, $\mathcal{A}$ and $\mathcal{B}$, then $\Delta X_\tN$, $\Delta T_\tN$ denote the finite coordinate differences $X^\mathcal{A}_\tN-X^\mathcal{B}_\tN$, $T^\mathcal{A}_\tN-T^\mathcal{B}_\tN$.
In that case, by successively substituting the coordinates of $\mathcal{A}$ and $\mathcal{B}$ into (\ref{C-LT}) and subtracting, we get the transformation \begin{equation} \label{C-LT3} \begin{aligned} \Delta X'_\tN &= \tga (\Delta X_\tN - \tw \Delta T_\tN),\\ \Delta T'_\tN &= \tga (\Delta T_\tN - \tw \Delta X_\tN). \end{aligned} \end{equation} If, in place of finite differences, we form differentials, we obtain transformations identical to the above, but in the differentials: \begin{equation} \label{C-LT4} \begin{aligned} dX'_\tN &= \tga (dX_\tN - \tw dT_\tN),\\ dT'_\tN &= \tga (dT_\tN - \tw dX_\tN). \end{aligned} \end{equation} Evidently, (\ref{C-LT}-\ref{C-LT4}) have the form of the classical Lorentz transformations. The derivation of these Lorentz transformations is, however, conceptually very different from the derivation of the similarly named transformations in classical physics. There, they are the logical consequence of Einstein's two postulates; here, they are a consequence of the field $\wtPsi$. Let us consider once again the two observers $\mathcal{O}$ and $\mathcal{O}'$, but this time let the particle state $\ket{\tN}$ move non-uniformly relative to both frames. The path that the state follows can be considered as a succession of the aforementioned space-time events. Position and time differentials allow us to calculate the velocities $v$ and $v'$, and the accelerations $g$ and $g'$, of the state in $\mathcal{O}$ and $\mathcal{O}'$, respectively. They are simply defined as \begin{eqnarray} \mathcal{O}: \,& v:=\frac{dX_\tN}{dT_\tN} \quad \text{and}\quad g:=\frac{d^2 X_\tN}{dT^2_\tN} \label{C10}\\ \mathcal{O}':& \,\,v':=\frac{dX'_\tN}{dT'_\tN} \quad \text{and}\quad g':=\frac{d^2 X'_\tN}{dT'^2_\tN}.\label{C11} \end{eqnarray} Substituting from (\ref{C-LT4}) into (\ref{C11}) and considering, in particular, the instantaneously comoving frame, for which in one dimension $\tw=v$, yields the acceleration transformation formula: \begin{equation} \taa= \tga^3\,g.
\label{C12} \end{equation} where we have defined the proper acceleration $\taa$ of $\ket{\tN}$ as that which is measured in its rest reference frame (in our case $\taa=g'$). Noticing that the right-hand side of (\ref{C12}) is equivalent to $d(\tga\tw)/dT_\tN$ and integrating twice, setting the constant of integration equal to zero in both cases, yields the following equation \begin{equation} X^2_\tN-T^2_\tN = \frac{1}{\taa^2}. \label{CC13} \end{equation} This equation represents a hyperbolic path in the coordinate system of observer $\mathcal{O}$. It describes the uniformly accelerated motion of $\ket{\tN}$ and is of particular importance because, due to the invariance (\ref{C7}), eq.(\ref{CC13}) retains its form under the transformations (\ref{C-LT}) in any reference frame $\mathcal{O}'$, that is, $X'^2_\tN-T'^2_\tN = \frac{1}{\taa^2}$. From eq.(\ref{CC13}), using the definition (\ref{space-time position1}), we get \begin{equation} \tx_t^2-\ttt^2 = \frac{1}{\taa^2\tN^2}. \label{C13} \end{equation} Comparing (\ref{C13}) with the dispersion relation (\ref{dispersion-rel2}), we find that the field parameter $\kappa$ is given by the product of the proper acceleration $\taa$ of the quantum state $\ket{\tN}$ and the eigenvalue $\tN$ of the number operator $\hat{\tN}$: \begin{equation} \kappa=\pm \taa \, \tN.\label{CC14} \end{equation} \section{In lieu of conclusions} \label{In lieu of conclusions} In this work we have proposed a new localization scheme for scalar fields (chosen as the simplest among quantum field theories, to keep our arguments as clear as possible) in an attempt to tackle a structural problem of the current theory: the lack of general space-time operators. The idea behind our suggested localization mechanism can be summarized as follows: A standard, free, quantum field is defined by a differential equation in position space, or, equivalently (through Fourier transforms), by an algebraic equation in momentum space.
Therefore, the resulting quantum particles are described by momentum eigenstates with well-defined momentum and energy but not well-defined position and time. Thus, let us add a second, new field with exactly the opposite properties, namely one defined by a differential equation in momentum space and whose quantum excitations are represented by position eigenstates with well-defined position and time. In general there is no correlation between the two fields, unless one considers the new field in the momentum space of the standard field (or, alternatively, the standard field in the position space of the new field). This is the case we have considered here, and we have found that, in doing so, the annihilation (creation) operator of the new field coincides with the Klein-Gordon field operator (its conjugate transpose) and, as a consequence, the position eigenstates of the new field are actually superposed momentum eigenstates of the Klein-Gordon field. In other words, linear superpositions of momentum states are localized in space-time since they are also space-time eigenstates. It turns out that the new field $\wtPsi$, defined in the extended momentum space of the Klein-Gordon field, is intimately related to the manifestation of the space-time on which standard particle states live. The properties of the derived space-time should then be determined by the form of the differential equation that defines $\wtPsi$. As we showed in this work, to derive a Minkowski space-time, i.e.\ a space-time that satisfies the two postulates of special relativity, the field equation should be of the form (\ref{accelerated_diff.-equation}). It would be interesting to extend our analysis and investigate under which circumstances particle states can be localized in curved space-time. In this work, we have considered the field $\wtPsi$ in $1+1$-dimensional momentum space, $(\tE,\tp)$, and, as a consequence, a $1+1$-dimensional quantum space-time, $(T_\tN,X_\tN)$, emerged.
However, our argument can be easily generalised to the physical number of dimensions, with coordinates $(T_\tN:=\tN\ttt,X_\tN:=\tN\tx_t^1,Y_\tN:=\tN\tx_t^2,Z_\tN:=\tN\tx_t^3)$. All that is needed is to extend the differential equation (\ref{accelerated_diff.-equation}) to \begin{equation} \left(\hbar^2\partial_{\tE}^2-\hbar^2\partial_{\vec{\tp}}^2 - \frac{1}{\kappa^2} \right)\wtPsi(\tE,\vec{\tp}) = 0, \end{equation} with the normal mode solutions modified to \begin{equation} \tu_{\ttt}(\tE,\vec{\tp}) = \frac{e^{i(\ttt \,\tE - \vec{\tx_t}\, \vec{\tp})}}{\sqrt{4 \pi |\vec{\tx_t}|}} , \end{equation} where $\vec{\tp}=(\tp_1,\tp_2,\tp_3)$ and $\vec{\tx_t}=(\tx_t^1,\tx_t^2,\tx_t^3)$. From this, every part of our analysis can be obtained. The problem we have addressed in this work is not confined to Klein-Gordon fields; every field theory shares the same problem. In this sense, our approach, which aims to deal with the difficulties in the localization of Klein-Gordon particle states, can be equally well applied to the rest of the quantum field theories. The implementation of our approach for Dirac fields is left for a later communication. \noindent \section*{Acknowledgments} I acknowledge financial support from Istituto Nazionale di Ottica - Consiglio Nazionale delle Ricerche (CNR-INO).
\section{Introduction} The chiral magnetic effect (CME) proposed in \cite{kharzeev,kz,Kharzeev:2007jp,fukushima} provides a new probe of the QCD phase transition and the formation of quark-gluon plasma via relativistic heavy ion collisions (RHIC). The physical picture of CME relies on the interplay between the helicity of a quark and the external magnetic field. For a quark with positive (negative) helicity, the magnetic moment and the electric current it carries are always parallel (antiparallel), independently of the sign of its electric charge. The magnetic moment tends to be parallel to the magnetic field, so the electric current will be parallel (antiparallel) to the field for positive (negative) helicity. For massless quarks, the helicity coincides with the axial charge, \begin{equation} Q_5=\int d^3\mathbf{r}\bar\psi\gamma_4\gamma_5\psi \label{naive} \end{equation} with the quark spinor $\psi$ carrying both color and flavor indexes. Therefore, for a QGP of nonzero axial charge density, a net electric current will be generated in (opposite to) the direction of the external magnetic field if the positive (negative) helicity is in excess. The conditions that support CME are likely realized in RHIC. Firstly, for off-central collisions, a strong magnetic field is produced perpendicular to the collision plane; Secondly, because of the high temperature, there may be a sizable probability for the transition to a topologically nontrivial gluon configuration accompanied by a change of the axial charge according to the winding number \cite{Kharzeev:2007jp,sphaleron, gdmoore} \begin{equation} \Delta Q_5=n_W\equiv-\frac{N_fg^2}{32\pi^2}\int d^4x\epsilon_{\mu\nu\rho\lambda}F_{\mu\nu}^lF_{\rho\lambda}^l, \label{deltaq5} \end{equation} where $F_{\mu\nu}^l$ is the strength of the color $SU(N_c)$ field ($N_c=3$) with $l$ the color index and $N_f$ is the number of flavors.
Thirdly, the de-confined quarks that carry the chiral magnetic current can travel sufficiently far before hadronization to lead to an observable charge asymmetry perpendicular to the collision plane. It has been suggested recently that such a charge asymmetry is correlated with the baryon number asymmetry through a similar mechanism, the chiral vortical effect \cite{khdt, Son}. For the experimental status of CME, see for example \cite{star08, star09, Wang09, Asakawa:2010bu}. The chiral magnetic effect for a free quark gas in a static and homogeneous magnetic field $\mathbf{\cal B}$ at thermal equilibrium has been analyzed in great detail. With the aid of the grand partition function at a nonzero axial chemical potential $\mu_5$, \begin{equation} Z={\rm Tr}e^{-\beta(H-\mu N-\mu_5Q_5)} \label{grand} \end{equation} with $H$ the Hamiltonian, $N$ the quark number, $\beta$ the inverse temperature and $\mu$ the quark number chemical potential, one obtains the chiral magnetic current $\mathbf{J}=\eta\mathbf{j}$, where \begin{equation} \eta=N_c\sum_fq_f^2 \end{equation} with $q_f$ the charge number of the flavor $f$, and the current per unit charge is given by the classical expression \begin{equation} \mathbf{j}=\frac{e^2}{2\pi^2}\mu_5\mathbf{{\cal B}}. \label{classical} \end{equation} The chiral magnetic current at nonzero momentum and frequency has also been calculated via the current-current correlator to one-loop order within the same grand canonical ensemble defined by (\ref{grand}) \cite{kw}. The same effect has also been examined with holographic models \cite{HUYee,rebhan,gorsky,rubakov,gynther,brits,kirsch} and lattice simulations \cite{Buividovich:2009wi,Buividovich:2010tn}. The effect of a nonzero quark mass has been considered recently in \cite{wjfu}. A diagrammatic proof of (\ref{classical}) to all orders at high density has been attempted in \cite{hong}.
It was pointed out in \cite{rubakov} that the naive axial charge (\ref{naive}) is not the right object to define the grand canonical ensemble, since it is not conserved because of the axial anomaly, \begin{equation} \frac{\partial J_{5\mu}}{\partial x_\mu} =i\frac{N_fg^2}{32\pi^2}\epsilon_{\mu\nu\rho\lambda}F_{\mu\nu}^lF_{\rho\lambda}^l +i\eta\frac{e^2}{16\pi^2}\epsilon_{\mu\nu\rho\lambda}F_{\mu\nu}F_{\rho\lambda} =\frac{\partial\Omega_\mu}{\partial x_\mu}, \label{anomaly0} \end{equation} where the axial vector current $J_{5\mu}=i\bar\psi\gamma_\mu\gamma_5\psi$ and $\Omega_\mu$ is a linear combination of the Chern-Simons currents of QCD and QED, given by \begin{equation} \Omega_\mu=i\frac{N_fg^2}{8\pi^2}\epsilon_{\mu\nu\rho\lambda} A_\nu^l\left(\frac{\partial A_\lambda^l}{\partial x_\rho}-\frac{1}{3}f^{lab}A_\rho^aA_\lambda^b\right) +i\eta\frac{e^2}{4\pi^2}\epsilon_{\mu\nu\rho\lambda}A_\nu\frac{\partial A_\lambda}{\partial x_\rho}, \end{equation} with $A_\mu^l$ and $A_\mu$ the gauge potentials of gluons and photons. The integration of (\ref{anomaly0}) gives rise to (\ref{deltaq5}), to which the trivial topology of the electromagnetic field does not contribute. The conserved axial charge, to replace $Q_5$ in (\ref{grand}), reads \begin{equation} \label{conserved} \tilde Q_5 = Q_5+i\int d^3\mathbf{r}\Omega_4. \end{equation} In what follows, we shall call $Q_5$ the naive axial charge. Furthermore, the author of \cite{rubakov} argued that the gauge invariance prevents a nonzero chiral magnetic current from being generated in the grand canonical ensemble defined with $Q_5$, and that the chiral magnetic current comes solely from the second term of (\ref{conserved}) in the ensemble defined by $\tilde Q_5$. Because this term stems from the anomaly, which is universal to all orders, the classical expression (\ref{classical}) is robust against higher order corrections. In this paper, we shall analyze the chiral magnetic effect via the current-current correlator in the light of Ref.
\cite{rubakov}. There are standard recipes to implement gauge-invariant regularization schemes for the thermal diagrams employed in this work. Higher order corrections can also be included systematically. We find that the validity of the statement in \cite{rubakov} relies on the existence of the infrared limits of the energies and momenta involved, which is not always guaranteed. We shall pinpoint a few exceptions to the statement in \cite{rubakov}: one is caused by the massless poles of the invariant form factors underlying the triangle diagram at $T=0$ and $\mu=0$, and the others are related to the noncommutativity between the zero momentum limit and the zero energy limit at $T\ne 0$ and/or $\mu\ne 0$. The latter subtlety is a common feature of thermal field theories. The difference between different orders of limits is likely to be subject to higher order corrections. Since the magnetic field in RHIC is neither homogeneous nor static and the system is not in complete thermal equilibrium, these issues need to be addressed to assess the robustness of the effect in RHIC phenomenology. In the next section, we shall work out the most general structure of the chiral magnetic current consistent with rotation symmetry, Bose symmetry and gauge invariance. We shall restrict our attention to the diagrams that contribute to the same powers of $\mu_5$ and $\mathbf{{\cal B}}$ as (\ref{classical}). The infrared subtlety resides in some invariant form factors of three-point functions. The one-loop evaluation of the chiral magnetic current will be revisited in section III with the Pauli-Villars regularization. In section IV, we shall clarify the relation between the chiral magnetic current and the axial anomaly for an inhomogeneous and time-dependent $\mu_5$, which is relevant for a QGP off thermal equilibrium. Section V will conclude the paper with some open questions. Throughout the paper, all four momenta will be denoted by capital letters.
We shall adopt the Euclidean metric $(1,1,1,1)$, in which a Minkowski four momentum reads $P=(\mathbf{p},ip_0)$ with $p_0$ real. All gamma matrices are hermitian. \section{The General Structure of the Chiral Magnetic Current} The Lagrangian density of quark matter at nonzero baryon number and axial charge densities is given by \begin{eqnarray} {\cal L} &=& -\frac{1}{4}F_{\mu\nu}^lF_{\mu\nu}^l-\frac{1}{4}F_{\mu\nu}F_{\mu\nu} -\bar\psi\left(\gamma_\mu\frac{\partial}{\partial x_\mu}-igT^lA_\mu^l-ie\hat q A_\mu\right)\psi \\\nonumber &+& \mu\bar\psi\gamma_4\psi+\mu_5\left(\bar\psi\gamma_4\gamma_5\psi +i\Omega_4\right)+J_\mu^{\rm ext.} A_\mu\\\nonumber &+& \hbox{gauge fixing terms and renormalization counter terms} \label{lagrange} \end{eqnarray} where $\hat q$ is the diagonal matrix of electric charge in flavor space, $\mu$ is the quark number chemical potential and $\mu_5$ is the axial charge chemical potential. An external electric current $J_\mu^{\rm ext.}$ has been added to the Lagrangian. The generating functional of the connected Green functions of photons is the logarithm of the partition function \begin{equation} Z[J^{\rm ext.}]=\int[dA^l][dA][d\psi][d\bar\psi]\exp\left(i\int dt d^3\mathbf{r}{\cal L}\right). \end{equation} For the Matsubara Green functions, the time integral inside $\exp(\cdots)$ is along the imaginary axis of the complex $t$-plane, extending from 0 to $i\beta=i/T$, subject to periodic (antiperiodic) boundary conditions for bosonic (fermionic) field variables, and $-T\ln Z$ is the thermodynamic potential at equilibrium. For the closed time path (CTP) Green functions, the time $t$ is integrated along the real axis from $-\infty$ to $\infty$ and then from $\infty$ back to $-\infty$, and the thermal equilibrium is implemented by the initial correlations. All fields can take values on either branch of this contour, which doubles the number of degrees of freedom \cite{kcchou,keld,schw, rep-145,PeterH}.
See appendix \ref{CTP_appendix} for a brief introduction to the CTP formalism. The external current $J_\mu^{\rm ext.}$ generates a nonzero thermal average of the electromagnetic potential, given by \begin{equation} {\cal A}_\mu(x) =-i\frac{\delta \ln Z}{\delta J_\mu^{\rm ext.}(x)} \end{equation} and its Legendre transformation reads \begin{equation} \frac{\delta{\cal S}}{\delta{\cal A}_\mu(x)}=-J_\mu^{\rm ext.}(x), \label{legendre} \end{equation} where the effective action \begin{eqnarray} \label{effective} {\cal S}[{\cal A}] &=& -i\ln Z[J^{\rm ext.}]-\int d^4xJ_\mu^{\rm ext.}{\cal A}_\mu\\\nonumber &=& \int d^4x\left(-\frac{1}{4}{\cal F}_{\mu\nu}{\cal F}_{\mu\nu} +\eta\frac{e^2}{4\pi^2}\mu_5{\cal A}_i{\cal B}_i\right) +\Gamma[{\cal A}], \end{eqnarray} with ${\cal F}_{\mu\nu}=\frac{\partial {\cal A}_\nu}{\partial x_\mu} -\frac{\partial {\cal A}_\mu}{\partial x_\nu}$ and ${\cal B}_i=\frac{1}{2}\epsilon_{ijk}{\cal F}_{jk}=(\vec\nabla\times\mathbf{{\cal A}})_i$. In the second line of (\ref{effective}), we have separated the contributions of tree diagrams (first two terms) from those of loop diagrams (third term). Eq.(\ref{legendre}) is equivalent to the Maxwell equation \begin{equation} \frac{\partial{\cal F_{\mu\nu}}}{\partial x_\nu}=J_\mu^{\rm ext.}+J_\mu, \end{equation} where \begin{equation} J_i(x)=\frac{\delta\Gamma}{\delta{\cal A}_i(x)}+\eta\frac{e^2}{2\pi^2}\mu_5{\cal B}_i \label{extJ} \end{equation} represents the induced current in the medium. The functional $\Gamma[{\cal A}]$ can be expanded in powers of ${\cal A}$ with the proper vertex functions as coefficients. We have, in momentum space, \begin{equation} \Gamma[{\cal A}]=\int\frac{d^4Q}{(2\pi)^4}\Big[ -\frac{1}{2}\Pi_{\mu\nu}(Q){\cal A}_\mu^*(Q){\cal A}_\nu(Q)+O({\cal A}^3)\Big], \label{GammaA} \end{equation} where only the term contributing to the linear response is displayed explicitly.
It follows from (\ref{extJ}) that \begin{equation} J_i(Q)={\cal K}_{ij}(Q){\cal A}_j(Q), \label{curr} \end{equation} where \begin{equation} {\cal K}_{ij}(Q)=-\Pi_{ij}(Q)-i\eta\frac{e^2}{2\pi^2}\mu_5\epsilon_{ijk}q_k +O({\cal A}^2) \end{equation} with all QCD and higher order QED corrections contained in the photon self-energy tensor $\Pi_{\mu\nu}(Q)$. The prescription of the functional derivative for the retarded linear response is outlined near the end of the appendix \ref{CTP_appendix}. The antisymmetric part of ${\cal K}_{ij}(Q)$, \begin{equation} {\cal K}_{ij}^A(Q)\equiv\frac{1}{2}[{\cal K}_{ij}(Q)-{\cal K}_{ji}(Q)] \end{equation} which is odd in $\mu_5$, carries odd parity and generates the chiral magnetic current. \begin{figure} \begin{center} \includegraphics[width=0.3\linewidth]{fig1.eps}\\ \caption{The diagrammatic representation of the contribution to the chiral magnetic current from the photon self-energy, where the contribution of each vertex to the Feynman amplitude is indicated explicitly.}\label{fig1} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.6\linewidth]{fig2.eps}\\ \caption{The triangle diagram underlying the axial anomaly, where the solid line represents the free quark propagator at $\mu_5=0$.}\label{fig2} \end{center} \end{figure} Expanding the response function ${\cal K}_{ij}^A(Q)$ in powers of $\mu_5$, we have ${\cal K}_{ij}^A(Q)=\mu_5{\cal K}_{ij}^{(1)}(Q)+O(\mu_5^3)$, where \begin{equation} {\cal K}_{ij}^{(1)}(Q)=-\frac{\partial}{\partial\mu_5}\Pi_{\mu\nu}(Q)|_{\mu_5=0} -i\eta\frac{e^2}{2\pi^2}\epsilon_{ijk}q_k \label{K1} \end{equation} underlies the classical form of the chiral magnetic current (\ref{classical}). The first term of (\ref{K1}) is represented by the 1PI diagram with two external vector vertices and an external axial vector vertex, shown in Fig.1, at $\mu=i$, $\nu=j$ and $\rho=4$. Its lowest order consists of the usual triangle diagrams of Fig.2.
Let the incoming 4-momenta at the photon vertices be $Q_1\equiv(\mathbf{q}_1,i\omega)$ and $Q_2\equiv(\mathbf{q}_2,-i\omega)$; the incoming 4-momentum at the axial vertex is then $-Q_1-Q_2=(-\mathbf{q}_1-\mathbf{q}_2,0)$. The amplitude of the diagram $\Delta_{\mu\nu}(Q_1,Q_2)$ consists of a pseudo-tensor $\Delta_{ij}(Q_1,Q_2)$, a pseudo-vector $\Delta_{4j}(Q_1,Q_2)$ and a pseudo-scalar $\Delta_{44}(Q_1,Q_2)$. In the limit $Q_1\to -Q_2$ with $Q_1\equiv Q=(\mathbf{q},i\omega)$, we find that \begin{equation} \frac{\partial}{\partial\mu_5}\Pi_{\mu\nu}(Q)|_{\mu_5=0}= \Delta_{\mu\nu}(Q,-Q). \label{Delta} \end{equation} The rotation invariance and the Bose symmetry \begin{equation} \Delta_{\mu\nu}(Q_1,Q_2)=\Delta_{\nu\mu}(Q_2,Q_1) \end{equation} dictate the following most general tensorial structure \begin{eqnarray} \label{tensor} \Delta_{ij}(Q_1,Q_2) &=& i\eta\frac{e^2}{2\pi^2} [C_0(q_1^2,q_2^2,\mathbf{q}_1\cdot\mathbf{q}_2;\omega)\epsilon_{ijk}q_{1k} -C_0(q_2^2,q_1^2,\mathbf{q}_1\cdot\mathbf{q}_2;-\omega)\epsilon_{ijk}q_{2k}\\ \nonumber &+& C_1(q_1^2,q_2^2,\mathbf{q}_1\cdot\mathbf{q}_2;\omega)\epsilon_{jkl}q_{1k}q_{2l}q_{1i} -C_1(q_2^2,q_1^2,\mathbf{q}_1\cdot\mathbf{q}_2;-\omega)\epsilon_{ikl}q_{1k}q_{2l}q_{2j}], \end{eqnarray} \begin{equation} \Delta_{4k}(Q_1,Q_2)= \eta\frac{e^2}{2\pi^2} C_2(q_1^2,q_2^2,\mathbf{q}_1\cdot\mathbf{q}_2;\omega)\epsilon_{ijk}q_{1i}q_{2j} =\Delta_{k4}(Q_2,Q_1) \label{vector} \end{equation} and $\Delta_{44}(Q_1,Q_2)=0$, where $C_0$, $C_1$ and $C_2$ are dynamical form factors. The time reversal invariance implies that $C_0$ and $C_1$ are even functions of $\omega$ and $C_2$ is odd in $\omega$ (this, however, is not required for our purpose). Notice that the tensors $\epsilon_{jkl}q_{1k}q_{2l}q_{2i}$ and $\epsilon_{ikl}q_{1k}q_{2l}q_{1j}$ are not independent and can be reduced to the tensors already included in (\ref{tensor}) via the Schouten identity \begin{equation} \epsilon_{ijk}q_l-\epsilon_{lij}q_k+\epsilon_{kli}q_j-\epsilon_{jkl}q_i=0.
\end{equation} Furthermore, switching $Q_1\to Q_2$ amounts to $\mathbf{q}_1\to \mathbf{q}_2$ and $\omega\to-\omega$. It follows from (\ref{K1}) and (\ref{tensor}) that \begin{equation} {\cal K}_{ij}^A(Q)=i\eta\frac{e^2}{2\pi^2}\mu_5[F(Q)-1]\epsilon_{ijk}q_k+O(\mu_5^3) \end{equation} with \begin{equation} F(Q)=-C_0(q^2,q^2,-q^2;\omega)-C_0(q^2,q^2,-q^2;-\omega). \label{FinC0} \end{equation} The chiral magnetic current in a constant magnetic field corresponds to the limit $F(0)$, which is subtle, as we shall see. The electromagnetic gauge invariance, \begin{equation} Q_{1\mu}\Delta_{\mu\nu}(Q_1,Q_2)=Q_{2\nu}\Delta_{\mu\nu}(Q_1,Q_2)=0, \end{equation} gives rise to the relations \begin{equation} C_0(q_1^2,q_2^2,\mathbf{q}_1\cdot\mathbf{q}_2;\omega)= -q_2^2C_1(q_2^2,q_1^2,\mathbf{q}_1\cdot\mathbf{q}_2;-\omega) +\omega C_2(q_2^2,q_1^2,\mathbf{q}_1\cdot\mathbf{q}_2;-\omega) \end{equation} and \begin{equation} C_0(q_2^2,q_1^2,\mathbf{q}_1\cdot\mathbf{q}_2;-\omega)= -q_1^2C_1(q_1^2,q_2^2,\mathbf{q}_1\cdot\mathbf{q}_2;\omega) -\omega C_2(q_1^2,q_2^2,\mathbf{q}_1\cdot\mathbf{q}_2;\omega), \end{equation} and therefore \begin{equation} F(Q)=q^2[C_1(q^2,q^2,-q^2;\omega)+C_1(q^2,q^2,-q^2;-\omega)] +\omega[C_2(q^2,q^2,-q^2;\omega)-C_2(q^2,q^2,-q^2;-\omega)]. \end{equation} If the infrared limit of the dynamical form factors $C_1$ and $C_2$ exists, then $F(0)=0$ and there is no chiral magnetic current associated with the {\it{naive}} axial charge. This is the case in the static limit $q\to 0$ with $Q=(\mathbf{q},0)$ to one-loop order at nonzero $T$ and/or $\mu$. It remains so if there exists a nonperturbative IR cutoff to remove the $\frac{1}{q^2}$ singularities brought about by QCD corrections \cite{linde} (such singularities are likely to occur for diagrams with more than one quark loop linked by gluon lines). In that case, the chiral magnetic current takes the classical form (\ref{classical}) to all orders.
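The 3D Schouten identity invoked above follows from the fact that antisymmetrizing four indices over three values must vanish; it can also be verified by brute force, as in the following minimal check (our own sketch, with an arbitrary test vector):

```python
import itertools

def eps(i, j, k):
    # 3D Levi-Civita symbol: +1/-1 for even/odd permutations of (0,1,2), else 0
    return (i - j) * (j - k) * (k - i) // 2

q = [3, -7, 5]  # arbitrary test vector (our choice)

# Schouten identity: eps_{ijk} q_l - eps_{lij} q_k + eps_{kli} q_j - eps_{jkl} q_i = 0
for i, j, k, l in itertools.product(range(3), repeat=4):
    assert eps(i, j, k)*q[l] - eps(l, i, j)*q[k] + eps(k, l, i)*q[j] - eps(j, k, l)*q[i] == 0
```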
It is a common feature of thermal field theories that the different orders of the double limits $\lim_{q\to 0}\lim_{\omega\to 0}$ and $\lim_{\omega\to 0}\lim_{q\to 0}$ may not agree. While the former order of limits of $C_1(q^2,q^2,-q^2;\omega)$ and $C_2(q^2,q^2,-q^2;\omega)$ converges and leads to the classical form of the chiral magnetic current, the latter order of limits leads to an IR divergence. The explicit calculation of the triangle diagram of Fig.2 in the appendix \ref{IR_formfactor_appendix} with $\mu=4$, $\rho=4$ and $\nu=j$ yields \begin{equation} C_2(0,0,0;\omega)=\frac{1}{3\omega} \label{infrared} \end{equation} as $\omega\to 0$, and $\lim_{\omega\to 0}\lim_{q\to 0}F(Q)=\frac{2}{3}$. Consequently, the magnitude of the one-loop chiral magnetic current is reduced to one third of the classical magnitude. This is consistent with the direct one-loop calculation in the literature \cite{kw} and will be reexamined in the next section. Since the form factor $F(Q)$ is not linked to the axial anomaly, the chiral magnetic current in this order of limits is likely to be subject to higher order corrections. The IR singularity also shows up via the massless poles if the zero temperature and zero chemical potential limits are taken prior to the limit $Q\to 0$, in which case $\Delta_{\mu\nu}(Q_1,Q_2)$ becomes fully covariant. To the one-loop order, the triangle diagram of Fig.2 gives rise to \begin{equation} C_1(q^2,q^2,-q^2;\omega)=\frac{1}{2(q^2-\omega^2)} \label{IRc1} \end{equation} and \begin{equation} C_2(q^2,q^2,-q^2;\omega)=-\frac{\omega}{2(q^2-\omega^2)}. \label{IRc2} \end{equation} (See section IV for details.) Both $C_1$ and $C_2$ are infrared divergent, and we find $F(0)=1$ and therefore a vanishing chiral magnetic current for $T=\mu=0$ but $\mu_5\ne 0$. \section{The one-loop contribution} \begin{figure} \begin{center} \includegraphics[width=0.3\linewidth]{fig3.eps}\\ \caption{The one-loop diagram of the photon self-energy.
The solid line with a double arrow stands for the free propagator to all orders of $\mu_5$.}\label{fig3} \end{center} \end{figure} The one-loop contribution to the chiral magnetic current has been discussed extensively in the literature. In the present section, we shall supplement this calculation with the Pauli-Villars regularization, since the photon self-energy as a whole suffers from a UV divergence. As the regularization respects the gauge invariance, the result will be consistent with Ref.\cite{rubakov} and the statement of the previous section. The trivial color-flavor factor $\eta$ will be suppressed below. The one-loop photon self-energy tensor at temperature $T$, shown in Fig.3, is given by \begin{equation} \Pi_{\mu\nu}(Q)=e^2T\sum_{p_0}\int\frac{d^3\mathbf{p}}{(2\pi)^3} \Big[\Xi_{\mu\nu}(P,Q|m)-\sum_sC_s\Xi_{\mu\nu}(P,Q|M_s)\Big], \end{equation} where \begin{equation} \Xi_{\mu\nu}(P,Q|m)={\rm tr}S_F(P+Q|m)\gamma_\mu S_F(P|m)\gamma_\nu \label{Xi} \end{equation} and the summation in the integrand corresponds to the contribution of the Pauli-Villars regulators that remove all UV divergences. We have \begin{equation} \sum_sC_s=1 \label{PV} \end{equation} and $M_s\longrightarrow\infty$ after the integration.
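As a toy illustration of how the subtraction in (\ref{PV}) operates (our own sketch, not the actual self-energy integral), consider a logarithmically divergent one-dimensional integral: each piece grows with the cutoff, while the regulated difference remains finite once the cutoff is removed, leaving only a dependence on the regulator mass:

```python
import sympy as sp

p, L, m, M = sp.symbols('p Lambda m M', positive=True)

# Each integral diverges like log(Lambda) as Lambda -> oo ...
I_m = sp.integrate(p / (p**2 + m**2), (p, 0, L))
I_M = sp.integrate(p / (p**2 + M**2), (p, 0, L))

# ... but the Pauli-Villars-like difference is finite in that limit.
regulated = sp.limit(I_m - I_M, L, sp.oo)
assert sp.simplify(sp.expand_log(regulated - sp.log(M / m))) == 0
```

In the actual calculation the regulator masses $M_s$ are sent to infinity only after the momentum integration, exactly as in this toy model.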
The free quark propagator with a four momentum $P=(\mathbf{p},ip_0)$, a mass $m$, a quark number chemical potential $\mu$ and an axial charge chemical potential $\mu_5$ reads \begin{eqnarray} S_F(P|m)&=&\frac{i}{{\not P}+\mu\gamma_4+\mu_5\gamma_4\gamma_5-m}\\\nonumber &=&\frac{i}{2}[A(P,m,\mu,\mu_5)+A(P,m,\mu,-\mu_5)] +\frac{i}{2}\gamma_5[B(P,m,\mu,\mu_5)-B(P,m,\mu,-\mu_5)] \label{prop} \end{eqnarray} where ${\not P}\equiv \gamma_4 p_0-i\mathbf{\gamma}\cdot\mathbf{p}$ and we have decomposed $S_F(P|m)$ into the parts even and odd in $\mu_5$, with \begin{equation} A(P,m,\mu,\mu_5)=\frac{(p_0+\mu)\gamma_4-i(p+\mu_5)\mathbf{\gamma}\cdot \hat{\mathbf{p}}+m}{(p_0+\mu)^2-(p+\mu_5)^2-m^2} \label{AP} \end{equation} and \begin{equation} B(P,m,\mu,\mu_5)=\frac{-(p+\mu_5)\gamma_4+i(p_0+\mu-m\gamma_4)\mathbf{\gamma} \cdot\hat{\mathbf{p}}}{(p_0+\mu)^2-(p+\mu_5)^2-m^2}. \label{BP} \end{equation} The chiral magnetic current corresponds to the antisymmetric spatial components of $\Pi_{\mu\nu}(Q)$, i.e. \begin{equation} \Pi_{ij}^A(Q)\equiv\frac{1}{2}[\Pi_{ij}(Q)-\Pi_{ji}(Q)]= e^2T\sum_{p_0}\int\frac{d^3\mathbf{p}}{(2\pi)^3} \Big[\Xi_{ij}^A(P,Q|m)-\sum_sC_s\Xi_{ij}^A(P,Q|M_s)\Big], \end{equation} where \begin{eqnarray}\label{XiA} \Xi_{ij}^A(P,Q|m)&=&-\frac{1}{4}{\rm tr}\gamma_5 \lbrace[B(P+Q,m,\mu,\mu_5)-B(P+Q,m,\mu,-\mu_5)]\gamma_i\\\nonumber &\times& [A(P,m,\mu,\mu_5)+A(P,m,\mu,-\mu_5)]\gamma_j\\\nonumber &+&[B(P,m,\mu,\mu_5)-B(P,m,\mu,-\mu_5)]\gamma_i [A(P+Q,m,\mu,\mu_5)+A(P+Q,m,\mu,-\mu_5)]\gamma_j\rbrace. \end{eqnarray} It is straightforward to work out the trace and the summation over the Matsubara frequency, $p_0=i(2n+1)\pi T$. To obtain the retarded self-energy, we shall follow the recipe of Baym and Mermin \cite{baym} to extend the Matsubara frequency $q_0$ to the upper edge of the real axis, $q_0\to \omega+i0^+$. The details are shown in the appendix \ref{oneloop_appendix} and we shall report two special cases below.
The antisymmetric part of the self-energy tensor is parametrized as \begin{equation} \Pi_{ij}^A(Q)=-i\frac{e^2}{2\pi^2}\mu_5F_1(q,\omega)\epsilon_{ijk}q_k, \label{form} \end{equation} where $F_1(q,\omega)$ at $\mu_5=0$ corresponds to the one-loop approximation of $F(Q)$ as defined in Eq. (\ref{FinC0}); the dependences on the spatial momentum and the energy are indicated separately here. Diagrammatically, Fig.2 corresponds to the linear term of the Taylor expansion of Fig.3 in $\mu_5$. \noindent \subsection{The static limit} At zero frequency, $q_0=0$, we find that \begin{equation} F_1(q,0)=-{\cal F}(q|m)+\sum_sC_s{\cal F}(q|M_s) \label{static} \end{equation} where \begin{eqnarray} \label{calF} {\cal F}(q|m)&=&\frac{1}{2\mu_5q}\int_0^\infty dpp\ln|\frac{2p-q}{2p+q}|\\\nonumber &{}&\lbrace\frac{p+\mu_5}{E_+}[f(E_+-\mu)-f(-E_+-\mu)] -\frac{p-\mu_5}{E_-}[f(E_--\mu)-f(-E_--\mu)]\rbrace \end{eqnarray} with \begin{equation} E_\pm=\sqrt{(p\pm\mu_5)^2+m^2}, \end{equation} and the Fermi distribution function \begin{equation} f(\xi)=\frac{1}{e^{\beta\xi}+1}. \label{distr} \end{equation} It is straightforward to verify that the limit $q\to 0$ at $T\neq 0$ and/or $\mu\neq 0$ yields \begin{eqnarray} {\cal F}(0|m)&=&-\frac{1}{2\mu_5}\int_0^\infty dp \lbrace\frac{p+\mu_5}{E_+} [f(E_+-\mu)-f(-E_+-\mu)]-\frac{p-\mu_5}{E_-}[f(E_--\mu)-f(-E_--\mu)]\rbrace \nonumber \\ &=&\frac{1}{2\mu_5\beta}\Big[\ln(1+e^{-\beta(E_+-\mu)}) -\ln(1+e^{-\beta(E_--\mu)})+\ln(e^{\beta(E_++\mu)}+1) -\ln(e^{\beta(E_-+\mu)}+1)\Big]_0^\infty\nonumber \\ &=&\frac{1}{2\mu_5}\lim_{p\to\infty}(E_+-E_-)=1. \label{homo} \end{eqnarray} Then eqs.(\ref{static}) and (\ref{PV}) imply that \begin{equation} \lim_{q\to 0}\lim_{\omega\to 0}F_1(q,\omega)=0. \end{equation} This result is expected according to the discussion in the last section, because the nonzero Matsubara frequency $(2n+1)\pi T$ regularizes the infrared behavior of the quark propagator even in the massless limit.
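The limit (\ref{homo}) can also be confirmed numerically. The following sketch (our own, with arbitrarily chosen values of $T$, $\mu$, $\mu_5$ and $m$; any choice should give the same limit) evaluates the $q\to 0$ integrand on a finite grid with a large momentum cutoff:

```python
import numpy as np

# Arbitrary test values (the result should be 1 for any T, mu, mu5, m)
T, mu, mu5, m = 1.0, 0.4, 0.3, 0.5
beta = 1.0 / T

def f(xi):
    # Fermi distribution, argument clipped to avoid overflow in exp
    return 1.0 / (np.exp(np.clip(beta * xi, -700.0, 700.0)) + 1.0)

p = np.linspace(1e-8, 400.0, 400_001)
Ep = np.sqrt((p + mu5)**2 + m**2)
Em = np.sqrt((p - mu5)**2 + m**2)
integrand = ((p + mu5) / Ep * (f(Ep - mu) - f(-Ep - mu))
             - (p - mu5) / Em * (f(Em - mu) - f(-Em - mu)))

# Trapezoidal rule for -1/(2 mu5) * integral, i.e. the left-hand side of (homo)
F0 = -np.sum((integrand[1:] + integrand[:-1]) * np.diff(p)) / 2.0 / (2.0 * mu5)
assert abs(F0 - 1.0) < 1e-3
```

The residual cutoff error scales like $m^2/(2P^2)$ with the upper limit $P$ of the grid, so the agreement with 1 improves rapidly as the cutoff is raised.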
Notice that the regulator contribution \begin{equation} \lim_{M_s\to \infty}{\cal F}(q|M_s)=1 \label{pvlimit} \end{equation} for all $q$; this is also the case with a non-static $Q$. If, on the other hand, $T$ and $\mu$ as well as the quark mass are set to zero first, we find \begin{equation} {\cal F}(q|0)=-\frac{1}{\mu_5 q}\int_0^{|\mu_5|} dpp\ln\left|\frac{2p-q}{2p+q}\right|. \label{covlimit} \end{equation} It follows from (\ref{pvlimit}), (\ref{covlimit}) and (\ref{static}) that $F_1(q,0)=1$ at $\mu_5=0$, in agreement with the covariant result reported at the end of the last section. \subsection{Massless limit} In the massless limit, $m=0$, the quark propagator (\ref{prop}) reduces to \begin{equation} S_F(P|0)=\frac{i}{{\not P}+(\mu-\mu_5)}\frac{1+\gamma_5}{2} +\frac{i}{{\not P}+(\mu+\mu_5)}\frac{1-\gamma_5}{2} \label{propm0} \end{equation} and $F_1(q,\omega)$ in this case reads \begin{eqnarray} \label{massless} F_1(q,\omega) &=& -\frac{1}{2\mu_5}\int_0^\infty dp p^2\Big[ \frac{J(p,q,\omega)+J(p,-q,-\omega)}{e^{\beta(p-\mu+\mu_5)}+1} -\frac{J(p,q,-\omega)+J(p,-q,\omega)}{e^{\beta(p+\mu-\mu_5)}+1}\nonumber \\ &-&\frac{J(p,q,\omega)+J(p,-q,-\omega)}{e^{\beta(p-\mu-\mu_5)}+1} +\frac{J(p,q,-\omega)+J(p,-q,\omega)}{e^{\beta(p+\mu+\mu_5)}+1} \Big]+1, \end{eqnarray} where \begin{equation} {\rm Re}J(p,q,\omega)=\frac{1}{pq}\Big[-\frac{\omega}{q} +\frac{1}{2}\left(1+\omega\frac{\omega^2-2p\omega-q^2}{2pq^2}\right) \ln|\frac{(\omega-q)(\omega+q-2p)}{(\omega+q)(\omega-q-2p)}|\Big] \end{equation} and \begin{equation} {\rm Im}J(p,q,\omega)=\frac{\pi}{pq}{\rm sign}(\omega) \left(1+\omega\frac{\omega^2-2p\omega-q^2}{2pq^2}\right) \theta\left(1-\frac{|q^2+2p\omega-\omega^2|}{2pq}\right), \end{equation} and the ``+1'' of eq.(\ref{massless}) comes from the Pauli-Villars regulators. The limit $Q\to 0$ of the PV regulators is independent of the order between $q\to 0$ and $\omega\to 0$, as long as $M_s\to\infty$ is taken first. The same limit of the massless part is, however, subtle.
We have \begin{equation} \lim_{q\to 0}\lim_{\omega\to 0}F_1(q,\omega)=0 \end{equation} but \begin{equation} \lim_{\omega\to 0}\lim_{q\to 0}F_1(q,\omega)=\frac{2}{3}, \end{equation} consistent with the result reported in \cite{kw}. The nonzero value of the latter limit signals the infrared divergence of the form factor $C_2(q^2,q^2,-q^2;\omega)$ defined in the last section under the same orders of limits. \section{The Relation to the Triangle Anomaly} In section 2, we related the chiral magnetic current to the infrared limit of the three-point Green function in Fig.1 with two electric currents and the fourth component of the axial vector current. We analyzed the general structure of the chiral magnetic current as required by the electromagnetic Ward identity. For the sake of simplicity, we restricted our attention to zero energy flow at the axial vector vertex. To explore the impact of the anomalous axial current Ward identity, this restriction will be relaxed in the present section. The physics of the diagram of Fig. 1 with $\rho=4$ and an arbitrary $Q_1+Q_2$ corresponds to CME at a space-time dependent $\mu_5$ in a QGP off thermal equilibrium. We shall denote the general Feynman amplitude of Fig.1 by $\Lambda_{\mu\nu\rho}(Q_1,Q_2)$, with $Q_1$ and $Q_2$ the incoming momenta at the vector vertices indexed by $\mu$ and $\nu$. We have \begin{equation} \Lambda_{\mu\nu 4}(Q_1,Q_2)=\Delta_{\mu\nu}(Q_1,Q_2) \end{equation} with $\Delta_{\mu\nu}(Q_1,Q_2)$ defined in section 2. The incoming momentum at the axial vector vertex is then \begin{equation} K=(\mathbf{k},ik_0)=-Q_1-Q_2. \label{outgoing} \end{equation} We have \begin{equation} Q_{1\mu}\Lambda_{\mu\nu\rho}(Q_1,Q_2)=Q_{2\nu}\Lambda_{\mu\nu\rho}(Q_1,Q_2)=0 \label{gaugeinv} \end{equation} following from the electromagnetic gauge invariance.
The triangle anomaly implies that \begin{equation} (Q_1+Q_2)_\rho\Lambda_{\mu\nu\rho}(Q_1,Q_2) =-i\eta\frac{e^2}{2\pi^2}\epsilon_{\mu\nu\alpha\beta}Q_{1\alpha}Q_{2\beta} \label{anomaly} \end{equation} which holds to all orders of interaction at arbitrary temperature and chemical potential \cite{itoyama}. The classical expression of the chiral magnetic current is associated with the component $\Lambda_{ij4}(Q_1,Q_2)$ with the momenta \begin{equation} Q_1=(\mathbf{q},i\omega) \qquad Q_2=(-\mathbf{q},-i\omega). \label{cmemomenta} \end{equation} It is tempting to relate the self-energy contribution to CME with the axial anomaly via the limiting process \begin{equation} \Lambda_{ij4}(Q_1,Q_2)=-i\lim_{k_0\to 0}\frac{1}{k_0} (Q_1^\prime+Q_2^\prime)_\rho\Lambda_{ij\rho}(Q_1^\prime,Q_2^\prime) =-i\eta\frac{e^2}{2\pi^2}\epsilon_{ijk}q_k \label{cmelimit} \end{equation} where $Q_1^\prime\equiv(\mathbf{q},ik_0/2)$ and $Q_2^\prime\equiv(-\mathbf{q},ik_0/2)$. This appears to contradict the statement of the absence of CME with the naive axial charge, and it does not display the nontrivial energy-momentum dependence of the one-loop result. The reason lies in the infrared singularity and the subtlety of the order of limits $k_0\to 0$ and $\mathbf{k}\to 0$ as we shall analyze below. At $T=0$ and $\mu=0$, however, the order of limits is irrelevant and we always get the RHS of (\ref{cmelimit}), consistent with the one-loop result near the end of Subsection 3.1.
The most general tensorial decomposition at $T=\mu=0$ consistent with the gauge invariance (\ref{gaugeinv}) and Bose symmetry reads \begin{eqnarray} \Lambda_{\mu\nu\rho}(Q_1,Q_2) &=& i\eta\frac{e^2}{2\pi^2}\lbrace\epsilon_{\mu\nu\alpha\beta}Q_{1\alpha}Q_{2\beta} [Q_{1\rho}D_1(Q_1^2,Q_2^2,Q_1\cdot Q_2)+Q_{2\rho}D_1(Q_2^2,Q_1^2,Q_1\cdot Q_2)] \nonumber\\ &+&(\epsilon_{\nu\rho\alpha\beta}Q_{1\alpha}Q_{2\beta}Q_{1\mu} -Q_1^2\epsilon_{\mu\nu\rho\lambda}Q_{2\lambda})D_2(Q_1^2,Q_2^2,Q_1\cdot Q_2) \\\nonumber &-&(\epsilon_{\mu\rho\alpha\beta}Q_{1\alpha}Q_{2\beta}Q_{2\nu} -Q_2^2\epsilon_{\mu\nu\rho\lambda}Q_{1\lambda})D_2(Q_2^2,Q_1^2,Q_1\cdot Q_2)\rbrace, \label{*} \end{eqnarray} where the 4D Schouten identity \begin{equation} \epsilon_{\mu\nu\rho\lambda}Q_\alpha+\epsilon_{\alpha\mu\nu\rho}Q_\lambda +\epsilon_{\lambda\alpha\mu\nu}Q_\rho+\epsilon_{\rho\lambda\alpha\mu}Q_\nu +\epsilon_{\nu\rho\lambda\alpha}Q_\mu=0 \end{equation} is employed to reduce the number of terms. It follows from the anomaly equation (\ref{anomaly}) that \begin{eqnarray} &{}&(Q_1+Q_2)\cdot Q_1D_1(Q_1^2,Q_2^2,Q_1\cdot Q_2)+(Q_1+Q_2)\cdot Q_2D_1(Q_2^2,Q_1^2,Q_1\cdot Q_2) \nonumber\\ &-&Q_1^2D_2(Q_1^2,Q_2^2,Q_1\cdot Q_2)-Q_2^2D_2(Q_2^2,Q_1^2,Q_1\cdot Q_2)=-1, \label{constraint} \end{eqnarray} which implies infrared singularities of the dynamical form factors $D_1$ and $D_2$. To the one loop order, we find that \begin{equation} D_1(Q_1^2,Q_2^2,Q_1\cdot Q_2)=-2\int_0^1dx\int_0^{1-x}dy \frac{xy}{Q_1^2x+Q_2^2y-(Q_1x-Q_2y)^2} \end{equation} and \begin{equation} D_2(Q_1^2,Q_2^2,Q_1\cdot Q_2)=2\int_0^1dx\int_0^{1-x}dy \frac{x(1-x-y)}{Q_1^2x+Q_2^2y-(Q_1x-Q_2y)^2}, \label{D2} \end{equation} which satisfy the constraint (\ref{constraint}). For the CME momenta, (\ref{cmemomenta}), we find that $D_2(Q^2,Q^2,-Q^2)=\frac{1}{2Q^2}$ and therefore \begin{equation} \Lambda_{ij4}(Q_1,Q_2)=-i\eta\frac{e^2}{\pi^2}Q^2D_2(Q^2,Q^2,-Q^2)\epsilon_{ijk}q_k =-i\eta\frac{e^2}{2\pi^2}\epsilon_{ijk}q_k. 
\end{equation} Breaking the tensor (\ref{*}) into spatial and temporal components, we obtain (\ref{IRc1}) and (\ref{IRc2}) via (\ref{D2}). At a nonzero temperature and/or chemical potential, the limit $K\to 0$ becomes very subtle. Because of the discreteness of the energy in the Matsubara Green's function, one has to switch to the real-time formalism for the analysis, for which the closed-time-path (CTP) Green's function is the most convenient. The main ingredients of the CTP formalism are summarized in Appendix \ref{CTP_appendix}. Explicit calculations of the triangle diagram via the CTP show that \begin{equation} \lim_{\mathbf{k}\to 0}\lim_{k_0\to 0}\Lambda_{ij4}(Q_1,Q_2) \neq \lim_{k_0\to 0}\lim_{\mathbf{k}\to 0}\Lambda_{ij4}(Q_1,Q_2) \label{subtlety} \end{equation} with $\mathbf{k}$ and $k_0$ defined in (\ref{outgoing}). The order of limits on the RHS leads to (\ref{cmelimit}), the result dictated by the anomaly, while the order of limits on the LHS gives rise to the result of the last section, obtained from the Matsubara formulation and its analytic continuation to real energy. Therefore, there is no contradiction between the universality of the anomaly and the statement of \cite{rubakov}. \begin{figure} \begin{center} \includegraphics[width=0.2\linewidth]{fig4.eps}\\ \caption{The CTP diagram with one vertex insertion highlighted.}\label{fig4} \end{center} \end{figure} The subtlety of this infrared limit can be explored in general. Consider the CTP diagram in Fig.~4 with a vertex insertion of four-momentum $K=(\mathbf{k},ik_0)$, summing up both CTP paths. The amputated external legs pertaining to the shaded bubble are suppressed. It follows from the Feynman rules of CTP that the contribution of the two highlighted lines adjacent to the vertex insertion in Fig.
4 is \begin{equation} S_{1a}(P+K)\Gamma S_{b1}(P)-S_{2a}(P+K)\Gamma S_{b2}(P), \label{ctp12} \end{equation} where $S_{ab}(P)$ is the CTP quark propagator defined in Appendix \ref{CTP_appendix} with $a$, $b$ labeling the two CTP paths and $\Gamma$ is a matrix with respect to the spinor indices. The spinor indices as well as the indices $a$ and $b$ of (\ref{ctp12}) are to be contracted with the contribution from the shaded bubble of Fig.~4. In terms of the retarded (advanced) propagator $S_R(P)$ ($S_A(P)$) and the correlator $S_C(P)$ defined in Eq.~(\ref{3af}), we find that \begin{eqnarray} \label{ctpprod} &{}& S_{1a}(P+K)\Gamma S_{b1}(P)-S_{2a}(P+K)\Gamma S_{b2}(P)\\\nonumber &=& \frac{1}{2}[S_C(P+K)\Gamma S_R(P)+S_A(P+K)\Gamma S_C(P) \pm S_R(P+K)\Gamma S_R(P)\pm S_A(P+K)\Gamma S_A(P)] \end{eqnarray} with ``$\pm$'' on the RHS depending on the CTP indices $a$ and $b$. Therefore the amplitude of the diagram has the following mathematical structure \begin{eqnarray} \label{generalG} {\cal G}(K)&=&\int\frac{d^4P}{(2\pi)^4}\lbrace U(p_0,\mathbf{p};k_0,\mathbf{k}) \Big[\frac{[1-2f(p^\prime)]\delta[(P+K)^2]}{P^2}+\frac{[1-2f(p)]\delta(P^2)}{(P+K)^2} \Big]\\\nonumber &+&V(p_0,\mathbf{p};k_0,\mathbf{k})\rbrace, \end{eqnarray} where $p^\prime\equiv|\mathbf{p}+\mathbf{k}|$ and $f(p)$ stands for the fermion distribution function. For the sake of simplicity, we have set the quark number chemical potential $\mu=0$, but the generalization to a nonzero $\mu$ is straightforward. The quantity inside the bracket on the RHS of (\ref{generalG}) comes from the first two terms in the second line of eq.~(\ref{ctpprod}), and the contribution from the shaded bubble of Fig.~4 is included in the functions $U(p_0,\mathbf{p};k_0,\mathbf{k})$ and $V(p_0,\mathbf{p};k_0,\mathbf{k})$.
The function $U(p_0,\mathbf{p};k_0,\mathbf{k})$ is regular at the mass shells \begin{equation} P^2=p^2-p_0^2=0 \qquad (P+K)^2=p^{\prime 2}-p_0^{\prime 2}=0, \label{massshell} \end{equation} and its derivative with respect to $p_0$ will be denoted by $\dot{U}(p_0,\mathbf{p};k_0,\mathbf{k})$ below. The same holds for the function $V(p_0,\mathbf{p};k_0,\mathbf{k})$, and its integral $I(K)$ is unambiguous in the limit $K\to 0$. Carrying out the energy integral, we find that \begin{eqnarray} {\cal G}(K)=\frac{1}{2}\int\frac{d^3p}{(2\pi)^3} &\lbrace& \frac{1}{p-p'+k_0} \Big[\frac{[1-2f(p')]U(p'-k_0,\mathbf{p};k_0,\mathbf{k})}{p'(p+p'-k_0)} -\frac{[1-2f(p)]U(p,\mathbf{p};k_0,\mathbf{k})}{p(p+p'+k_0)}\Big]\nonumber\\ &+& \frac{1}{p-p'-k_0} \Big[\frac{[1-2f(p')]U(-p'-k_0,\mathbf{p};k_0,\mathbf{k})}{p'(p+p'+k_0)}\nonumber\\ &-&\frac{[1-2f(p)]U(-p,\mathbf{p};k_0,\mathbf{k})}{p(p+p'-k_0)}\Big]\rbrace + I(K). \end{eqnarray} It follows that \begin{eqnarray} \lim_{k_0\to 0}\lim_{\mathbf{k}\to 0}{\cal G}(K) &=& \frac{1}{2}\lim_{k_0\to 0}\frac{1}{k_0}\int\frac{d^3p}{(2\pi)^3} \frac{1-2f(p)}{p}\Big[\frac{U(p-k_0,\mathbf{p};k_0,0)}{(2p-k_0)} -\frac{U(p,\mathbf{p};k_0,0)}{(2p+k_0)}\nonumber\\ &-&\frac{U(-p-k_0,\mathbf{p};k_0,0)}{(2p+k_0)}+\frac{U(-p,\mathbf{p};k_0,0)}{(2p-k_0)}\Big]+I(0)\nonumber\\ &=&\frac{1}{4}\int\frac{d^3p}{(2\pi)^3}\frac{1-2f(p)}{p^2} \Big[-\dot{U}(p,\mathbf{p};0,0)+\frac{U(p,\mathbf{p};0,0)}{p}\nonumber\\ &+& \dot{U}(-p,\mathbf{p};0,0)+\frac{U(-p,\mathbf{p};0,0)}{p}\Big]+I(0) \end{eqnarray} and \begin{eqnarray} \label{limitorder} \lim_{\mathbf{k}\to 0}\lim_{k_0\to 0}{\cal G}(K) &=& \frac{1}{2}\lim_{\mathbf{k}\to 0}\int\frac{d^3p}{(2\pi)^3}\frac{1}{p^2-p^{\prime 2}} \lbrace\frac{[1-2f(p')]U(p',\mathbf{p};0,\mathbf{k})}{p'} -\frac{[1-2f(p)]U(p,\mathbf{p};0,\mathbf{k})}{p}\nonumber\\ &+&\frac{[1-2f(p')]U(-p',\mathbf{p};0,\mathbf{k})}{p'} -\frac{[1-2f(p)]U(-p,\mathbf{p};0,\mathbf{k})}{p}\rbrace+I(0)\nonumber\\ &=& \lim_{k_0\to 0}\lim_{\mathbf{k}\to 0}{\cal G}(K)
+\frac{1}{2}\int\frac{d^3p}{(2\pi)^3}\frac{df}{dp}\frac{U(p,\mathbf{p};0,0)+U(-p,\mathbf{p};0,0)}{p^2}. \end{eqnarray} The inequality (\ref{subtlety}) is an example of eq.~(\ref{limitorder}) for the three-point function $\Lambda_{\mu\nu\rho}(Q_1,Q_2)$ with $\Gamma=-i\gamma_5\gamma_4$. Applying (\ref{limitorder}) to the one-loop diagrams of $\Lambda_{ij4}(Q_1,Q_2)$, with (\ref{cmelimit}) for the first term in the third line, we recover the CME term of the photon self-energy obtained previously. Indeed, the first term in the third line of (\ref{limitorder}) corresponds to the term ``+1'' on the RHS of (\ref{massless}), and the integral of (\ref{limitorder}) in the same line goes to the integral of (\ref{massless}) in the limit $\mu_5\to 0$. Since the only $\mu_5$ dependence of (\ref{massless}) is through the distribution functions, the limit has its integrand proportional to the derivative of the distribution function. \section{Discussion} In this work, we investigated the interplay between gauge invariance and the infrared limit in the chiral magnetic effect. The part of the induced electric current that is linear in the axial chemical potential $\mu_5$ and the magnetic field $\mathbf{{\cal B}}$ is divided into two terms, i.e. \begin{equation} \mathbf{J}(Q)=-\eta\frac{e^2}{2\pi^2}\mu_5F(Q)\mathbf{{\cal B}}(Q)+\eta\frac{e^2}{2\pi^2}\mu_5\mathbf{{\cal B}}(Q) \label{division} \end{equation} where the first term corresponds to the loop diagrams of the photon self-energy tensor and the second term comes from the Chern-Simons term of the conserved axial charge $\tilde Q_5$, which is dictated by the anomaly. The gauge invariance relates the form factor $F(Q)$ to two form factors, $C_1$ and $C_2$, underlying a three-point diagram of two vector current vertices and an axial current vertex. If the infrared limit of these form factors exists, $F(0)=0$ to all orders of coupling and the classical form of the chiral magnetic current in a constant magnetic field, eq.
(\ref{classical}) emerges. Our statements are illustrated with explicit one-loop calculations subject to the Pauli-Villars regularization. At zero temperature, however, both $C_1$ and $C_2$ are infrared divergent and $F(0)=1$. Consequently, the two terms on the RHS of (\ref{division}) cancel each other and the chiral magnetic current vanishes. At a nonzero temperature and/or a nonzero chemical potential, $F(0)$ depends on how the limit $Q\to 0$ is approached. The magnitude of the chiral magnetic current is reduced if the zero-momentum limit is taken prior to the zero-energy limit, as is implied by the infrared divergence of $C_2$ under the same order of limits. More subtle is the situation with a coordinate-dependent $\mu_5$. If the four-momentum associated with $\mu_5$, $K=(\mathbf{k},ik_0)$, is set to zero in the order $\lim_{\mathbf{k}\to 0}\lim_{k_0\to 0}$, the results of Sections 2 and 3 are recovered. With the opposite order of the limit, however, $F(0)=1$ as dictated by the anomaly, and the two terms of (\ref{division}) cancel again. Unlike what happens with the axial anomaly, the difference between different orders of the infrared limits is unlikely to be robust against higher-order corrections. Since the ambiguity stems from quasiparticle poles, it will disappear when the quasiparticle weight is diminished by strong coupling. Then the chiral magnetic current will revert to its classical expression with the order $\lim_{\omega\to 0}\lim_{q\to 0}$ of the infrared limit $Q\to 0$. This is consistent with the holographic result reported in \cite{HUYee}. One complication with a coordinate-dependent $\mu_5$ is that the term $\mu_5\tilde Q_5$ of the Lagrangian (\ref{lagrange}) is no longer gauge invariant. One may argue that this term is only defined in a specific gauge, say Coulomb gauge, in which the vector potential \begin{equation} \mathbf{A}=-\frac{1}{\nabla^2}\mathbf{\nabla}\times\mathbf{B} \label{coulomb} \end{equation} is already gauge invariant.
So is the Chern-Simons term of $\tilde Q_5$. A possible objection to this approach is the violation of microcausality, i.e.\ the commutator between two axial charge densities in the Heisenberg representation does not vanish for a space-like separation because of the nonlocality introduced by the inverse Laplacian in (\ref{coulomb}). It remains an open issue to assess the validity of the conserved axial charge in a non-equilibrium setup (see \cite{gynther} for some related discussions). Finally, we would like to comment briefly on the derivation of the classical result (\ref{classical}) by summing up the single-particle Landau orbitals in a constant magnetic field. It is a one-loop procedure to all orders of the magnetic field. The term linear in the electric and magnetic fields stems from the same photon self-energy tensor discussed here and requires a gauge-invariant regularization to cancel the UV divergence. In view of the analysis in this paper, we would expect that the summation over the Landau orbitals yields a null result for the chiral magnetic current if the regulator contribution is included. The net current is solely given by the Chern-Simons term of the Lagrangian (\ref{lagrange}). Therefore we do not see any nontrivial effect of a nonzero quark mass as claimed in \cite{wjfu}. \section*{Acknowledgments} The work of D. F. H. and H. C. R. is supported in part by NSFC under grant Nos. 10975060, 10735040. The work of Hui Liu is supported in part by NSFC under grant No. 10947002.
\section{Introduction} This paper deals with first-order optimality conditions for general optimization problems of the form \begin{equation}\label{EqGenOptProbl} \min_z f(z)\quad\mbox{subject to}\quad P(z)\in D \end{equation} where the mappings $f:\R^d\to \R$ and $P:\R^d\to\R^s$ are assumed to be continuously differentiable and $D$ is a closed subset of $\R^s$. Note that formally more general problems of the form \begin{eqnarray}\label{EqGenOptProbl1}\min_{z}&&f(z)\\ \nonumber\mbox{subject to}&&0\in P(z)+Q(z), \end{eqnarray} where $Q:\R^d\rightrightarrows\R^s$ is a set-valued mapping with closed graph, can be equivalently written in the form \eqref{EqGenOptProbl} as \begin{equation}\label{EqGenOptProbl2}\min f(z)\quad\mbox{subject to}\quad (z,-P(z))\in\Gr Q.\end{equation} If the objective function in \eqref{EqGenOptProbl} is not continuously differentiable, we can equivalently rewrite the program \eqref{EqGenOptProbl} as \begin{equation}\label{EqGenOptProbl3}\min_{z,\alpha} \alpha\quad\mbox{subject to}\quad (z,\alpha,P(z))\in\epi f\times D.\end{equation} Under some constraint qualification, necessary optimality conditions for the problem \eqref{EqGenOptProbl} at a local minimizer $\zb$ are usually of the form \begin{equation}\label{EqKKT1}0\in \nabla f(\zb)+\nabla P(\zb)^\ast w^\ast,\end{equation} where the multiplier $w^\ast$ belongs to a suitable normal cone to the set $D$ at the point $P(\zb)$, which in turn is often related to the notion of a subdifferential. Among the large number of different normal cone/subdifferential constructions considered in the literature, two stand out due to the comprehensive calculus available for them: one is given by the {\em generalized gradient} as introduced by Clarke \cite{Cla73} and the related normal cone, the other one is the {\em limiting (Mordukhovich) normal cone/subdifferential}. Since the Clarke normal cone is the closure of the convex hull of the limiting normal cone, cf.
\cite{RoWe98}, the use of the limiting normal cone yields stronger first-order optimality conditions than an approach based on Clarke's normal cone, and for this reason we focus in this paper on first-order optimality conditions related to the limiting normal cone, which are usually called M-stationarity conditions. However, despite the available calculus, it is sometimes very difficult or even impossible to compute the limiting normal cone effectively. As an illustrative example let us consider the following subclass of so-called {\em mathematical programs with equilibrium constraints} (MPECs), where the equilibrium is described by a generalized equation: \begin{align} \label{EqMPEC}\mbox{(MPEC)}\qquad \min_{x,y}\ & F(x,y)\\ \nonumber \mbox{s.t. }&0\in\phi(x,y)+\widehat N_\Gamma(y),\\ \nonumber &G(x,y)\leq 0 \end{align} For this problem, the mappings $F:\R^n\times\R^m\to \R$, $\phi:\R^n\times\R^m\to \R^m$ and $G:\R^n\times\R^m\to\R^p$ are assumed to be continuously differentiable, $\Gamma:=\{y\mv g(y)\leq 0\}$ is given by a $C^2$-mapping $g:\R^m\to\R^q$ and $\widehat N_\Gamma(y)$ denotes the {\em regular (Fr\'echet) normal cone} to $\Gamma$ at $y$, cf. Definition \ref{DefCones} below. The program (MPEC) can be equivalently written in the format \eqref{EqGenOptProbl} as \begin{align} \label{EqMPEC'}\mbox{(MPEC')}\qquad \min_{x,y}\ & F(x,y)\\ \nonumber \mbox{s.t. }&\hat P(x,y):=\left(\begin{array}{c}(y,-\phi(x,y))\\ G(x,y)\end{array}\right)\in\Gr\widehat N_\Gamma\times\R^p_-=:\hat D \end{align} The calculation of the limiting normal cone to $\hat D$ at $\hat P(\xb,\yb)$ involves that of the limiting normal cone to $\Gr\widehat N_\Gamma$ at $(\yb,-\phi(\xb,\yb))$. The latter task is well understood if for the inequalities $g(y)\leq 0$ the {\em linear independence constraint qualification (LICQ)} is fulfilled at $\yb$, cf. \cite{MoOut01}. The situation, unfortunately, becomes substantially more difficult once LICQ is relaxed.
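For orientation we note that in the simplest instance $\Gamma=\R_-$, described by the single inequality $g(y)=y\leq 0$ (so that LICQ holds trivially), all the cones involved can be computed by hand; the following standard computation is included only as an illustration and is not part of the subsequent development:

```latex
% Illustration for \Gamma=\R_-: here \widehat N_\Gamma(y)=\{0\} for y<0 and
% \widehat N_\Gamma(0)=\R_+, so the graph is the polyhedral cone
\begin{gather*}
\Gr\widehat N_\Gamma=(\R_-\times\{0\})\cup(\{0\}\times\R_+),\qquad
T_{\Gr\widehat N_\Gamma}(0,0)=\Gr\widehat N_\Gamma,\\
\widehat N_{\Gr\widehat N_\Gamma}(0,0)=\R_+\times\R_-,\qquad
N_{\Gr\widehat N_\Gamma}(0,0)=(\R_+\times\R_-)\cup(\{0\}\times\R)\cup(\R\times\{0\}),
\end{gather*}
% the limiting cone collecting, in addition, the regular normals along the two branches.
```

When LICQ fails, such an explicit pointwise description of $N_{\Gr\widehat N_\Gamma}$ is in general no longer available.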
Such a situation has been investigated under the Mangasarian-Fromovitz constraint qualification (MFCQ) in \cite{HenOutSur09} and, under a certain constraint qualification less restrictive than MFCQ, in \cite{GfrOut16a}. In both cases an additional condition is needed to obtain a point-based representation of the limiting normal cone to $\Gr\widehat N_\Gamma$ in terms of first-order and second-order derivatives of $g$ at $\yb$, and in \cite{GfrOut16a} a simple example is given showing that without this additional condition the limiting normal cone cannot be entirely expressed in terms of first-order and second-order derivatives of $g$. On the other hand, very recently much progress has been achieved in computing the tangent cone to $\Gr\widehat N_\Gamma$ and the tangent cone to the feasible region of \eqref{EqMPEC}, see \cite{GfrOut16b, ChiHi17, GfrYe17a}. Under very mild assumptions one obtains a full description of the tangent cone to the feasible region of \eqref{EqMPEC} involving only first-order derivatives of $\phi$, $G$ and derivatives of $g$ up to second order at a point $(\xb,\yb)$. Thus there must also exist some dual optimality condition in terms of these derivatives, showing that the part of the limiting normal cone which is difficult to compute does not play a role in the optimality conditions. At this point let us mention that it might not be feasible to reformulate the MPEC \eqref{EqMPEC} as a {\em mathematical program with complementarity constraints (MPCC)}, \begin{align} \label{EqMPCC}\qquad \min_{x,y,\lambda}\ & F(x,y)\\ \nonumber \mbox{s.t. }&0\in\phi(x,y)+\nabla g(y)^\ast\lambda,\\ \nonumber &0\leq \lambda \perp -g(y)\geq 0,\\ \nonumber &G(x,y)\leq 0.
\end{align} Of course, if $(\xb,\yb)$ is a local solution of \eqref{EqMPEC} and the system $g(y)\leq 0$ fulfills some constraint qualification at $\yb$ ensuring $\widehat N_\Gamma(\yb)=\{\nabla g(\yb)^\ast\lambda\mv 0\leq \lambda\perp g(\yb)\}$, then it is easy to show that for every multiplier $\lb\geq 0$ fulfilling $0\in \phi(\xb,\yb)+\nabla g(\yb)^\ast\lb$, $\lb^Tg(\yb)=0$, the triple $(\xb,\yb,\lb)$ is a local solution of \eqref{EqMPCC}. However, if LICQ fails to hold for the system $g(y)\leq 0$ at $\yb$, then it can happen that some constraint qualification is fulfilled for the MPEC \eqref{EqMPEC}, but all of the MPCC-tailored constraint qualifications known from the literature are violated for \eqref{EqMPCC}. Thus we cannot apply the known first-order optimality conditions for the program \eqref{EqMPCC} in order to obtain optimality conditions for the program \eqref{EqMPEC}. This was first observed in \cite{AdHenOut17} and further developed in \cite{GfrYe17a}. In the latter paper an example is given where this phenomenon occurs for convex quadratic functions $g_i$, $i=1,\ldots,q$, and linear mappings $\phi$ and $G$. To overcome the difficulties arising when computing the limiting normal cone, we recall that the basic task in formulating first-order optimality conditions is the computation of the regular normal cone to the feasible set of \eqref{EqGenOptProbl}. However, for the regular normal cone only very restricted calculus is available, and this is the reason why the limiting normal cone is used instead of the regular one. Having in mind that the basic goal is the computation of the regular normal cone to the feasible set, it is not difficult to see that in order to obtain a more accurate approximation we can use the limiting normal cone to the tangent cone of the feasible set.
Performing a more accurate analysis, we observe that this process can be repeated, and we obtain as a final result that the multiplier $w^\ast$ in \eqref{EqKKT1} is a regular normal to a series of tangent cones to tangent cones to the set $D$. Since the new optimality conditions are derived by a repeated linearization procedure, we call the resulting optimality conditions {\em linearized M-stationarity conditions}. The organization of the paper is as follows. In Section \ref{SecVarAna} we recall some basics from variational analysis. The stationarity concepts of B-, S- and M-stationarity and their relations to necessary optimality conditions are considered in Section \ref{SecStat}. Section \ref{SecLM_stat} contains the main results on linearized M-stationarity conditions for the problem \eqref{EqGenOptProbl}. The analysis is done under a very weak constraint qualification: we only require the {\em generalized Guignard constraint qualification (GGCQ)} and the {\em metric subregularity constraint qualification (MSCQ)} for the linearized problem. In particular, both conditions are fulfilled if MSCQ holds for the problem \eqref{EqGenOptProbl}. We apply these results to the MPEC \eqref{EqMPEC} in Section \ref{SecMPEC} and derive the linearized M-stationarity conditions under a certain condition on the lower-level system $g_i(y)\leq 0$, $i=1,\ldots,q$, which is weaker than the {\em constant rank constraint qualification} (CRCQ). This also works when we are not able to compute the limiting normal cone to $\Gr \widehat N_\Gamma$ as in \cite{GfrOut16a}. In the concluding Section \ref{SecConcl} we briefly summarize the obtained results and outline some topics for future research. Throughout the paper we use standard notation of variational analysis and generalized differentiation. For an element $z\in\R^d$ we denote by $[z]$ the subspace $\{\alpha z\mv\alpha\in\R\}$ generated by $z$. Further special symbols are introduced where they first appear in the text.
\section{Preliminaries from variational analysis\label{SecVarAna}} All the sets under consideration are supposed to be locally closed around the points in question without further mentioning. We recall first the standard constructions of variational analysis used in what follows. \begin{definition}\label{DefCones} Given a set $\Omega\subseteq\mathbb R^d$ and a point $\bar z\in\Omega$, the (Bouligand-Severi) {\em tangent/contingent cone} to $\Omega$ at $\bar z$ is a closed cone defined by \begin{equation*}\label{normalcone} T_\Omega(\bar z) :=\Big\{w\in\mathbb R^d\Big|\;\exists t_k\downarrow 0,\;w_k\to w\;\mbox{ with }\;\bar z+t_k w_k\in\Omega ~\forall ~ k\}.
\end{equation*} The (Fr\'{e}chet) {\em regular normal cone} and the (Mordukhovich) {\em limiting/basic normal cone} to $\Omega$ at $\bar z$ are defined by \begin{eqnarray} && \widehat N_\Omega(\bar z):=(T_\Omega(\bar z))^\ast\nonumber\\ \mbox{and } && N_\Omega(\bar z):=\left \{z^\ast \mv \exists z_{k}\stackrel{\Omega}{\to}\zb \mbox{ and } z^\ast_k\rightarrow z^\ast \mbox{ such that } z^\ast_{k}\in \widehat{N}_{\Omega}(z_k) \ \forall k \right \} \nonumber \end{eqnarray} respectively.\\ Further, if $\zb\not\in\Omega$ we define \[T_\Omega(\zb):=\widehat N_\Omega(\zb):=N_\Omega(\zb):=\emptyset.\] \end{definition} When the set $\Omega$ is convex, the tangent/contingent cone and the regular/limiting normal cone reduce to the classical tangent cone and normal cone of convex analysis respectively. The regular normal cone $\widehat N_\Omega(\zb)$ is always convex whereas the limiting normal cone can be non-convex if $\Omega$ is not convex. \begin{lemma}\label{LemInclTangCone} Let $\Omega\subseteq\R^d$ be closed and $\zb\in\Omega$. Then \begin{equation}\label{EqLimNormalDir2}N_\Omega(\zb)\supseteq N_{T_\Omega(\zb)}(0)=\widehat N_\Omega(\zb)\cup\bigcup_{0\not=w\in\R^d}N_{T_\Omega(\zb)}(w).\end{equation} \end{lemma} \begin{proof} The inclusion $N_\Omega(\zb)\supseteq N_{T_\Omega(\zb)}(0)$ in \eqref{EqLimNormalDir2} was shown in \cite[Proposition 6.27]{RoWe98}. It also follows from \cite[Proposition 6.27]{RoWe98} together with $N_{T_\Omega(\zb)}(0)\supseteq \widehat N_{T_\Omega(\zb)}(0)=\widehat N_\Omega(\zb)$ that $N_{T_\Omega(\zb)}(0)\supseteq\widehat N_\Omega(\zb)\cup\bigcup_{0\not=w\in\R^d}N_{T_\Omega(\zb)}(w)$. In order to show the reverse inclusion, consider $w^\ast\in N_{T_\Omega(\zb)}(0)$ together with sequences $w_k\to 0$, ${w_k}^\ast\to w^\ast$ with $w_k^\ast\in \widehat N_{T_\Omega(\zb)}(w_k)$ $\forall k$. 
If $w_k=0$ holds for infinitely many $k$, then $w^\ast \in \widehat N_{T_\Omega(\zb)}(0)=\widehat N_\Omega(\zb)$ follows because $\widehat N_\Omega(\zb)$ is closed. On the other hand, if $w_k\not=0$ holds for all but finitely many $k$, then by passing to a subsequence we can assume that $w_k/\norm{w_k}$ converges to some $w$, and because of $\widehat N_{T_\Omega(\zb)}(w_k)=\widehat N_{T_\Omega(\zb)}(w_k/\norm{w_k})$ we conclude $w^\ast\in N_{T_\Omega(\zb)}(w)$. Hence \eqref{EqLimNormalDir2} is established and this finishes the proof. \end{proof} Usually, the computation of the limiting normal cone to a nonconvex set $\Omega$ is a difficult task. A special case in which the limiting normal cone has a comparatively simple description is given by polyhedral sets. \begin{definition}Let $\Omega\subseteq\R^d$. \begin{enumerate} \item We say that $\Omega$ is {\em convex polyhedral}, if the set can be written as the intersection of finitely many halfspaces, i.e. there are elements $(a_i,\alpha_i)\in\R^d\times\R$, $i=1,\ldots,p$ such that $\Omega=\{z\mv \skalp{a_i,z}\leq\alpha_i,\ i=1,\ldots,p\}$. \item We say that $\Omega$ is {\em polyhedral}, if it is the union of finitely many convex polyhedral sets. \item Given a point $\zb\in\Omega$, we say that $\Omega$ is {\em locally polyhedral near $\zb$} if there is a neighborhood $W$ of $\zb$ and a polyhedral set $C$ such that $\Omega\cap W= C\cap W$. \end{enumerate} \end{definition} \begin{lemma} Let $\Omega\subseteq\R^d$ be locally polyhedral near some point $\zb\in \Omega$. Then \begin{equation} \label{EqLimNormalDirPoly}N_\Omega(\zb)=\bigcup_{w\in T_\Omega(\zb)}\widehat N_{T_\Omega(\zb)}(w). \end{equation} \end{lemma} \begin{proof} This follows from \cite[Lemma 2.2]{Gfr14a}. \end{proof} In this paper the notion of {\em metric subregularity} will play an important role. \begin{definition} Let $M:\R^d\rightrightarrows\R^s$ be a set-valued mapping and let $(\zb,\wb)\in\Gr M$.
We say that $M$ is {\em metrically subregular} at $(\zb,\wb)$ if there exist a neighborhood $W$ of $\zb$ and a constant $\kappa>0$ such that \begin{equation}\label{EqMetrSubReg}\dist{z,M^{-1}(\wb)}\leq\kappa\dist{\wb,M(z)}\ \; \forall z\in W. \end{equation} \end{definition} It is well-known that metric subregularity of $M$ at $(\zb,\wb)$ is equivalent to the property of {\em calmness} of the inverse mapping $M^{-1}$ at $(\wb,\zb)$, cf. \cite{DoRo04}. Further, metric subregularity of $M$ at $(\zb,\wb)$ is equivalent to metric subregularity of the mapping $z\rightrightarrows (z,\wb)-\Gr M$ at $(\zb,(0,0))$, cf. \cite[Proposition 3]{GfrYe17a}. \begin{lemma} \label{LemConicalMult}Let $M:\R^d\rightrightarrows\R^s$ be a set-valued mapping whose graph $\Gr M$ is a closed cone. If $M$ is metrically subregular at $(0,0)$, then there is some $\kappa>0$ such that \[\dist{z,M^{-1}(0)}\leq\kappa\dist{0,M(z)}\ \forall z\in\R^d.\] In particular, $M$ is metrically subregular at every point $(\zb,0)\in\Gr M$. \end{lemma} \begin{proof}According to the definition of metric subregularity, consider a neighborhood $W$ of $0$ and a real $\kappa>0$ such that $\dist{z,M^{-1}(0)}\leq \kappa \dist{0,M(z)}$ for all $z\in W$. Now consider $z\in\R^d$. Then we can find some $\lambda>0$ such that $\lambda z\in W$ and thus $\dist{\lambda z,M^{-1}(0)}\leq \kappa \dist{0,M(\lambda z)}$. Since $\Gr M$ is a cone it follows that $M^{-1}(0)$ is a cone and $M(\lambda z)=\lambda M(z)$. Hence $\lambda\dist{z,M^{-1}(0)}=\dist{\lambda z,M^{-1}(0)}\leq \kappa \dist{0,M(\lambda z)}=\lambda \kappa \dist{0,M(z)}$ and $\dist{z,M^{-1}(0)}\leq\kappa\dist{0,M(z)}$ follows. \end{proof} The following lemma is a special variant of \cite[Proposition 2.1]{Gfr11}. \begin{lemma}\label{LemSubRegLinear}Let $P:\R^d\to\R^s$ be continuously differentiable, let $D\subseteq\R^s$ be closed and assume that the mapping $z\rightrightarrows P(z)-D$ is metrically subregular at $(\zb,0)$.
Then the mapping $u\rightrightarrows \nabla P(\zb)u-T_D(P(\zb))$ is metrically subregular at $(0,0)$. \end{lemma} Given a cone $C\subseteq \R^d$, we denote by $\Lsp(C)$ the largest subspace $L\subseteq \R^d$ such that \[C+L\subseteq C.\] Note that $\Lsp(C)$ is well defined because for two subspaces $L_1,L_2$ fulfilling $C+L_i\subseteq C$, $i=1,2$ we have \begin{equation}\label{EqLineality1}C+L_1+L_2=(C+L_1)+L_2\subseteq C+L_2\subseteq C\end{equation} and we are working in finite dimensional spaces. Note that for every subspace $L$ we have $C+L\supseteq C$ and thus $C+\Lsp(C)=C$. If $C$ is a convex cone, then $\Lsp(C)=C\cap (-C)$ is the so-called {\em lineality space} of $C$, the largest subspace contained in $C$. \begin{lemma}\label{LemLineality} Let $C\subseteq \R^d$ be a closed cone and let $\zb\in C$. Then \[\Lsp(C)+[\zb]\subseteq \Lsp(T_C(\zb)).\] \end{lemma} \begin{proof} We show that both $T_C(\zb)+\Lsp(C)\subseteq T_C(\zb)$ and $T_C(\zb)+[\zb]\subseteq T_C(\zb)$. Then the statement follows from \eqref{EqLineality1}. Consider a tangent $w\in T_C(\zb)$ together with sequences $t_k\downarrow 0$ and $w_k\to w$ with $\zb+t_kw_k\in C$ for all $k$. For fixed $l\in \Lsp(C)$ and for every $k$ we have $t_kl\in \Lsp(C)$ and thus $\zb+t_kw_k+t_kl=\zb+t_k(w_k+l)\in C$. Hence $w+l\in T_C(\zb)$ and $T_C(\zb)+\Lsp(C)\subseteq T_C(\zb)$ follows. Next, let $\gamma\in\R$. By passing to a subsequence we can assume $1+t_k\gamma>0$ and thus \[(1+t_k\gamma)(\zb+t_kw_k)=\zb +t_k(1+t_k\gamma)\left(w_k+\frac\gamma{1+t_k\gamma}\zb\right)\in C\ \forall k.\] Since $t_k(1+t_k\gamma)\downarrow 0$ and $w_k+\frac\gamma{1+t_k\gamma}\zb\to w+\gamma\zb$, we conclude $w+\gamma\zb\in T_C(\zb)$ and the second claimed inclusion $T_C(\zb)+[\zb]\subseteq T_C(\zb)$ follows. This finishes the proof. \end{proof} At the end of this section we recall the definition of the critical cone to a set. 
\begin{definition}\label{DefCritCone} Given a set $\Omega$ and an element $\zb\in\Omega$ together with a regular normal $\zba\in \widehat N_\Omega(\zb)$ we define the {\em critical cone} to $\Omega$ at $(\zb,\zba)$ as \[\K_{\Omega}(\zb,\zba):=T_\Omega(\zb)\cap [\zba]^\perp.\] \end{definition} \section{Stationarity concepts\label{SecStat}} In this section we recall some basic facts about stationarity concepts for the general problem \eqref{EqGenOptProbl}. We denote by $\Omega$ the feasible region of the problem \eqref{EqGenOptProbl}, i.e. \begin{eqnarray} \label{EqOmega} \Omega&:=&\{z\in\R^d\mv P(z)\in D\}. \end{eqnarray} Further, given $\zb\in\Omega$ we denote by \[\TlinO(\zb):=\{u\in\R^d\mv \nabla P(\zb)u\in T_D(P(\zb))\}\] the {\em linearized tangent cone to $\Omega$ at $\zb$}. Recall that we always have \begin{equation}\label{EqInclGACQ} T_\Omega(\zb)\subseteq \TlinO(\zb). \end{equation} We use the notation $\TlinO(\zb)$ to indicate that the linearized tangent cone depends on $P$ and $D$, i.e., if we have two equivalent representations \begin{equation}\label{EqEquivRepres}\Omega=\{z\mv P_1(z)\in D_1\}=\{z\mv P_2(z)\in D_2\}\end{equation} with continuously differentiable mappings $P_i:\R^d\to\R^{s_i}$ and closed sets $D_i\subseteq \R^{s_i}$, $i=1,2$, then we can have $\Tlin_{P_1,D_1}(\zb)\not=\Tlin_{P_2,D_2}(\zb)$. \begin{definition} Let $\zb\in\Omega$.
We say that $\zb$ is \begin{enumerate} \item {\em B-stationary (Bouligand stationary)} for the problem \eqref{EqGenOptProbl}, if \[0\in \nabla f(\zb)+\widehat N_\Omega(\zb),\] \item {\em S-stationary (strong stationary)} for the problem \eqref{EqGenOptProbl}, if \begin{eqnarray*}0\in \nabla f(\zb)+\nabla P(\zb)^\ast\widehat N_D(P(\zb)),\end{eqnarray*} \item {\em M-stationary} for the problem \eqref{EqGenOptProbl}, if \begin{eqnarray*}0\in \nabla f(\zb)+\nabla P(\zb)^\ast N_D(P(\zb)).\end{eqnarray*} \end{enumerate} \end{definition} Note that S- and M-stationarity depend on the data $P$ and $D$ used to describe $\Omega$, whereas B-stationarity is independent of the representation of $\Omega$. B-stationarity can be equivalently expressed as \[\skalp{\nabla f(\zb),w}\geq 0\ \forall w\in T_\Omega(\zb).\] Calling a direction $w\in T_\Omega(\zb)$ with $\skalp{\nabla f(\zb),w}<0$ a {\em feasible descent direction} for the program \eqref{EqGenOptProbl} at $\zb$, we see that B-stationarity expresses precisely that no feasible descent direction exists. It is well known that every local minimizer is also B-stationary, cf. \cite[Theorem 6.12]{RoWe98}. Conversely, if $\zb$ is B-stationary for the program \eqref{EqGenOptProbl}, then by \cite[Theorem 6.11]{RoWe98} there exists a smooth mapping $\tilde f:\R^d\to\R$ such that $\tilde f(\zb)=f(\zb)$, $\nabla \tilde f(\zb)=\nabla f(\zb)$ and $\zb$ is a global minimizer of the program \[\min_z\tilde f(z)\quad\mbox{subject to}\quad P(z)\in D.\] Thus, if the available first-order information at the point $\zb$ is provided solely by $T_\Omega(\zb)$ and $\nabla f(\zb)$, then B-stationarity constitutes the best possible first-order optimality condition, and characterizing B-stationarity is therefore the primary goal.
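The gap between these stationarity concepts can be illustrated by the following standard complementarity example, which is not part of the general development: take $d=s=2$, let $P$ be the identity and let \[D=\Omega=\{(a,b)\in\R^2\mv a\geq 0,\ b\geq 0,\ ab=0\}.\] Since $\Omega$ is a closed cone, $T_\Omega(0)=\Omega$ and a direct computation yields \[\widehat N_\Omega(0)=\{(u,v)\mv u\leq 0,\ v\leq 0\},\qquad N_\Omega(0)=\widehat N_\Omega(0)\cup\{(u,v)\mv uv=0\}.\] For $f(z)=-z_1$ the origin is M-stationary because $-\nabla f(0)=(1,0)\in N_\Omega(0)$, but it is neither S-stationary nor B-stationary, since $(1,0)\not\in\widehat N_\Omega(0)$ and the feasible direction $w=(1,0)\in T_\Omega(0)$ satisfies $\skalp{\nabla f(0),w}=-1<0$; indeed, the origin is not even a local minimizer. Hence M-stationarity is in general a strictly weaker condition than B-stationarity.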
However, the computation of the regular normal cone $\widehat N_\Omega(\zb)$ appearing in the definition of B-stationarity can be a very difficult task for general sets $D$ and therefore, besides other stationarity concepts, the notions of S- and M-stationarity have been introduced. S-stationarity was first considered in the monograph by Luo, Pang and Ralph \cite{LuPaRa96}, whereas M-stationarity conditions appeared first in the papers by Outrata \cite{Out99} and Ye \cite{Ye99}. The monikers M-stationarity and S-stationarity were coined in \cite{Sch00,SchSch00} for MPCCs and then carried over in \cite{FleKanOut07} to the general problem \eqref{EqGenOptProbl}. By applying \cite[Theorem 6.14]{RoWe98} we readily obtain the inclusion \begin{equation}\label{EqInclRegNormal}\widehat N_\Omega(\zb) \supseteq \nabla P(\zb)^\ast\widehat N_D(P(\zb)).\end{equation} Hence we deduce from the definition that S-stationarity of $\zb$ implies B-stationarity. However, the reverse implication is only valid under comparatively strong assumptions. We state here the following result due to Gfrerer and Outrata \cite[Theorem 4]{GfrOut16b}. \begin{theorem}\label{ThSuffS_Stat}Assume that $\zb$ is feasible for the problem \eqref{EqGenOptProbl}, assume that the mapping $z\rightrightarrows P(z)-D$ is metrically subregular at $(\zb,0)$ and assume that \[ \nabla P(\zb)\R^d +\Lsp(T_D(P(\zb)))=\R^s.\] Then \eqref{EqInclRegNormal} holds with equality. In particular, if $\zb$ is B-stationary, then it is S-stationary as well. \end{theorem} It is well known that B-stationarity implies M-stationarity under mild constraint qualification conditions. \begin{definition} Let $P(\zb)\in D$. \begin{enumerate} \item (cf. \cite{FleKanOut07}) We say that the {\em generalized Abadie constraint qualification} (GACQ) holds at $\zb$ if \begin{equation} \label{EqGACQ}T_\Omega(\zb)=\TlinO(\zb). \end{equation} \item (cf.
\cite{FleKanOut07}) We say that the {\em generalized Guignard constraint qualification} (GGCQ) holds at $\zb$ if \begin{equation} \label{EqGGCQ}\widehat N_\Omega(\zb)=\big(\TlinO(\zb)\big)^\ast. \end{equation} \item (cf. \cite{GfrMo15a}) We say that the {\em metric subregularity constraint qualification (MSCQ)} holds at $\zb$ if the set-valued map $M(z):=P(z)-D$ is metrically subregular at $(\zb,0)$. \end{enumerate} \end{definition} We always have \[\mbox{MSCQ}\ \Longrightarrow\ \mbox{GACQ}\ \Longrightarrow\ \mbox{GGCQ}.\] Indeed, the first implication follows from \cite[Proposition 1]{HenOut05} whereas the second implication obviously holds true. Note that all these constraint qualifications depend on the representation of $\Omega$ by $P$ and $D$. GGCQ seems to be indispensable for verifying B-stationarity solely with first-order derivatives of the problem functions. We state here the following result from the recent paper by Benko and Gfrerer \cite[Proposition 3]{BeGfr17a}. \begin{theorem}\label{ThLinM_Stat}Assume that $\zb$ is feasible for the problem \eqref{EqGenOptProbl} and assume that GGCQ is fulfilled, while the mapping $u\rightrightarrows \nabla P(\zb)u- T_D(P(\zb))$ is metrically subregular at $(0,0)$. Then \begin{equation} \label{EqUpperInclRegNormal}\widehat N_\Omega(\zb)\subseteq \nabla P(\zb)^\ast N_{T_D(P(\zb))}(0) \subseteq \nabla P(\zb)^\ast N_D(P(\zb)). \end{equation} \end{theorem} \begin{remark}\label{RemSubReg} Note that the assumptions of Theorem \ref{ThLinM_Stat} are fulfilled if MSCQ holds at $\zb$. Indeed, MSCQ implies GGCQ and metric subregularity of $u\rightrightarrows \nabla P(\zb)u- T_D(P(\zb))$ at $(0,0)$ follows from Lemma \ref{LemSubRegLinear}. \end{remark} If $\zb$ is B-stationary and the assumptions of Theorem \ref{ThLinM_Stat} are fulfilled, it follows from the second inclusion in \eqref{EqUpperInclRegNormal} and the definition that $\zb$ is M-stationary. 
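To illustrate that the two sets on the right hand side of \eqref{EqUpperInclRegNormal} can indeed differ, consider the following simple one-dimensional example, which is not part of the general development: let $P$ be the identity and $D=\{0\}\cup\{\frac 1k\mv k\in\mathbb{N}\}$, a closed subset of $\R$. At $\zb=0$ we have $T_D(0)=[0,\infty)$ and hence $N_{T_D(0)}(0)=(-\infty,0]$, whereas every point $\frac 1k$ is isolated in $D$, so that $\widehat N_D(\frac 1k)=\R$ and consequently $N_D(0)=\R$. Thus for $f(z)=-z$ the point $\zb=0$ is M-stationary, although it is clearly not a local minimizer of $f$ over $D$; the sharper condition $0\in\nabla f(0)+N_{T_D(0)}(0)$ resulting from the first inclusion in \eqref{EqUpperInclRegNormal} correctly rules it out, since $1\not\in(-\infty,0]$.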
Other constraint qualifications ensuring M-stationarity can be found in \cite{Ye05}. However, from the first inclusion in \eqref{EqUpperInclRegNormal} we also derive the necessary optimality condition \begin{equation}\label{EqLinMStat}0\in \nabla f(\zb)+ \nabla P(\zb)^\ast N_{T_D(P(\zb))}(0)\end{equation} and this is stronger than M-stationarity because we always have \[N_{T_D(P(\zb))}(0)\subseteq N_D(P(\zb))\] by \cite[Proposition 6.27]{RoWe98}. \section{Linearized M-stationarity conditions\label{SecLM_stat}} One of the basic statements of this section is provided by the following proposition, which can be considered a refinement of the necessary condition \eqref{EqLinMStat}. \begin{proposition}\label{PropStrongFirstOrder} Let $\zb$ be B-stationary for the optimization problem \eqref{EqGenOptProbl} and assume that GGCQ is fulfilled, while the mapping $u\rightrightarrows \nabla P(\zb)u- T_D(P(\zb))$ is metrically subregular at $(0,0)$. Then one of the following two conditions is fulfilled: \begin{enumerate} \item There is $w\in T_D(P(\zb))$ and a multiplier $w^\ast \in\widehat N_{T_D(P(\zb))}(w)$ such that \begin{equation} \label{EqKKT}\nabla f(\zb) +\nabla P(\zb)^\ast w^\ast =0. \end{equation} \item There is $\ub\in \TlinO(\bar z)$ such that \begin{eqnarray} \label{EqNotZero}&&\nabla P(\zb)\ub\not\in \Lsp(T_D(P(\zb))),\\ \label{EqKKTCritDir} &&\skalp{\nabla f(\zb), \ub}=0,\\ \label{EqKKTNormal} &&0\in \nabla f(\zb)+\widehat N_{\TlinO(\bar z)}(\ub) \end{eqnarray} and $T_D(P(\zb))$ is not locally polyhedral near $\nabla P(\zb)\ub$. \end{enumerate} \end{proposition} Before proving this proposition we discuss some of its features. We will call a direction $u\in\TlinO(\zb)$ satisfying \eqref{EqKKTCritDir} a {\em critical direction} for the problem \eqref{EqGenOptProbl}. Now assume that the first statement of Proposition \ref{PropStrongFirstOrder} fails to hold and thus there exists some $\ub$ fulfilling the second statement. Let us rename $\ub$ by $u_1$.
From \eqref{EqNotZero} it follows that $\nabla P(\zb)u_1\not=0$ and thus $u_1\not=0$ as well. Further, since $u_1$ is a critical direction and $\zb$ is assumed to be B-stationary for the problem \eqref{EqGenOptProbl}, it follows that $u_1$ is a global solution of the program \begin{equation}\label{EqLinProbl}\min \skalp{\nabla f(\zb),u}\quad\mbox{subject to}\quad\nabla P(\zb)u\in T_D(P(\zb))\end{equation} and \eqref{EqKKTNormal} is the corresponding B-stationarity condition. This is not really surprising, but the important point is that we can apply Proposition \ref{PropStrongFirstOrder} once more to the problem \eqref{EqLinProbl} at $u_1$. Indeed, since the mapping $u\rightrightarrows \nabla P(\zb)u- T_D(P(\zb))$ is assumed to be metrically subregular at $(0,0)$ and its graph is a closed cone, by Lemma \ref{LemConicalMult} it is metrically subregular at $(u_1,0)$ as well. By taking into account Remark \ref{RemSubReg} we see that GGCQ holds for the system $\nabla P(\zb)u\in T_D(P(\zb))$ at $u_1$ and the linearized mapping $u\rightrightarrows \nabla P(\zb)u- T_{T_D(P(\zb))}(\nabla P(\zb)u_1)$ is metrically subregular at $(0,0)$. Thus we can apply Proposition \ref{PropStrongFirstOrder} to obtain either the existence of some direction $w\in T_{T_D(P(\zb))}(\nabla P(\zb)u_1)$ and some multiplier $w^\ast \in \widehat N_{T_{T_D(P(\zb))}(\nabla P(\zb)u_1)}(w)$ such that \eqref{EqKKT} holds, or the existence of some direction $u_2\in \TlinOk{1}(\zb;u_1):=\{u\mv \nabla P(\zb)u\in T_{T_D(P(\zb))}(\nabla P(\zb)u_1)\}$ such that \begin{eqnarray*} &&\nabla P(\zb)u_2\not\in \Lsp(T_{T_D(P(\zb))}(\nabla P(\zb)u_1)),\\ &&\skalp{\nabla f(\zb), u_2}=0,\\ &&0\in \nabla f(\zb)+\widehat N_{\TlinOk{1}(\zb;u_1)}(u_2) \end{eqnarray*} and $T_{T_D(P(\zb))}(\nabla P(\zb)u_1)$ is not locally polyhedral near $\nabla P(\zb)u_2$. Again, if the first case does not occur, we can repeat the procedure.
Let us recursively define for $\yb\in D$ and directions $v_1,v_2,\ldots$ the following $k$-th order tangent cones to $D$ by \[T^0_D(\yb):=T_D(\yb),\ T^k_D(\yb;v_1,\ldots,v_k):=T_{T^{k-1}_D(\yb;v_1,\ldots,v_{k-1})}(v_k),\ k\geq 1.\] Note that by the definition of the tangent cone we have $T^k_D(\yb;v_1,\ldots,v_k)=\emptyset$ if $v_k\not\in T^{k-1}_D(\yb;v_1,\ldots,v_{k-1})$. Then we can also define the following $k$-th order linearized tangent cones to $\Omega$ by \begin{align*}&\TlinOk{0}(\zb):=\TlinO(\zb),\\ &\TlinOk{k}(\zb;u_1,\ldots,u_k):=\{u\mv \nabla P(\zb)u\in T^k_D(P(\zb);\nabla P(\zb)u_1,\ldots,\nabla P(\zb)u_k)\},\ k\geq 1. \end{align*} When we apply Proposition \ref{PropStrongFirstOrder} for the $k$-th time, we find either a direction \[w\in T^{k-1}_D(P(\zb);\nabla P(\zb)u_1,\ldots,\nabla P(\zb)u_{k-1})\] together with a multiplier \[w^\ast \in\widehat N_{T^{k-1}_D(P(\zb);\nabla P(\zb)u_1,\ldots,\nabla P(\zb)u_{k-1})}(w)\] such that $\nabla f(\zb)+\nabla P(\zb)^\ast w^\ast =0$ or a direction $u_k\in \TlinOk{k-1}(\zb;u_1,\ldots,u_{k-1})$ such that \begin{align} \label{EqAux1}&\nabla P(\zb)u_k\not\in \Lsp(T^{k-1}_D(P(\zb);\nabla P(\zb)u_1,\ldots,\nabla P(\zb)u_{k-1})),\\ &\skalp{\nabla f(\zb), u_k}=0,\\ &0\in \nabla f(\zb)+\widehat N_{\TlinOk{k-1}(\zb;u_1,\ldots,u_{k-1})}(u_k) \end{align} and $T^{k-1}_D(P(\zb);\nabla P(\zb)u_1,\ldots,\nabla P(\zb)u_{k-1})$ is not locally polyhedral near $\nabla P(\zb)u_k$. Next observe that Proposition \ref{PropStrongFirstOrder} cannot be applied infinitely often. By Lemma \ref{LemLineality} we have \begin{align*}\lefteqn{\Lsp(T^k_D(P(\zb);\nabla P(\zb)u_1,\ldots,\nabla P(\zb)u_k))}\\ &\supseteq\Lsp(T^{k-1}_D(P(\zb);\nabla P(\zb)u_1,\ldots,\nabla P(\zb)u_{k-1}))+[\nabla P(\zb)u_k]\end{align*} and together with \eqref{EqAux1} we obtain \begin{align*}\lefteqn{\dim \Lsp(T^k_D(P(\zb);\nabla P(\zb)u_1,\ldots,\nabla P(\zb)u_k))}\\ &\geq \dim\Lsp(T^{k-1}_D(P(\zb);\nabla P(\zb)u_1,\ldots,\nabla P(\zb)u_{k-1}))+1.
\end{align*} Since we work in finite dimensions the finiteness of $k$ follows. Summing up, we have shown the following theorem. \begin{theorem} \label{ThStrongFirstOrder} Let $\zb$ be B-stationary for the optimization problem \eqref{EqGenOptProbl} and assume that GGCQ is fulfilled, while the mapping $u\rightrightarrows \nabla P(\zb)u- T_D(P(\zb))$ is metrically subregular at $(0,0)$. Then there exists a natural number $k\geq 0$, directions $u_1,\ldots,u_k$ and $w\in T^k_D(P(\zb);\nabla P(\zb)u_1,\ldots,\nabla P(\zb)u_k)$ and a multiplier $w^\ast \in\widehat N_{T^k_D(P(\zb);\nabla P(\zb)u_1,\ldots,\nabla P(\zb)u_k)}(w)$ such that \[\nabla f(\zb)+\nabla P(\zb)^\ast w^\ast =0.\] Moreover, for every $l=1,\ldots,k$ we have \begin{align} \label{Eq_u_l1} &u_l\in \TlinOk{l-1}(\zb;u_1,\ldots,u_{l-1}),\\ \label{Eq_u_l2} &\nabla P(\zb)u_l\not\in \Lsp(T^{l-1}_D(P(\zb);\nabla P(\zb)u_1,\ldots,\nabla P(\zb)u_{l-1})),\\ \label{Eq_u_l3} &\skalp{\nabla f(\zb),u_l}=0 \end{align} and $T^{l-1}_D(P(\zb);\nabla P(\zb)u_1,\ldots,\nabla P(\zb)u_{l-1})$ is not locally polyhedral near $\nabla P(\zb)u_l$. \end{theorem} It is easy to see that Theorem \ref{ThStrongFirstOrder} considerably strengthens the necessary optimality condition \eqref{EqLinMStat}, which in turn is stronger than the usual M-stationarity condition. As candidates for the multipliers $w^\ast $ fulfilling the first-order optimality condition \eqref{EqKKT} we consider multipliers fulfilling \begin{equation}\label{EqMultIncl}w^\ast \in \widehat N_{T^k_D(P(\zb);\nabla P(\zb)u_1,\ldots,\nabla P(\zb)u_k)}(w)\end{equation} for some $w\in T^k_D(P(\zb);\nabla P(\zb)u_1,\ldots,\nabla P(\zb)u_k)$, where the directions $u_l$, $l=1,\ldots,k$ fulfill the conditions of Theorem \ref{ThStrongFirstOrder}. By applying the following lemma we immediately obtain that the set on the right hand side of the inclusion \eqref{EqMultIncl} is contained in $N_{T_D(P(\zb))}(0)\subseteq N_D(P(\zb))$. \begin{lemma}Let $\yb\in D$.
Then for every collection of directions $v_1,\ldots,v_l\in\R^s$ we have \[\widehat N_{T^{l-1}_D(\yb;v_1,\ldots,v_{l-1})}(v_l)\subseteq N_{T^{l-1}_D(\yb;v_1,\ldots,v_{l-1})}(v_l)\subseteq N_{T_D(\yb)}(0)\subseteq N_D(\yb).\] \end{lemma} \begin{proof} We will show the lemma by induction with respect to the number of directions $l$. Indeed, for $l=1$ the claimed inclusions hold true because for all $v_1$ we have $\widehat N_{T_D^0(\yb)}(v_1)\subseteq N_{T_D^0(\yb)}(v_1)\subseteq N_{T_D(\yb)}(0)\subseteq N_D(\yb)$ by the definitions of the regular/limiting normal cone and \eqref{EqLimNormalDir2}. Now assume that the claim holds true for some number $l\geq 1$ and consider arbitrary directions $v_1,\ldots,v_{l+1}$. Then by the definitions of the regular/limiting normal cone, \eqref{EqLimNormalDir2} and the induction hypothesis we obtain \begin{align*} \widehat N_{T^l_D(\yb;v_1,\ldots,v_l)}(v_{l+1})&\subseteq N_{T^l_D(\yb;v_1,\ldots,v_l)}(v_{l+1})= N_{T_{T^{l-1}_D(\yb;v_1,\ldots,v_{l-1})}(v_l)}(v_{l+1})\\ &\subseteq N_{T_{T^{l-1}_D(\yb;v_1,\ldots,v_{l-1})}(v_l)}(0)\subseteq N_{T^{l-1}_D(\yb;v_1,\ldots,v_{l-1})}(v_l)\\ &\subseteq N_{T_D(\yb)}(0)\subseteq N_D(\yb) \end{align*} and the lemma is proved. \end{proof} We do not know much about the order $k$ appearing in Theorem \ref{ThStrongFirstOrder}. By using \eqref{Eq_u_l2} and Lemma \ref{LemLineality}, a rough upper estimate for $k$ is given by $\dim(\nabla P(\zb)\R^d)-\dim(\Lsp(T_D(P(\zb)))\cap \nabla P(\zb)\R^d)$. However, in many examples we found that this bound is too pessimistic and the necessary optimality conditions of Theorem \ref{ThStrongFirstOrder} hold with small $k$, say $k=0,1$ or $2$. More research is needed to investigate this issue. Recall that a local minimizer $\zb$ for \eqref{EqGenOptProbl} is called a {\em sharp minimum} if there is a constant $\alpha>0$ such that \[f(z)\geq f(\zb)+\alpha\norm{z-\zb}\] holds for all feasible $z$ close to $\zb$.
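As a simple illustration, which is not part of the general development, take $P$ as the identity and $D=[0,\infty)$: the point $\zb=0$ is a sharp minimum of $f(z)=z$ with $\alpha=1$, since $f(z)=f(0)+\vert z\vert$ for all feasible $z$. In contrast, for $f(z)=z^2$ and $D=\R$ the global minimizer $\zb=0$ is not a sharp minimum, since $(f(z)-f(0))/\vert z-0\vert=\vert z\vert\to 0$ as $z\to 0$.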
\begin{lemma}\label{LemSharpMin} Assume that GGCQ is fulfilled at $\zb$. Then $\zb$ is a sharp minimum if and only if there is some $\alpha'>0$ such that \begin{equation}\label{EqSharpMin}\skalp{\nabla f(\zb),u}\geq \alpha'\norm{u}\ \forall u\in\TlinO(\zb).\end{equation} \end{lemma} \begin{proof}In order to show the sufficiency of \eqref{EqSharpMin} for $\zb$ being a sharp minimum, assume on the contrary that there is a sequence $z_k$ of feasible points converging to $\zb$ satisfying \[\liminf_{k\to\infty}\frac{f(z_k)-f(\zb)}{\norm{z_k-\zb}}=\liminf_{k\to\infty} \skalp{\nabla f(\zb),\frac{z_k-\zb}{\norm{z_k-\zb}}}\leq 0.\] By passing to a subsequence we can assume that $\frac{z_k-\zb}{\norm{z_k-\zb}}$ converges to some $u$. Then $\skalp{\nabla f(\zb),u}\leq 0$ and $u\in T_\Omega(\zb)\subseteq \TlinO(\zb)$, contradicting \eqref{EqSharpMin}. To prove necessity of \eqref{EqSharpMin}, assume that $\zb$ is a sharp minimum and consider a tangent $u\in T_\Omega(\zb)$ together with sequences $t_k\downarrow 0$ and $u_k\to u$ satisfying $P(\zb+t_ku_k)\in D$. Then \[f(\zb+t_ku_k)-f(\zb)=t_k\skalp{\nabla f(\zb),u_k}+\oo(t_k\norm{u_k})\geq \alpha t_k\norm{u_k}\] and by dividing by $t_k$ and passing to the limit we obtain $\skalp{\nabla f(\zb),u}\geq \alpha\norm{u}$. Next consider $u\in \co T_\Omega(\zb)$ together with elements $u_1,\ldots,u_K\in T_\Omega(\zb)$ and positive scalars $\gamma_1,\ldots,\gamma_K$, $\sum_{i=1}^K\gamma_i=1$ such that $u=\sum_{i=1}^K\gamma_iu_i$. Then \[\skalp{\nabla f(\zb), u}=\sum_{i=1}^K\gamma_i\skalp{\nabla f(\zb),u_i}\geq \alpha\sum_{i=1}^K\gamma_i\norm{u_i}\geq \alpha \norm{\sum_{i=1}^K\gamma_iu_i}=\alpha \norm{u}\] and we easily conclude \[\skalp{\nabla f(\zb), u}\geq \alpha \norm{u}\ \forall u\in\cl\co T_\Omega(\zb).\] By dualizing \eqref{EqGGCQ} we have $\cl\co T_\Omega(\zb)=\cl\co \TlinO(\zb)$ and \eqref{EqSharpMin} follows.
\end{proof} \begin{corollary}\label{CorSharpMin} Assume that $\zb$ is a sharp minimum for \eqref{EqGenOptProbl} and assume that GGCQ is fulfilled, while the mapping $u\rightrightarrows \nabla P(\zb)u- T_D(P(\zb))$ is metrically subregular at $(0,0)$. Then there is $w\in T_D(P(\zb))$ and a multiplier $w^\ast \in\widehat N_{T_D(P(\zb))}(w)$ such that $\nabla f(\zb) +\nabla P(\zb)^\ast w^\ast =0$. \end{corollary} \begin{proof} The statement follows immediately from Proposition \ref{PropStrongFirstOrder}, because by Lemma \ref{LemSharpMin} the second alternative of Proposition \ref{PropStrongFirstOrder} is not possible. \end{proof} Note that the conclusion of Corollary \ref{CorSharpMin} can also hold in situations when $\zb$ is not a sharp minimum. Besides the cases when no direction $\bar u$ fulfills the conditions of the second alternative of Proposition \ref{PropStrongFirstOrder}, the first alternative of Proposition \ref{PropStrongFirstOrder} also holds true if there exists some direction $\bar u$ satisfying $\skalp{\nabla f(\zb),\bar u}=0$ and $\nabla P(\zb)\bar u\in T_D(P(\zb))$ which is an S-stationary solution of \eqref{EqLinProbl}, because then $0\in\nabla f(\zb)+\nabla P(\zb)^\ast\widehat N_{T_D(P(\zb))}(\nabla P(\zb)\bar u)$ by the definition of S-stationarity. By Theorem \ref{ThSuffS_Stat} we know that the condition \[ \nabla P(\zb)\R^d +\Lsp(T_{T_D(P(\zb))}(\nabla P(\zb)\bar u))=\R^s\] is sufficient for S-stationarity of $\bar u$ and since $\Lsp(T_{T_D(P(\zb))}(\nabla P(\zb)\bar u))$ always contains $\Lsp(T_D(P(\zb)))$ by Lemma \ref{LemLineality}, it is possible that such an S-stationary solution $\bar u$ of \eqref{EqLinProbl} exists even if $\zb$ is not S-stationary for \eqref{EqGenOptProbl}. We now turn to the proof of Proposition \ref{PropStrongFirstOrder}. At first we need some prerequisites.
As introduced in the recent paper by Benko and Gfrerer \cite{BeGfr16d}, consider the program \begin{equation}\label{EqAuxProgr}\min_{(u,y)\in\R^d\times\R^s}\skalp{\nabla f(\zb),u} +\frac12 \norm{y}^2\quad \mbox{subject to}\quad\nabla P(\zb)u+y\in T_D(P(\zb)). \end{equation} \begin{lemma}\label{LemAuxProbl}Assume that the assumptions of Proposition \ref{PropStrongFirstOrder} are fulfilled. Then MSCQ holds for the system $\nabla P(\zb)u+y\in T_D(P(\zb))$ at every point $(\bar u,\yb)$ feasible for \eqref{EqAuxProgr}. Further, the program \eqref{EqAuxProgr} is bounded below and every B-stationary solution $(\bar u,\yb)$ is also S-stationary, i.e. there is some multiplier $w^\ast \in \widehat N_{T_D(P(\zb))}(\nabla P(\zb)\bar u+\yb)$ such that \begin{equation}\label{EqSStatAuxProbl}\nabla f(\zb)+\nabla P(\zb)^\ast w^\ast =0,\quad \yb+w^\ast =0.\end{equation} \end{lemma} \begin{proof}Consider the set-valued mapping $M(u,y):=\nabla P(\zb)u+y-T_D(P(\zb))$. Given any $(u,y)\in\R^d\times \R^s$ we can find $v\in M(u,y)$ such that $\norm{v}=\dist{0,M(u,y)}$ because $M(u,y)$ is closed. Then $0\in M(u,y-v)$ showing that \[\dist{(u,y),M^{-1}(0)}\leq \norm{v}=\dist{0,M(u,y)}\] and MSCQ for the system $\nabla P(\zb)u+y\in T_D(P(\zb))$ at every point $(\bar u,\yb)$ feasible for \eqref{EqAuxProgr} follows. In order to show the boundedness of the program \eqref{EqAuxProgr} assume on the contrary that \eqref{EqAuxProgr} is unbounded below and consider a sequence $(u_k,y_k)$ with $\nabla P(\zb)u_k+y_k\in T_D(P(\zb))$ and $\skalp{\nabla f(\zb),u_k}+\frac 12\norm{y_k}^2\to -\infty$. 
Since the mapping $u\rightrightarrows \nabla P(\zb)u-T_D(P(\zb))$ is assumed to be metrically subregular and its graph is a closed cone, by Lemma \ref{LemConicalMult} we can find another sequence $\tilde u_k$ with $\nabla P(\zb)\tilde u_k \in T_D(P(\zb))$ and \[\norm{\tilde u_k-u_k}\leq\kappa\dist{\nabla P(\zb)u_k,T_D(P(\zb))}\leq \kappa\norm{y_k}.\] Because $\zb$ is B-stationary for the program \eqref{EqGenOptProbl} and GGCQ holds, we have $\skalp{\nabla f(\zb),\tilde u_k}\geq 0$, implying \[\skalp{\nabla f(\zb),u_k}+\frac 12\norm{y_k}^2\geq \skalp{\nabla f(\zb),u_k-\tilde u_k} +\frac 12\norm{y_k}^2\geq -\kappa\norm{\nabla f(\zb)}\norm{y_k}+\frac 12 \norm{y_k}^2\to -\infty,\] which is obviously not possible. Hence, \eqref{EqAuxProgr} is bounded below. Finally, the last statement about S-stationarity of B-stationary solutions follows immediately from Theorem \ref{ThSuffS_Stat} applied to \eqref{EqAuxProgr}. \end{proof} \begin{lemma}\label{LemSolQP}Consider the program \begin{equation} \label{EqQP} \min_{z\in\R^d}q(z):=\frac 12 z^T Bz+b^T z\quad\mbox{subject to}\quad Az\in C, \end{equation} where $B$ denotes a positive semidefinite $d\times d$-matrix, $b\in\R^d$, $A$ is an $s\times d$ matrix and $C\subset \R^s$ is a polyhedral set. Then exactly one of the following alternatives occurs: \begin{enumerate} \item The program \eqref{EqQP} is infeasible. \item The program \eqref{EqQP} is unbounded below, i.e. there is a sequence $z_k$ satisfying $Az_k\in C$ and $\lim_{k\to\infty} q(z_k)=-\infty$. \item There exists a global solution $\bar z$. \end{enumerate} \end{lemma}\begin{proof} It suffices to show that the program \eqref{EqQP} has a global solution if it is feasible and bounded below.
Let $C$ be the union of the convex polyhedral sets $C_1,\ldots,C_p$ and consider for each $i$ the convex quadratic program \[\min_z q(z)\quad\mbox{subject to}\quad Az\in C_i.\] If this program is feasible, then it must possess a global solution $\zb_i$, since otherwise by \cite[Lemma 4]{BeGfr16d} there would exist a direction $w$ satisfying $Aw\in 0^+C_i$ (the recession cone of $C_i$), $Bw=0$ and $b^T w<0$, contradicting the boundedness of \eqref{EqQP}. Then the $\zb_i$ with the smallest objective function value is a global solution of \eqref{EqQP}. \end{proof} \begin{proof}[Proof of Proposition \ref{PropStrongFirstOrder}] Assuming that the first condition \eqref{EqKKT} of Proposition \ref{PropStrongFirstOrder} is not fulfilled, we will show that the second condition must be fulfilled. If the first condition is not fulfilled, then problem \eqref{EqAuxProgr} cannot have a global solution, because every global solution $(\bar u,\yb)$ would be B-stationary and would therefore also fulfill the S-stationarity conditions \eqref{EqSStatAuxProbl}, and consequently also the first condition \eqref{EqKKT} of Proposition \ref{PropStrongFirstOrder}. On the other hand, the program \eqref{EqAuxProgr} is bounded below and hence we can find a sequence $(u_k,y_k)$ satisfying $\nabla P(\zb)u_k+y_k\in T_D(P(\zb))$ $\forall k$ and \begin{equation}\label{EqMinSequ}\lim_{k\to\infty} \skalp{\nabla f(\zb),u_k}+\frac 12 \norm{y_k}^2=\gamma:=\inf\{\skalp{\nabla f(\zb),u}+\frac 12 \norm{y}^2\mv \nabla P(\zb)u+y\in T_D(P(\zb))\}.\end{equation} It follows that $\gamma<0$, since otherwise $(0,0)$ would be a global solution of \eqref{EqAuxProgr}, and without loss of generality we can assume that $\skalp{\nabla f(\zb),u_k}<0$ for all $k$, implying $y_k\not=0$ by B-stationarity of $\zb$. Next we can assume without loss of generality that $u_k$ is the element $u$ with minimal norm fulfilling $\skalp{\nabla f(\zb),u}=\skalp{\nabla f(\zb),u_k}$ and $\nabla P(\zb)u+y_k\in T_D(P(\zb))$.
The sequence $u_k$ must be unbounded because otherwise the sequence $y_k$ would be bounded as well and thus $(u_k,y_k)$ would possess some limit point $(\bar u,\yb)$, which would be a global solution of \eqref{EqAuxProgr}. Thus by passing to a subsequence we can assume that $\lim_k\norm{u_k}=\infty$ and that $u_k/\norm{u_k}$ converges to some $\bar u$. From \[0=\limsup_{k\to\infty}\frac{\gamma}{\norm{u_k}^2}= \limsup_{k\to\infty}\Big(\frac{\skalp{\nabla f(\zb),u_k}}{\norm{u_k}^2}+\frac{\norm{y_k}^2}{2\norm{u_k}^2}\Big)=\limsup_{k\to\infty}\frac{\norm{y_k}^2}{2\norm{u_k}^2}\] we conclude $\norm{y_k}/\norm{u_k}\to 0$. Hence \begin{eqnarray*}&&\skalp{\nabla f(\zb),\bar u}=\lim_{k\to\infty}\frac{\skalp{\nabla f(\zb),u_k}}{\norm{u_k}}\leq0,\\ &&\nabla P(\zb)\bar u=\lim_{k\to \infty}\frac 1{\norm{u_k}}\big(\nabla P(\zb)u_k+y_k\big)\in T_D(P(\zb)), \end{eqnarray*} implying $\bar u\in\TlinO(\zb)$. Since $\zb$ is B-stationary for \eqref{EqGenOptProbl} it follows from GGCQ that $\skalp{\nabla f(\zb),\bar u}=0$ and that $\bar u$ is a global solution of the program \[\min_u \skalp{\nabla f(\zb),u}\quad \mbox{subject to}\quad u\in \TlinO(\zb).\] Hence the B-stationarity condition \eqref{EqKKTNormal} follows. Next we show \eqref{EqNotZero} by contraposition. Assuming that $\nabla P(\zb)\ub\in\Lsp(T_D(P(\zb)))$, we have $\skalp{\nabla f(\zb),u_k-\norm{u_k}\ub}=\skalp{\nabla f(\zb),u_k}$ and $\nabla P(\zb)(u_k-\norm{u_k}\ub)+y_k\in T_D(P(\zb))$. Since $\norm{u_k-\norm{u_k}\ub}=\norm{u_k}\norm{\frac{u_k}{\norm{u_k}}-\ub}<\norm{u_k}$ for $k$ sufficiently large, we get a contradiction to our choice of $u_k$ and therefore $\nabla P(\zb)\ub\not\in\Lsp(T_D(P(\zb)))$. It remains to show that $T_D(P(\zb))$ is not locally polyhedral near $\nabla P(\zb)\bar u$. Assuming on the contrary that $T_D(P(\zb))$ is locally polyhedral near $\nabla P(\zb)\bar u$, we can find a polyhedral set $C$ and a neighborhood $W$ of $\nabla P(\zb)\bar u$ such that $T_D(P(\zb))\cap W=C\cap W$.
We can choose the neighborhood $W$ as a convex polyhedral set, e.g. as a sufficiently small ball around $\nabla P(\zb)\bar u$ with respect to the maximum norm. Hence we can assume that $C\cap W$ is polyhedral and is the union of the convex polyhedral sets $C_1,\ldots,C_q$ having the representations $C_i=\{w\mv \skalp{a_{ij},w}\leq\alpha_{ij}, j=1,\ldots,p_i\}$. Consider the set \[\bigcup_{\beta\geq 1}\beta C_i=\pi(\{(w,\beta)\mv \skalp{a_{ij},w}-\beta\alpha_{ij}\leq 0, j=1,\ldots,p_i,\ \beta\geq 1\}),\] where $\pi(w,\beta):=w$. By \cite[Theorem 19.3]{Ro70} this set is a convex polyhedral set, implying that the set \[\bigcup_{\beta\geq 1}\beta(T_D(P(\zb))\cap W)=\bigcup_{\beta\geq 1}\beta(C\cap W)=\bigcup_{i=1}^q\bigcup_{\beta\geq 1} \beta C_i\] is polyhedral. Consider the optimization problem \begin{equation}\label{EqAuxOptProbl1}\min_{u,y}\skalp{\nabla f(\zb),u}+\frac 12 \norm{y}^2\quad\mbox{subject to}\quad\nabla P(\zb)u+y\in \bigcup_{\beta\geq 1}\beta(T_D(P(\zb))\cap W).\end{equation} Since $\bigcup_{\beta\geq 1}\beta(T_D(P(\zb))\cap W)\subset \bigcup_{\beta\geq 1}\beta T_D(P(\zb))=T_D(P(\zb))$, we conclude from Lemma \ref{LemAuxProbl} that the problem \eqref{EqAuxOptProbl1} is bounded below and thus by Lemma \ref{LemSolQP} it possesses a global solution $(\tilde u,\tilde y)$. By the construction of $\bar u$ we have $(\nabla P(\zb)u_k+y_k)/\norm{u_k}\in C\cap W$ for all $k$ sufficiently large and thus $(\nabla P(\zb)u_k+y_k)\in \bigcup_{\beta\geq 1}\beta(T_D(P(\zb))\cap W)$. This shows $\skalp{\nabla f(\zb),\tilde u}+\frac 12 \norm{\tilde y}^2\leq \skalp{\nabla f(\zb),u_k}+\frac 12 \norm{y_k}^2$ and from \eqref{EqMinSequ} we obtain that $(\tilde u,\tilde y)$ is a global solution of \eqref{EqAuxProgr}, a contradiction. Therefore, $T_D(P(\zb))$ is not locally polyhedral near $\nabla P(\zb)\bar u$ and this completes the proof.
\end{proof} For the sake of completeness we also state the following extension of Proposition \ref{PropStrongFirstOrder}, which exploits some additional features in case of problems of the form \eqref{EqGenOptProbl1}. Rewriting this problem in the form \eqref{EqGenOptProbl}, the set $D$ is the graph of $Q$ and then the tangent cone to $D$ is the graph of another multifunction, the so-called graphical derivative. \begin{proposition}\label{PropStrongFirstOrderGraph} In addition to the assumptions of Theorem \ref{ThStrongFirstOrder} assume that $T_D(P(\zb))$ is the graph of a set-valued mapping $M=M_c+M_p$, where $M_c,M_p:\R^r\rightrightarrows \R^{s-r}$ are set-valued mappings whose graphs are closed cones, $M_p$ is polyhedral and there is some real $C$ such that \begin{equation}\label{EqBoundM_C} \norm{t}\leq C \norm{w}\ \forall (w,t)\in\Gr M_c.\end{equation} Then either there is $w\in T_D(P(\zb))$ and a multiplier $w^\ast \in\widehat N_{T_D(P(\zb))}(w)$ fulfilling \eqref{EqKKT} or there is some $\bar u\in \TlinO(\bar z)$ fulfilling \eqref{EqNotZero},\eqref{EqKKTCritDir} and \eqref{EqKKTNormal} such that $T_D(P(\zb))$ is not locally polyhedral near $\nabla P(\zb)\bar u$ and there is some $\bar w\not=0$ with \begin{equation} \label{EqKKT_W_NotZero} \nabla P(\zb)\bar u\in\{\bar w\}\times M(\bar w). \end{equation} \end{proposition} \begin{proof} We only have to show \eqref{EqKKT_W_NotZero} and we can proceed quite similarly to the proof of Proposition \ref{PropStrongFirstOrder}. Assuming that we cannot fulfill \eqref{EqKKT}, let $(u_k,y_k)$ denote a sequence satisfying $\nabla P(\zb)u_k+y_k\in T_D(P(\zb))$ and \eqref{EqMinSequ}.
Let $w_k$ and $t_k\in M_c(w_k)$ be given by $\nabla P(\zb)u_k+y_k\in \{w_k\}\times (t_k+ M_p(w_k))$ and consider for each $k$ the problem \begin{equation}\label{EqAuxProgr_k}\min \skalp{\nabla f(\zb),u}+\frac 12\norm{y}^2\ \mbox{subject to}\ \nabla P(\zb)u+y\in \{w_k\}\times\big(t_k+M_p(w_k)\big).\end{equation} Since $M_p(w_k)$ is a polyhedral set, by Lemma \ref{LemSolQP} this problem has a global solution and we now claim that there is also a global solution $(\tilde u_k,\tilde y_k)$ fulfilling \[ \norm{(\tilde u_k,\tilde y_k)}\leq \gamma_1+\gamma_2(\norm{t_k}+\norm{w_k}),\] where $\gamma_1,\gamma_2$ do not depend on $k$. Indeed, let $\Gr M_p$ be the union of the convex polyhedral sets $C_i$, $i=1,\ldots,p$ with representation \[C_i=\{(w,t_p)\mv \skalp{a_{ij},w}+\skalp{b_{ij},t_p}\leq \alpha_{ij},\ j=1,\ldots,q_i\}\] and consider for each $i$ and each index set $J\subset \{1,\ldots,q_i\}$ the set $S(i,J,w_k,t_c)$, with $t_c:=t_k$, consisting of all $(u,y,t_p,\mu_1,\mu_2,\lambda)\in\R^d\times\R^s\times\R^{s-r} \times\R^r\times\R^{s-r}\times \R^{q_i}$ satisfying the system of linear equalities and linear inequalities \begin{eqnarray}\label{EqSubKKT1}&&\nabla P(\zb)^\ast \left(\begin{array}{c}\mu_1\\\mu_2 \end{array}\right)=-\nabla f(\zb),\ y+\left(\begin{array}{c}\mu_1\\\mu_2 \end{array}\right)=0\\ \label{EqSubKKT2}&&-\mu_2+\sum_{j\in J}\lambda_j b_{ij}=0, \lambda_j\geq 0,\ j\in J, \lambda_j=0,\ j\in\{1,\ldots,q_i\}\setminus J\\ \label{EqSubKKT3}&&\nabla P(\zb)u+y -(0,t_p)=(w_k,t_c)\\ \label{EqSubKKT4}&&\skalp{b_{ij},t_p}\begin{cases} =\alpha_{ij}-\skalp{a_{ij},w_k}&\mbox{if $j\in J$},\\ \leq\alpha_{ij}-\skalp{a_{ij},w_k}&\mbox{if $j\not\in J$}. \end{cases} \end{eqnarray} By Hoffman's error bound there is some constant $\gamma^{i,J}$ such that \[\dist{0, S(i,J,w_k,t_c)}\leq \gamma^{i,J}\big(\norm{\nabla f(\zb)}+\norm{w_k}+\norm{t_c}+\sum_{j=1}^{q_i}\vert \alpha_{ij}-\skalp{a_{ij},w_k}\vert\big)\] whenever $S(i,J,w_k,t_c)\not=\emptyset$.
Note that for every $(u,y,t_p,\mu_1,\mu_2,\lambda)\in S(i,J,w_k,t_c)$ the triple $(u,y, t_p)$ is a global solution of the convex quadratic program \begin{equation}\label{EqSubQP}\min \skalp{\nabla f(\zb),u}+\frac 12\norm{y}^2\ \mbox{subject to}\ \nabla P(\zb)u+y-(0,t_p)= (w_k,t_c), (w_k,t_p)\in C_i\end{equation} because the equations \eqref{EqSubKKT1}-\eqref{EqSubKKT4} constitute the Karush-Kuhn-Tucker conditions for this problem. Conversely, for every solution $(u,y,t_p)$ of this program there must exist multipliers $(\mu_1,\mu_2,\lambda)$ such that $(u,y,t_p,\mu_1,\mu_2,\lambda)$ fulfills the Karush-Kuhn-Tucker conditions and thus $(u,y,t_p,\mu_1,\mu_2,\lambda)\in S(i,J,w_k,t_c)$ with $J:=\{j\mv \lambda_j>0\}$. Now let $(u,y)$ denote a global solution of \eqref{EqAuxProgr_k} and let $t_p\in M_p(w_k)$ be given by $\nabla P(\zb)u+y-(0,t_p)= (w_k,t_c)$. Consider $i$ such that $(w_k,t_p)\in C_i$. Then the triple $(u,y,t_p)$ is a global solution of \eqref{EqSubQP} and we can find some index set $J$ such that $S(i,J,w_k,t_c)\not=\emptyset$. Obviously this set is closed and thus we can find $(\tilde u,\tilde y,\tilde t_p,\tilde \mu_1,\tilde \mu_2,\tilde \lambda)\in S(i,J,w_k,t_c)$ such that $\norm{(\tilde u,\tilde y,\tilde t_p,\tilde \mu_1,\tilde \mu_2,\tilde \lambda)}=\dist{0,S(i,J,w_k,t_c)}$, implying \begin{eqnarray*}\norm{(\tilde u,\tilde y)}&\leq& \norm{(\tilde u,\tilde y,\tilde t_p,\tilde \mu_1,\tilde \mu_2,\tilde \lambda)}\leq \gamma^{i,J}(\norm{\nabla f(\zb)}+\norm{w_k}+\norm{t_c}+\sum_{j=1}^{q_i}\vert \alpha_{ij}-\skalp{a_{ij},w_k}\vert)\\ &\leq& \gamma^{i,J}(\norm{\nabla f(\zb)}+\sum_{j=1}^{q_i}\vert \alpha_{ij}\vert)+\gamma^{i,J}(\norm{t_c}+(1+\sum_{j=1}^{q_i}\norm{a_{ij}})\norm{w_k}).
\end{eqnarray*} Since both $(\tilde u,\tilde y,\tilde t_p)$ and $(u,y,t_p)$ constitute global solutions of \eqref{EqSubQP} and $(u,y)$ is a global solution of \eqref{EqAuxProgr_k}, $(\tilde u,\tilde y)$ is a global solution of \eqref{EqAuxProgr_k} and our claim follows with $(\tilde u_k,\tilde y_k)=(\tilde u,\tilde y)$ and \[\gamma_1=\max_{i,J}\gamma^{i,J}(\norm{\nabla f(\zb)}+\sum_{j=1}^{q_i}\vert \alpha_{ij}\vert),\ \gamma_2=\max_{i,J}\gamma^{i,J}(1+\sum_{j=1}^{q_i}\norm{a_{ij}}).\] Together with \eqref{EqBoundM_C} we obtain \begin{equation}\label{EqBndW_kU_k}\norm{(\tilde u_k,\tilde y_k)}\leq \gamma_1+\gamma_2(1+C)\norm{w_k}.\end{equation} Since $(u_k,y_k)$ is feasible for the problem \eqref{EqAuxProgr_k}, we have $\skalp{\nabla f(\zb),\tilde u_k}+\frac 12 \norm{\tilde y_k}^2\leq \skalp{\nabla f(\zb), u_k}+\frac 12 \norm{ y_k}^2$ and thus $(\tilde u_k,\tilde y_k)$ is also a sequence fulfilling \eqref{EqMinSequ}. We can proceed as in the proof of Proposition \ref{PropStrongFirstOrder} to show that, after passing to a subsequence, the sequence $\tilde u_k/\norm{\tilde u_k}$ converges to some $\bar u\in\TlinO(\zb)$ fulfilling \eqref{EqNotZero},\eqref{EqKKTCritDir} and \eqref{EqKKTNormal} and $T_D(P(\zb))$ is not locally polyhedral near $\nabla P(\zb)\bar u$. Because $\nabla P(\zb)\bar u=\lim_{k\to\infty}(\nabla P(\zb)\tilde u_k+\tilde y_k)/\norm{\tilde u_k}$ and \[(\nabla P(\zb)\tilde u_k+\tilde y_k)/\norm{\tilde u_k}\in\frac 1{\norm{\tilde u_k}}\Big(\{w_k\}\times M(w_k)\Big)=\{\frac{w_k}{\norm{\tilde u_k}}\}\times M\Big(\frac{w_k}{\norm{\tilde u_k}}\Big)\] we conclude that $\frac{w_k}{\norm{\tilde u_k}}$ converges to some $\bar w$ such that $\nabla P(\zb)\bar u\in\{\bar w\}\times M(\bar w)$. From \eqref{EqBndW_kU_k} we obtain $1\leq \gamma_2(1+C)\norm{\bar w}$ implying $\norm{\bar w}>0$. This completes the proof.
\end{proof} \section{Application to MPEC\label{SecMPEC}} In this section we want to demonstrate that the linearized M-stationarity conditions can be applied to the MPEC \eqref{EqMPEC'} when it is impossible to compute the limiting normal cone effectively. Recall that this program is given by \begin{align*} \mbox{(MPEC')}\qquad \min_{x,y}\ & F(x,y)\\ \nonumber \mbox{s.t. }&\hat P(x,y):=\left(\begin{array}{c}(y,-\phi(x,y))\\ G(x,y)\end{array}\right)\in\Gr\widehat N_\Gamma\times\R^p_-=:\hat D, \end{align*} where $F:\R^n\times\R^m\to \R$, $\phi:\R^n\times\R^m\to \R^m$ and $G:\R^n\times\R^m\to\R^p$ are continuously differentiable and $\Gamma:=\{y\mv g(y)\leq 0\}$ is given by a $C^2$-mapping $g:\R^m\to\R^q$. For the rest of the section let $(\xb,\yb)$ denote a B-stationary solution for the program (MPEC') such that the following assumption is fulfilled: \begin{assumption}\label{Ass1} \begin{enumerate}\item MSCQ holds for the lower level system $g(y)\in\R^q_-$ at $\yb$. \item GGCQ holds at $(\xb,\yb)$ and the mapping \begin{eqnarray*}(u,v)&\rightrightarrows& \nabla \hat P(\xb,\yb)(u,v)-T_{\hat D}(\hat P(\xb,\yb))\end{eqnarray*} is metrically subregular at $((0,0),0)$. \end{enumerate} \end{assumption} Note that by Remark \ref{RemSubReg} the second part of Assumption \ref{Ass1} is fulfilled if MSCQ holds for the system $\hat P(x,y)\in \hat D$ at $(\xb,\yb)$. A point-based sufficient condition for the validity of MSCQ for this system is given by \cite[Theorem 5]{GfrYe17a}. We need some more notation. We set $\yba:=-\phi(\xb,\yb)$ and denote by \[\KbG:=\K_\Gamma(\yb,\yba)\] the critical cone for $\Gamma$ at $(\yb,\yba)$. 
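For orientation, we recall the standard definition of the critical cone entering $\KbG$ (a sketch in the notation of the surrounding text; under MSCQ the tangent cone $T_\Gamma(\yb)$ admits the usual linearized description):
```latex
\K_\Gamma(\yb,\yba):=T_\Gamma(\yb)\cap\{\yba\}^\perp,\qquad
T_\Gamma(\yb)=\{v\in\R^m\mv \skalp{\nabla g_i(\yb),v}\leq 0,\ i\ \mbox{with}\ g_i(\yb)=0\}.
```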
Further we define the {\em multiplier set} \[\Lb:=\{\lambda\in N_{\R^q_-}(g(\yb))\mv \nabla g(\yb)^\ast \lambda=\yba\}\] and for every $v\in \KbG$ the {\em directional multiplier set} \[\Lbv:=\argmax\{v^T \nabla^2(\lambda^T g)(\yb)v\mv \lambda\in \Lb\}.\] By \cite[Proposition 4.3(iii)]{GfrMo15a} we have $\Lbv\not=\emptyset$ $\forall v\in\KbG$ thanks to Assumption \ref{Ass1}(1). By \cite[Proposition 1]{GfrYe17a} we have \[T_{\hat D}(\hat P(\xb,\yb))=T_{\Gr \widehat N_\Gamma}(\yb,\yba)\times T_{\R^p_-}(G(\xb,\yb)).\] In order to compute the tangent cone $T_{\Gr \widehat N_\Gamma}(\yb,\yba)$ we use the following theorem: \begin{theorem}[{cf. \cite[Theorem 4]{GfrYe17a}}]\label{ThTanConeGrNormalCone}Assume that MSCQ holds at $\yb$ for the system $g(y)\in\R^q_-$. Then there is a real $\kappa>0$ such that the tangent cone to the graph of $\widehat N_\Gamma$ at $(\yb,\yba)$ can be calculated by \begin{eqnarray}\label{EqTanConeGrNormalCone} \lefteqn{T_{\Gr \widehat N_\Gamma}(\yb,\yba)}\\ \nonumber &=&\big\{(v,v^\ast )\in\R^{2m}\mv\exists\,\lambda\in\Lbv\;\mbox{ with }\; v^\ast \in\nabla^2(\lambda^T g)(\yb)v+N_{\KbG}(v)\big\}\\ \nonumber&=&\big\{(v,v^\ast )\in\R^{2m}\mv\exists\,\lambda\in\Lbv\cap \kappa\norm{\yba} \B_{\R^q}\;\mbox{ with }\; v^\ast \in\nabla^2(\lambda^T g)(\yb)v+N_{\KbG}(v)\big\}. \end{eqnarray} \end{theorem} We see that the tangent cone $T_{\hat D}(\hat P(\xb,\yb))$ is the graph of the multifunction $M(v)=M_c(v)+M_p(v)$, where \[M_p(v):=N_{\KbG}(v)\times T_{\R^p_-}(G(\xb,\yb))\] is a polyhedral multifunction and \[ M_c(v):=\{\nabla^2(\lambda^T g)(\yb)v\mv \lambda\in\Lbv\cap \kappa\norm{\yba} \B_{\R^q}\}\times \{0\}\] fulfills \eqref{EqBoundM_C}. \begin{proposition}\label{PropPolyhedr}Let a critical direction $\bar v\in \KbG$ be given.
If there is an open neighborhood $V$ of $\bar v$ and a set $\tilde\Lambda\subset \Lb$ such that \begin{equation}\label{EqConstLambda}\Lb(v)=\tilde\Lambda\ \forall v\in (\KbG\setminus\{\bar v\})\cap V\end{equation} then \begin{equation}\label{EqTanConePoly}T_{\Gr \widehat N_\Gamma}(\yb,\yba)\cap(V\times\R^m) =\{\big(v,\nabla ^2(\tilde\lambda^T g)(\yb)v+z^\ast \big)\mv z^\ast \in N_{\KbG}(v)\}\cap (V\times\R^m),\end{equation} where $\tilde\lambda\in\tilde\Lambda$ is an arbitrarily fixed multiplier. In particular, $T_{\Gr \widehat N_\Gamma}(\yb,\yba)$ is locally polyhedral near $(\bar v,\vba)$ for every $\vba$ satisfying $(\bar v,\vba)\in T_{\Gr \widehat N_\Gamma}(\yb,\yba)$ and \begin{equation}\label{EqRegNormalPoly}\widehat N_{T_{\Gr \widehat N_\Gamma}(\yb,\yba)}(\bar v,\vba)=\big\{(w^\ast ,w)\mv (w^\ast +\nabla^2(\tilde\lambda^T g)(\yb)w, w)\in \big(\K_\KbG(\bar v,\bar z^\ast )\big)^\ast\times \K_\KbG(\bar v,\bar z^\ast )\big\},\end{equation} where $\bar z^\ast :=\vba-\nabla^2(\tilde\lambda^T g)(\yb)\bar v$. \end{proposition} \begin{proof}Let $\tilde \lambda\in\tilde\Lambda$ be arbitrarily fixed. We claim that for every $v\in(\KbG\setminus\{\bar v\})\cap V$ we have \begin{equation}\label{EqAuxClaim1}\big\{\nabla^2(\lambda^T g)(\yb)v\mv \lambda\in\Lb(v)\big\}+ N_{\KbG}(v)=\nabla^2(\tilde\lambda^T g)(\yb)v+ N_{\KbG}(v).\end{equation} Indeed, consider $v^\ast =\nabla^2(\lambda^T g)(\yb)v+z^\ast $ with $\lambda\in\Lb(v)$ and $z^\ast \in N_{\KbG}(v)$. Since $\KbG$ is a convex polyhedral set, for every $w\in T_{\KbG}(v)$ we have $v+\alpha w\in (\KbG\setminus\{\bar v\})\cap V$ for all $\alpha\geq 0$ sufficiently small and therefore $(v+\alpha w)^T \nabla^2(\lambda^T g)(\yb)(v+\alpha w)=(v+\alpha w)^T \nabla^2(\tilde \lambda^T g)(\yb)(v+\alpha w)$.
Because we also have $v^T \nabla^2(\lambda^T g)(\yb)v =v^T \nabla^2(\tilde \lambda^T g)(\yb)v$ we conclude $v^T \nabla^2\big((\lambda-\tilde\lambda)^T g\big)(\yb)w=0$ $\forall w\in T_{\KbG}(v)$ and consequently $\nabla^2\big((\lambda-\tilde\lambda)^T g\big)(\yb)v\in \Lsp(N_{\KbG}(v))$. Thus \begin{eqnarray*}v^\ast &=&\nabla^2(\tilde \lambda^T g)(\yb)v+\nabla^2\big((\lambda-\tilde\lambda)^T g\big)(\yb)v+z^\ast \\ &\in& \nabla^2(\tilde \lambda^T g)(\yb)v+\Lsp(N_{\KbG}(v))+N_{\KbG}(v)= \nabla^2(\tilde \lambda^T g)(\yb)v+N_{\KbG}(v)\end{eqnarray*} and \[\big\{\nabla^2(\lambda^T g)(\yb)v\mv \lambda\in\Lb(v)\big\}+ N_{\KbG}(v)\subset\nabla^2(\tilde\lambda^T g)(\yb)v+ N_{\KbG}(v)\] follows. Since the reverse inclusion obviously holds, our claim \eqref{EqAuxClaim1} is verified. We next show that \eqref{EqAuxClaim1} holds for $v=\bar v$ as well. If $\bar v=0$ then \eqref{EqAuxClaim1} obviously holds for $v=\bar v$. On the other hand, if $\bar v\not=0$, we can find some $\alpha\not=1$ sufficiently close to $1$ such that $\alpha\bar v\in (\KbG\setminus\{\bar v\})\cap V$, implying \begin{eqnarray*}\lefteqn{\alpha\Big(\big\{\nabla^2(\lambda^T g)(\yb)\bar v\mv \lambda\in\Lb(\bar v)\big\}+ N_{\KbG}(\bar v)\Big)= \big\{\nabla^2(\lambda^T g)(\yb)\alpha\bar v\mv \lambda\in\Lb(\alpha \bar v)\big\}+ N_{\KbG}(\alpha\bar v)}\\ &=& \nabla^2(\tilde\lambda^T g)(\yb)\alpha \bar v+ N_{\KbG}(\alpha \bar v)=\alpha\Big(\nabla^2(\tilde\lambda^T g)(\yb)\bar v+ N_{\KbG}(\bar v)\Big),\hspace{3.5cm}\end{eqnarray*} where we have used the relations $\Lb(\alpha \bar v)=\Lb(\bar v)$ and $N_{\KbG}(\bar v)=N_{\KbG}(\alpha\bar v)=\alpha N_{\KbG}(\bar v)$. Thus \eqref{EqAuxClaim1} holds for all $v\in \KbG\cap V$ and the representation \eqref{EqTanConePoly} follows from \eqref{EqTanConeGrNormalCone}. Since the graph of the normal cone mapping to a convex polyhedral set is a polyhedral set \cite{Rob79}, $\Gr N_\KbG$ is the union of polyhedral convex sets $C_1,\ldots, C_l\subset\R^m\times\R^m$.
By taking into account \cite[Theorem 19.3]{Ro70} we obtain that $\{\big(v,\nabla ^2(\tilde\lambda^T g)(\yb)v+z^\ast \big)\mv z^\ast \in N_{\KbG}(v)\}$ is the union of the polyhedral convex sets $\{\big(v,\nabla ^2(\tilde\lambda^T g)(\yb)v+z^\ast \big)\mv (v,z^\ast )\in C_i\}$, $i=1,\ldots,l$. Now it follows from \eqref{EqTanConePoly} that $T_{\Gr \widehat N_\Gamma}(\yb,\yba)$ is locally polyhedral near $(\bar v,\vba)$ for every $\vba$ satisfying $(\bar v,\vba)\in T_{\Gr \widehat N_\Gamma}(\yb,\yba)$. By virtue of \eqref{EqTanConePoly}, for every pair $(v,v^\ast )\in T_{\Gr \widehat N_\Gamma}(\yb,\yba)$ close to $(\bar v,\vba)$ there is a unique element $z^\ast \in N_{\KbG}(v)$ with $v^\ast =\nabla^2(\tilde\lambda^T g)(\yb)v+z^\ast $. Thus \begin{eqnarray*}\lefteqn{(w^\ast ,w)\in \widehat N_{T_{\Gr \widehat N_\Gamma}(\yb,\yba)}(\bar v,\vba)\Longleftrightarrow \limsup_{(v,v^\ast )\longsetto{{T_{\Gr \widehat N_\Gamma}(\yb,\yba)}}(\bar v,\vba)}\frac{\skalp{w^\ast ,v-\bar v}+\skalp{w,v^\ast -\vba}}{\norm{(v,v^\ast )-(\bar v,\vba)}}\leq 0}\\ &\Longleftrightarrow& \limsup_{(v,z^\ast )\longsetto{{\Gr N_\KbG}}(\vb,\bar z^\ast )}\frac{\skalp{w^\ast ,v-\vb}+\skalp{w,\nabla^2(\tilde\lambda^T g)(\yb)v+z^\ast -\nabla^2(\tilde\lambda^T g)(\yb)\vb-\bar z^\ast }}{\norm{(v,\nabla^2(\tilde\lambda^T g)(\yb)v+z^\ast )-(\vb,\nabla^2(\tilde\lambda^T g)(\yb)\vb+\bar z^\ast )}}\leq 0\\ &\Longleftrightarrow& \limsup_{(v,z^\ast )\longsetto{{\Gr N_\KbG}}(\vb,\bar z^\ast )}\frac{\skalp{w^\ast +\nabla^2(\tilde\lambda^T g)(\yb)w,v-\vb}+\skalp{w,z^\ast -\bar z^\ast }}{\norm{(v,z^\ast )-(\vb,\bar z^\ast )}}\leq 0\\ &\Longleftrightarrow&(w^\ast +\nabla^2(\tilde\lambda^T g)(\yb)w, w)\in \widehat N_{\Gr N_\KbG}(\vb,\bar z^\ast ) \end{eqnarray*} and \eqref{EqRegNormalPoly} follows from the identity $\widehat N_{\Gr N_\KbG}(\vb,\bar z^\ast )= \big(\K_\KbG(\bar v,\bar z^\ast )\big)^\ast\times \K_\KbG(\bar v,\bar z^\ast )$, cf. \cite[Equation (13)]{DoRo96}.
\end{proof} We are now in a position to state the main result of this section. \begin{theorem}\label{ThMPECNecCond} Assume that $(\xb,\yb)$ is B-stationary for the program (MPEC'), assume that Assumption \ref{Ass1} is fulfilled and that there is a set $\tilde\Lambda\subset \Lb$ such that \begin{equation}\label{EqConstLambda1}\Lb(v)=\tilde\Lambda\ \forall v\in \KbG\setminus\{0\}.\end{equation} Then for every $\tilde\lambda\in\tilde\Lambda$ there are $v\in \KbG$, $z^\ast \in N_\KbG(v)$ and multipliers $w\in \K_\KbG(v,z^\ast )$, $\mu\in N_{\R^p_-}(G(\xb,\yb))$ such that \begin{align*}&0=\nabla_x F(\xb,\yb) -\nabla_x\phi(\xb,\yb)^\ast w+\nabla_x G(\xb,\yb)^\ast \mu\\ &0\in \nabla_y F(\xb,\yb) -\nabla^2(\tilde \lambda^T g)(\yb) w -\nabla_y\phi(\xb,\yb)^\ast w+\nabla_y G(\xb,\yb)^\ast \mu +\big(\K_\KbG(v,z^\ast )\big)^\ast. \end{align*} \end{theorem} \begin{proof}By \eqref{EqConstLambda1} and Proposition \ref{PropPolyhedr} we obtain that $T_{\Gr \widehat N_\Gamma}(\yb,\yba)$ is locally polyhedral near every $(v,v^\ast)\in T_{\Gr \widehat N_\Gamma}(\yb,\yba)$. Since $T_{\R^p_-}(G(\xb,\yb))$ is a convex polyhedral set, $T_{\hat D}(\yb,\yba,G(\xb,\yb))$ is locally polyhedral near every $(v,v^\ast,t)\in T_{\hat D}(\yb,\yba,G(\xb,\yb))$.
Hence, by Proposition \ref{PropStrongFirstOrder} there exists a direction $(v,v^\ast,t)\in T_{\hat D}(\yb,\yba,G(\xb,\yb))$ and a regular normal $(w^\ast,w,\mu)\in \widehat N_{T_{\hat D}(\yb,\yba,G(\xb,\yb))}(v,v^\ast,t)= \widehat N_{T_{\Gr \widehat N_\Gamma}(\yb,\yba)}(v,v^\ast)\times N_{T_{\R^p_-}(G(\xb,\yb))}(t)$ such that \[0=\nabla F(\xb,\yb)+\nabla\hat P(\xb,\yb)^\ast\left(\begin{array} {c} w^\ast\\w\\\mu \end{array}\right) =\left(\begin{array}{c} \nabla_x F(\xb,\yb)-\nabla_x\phi(\xb,\yb)^\ast w+\nabla_x G(\xb,\yb)^\ast \mu\\ \nabla_y F(\xb,\yb) +w^\ast -\nabla_y\phi(\xb,\yb)^\ast w+\nabla_y G(\xb,\yb)^\ast \mu \end{array}\right).\] By utilizing \eqref{EqRegNormalPoly} and the well-known identity $N_{\R^p_-}(G(\xb,\yb))=\bigcup_{t\in T_{\R^p_-}(G(\xb,\yb))} N_{T_{\R^p_-}(G(\xb,\yb))}(t)$ the assertion follows. \end{proof} Recall that the inequalities $g(y)\leq 0$ satisfy the {\em constant rank constraint qualification} (CRCQ) at a feasible point $\yb$ if for each subset $I\subseteq\{i\in\{1,\ldots, q\}\mv g_i(\yb)=0\}$ there is a neighborhood $V$ of $\yb$ such that the rank of $\{\nabla g_i(y)\mv i\in I\}$ is constant on $V$. It was shown in \cite[Proposition 5.3]{GfrMo15a} that CRCQ at $\yb$ is a sufficient condition for \eqref{EqConstLambda1} to hold. By applying \cite[Proposition 5.3]{GfrMo15a} to the system $\tilde g(y)\leq 0$, where \[\tilde g_i(y)=g_i(\yb)+\skalp{\nabla g_i(\yb),y-\yb}+\frac 12 (y-\yb)^\ast \nabla ^2g_i(\yb)(y-\yb),\ i=1,\ldots,q\] it follows that it is sufficient to require CRCQ for the system $\tilde g(y)\leq 0$ in order to guarantee \eqref{EqConstLambda1}. However, it is easy to find examples where the condition \eqref{EqConstLambda1} is fulfilled but CRCQ holds neither for the system $g(y)\leq 0$ nor for the system $\tilde g(y)\leq 0$.
The following example demonstrates the benefit of the necessary optimality conditions of Theorem \ref{ThMPECNecCond}. \begin{example}Consider the problem \[\min_{x\in\R,y\in\R^3} x-2y_3\quad\mbox{subject to}\quad 0\in (y_1,y_2,-x+y_3)+\widehat N_\Gamma(y)\] with \[\Gamma:=\left\{y\in\R^3\mv \begin{array}{l}g_1(y):=y_3-y_1^3\leq 0\\g_2(y):=y_3-a^3y_2^3\leq 0 \end{array}\right\},\] where $a>0$ denotes a fixed parameter. Then $\xb=0$, $\yb=(0,0,0)$ is a local solution. Obviously MFCQ is fulfilled at $\yb$ and straightforward calculations yield $\yba=(0,0,0)$, $\KbG=\R\times\R\times\R_-$ and \[\Lb=\Lb(v)=\{(0,0)\}\ \forall v\in \KbG.\] Thus condition \eqref{EqConstLambda1} is fulfilled and the first-order optimality condition of Theorem \ref{ThMPECNecCond} must hold. Indeed, taking $\tilde\lambda=(0,0)$, $v=z^\ast=(0,0,0)$ we have $\K_{\KbG}(v,z^\ast)=\KbG$, $\big(\K_{\KbG}(v,z^\ast)\big)^\ast=\{0\}\times\{0\}\times \R_+$ and with $w=(0,0,-1)$ we obtain \begin{align*}&\nabla_x F(\xb,\yb) -\nabla_x\phi(\xb,\yb)^\ast w =1 - (0\ 0\ -1)\left(\begin{array}{c}0\\0\\-1\end{array}\right)=0,\\ &-(\nabla_y F(\xb,\yb) -\nabla^2(\tilde \lambda^T g)(\yb) w -\nabla_y\phi(\xb,\yb)^\ast w)\\ &=-\left(\left(\begin{array}{c}0\\0\\-2\end{array}\right)-\left(\begin{array}{c}0\\0\\0\end{array}\right)-\left(\begin{array}{ccc}1&0&0\\0&1&0\\0&0&1\end{array}\right)\left(\begin{array}{c}0\\0\\-1\end{array}\right)\right) =\left(\begin{array}{c}0\\0\\1\end{array}\right)\in\big(\K_\KbG(v,z^\ast )\big)^\ast \end{align*} verifying the first-order optimality conditions of Theorem \ref{ThMPECNecCond}. In \cite[Example 1]{GfrOut16a} the limiting normal cone $N_{\Gr \widehat N_\Gamma}(\yb,\yba)$ was computed explicitly. It turns out that it depends on the parameter $a$ and thus a point-based representation of the limiting normal cone in terms of first-order and second-order derivatives of $g$ is not possible. This shows the difficulty of verifying the M-stationarity conditions at the solution.
\end{example} So far we have only considered linearized M-stationarity conditions for the MPEC \eqref{EqMPEC'} under the assumption \eqref{EqConstLambda1}, which allows the application of Theorem \ref{ThStrongFirstOrder} with $k=0$. In a forthcoming paper we will formulate the linearized M-stationarity conditions for this problem in the general case. Anticipating the main result of that paper, we will show with the help of Proposition \ref{PropStrongFirstOrderGraph} that Theorem \ref{ThStrongFirstOrder} holds with $k=1$. \section{\label{SecConcl} Concluding remarks and future research} In this paper we considered new first-order optimality conditions for general optimization problems which are stronger than the commonly used M-stationarity conditions. The key idea is to apply the M-stationarity conditions not to the original problem but to the linearized problem and to repeat this procedure. As a final result we obtain that the multiplier is not only a limiting normal but also a regular normal to an iterated series of tangent cones. Because the optimality conditions are based on a repeated linearization process we use the term {\em linearized M-stationarity conditions}. The applicability of the new optimality conditions is demonstrated by means of a special MPEC, where the equilibrium is modeled via a generalized equation involving the normal cone to a set given by $C^2$-inequalities. Under a certain additional condition we explicitly stated the optimality conditions in terms of the problem data at the reference point. This additional assumption ensures that the linearization process does not have to be repeated. We presented an example where the M-stationarity conditions cannot be stated effectively owing to the difficulty of computing the limiting normal cone, whereas our results fully apply. We plan to drop this additional assumption in a forthcoming paper to obtain the linearized M-stationarity conditions for this MPEC in the general case.
A further goal is the application of the developed theory to other problem classes, e.g. to MPECs involving the normal cone to sets appearing in second-order cone programming and semidefinite programming. In particular in the latter case we expect that the linearization process eventually has to be repeated more than once. Another direction of future research could be the investigation of the sufficiency of the linearized M-stationarity conditions for B-stationarity. Similarly to \cite{Ye05}, one could look for properties of the problem functions which ensure that the reference point is a globally or locally optimal solution. Another approach could be to require the fulfilment of some linearized M-stationarity condition in every nonzero critical direction, similar to the concept of {\em extended M-stationarity} used in \cite{Gfr14a}. {\bf Acknowledgements.} The research was partially supported by the Austrian Science Fund (FWF) under grant P29190-N32.
\section{Soft spheres: Model and background} \label{sec:background} We first introduce the soft sphere model and summarize prior results regarding linear elasticity near jamming. \subsection{Model} We perform numerical simulations of the Durian bubble model \cite{durian95}, a mesoscopic model for wet foams and emulsions. The model treats bubbles/droplets as non-Brownian disks that interact via elastic and viscous forces when they overlap. Elastic forces are expressed in terms of the overlap $\delta_{ij} = 1 - r_{ij}/{(R_i + R_j)}$, where $R_i$ and $R_j$ denote radii and ${\vec r}_{ij}$ points from the center of particle $i$ to the center of $j$. The force is repulsive and acts along the unit vector $\hat r_{ij} = \vec{r}_{ij}/r_{ij}$: \begin{equation} \vec{f}^{\rm el}_{ij} = \begin{cases} -k(\delta_{ij}) \, \delta_{ij}\, \hat{r}_{ij} \,, & \delta_{ij} > 0\\ {\vec 0}, & \delta_{ij} < 0. \end{cases} \label{eq:potential} \end{equation} The prefactor $k$ is the contact stiffness, which generally depends on the overlap \begin{equation} k = k_0 \, \delta^{\alpha - 2} \,. \label{eqn:stiffness} \end{equation} Here $k_0$ is a constant and $\alpha$ is an exponent parameterizing the interaction. In the following we consider harmonic interactions ($\alpha = 2$), which provide a reasonable model for bubbles and droplets that resist deformation due to surface tension; we also treat Hertzian interactions ($\alpha = 5/2$), which correspond to elastic spheres. We perform simulations using two separate numerical methods. The first is a molecular dynamics (MD) algorithm that integrates Newton's laws using the velocity-Verlet scheme. Each disk is assigned a uniform mass $m_i = \pi R_i^2$ proportional to its volume. 
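To make the force law concrete, the following minimal Python sketch evaluates Eqs.~(\ref{eq:potential})--(\ref{eqn:stiffness}) for a single pair of disks; the function and variable names are ours and not part of the model definition.

```python
import math

def elastic_force(pos_i, pos_j, R_i, R_j, k0=1.0, alpha=2.0):
    """Repulsive contact force on particle i due to particle j.

    Implements f_ij = -k(delta) * delta * r_hat with stiffness
    k(delta) = k0 * delta**(alpha - 2); alpha = 2 is harmonic,
    alpha = 5/2 is Hertzian.  Illustrative sketch, 2D positions.
    """
    rx, ry = pos_j[0] - pos_i[0], pos_j[1] - pos_i[1]  # vector from i to j
    dist = math.hypot(rx, ry)
    delta = 1.0 - dist / (R_i + R_j)                   # dimensionless overlap
    if delta <= 0.0:                                   # particles do not touch
        return (0.0, 0.0)
    k = k0 * delta ** (alpha - 2.0)                    # contact stiffness
    scale = -k * delta / dist                          # along -r_hat: repulsion
    return (scale * rx, scale * ry)
```

For two unit-radius disks at center distance $1.5$ the overlap is $\delta = 0.25$; the harmonic force then has magnitude $0.25$ and pushes the particles apart, while the Hertzian choice $\alpha = 5/2$ gives $k = \sqrt{0.25} = 0.5$ and magnitude $0.125$.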
Energy is dissipated by viscous forces that are proportional to the relative velocity $\Delta {\vec v}^{\,c}_{ij}$ of neighboring particles evaluated at the contact, \begin{equation} \vec{f}^{\rm visc}_{ij} = - \tau_0 \, k(\delta_{ij})\, \Delta {\vec v}^{\,c}_{ij} \,, \end{equation} where $\tau_0$ is a microscopic relaxation time. Viscous forces can apply torques, hence particles are allowed to rotate as well as translate. In addition to MD, we also perform simulations using a nonlinear conjugate gradient (CG) routine \cite{vagberg11}, which keeps the system at a local minimum of the potential energy landscape as the landscape itself changes under shearing. The dynamics are therefore quasistatic, i.e.~the particle trajectories correspond to the limit of vanishing strain rate. Bubble packings consist of $N = 128$ to $2048$ disks in a 50:50 bidisperse mixture with a 1.4:1 diameter ratio. Shear is implemented via Lees-Edwards ``sliding brick'' boundary conditions. The stress tensor is given by \begin{equation} \sigma_{\alpha\beta} = \frac{1}{2V} \sum_{ij} f_{ij,\alpha} r_{ij,\beta} - \frac{1}{V} \sum_i m_i v_{i,\alpha} v_{i,\beta} \,, \label{eqn:stress} \end{equation} where $V$ is the volume (area in two dimensions) of the packing, $\vec{f}_{ij}$ is the sum of elastic and viscous contact forces acting on particle $i$ due to particle $j$, and $\vec{v}_{i}$ is the velocity of particle $i$. Greek indices label components along the Cartesian coordinates $x$ and $y$. The confining pressure is $p = -(1/D)(\sigma_{xx} + \sigma_{yy})$, where $D = 2$ is the spatial dimension, while the shear stress is $\sigma = \sigma_{xy}$. The second term on the right-hand side of Eq.~(\ref{eqn:stress}) is a kinetic stress, which is always negligible in the parameter ranges investigated here.
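The shear component of Eq.~(\ref{eqn:stress}) can likewise be sketched in a few lines. Here `pair_terms` is assumed to list the force and branch vector for every ordered pair of interacting particles (hence the factor $1/2$); this bookkeeping convention is ours, not the simulation code itself.

```python
def shear_stress(pair_terms, velocities, masses, volume):
    """Shear stress sigma_xy from the 2D stress tensor:
    virial contact term minus kinetic term (illustrative sketch).

    pair_terms : iterable of (f_ij, r_ij) tuples, one per ordered pair ij
    velocities, masses : per-particle data for the kinetic term
    """
    virial = sum(f[0] * r[1] for f, r in pair_terms) / (2.0 * volume)
    kinetic = sum(m * v[0] * v[1]
                  for m, v in zip(masses, velocities)) / volume
    return virial - kinetic
```

For a static packing the kinetic term vanishes and only the contact virial contributes, consistent with the remark above that the kinetic stress is negligible here.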
Initial conditions are isotropic with a targeted pressure $p$, prepared using CG and ``shear stabilized'' in the sense of Dagois-Bohy et al.~\cite{dagois-bohy12}, which guarantees that the initial slope of the stress-strain curve is positive. Stresses and times are reported in dimensionless units constructed from $k_0$, $\tau_0$, and the average particle diameter. \subsection{Distance to jamming} We use the confining pressure $p$ as a measure of the distance to jamming. The excess volume fraction $\Delta \phi = \phi - \phi_c$ and excess mean contact number $\Delta z = z - z_c$, where $\phi_c$ and $z_c$ refer to the respective values at jamming, are also frequently used for this purpose \cite{vanhecke10,ohern03,katgert10b}. These three alternative order parameters are related via \begin{equation} \frac{p}{k} \sim \Delta \phi \sim \Delta z^2 \,. \label{eqn:order} \end{equation} Here $k$ should be understood as a typical value of the contact stiffness in Eq.~(\ref{eqn:stiffness}). The harmonic case ($\alpha = 2$) is straightforward because the contact stiffness is a constant. For other values of $\alpha$, however, $k$ depends on the pressure. As the typical force trivially reflects its bulk counterpart, $f \sim p$, the contact stiffness scales as $k \sim {f}/{\delta} \sim p^{(\alpha - 2)/(\alpha - 1)}$. In the following, all scaling relations will specify their dependence on $k$ and the time scale $\tau_0$. In the present work $\tau_0$ is independent of the overlap between particles (as in the viscoelastic Hertzian contact problem \cite{ramirez99}), but we include $\tau_0$ because one could imagine a damping coefficient $k \tau_0$ with more general overlap dependence than the form treated here. \begin{figure} \includegraphics[width=\columnwidth]{relaxplot.pdf} \caption{The ensemble-averaged relaxation modulus $G_r$ at pressure $p = 10^{-4.5}$ for four values of the strain amplitude $\gamma_0$.
In all four cases, $G_r$ displays an initial plateau corresponding to affine particle motion (inset a), followed by a power law decay as the particle displacements become increasingly non-affine (b). At long times the stress is fully relaxed and the final particle displacements are strongly non-affine (c). } \label{fig:stressrelax} \end{figure} \subsection{Shear modulus and the role of contact changes} In large systems the linear elastic shear modulus $G_0$ vanishes continuously with pressure, \begin{equation} G_0/k \sim (p/k)^{\mu} \,, \label{eqn:G0} \end{equation} with $\mu = 1/2$. Hence jammed solids' shear stiffness can be arbitrarily weak. The scaling of $G_0$ has been determined multiple times, both numerically \cite{ohern03,zhang05,ellenbroek06} and theoretically \cite{wyartannales,zaccone11,tighe11}; it is verified for our own packings in Fig.~\ref{fig:FS}a, as discussed in Section \ref{sec:relax}. There are two standard approaches to determining $G_0$. The first, which we employ, is to numerically impose a small shear strain and relax the packing to its new energy minimum \cite{ohern03,zhang05}. In the second approach one writes down the $DN$ equations of motion and linearizes them about a reference state, which results in a matrix equation that can be solved for the response to an infinitesimally weak shear \cite{silbert05,wyart05,ellenbroek06,zaccone11,tighe11,dagois-bohy12}. This latter approach allows access to the zero strain limit, but it is blind to the influence of contact changes. Van Deen et al.~\cite{vandeen14} verified that the two approaches agree, provided that the strain amplitude is small enough that the packing neither forms new contacts nor breaks existing ones. They further found that the typical strain at the first contact change depends on pressure and system size as \begin{equation} \gamma_{\rm cc}^{(1)}\sim \frac{(p/k)^{1/2}}{N} \,.
\label{eqn:cc} \end{equation} Similar to the findings of Schreck et al.~\cite{schreck11}, this scale vanishes in the large system limit, even at finite pressure. \section{Stress relaxation} \label{sec:relax} We will characterize mechanical response in jammed solids using stress relaxation and flow start-up tests, two standard rheometric tests. In the linear regime they are equivalent to each other and to other common tests, including creep response and oscillatory rheology, as complete knowledge of the results of one test permits calculation of the others \cite{barnes}. This equivalence breaks down once the response becomes nonlinear. We employ stress relaxation tests to access the time scale $\tau^*$ over which viscous effects are significant, and we use flow start-up tests to determine the strain scale $\gamma^\dag$ beyond which the stress-strain curve becomes nonlinear. We consider stress relaxation first. In a stress relaxation test one measures the time-dependent stress $\sigma(t,\gamma_0)$ that develops in response to a sudden shear strain with amplitude $\gamma_0$, i.e. \begin{equation} \gamma(t) = \left \lbrace \begin{array}{cl} 0 & t< 0 \\ \gamma_0 & t \ge 0 \,. \end{array} \right. \end{equation} The relaxation modulus is \begin{equation} G_r(t, \gamma_0) \equiv \frac{\sigma(t,\gamma_0)}{\gamma_0} \,. \end{equation} We determine the relaxation modulus by employing the shear protocol of Hatano \cite{hatano09}. A packing's particles and simulation cell are affinely displaced in accordance with a simple shear with amplitude $\gamma_0$. E.g.~for a simple shear in the $\hat x$-direction, the position of a particle $i$ initially at $(x_i, y_i)$ instantaneously becomes $(x_i + \gamma_0 y_i, y_i)$, while the Lees-Edwards boundary conditions are shifted by $\gamma_0 L_y$, where $L_y$ is the height of the simulation cell. Then the particles are allowed to relax to a new mechanical equilibrium while the Lees-Edwards offset is held fixed.
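The affine step of this protocol amounts to one line of arithmetic per particle; a hypothetical sketch (the subsequent relaxation to mechanical equilibrium is the expensive part and is omitted):

```python
def apply_affine_shear(positions, gamma0, Ly):
    """Instantaneous affine simple shear (x, y) -> (x + gamma0*y, y),
    together with the matching Lees-Edwards image offset gamma0*Ly,
    which is then held fixed while the packing relaxes.  Sketch only."""
    sheared = [(x + gamma0 * y, y) for (x, y) in positions]
    offset = gamma0 * Ly        # shift of the periodic images
    return sheared, offset
```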
The main panel of Fig.~\ref{fig:stressrelax} illustrates four relaxation moduli of a single packing equilibrated at pressure $p = 10^{-4.5}$ and then sheared with strain amplitudes varying over three decades. All four undergo a relaxation from an initial plateau at short times to a final, lower plateau at long times. The character of the particle motions changes as relaxation progresses in time. While the particle motions immediately after the deformation are affine (Fig.~\ref{fig:stressrelax}a), they become increasingly non-affine as the stresses relax to a new static equilibrium (Fig.~\ref{fig:stressrelax}b,c). This non-affine motion is a consequence of slowly relaxing eigenmodes of the packing that become increasingly abundant on approach to jamming \cite{tighe11}. These modes favor sliding motion between contacting particles \cite{ellenbroek06}, reminiscent of zero energy floppy modes \cite{alexander}, and play an important role in theoretical descriptions of mechanical response near jamming \cite{wyart05,wyartannales,maloney06b,zaccone11,tighe11}. For sufficiently small strain amplitudes, linear response is obtained and any dependence of the relaxation modulus on $\gamma_0$ is sub-dominant. The near-perfect overlap of the moduli for the two smaller strain amplitudes in Fig.~\ref{fig:stressrelax} indicates that they reside in the linear regime. The long-time plateau is then equal to the linear elastic modulus $G_0$. In practice there is a crossover time scale $\tau^*$ such that for longer times $t \gg \tau^*$ viscous damping is negligible and the relaxation modulus is well approximated by its asymptote, $G_r \simeq G_0$. For the data in Fig.~\ref{fig:stressrelax} the crossover time is $\tau^* \approx 10^{4}\tau_0$. In the following Section we will determine the scaling of $\tau^*$ with pressure.
\subsection{Scaling in the relaxation modulus} \label{sec:scaling} \begin{figure} \includegraphics[width=1.0\columnwidth]{finitesize.pdf} \caption{(a) Finite size scaling collapse of the linear shear modulus $G_0$ in harmonic packings with exponent $\mu = 1/2$. (b) Finite size scaling collapse of the relaxation time $\tau^*$ with exponent $\lambda \approx 1.13$. (c) The relaxation modulus $G_r$ collapses to a master curve when $G_r$ and $t$ are rescaled with $G_0$ and $\tau^*$, respectively, as determined in (a) and (b). At short times the master curve decays as a power law with exponent $\theta = \mu/\lambda \approx 0.44$ (dashed line).} \label{fig:FS} \end{figure} We now characterize stress relaxation in linear response by measuring the relaxation modulus, averaged over ensembles of packings prepared at varying pressure. We will show that $G_r$ collapses to a critical scaling function governed by the distance to the jamming point, consistent with recent theoretical predictions by Tighe \cite{tighe11}. Our main focus is on numerically measuring the time scale beyond which viscous effects fade and the response becomes quasistatic, which is predicted to scale as $\tau^* \sim {k \tau_0}/{p}$. We showed in Fig.~\ref{fig:stressrelax} that a packing relaxes in three stages. The short-time plateau is trivial, in the sense that viscous forces prevent the particles from relaxing at rates faster than $1/\tau_0$; hence particles have not had time to depart significantly from the imposed affine deformation and the relaxation modulus reflects the contact stiffness, $G_r \sim k$. We therefore focus hereafter on the response on time scales $t \gg \tau_0$. To demonstrate dynamic critical scaling in $G_r$, we first determine the scaling of its long-time asymptote $G_0$. We then identify the time scale $\tau^*$ on which $G_r$ significantly deviates from $G_0$.
Finally, we show that rescaling with these two parameters collapses the relaxation moduli for a range of pressures to a single master curve. While we address variations with strain in subsequent Sections, the strain amplitude here is fixed to a value $\gamma_0 = 10^{-5.5}$. We have verified that this strain amplitude is in the linear regime for all of the data presented in this Section. As noted above, at long times the relaxation modulus approaches the linear quasistatic modulus, $G_r(t \rightarrow \infty) \simeq G_0$. We verify the scaling for $G_0$ from Eq.~(\ref{eqn:G0}) in our harmonic packings by repeating the finite size scaling analysis of Goodrich et al.~\cite{goodrich12}, who showed that finite size effects become important when a packing has $O(1)$ contacts in excess of isostaticity, or equivalently when $p/k \sim 1/N^2$ -- cf.~Eq.~(\ref{eqn:order}). Consistent with their results, we find that ${\cal G} \equiv G_0 N^{2\mu}$ for varying $N$ and $p$ collapses to a master curve when plotted versus $x \equiv pN^2$, as shown in Fig.~\ref{fig:FS}a. The scaling of Eq.~(\ref{eqn:G0}) is verified by this data collapse together with the requirement for the modulus to be an intensive property of large systems. To see this, note that $G_0$ is intensive only if ${\cal G} \sim x^{\mu}$ for large $x$. Again referring to Fig.~\ref{fig:stressrelax}, there is clearly some time scale $\tau^*$ such that for $t < \tau^*$ the relaxation modulus deviates significantly from the quasistatic modulus. To determine the scaling of $\tau^*$ with $p$, we perform the finite size scaling analysis presented in Fig.~\ref{fig:FS}b. The relaxation time is determined from the point where $G_r$, averaged over an ensemble of at least 100 packings per condition, has decayed to within a fraction $\Delta$ of its final value, $G_r(t = \tau^*) = (1+\Delta)G_0$. We present data for $\Delta = 1/e$; similar scaling is found for a range of $\Delta$ \cite{dagois-bohy14}.
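The criterion $G_r(\tau^*) = (1+\Delta)G_0$ is easy to evaluate on a sampled relaxation curve. The following minimal Python sketch assumes a monotonically decaying, discretely sampled $G_r(t)$; the function name is ours.

```python
import numpy as np

def relaxation_time(t, Gr, G0, delta=np.exp(-1.0)):
    """Last sampled time at which Gr still exceeds (1 + delta) * G0.

    Implements the criterion G_r(tau*) = (1 + delta) * G0 used to
    locate the crossover from power-law decay to the quasistatic
    plateau G_r ~ G0.
    """
    above = np.asarray(Gr) > (1.0 + delta) * G0
    if not above.any():
        return t[0]  # curve is already at its plateau
    return t[np.nonzero(above)[0][-1]]
```

In practice one would interpolate between the bracketing samples; taking the last sample above threshold suffices for a logarithmically sampled curve.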
We require the rescaled pressure to remain $x = pN^2$ and collapse the data by rescaling the relaxation time as $\tau^* / N^{2\lambda}$ for a positive exponent $\lambda$. It follows that $\tau^*$ diverges in large systems near jamming as \begin{equation} {\tau^*} \sim \left(\frac{k}{p}\right)^\lambda \tau_0 \,\,\,{\rm as}\,\,\, N \rightarrow \infty \,. \end{equation} We find the best data collapse for $\lambda = 1.13$, close to but somewhat higher than the value $\lambda = 1$ predicted by theory \cite{tighe11}, although our current numerical results do not exclude this possibility. We now use the linear quasistatic modulus $G_0$ and the characteristic time scale $\tau^*$ to collapse the relaxation modulus to a master curve ${\cal R}(s)$. Fig.~\ref{fig:FS}c plots $ {\cal R} \equiv G_r/G_0$ versus $s \equiv t/\tau^*$ for a range of pressures and system sizes; data from the trivial affine regime at times $t < 10\tau_0$ have been excluded. The resulting data collapse is excellent, and the master curve it reveals has two scaling regimes: ${\cal R} \simeq 1$ for $s \gg 1$, and ${\cal R} \sim s^{-\theta}$ for $s \ll 1$. The plateau at large $s$ occurs by construction and corresponds to the quasistatic scaling $G_r \simeq G_0$. The power law relaxation at shorter times corresponds to $G_r \sim G_0(t/\tau^*)^{-\theta}$ for some exponent $\theta$. By considering a marginal solid prepared at the jamming point, one finds that the prefactor of $t^{-\theta}$ cannot depend on the pressure. Invoking the pressure scaling of $G_0$ and $\tau^*$ in the large $N$ limit, identified above, we conclude that $\theta = \mu/\lambda$. Hence in large systems the relaxation modulus scales as \begin{equation} \frac{G_r(t)}{k} \sim \left \lbrace \begin{array}{cc} \left({\tau_0}/{t} \right)^{\theta} & 1 \ll t/\tau_0 \ll ({k}/p)^\lambda \\ (p/k)^{\mu} & ({k}/p)^\lambda \ll t/\tau_0 \,. \end{array} \right. 
\label{eqn:Gr} \end{equation} with $\mu = 1/2$, $\lambda \approx 1.13$, and $\theta = \mu/\lambda \approx 0.44$. Anomalous stress relaxation with exponent $\theta \approx 1/2$ was first observed in simulations below jamming \cite{hatano09} and is also found in disordered spring networks \cite{tighe12,sheinman12}. It is related via Fourier transform to the anomalous scaling of the frequency dependent complex shear modulus $G^* \sim (\imath \omega)^{1-\theta}$ found in viscoelastic solids near jamming \cite{tighe11}. We revisit the scaling relation of Eq.~(\ref{eqn:Gr}) in Section \ref{sec:rate}. \section{Finite strain} \label{sec:QS} \begin{figure} \includegraphics[width=\columnwidth]{highstrain.pdf} \caption{Averaged stress-strain curves under quasistatic shear at varying pressure $p$. Solid and dashed curves were calculated using different strain protocols. Dashed curves: fixed strain steps of $10^{-3}$, sheared to a final strain of unity. Solid curves: logarithmically increasing strain steps, beginning at $10^{-9}$ and reaching a total strain of $10^{-2}$ after 600 steps. } \label{fig:plasticflow} \end{figure} When does linear elasticity break down under increasing strain, and what lies beyond? To answer these questions, we now probe shear response at finite strain using flow start-up tests. \subsection{Flow start-up} In a flow start-up test, strain-controlled boundary conditions are used to ``turn on'' a flow with constant strain rate $\dot \gamma_0$ at time $t = 0$, i.e. \begin{equation} \gamma(t) = \left \lbrace \begin{array}{cl} 0 & t< 0 \\ \dot\gamma_0 t & t \ge 0 \end{array} \right. \label{eqn:startup} \end{equation} To implement flow start-up in MD, at time $t =0$ a packing's particles and simulation cell are instantaneously assigned an affine velocity profile $\vec v_i = (\dot \gamma_0 \, y_i,0)^T$ in accordance with a simple shear with strain rate $\dot \gamma_0$; the Lees-Edwards images of the simulation cell are assigned a commensurate velocity. 
Then the particles are allowed to evolve according to Newton's laws while the Lees-Edwards boundary conditions maintain constant velocity, so that the total strain $\gamma(t)$ grows linearly in time. We also perform quasistatic shear simulations using nonlinear conjugate gradient (CG) minimization to realize the limit of vanishing strain rate. Particle positions are evolved by giving the Lees-Edwards boundary conditions a series of small strain increments and equilibrating to a new minimum of the elastic potential energy. The stress $\sigma$ is then reported as a function of the accumulated strain. For some runs we use a variable step size in order to more accurately determine the response at small strain. Fig.~\ref{fig:window} illustrates the output of both the finite strain rate and quasistatic protocols. \subsection{Quasistatic stress-strain curves} To avoid complications due to rate-dependence, we consider the limit of vanishing strain rate first. Fig.~\ref{fig:plasticflow} plots the ensemble-averaged stress-strain curve $\sigma(\gamma)$ for harmonic packings at varying pressure. Packings contain $N = 1024$ particles, and each data point is averaged over at least 600 configurations. Several features of the stress-strain curves stand out. First, there is indeed a window of initially linear growth. Second, beyond a strain of approximately 5--10\% the system achieves steady plastic flow and the stress-strain curve is flat. Finally, the end of linear elasticity and the beginning of steady plastic flow do not generally coincide; instead there is an interval in which the stress-strain curve has a complex nonlinear form. We shall refer to the end of the linear elastic regime as ``softening'' because the stress initially dips {\em below} the extrapolation of Hooke's law. (In the plasticity literature the same phenomenon would be denoted ``strain hardening''.) Moreover, for sufficiently low pressures there is a strain interval over which the stress increases faster than linearly.
This surprising behavior is worthy of further attention, but the focus of the present work will be on the end of linear elasticity and the onset of softening. This occurs on a strain scale $\gamma^\dag$ that clearly depends on pressure. \subsection{Onset of softening} \begin{figure} \includegraphics[width=\columnwidth]{xover_harm.pdf} \caption{(main panel) Data from Fig.~\ref{fig:plasticflow}, expressed as a dimensionless effective shear modulus $\sigma/G_0 \gamma $ and plotted versus the rescaled strain $\gamma/p$. (inset) The crossover strain $\gamma^\dag$ where the effective shear modulus has decayed by an amount $\Delta$ in a system of $N = 1024$ particles. } \label{fig:softening} \end{figure} We now determine the pressure and system size dependence of the softening (or nonlinear) strain scale $\gamma^\dag$. Fig.~\ref{fig:softening} replots the quasistatic shear data from Fig.~\ref{fig:plasticflow} (solid curves), now with the linear elastic trend $G_0 \gamma$ scaled out. The rescaling collapses data for varying pressures in the linear regime and renders that regime flat. The strain axis in Fig.~\ref{fig:softening} is also rescaled with the pressure, a choice that will be justified below. The onset of softening occurs near unity in the rescaled strain coordinate for all pressures, which suggests that $\gamma^\dag$ scales linearly with $p$ in harmonic packings ($\alpha = 2$). Unlike the linear relaxation modulus in Fig.~\ref{fig:FS}c, the quasistatic shear data in Fig.~\ref{fig:softening} do not collapse to a master curve; instead the slope immediately after softening steepens (in a log-log plot) as the pressure decreases. As a result, it is not possible to unambiguously identify a power law $\gamma^\dag \sim p^\nu$ relating the crossover strain to the pressure. To clarify this point, the inset of Fig.~\ref{fig:softening} plots the strain where $\sigma/G_0 \gamma$ has decayed by an amount $\Delta$ from its plateau value, denoted $\gamma^\dag(\Delta)$.
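Extracting $\gamma^\dag(\Delta)$ from a sampled stress-strain curve can be sketched as follows. This illustrative Python snippet assumes the first sampled strain lies in the linear plateau; it is not the analysis code used for the figures.

```python
import numpy as np

def softening_strain(gamma, sigma, G0, delta=0.2):
    """First strain at which sigma / (G0 * gamma) has decayed by delta
    below its small-strain plateau value."""
    gamma = np.asarray(gamma, dtype=float)
    g_eff = np.asarray(sigma, dtype=float) / (G0 * gamma)
    plateau = g_eff[0]  # assumes the first point is in the linear regime
    below = g_eff < plateau - delta
    if not below.any():
        return gamma[-1]  # no softening within the sampled window
    return gamma[np.argmax(below)]  # first strain below the threshold
```

Repeating this for several values of $\Delta$ gives the family of crossover strains whose pressure dependence is fit in the inset.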
This strain scale is indeed approximately linear in the pressure $p$ (dashed curves), but a power law fit gives an exponent $\nu$ in the range 0.87 to 1.06, depending on the value of $\Delta$. Bearing the above subtlety in mind, we nevertheless conclude that an effective power law with $\nu = 1$ provides a reasonable description of the softening strain. Section \ref{sec:scaling} presents further evidence to support this conclusion. \subsection{Hertzian packings} In the previous section the pressure-dependence of $\gamma^\dag$ was determined for harmonic packings. We now generalize this result to other pair potentials, with numerical verification for the case of Hertzian packings ($\alpha = 5/2$). Recall that the natural units of stress are set by the contact stiffness $k$, which itself varies with pressure when $\alpha \neq 2$. Based on the linear scaling of $\gamma^\dag$ in harmonic packings, we anticipate \begin{equation} \gamma^\dag \sim \frac{p}{k} \sim p^{1/(\alpha - 1)} \,, \label{eqn:gdag} \end{equation} which becomes $\gamma^\dag \sim p^{2/3}$ in the Hertzian case. To test this relation, we repeat the analysis of the preceding Section; results are shown in Fig.~\ref{fig:hertzian}. We again find a finite linear elastic window that gives way to softening. Softening onset can again be described with a $\Delta$-dependent exponent (see inset). Its value has a narrow spread about $2/3$; power law fits give slopes between 0.63 and 0.74. \begin{figure} \includegraphics[width=\columnwidth]{xover_hertz.pdf} \caption{(main panel) The dimensionless shear modulus of quasistatically sheared Hertzian packings plotted versus the rescaled strain $\gamma/p^{2/3}$. (inset) Pressure-dependence of the crossover strain $\gamma^\dag$. 
} \label{fig:hertzian} \end{figure} \subsection{Relating softening and contact changes} \label{sec:cc} \begin{figure} \includegraphics[width=\columnwidth]{contacts_scale.pdf} \caption{The contact change density shown for (a) varying system size and (b) varying pressure. (c) Data collapse for pressures $p = 10^{-2} \ldots 10^{-5}$ in half decade steps and system sizes $N = 128 \ldots 1024$ in multiples of 2. Dashed lines indicate slopes of 1 and 1/2. } \label{fig:Ncc} \end{figure} Why does the linear elastic window close when it does? We now seek to relate softening with contact changes on the particle scale \cite{schreck11,vandeen14,knowlton14,keim14,keim15,kawasaki15}. Specifically, we identify a correlation between the softening strain $\gamma^\dag$, the cumulative number of contact changes, and the distance to the isostatic contact number $z_c$. In so doing we will answer the question first posed by Schreck and co-workers \cite{schreck11}, who asked how many contact changes a packing can accumulate while still displaying linear elastic response. We begin by investigating the ensemble-averaged contact change density $n_{\rm cc}(\gamma) \equiv [N_{\rm make}(\gamma) + N_{\rm break}(\gamma)]/N$, where $N_{\rm make}$ and $N_{\rm break}$ are the number of made and broken contacts, respectively, accumulated during a strain $\gamma$. Contact changes are identified by comparing the contact network at strain $\gamma$ to the network at zero strain. In Fig.~\ref{fig:Ncc}a we plot $n_{\rm cc}$ for packings of harmonic particles at pressure $p = 10^{-4}$ and varying system size. The data collapse to a single curve, indicating that $n_{\rm cc}$ is indeed an intensive quantity. The effect of varying pressure is shown in Fig.~\ref{fig:Ncc}b. There are two qualitatively distinct regimes in $n_{\rm cc}$, with a crossover governed by pressure. To better understand these features, we seek to collapse the $n_{\rm cc}$ data to a master curve. 
By plotting ${\cal N} \equiv n_{\rm cc}/p^{\tau}$ versus $y \equiv \gamma/p$, we obtain excellent collapse for $\tau = 1/2$, as shown in Fig.~\ref{fig:Ncc}c for the same pressures as in Fig.~\ref{fig:Ncc}b and system sizes $N = 128 \ldots 1024$. The scaling function grows as ${\cal N} \sim y$ for small $y$, while ${\cal N} \sim y^\tau$ for $y \gtrsim 1$. The rescaled strain $y$ provides further evidence for a crossover scale $\gamma^\dag \sim p/k$, now apparent at the microscale. Moreover, the fact that data for varying system sizes all collapse to the same master curve is an important indicator that $\gamma^\dag$ is an intensive strain scale that remains finite in the large system size limit. The scaling collapse in Fig.~\ref{fig:Ncc}c generalizes the results of Van Deen et al.~\cite{vandeen14}, who determined the strain scale $\gamma_{\rm cc}^{(1)} \sim (p/k)^{1/2}/N$ associated with the first contact change. To see this, note that the inverse slope $({\rm d}\gamma/{\rm d}n_{\rm cc})/N$ represents the average strain interval between contact changes at a given strain. Hence the initial slope of $n_{\rm cc}$ is fixed by $\gamma_{\rm cc}^{(1)}$: \begin{equation} n_{\rm cc}(\gamma) \simeq \frac{1}{N} \left(\frac{\gamma}{\gamma_{\rm cc}^{(1)}} \right) \,\,\,\,\,\, {\rm as } \,\,\,\,\,\, \gamma \rightarrow 0 \,. \label{eqn:ncc} \end{equation} From Fig.~\ref{fig:Ncc} it is apparent that $n_{\rm cc}$ remains linear in $\gamma$ up to the crossover strain $\gamma^\dag$. We conclude that $\gamma_{\rm cc}^{(1)}$ describes the strain between successive contact changes over the entire interval $0 \le \gamma < \gamma^\dag$. In the softening regime the strain between contact changes increases; the contact change density scales as $n_{\rm cc} \sim \gamma^{1/2}$ (see Fig.~\ref{fig:Ncc}c). Let us now re-interpret the softening crossover strain $\gamma^\dag \sim \Delta z^2$ (cf.~Eq.~(\ref{eqn:order})) in terms of the coordination of the contact network.
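Before doing so, we note that the contact change density used throughout this Section is straightforward to compute. The sketch below assumes each contact network is stored as a set of unordered vertex pairs; it is an illustration, not the simulation code.

```python
def contact_change_density(contacts0, contacts_gamma, N):
    """Cumulative contact change density n_cc = (N_make + N_break) / N.

    Made and broken contacts are counted by comparing the network at
    strain gamma to the network at zero strain, as in the text.
    """
    made = len(contacts_gamma - contacts0)    # contacts absent at zero strain
    broken = len(contacts0 - contacts_gamma)  # contacts lost under strain
    return (made + broken) / N
```

Because both made and broken contacts are counted, $n_{\rm cc}$ can exceed the net change in coordination, which is why a packing at the softening crossover need not be isostatic.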
We recall that $\Delta z = z - z_c$ is the difference between the initial contact number $z$ and the isostatic value $z_c$, which corresponds to the minimum number of contacts per particle needed for rigidity. The excess coordination $\Delta z$ is therefore an important characterization of the contact network. The contact change density at the softening crossover, $n_{\rm cc}^\dag$, can be related to $\Delta z$ via Eq.~(\ref{eqn:ncc}), while making use of Eq.~(\ref{eqn:order}), \begin{equation} n_{\rm cc}^\dag \equiv n_{\rm cc}(\gamma^\dag) \sim \Delta z\,. \end{equation} Hence we have empirically identified a topological criterion for the onset of softening: an initially isotropic packing softens when it has undergone an extensive number of contact changes that is comparable to the number of contacts it initially had in excess of isostaticity. (This does not mean the packing is isostatic at the softening crossover, as $n_{\rm cc}$ counts both made and broken contacts.) \subsection{Rate-dependence} \label{sec:rate} To this point we have considered nonlinear response exclusively in the limit of quasistatic shearing. A material accumulates strain quasistatically when the imposed strain rate is slower than the longest relaxation time in the system. Because relaxation times near jamming are long and deformations in the lab always occur at finite rate, we can anticipate that quasistatic response is difficult to achieve and that rate-dependence generically plays a significant role. Hence it is important to consider shear at finite strain and finite strain rate. We now consider flow start-up experiments in which a finite strain rate $\dot \gamma_0$ is imposed at time $t = 0$, cf.~Eq.~(\ref{eqn:startup}). \begin{figure} \includegraphics[width=\columnwidth]{srate_p4.pdf} \caption{The effective shear modulus during flow start-up for packings of $N = 1024$ particles at pressure $p = 10^{-4}$, plotted versus strain for varying strain rates $\dot \gamma_0$. 
(inset) The same data collapses for early times when plotted versus $t$, decaying as a power law with exponent $\theta = \mu/\lambda \approx 0.44$ (dashed line). } \label{fig:flowstartup} \end{figure} Fig.~\ref{fig:flowstartup} displays the mechanical response to flow start-up for varying strain rates. To facilitate comparison with the quasistatic data of the previous section, flow start-up data are plotted in terms of the dimensionless quantity $ \sigma(t;\dot \gamma_0)/G_0 \gamma$, which we shall refer to as the effective shear modulus. The data are for systems of $N = 1024$ particles, averaged over an ensemble of around 100 realizations each. Here we plot data for the pressure $p = 10^{-4}$; results are qualitatively similar for other pressures. For comparison, we also plot the result of quasistatic shear (solid circles) applied to the same ensemble of packings. Packings sheared sufficiently slowly follow the quasistatic curve; see e.g.~data for $\dot \gamma_0 = 10^{-11}$. For smaller strains, however, the effective shear modulus is stiffer than the quasistatic curve and decays as $\sigma/\gamma \sim t^{-\theta}$ (see inset). This is rate-dependence: for a given strain amplitude, the modulus increases with increasing strain rate. Correspondingly, the characteristic strain $\gamma^*$ where curves in the main panel of Fig.~\ref{fig:flowstartup} reach the linear elastic plateau ($\sigma/G_0 \gamma \approx 1$) grows with $\dot \gamma_0$. For sufficiently high strain rates there is no linear elastic plateau; for the data in Fig.~\ref{fig:flowstartup} this occurs for $\dot \gamma_0 \approx 10^{-8}$. Hence there is a characteristic strain rate, $\dot \gamma^\dag$, beyond which the linear elastic window has closed: packings sheared faster than $\dot \gamma^\dag$ are always rate-dependent and/or strain softening. To understand the rate-dependent response at small strains, we revisit the relaxation modulus determined in Section \ref{sec:relax}. 
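The early-time decay $\sigma/\gamma \sim t^{-\theta}$ seen in the inset can be reproduced by time-averaging a sampled relaxation modulus, since in linear response the start-up modulus is the running time average of $G_r$. The Python sketch below is illustrative only; the function name and grid are our own.

```python
import numpy as np

def startup_modulus(t_grid, Gr, t):
    """Effective modulus sigma/gamma at elapsed time t after flow
    start-up: the running time average (1/t) * integral_0^t Gr dt',
    evaluated by the trapezoid rule on the sampled curve."""
    t_grid = np.asarray(t_grid, dtype=float)
    gg = np.asarray(Gr, dtype=float)
    mask = t_grid <= t
    tt, gg = t_grid[mask], gg[mask]
    integral = np.sum(0.5 * (gg[1:] + gg[:-1]) * np.diff(tt))
    return integral / t
```

For a power-law modulus $G_r \sim t^{-\theta}$ with $0 < \theta < 1$, this average preserves the exponent, consistent with the $t^{-\theta}$ decay of the effective modulus at early times.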
In linear response the stress after flow start-up depends only on the elapsed time $t = \gamma / \dot \gamma_0$, \begin{equation} \frac{\sigma}{ \gamma} = \frac{1}{t} \, \int_0^{t} G_r(t') \, {\rm d}t' \,. \end{equation} Employing the scaling relations of Eq.~(\ref{eqn:Gr}), one finds \begin{equation} \frac{\sigma}{ \gamma} \sim k \left(\frac{ \tau_0}{t}\right)^{\theta}, \,\,\,\,\,\,\,\,\,\,\,\, \tau_0 < t < \tau^* \,, \end{equation} as verified in Fig.~\ref{fig:flowstartup} (inset). Linear elasticity ${\sigma}/{ \gamma} \simeq G_0$ is only established at longer times, when $\gamma > \dot \gamma_0 \tau^* \sim ({k}/{p})^\lambda\,\dot \gamma_0 \tau_0$. Hence the relaxation time $\tau^*$ plays an important role: it governs the crossover from rate-dependent to quasistatic linear response. The system requires a time $\tau^*$ to relax after a perturbation. When it is driven at a faster rate, it cannot relax fully and hence its response depends on the driving rate. We can now identify the characteristic strain rate $\dot \gamma^\dag$ where the linear elastic window closes. This rate is reached when the bound on quasistaticity, $\gamma > \dot \gamma_0 \tau^*$, collides with the bound on linearity, $\gamma < \gamma^\dag$, giving \begin{equation} \dot \gamma^\dag \sim \frac{(p/k)^{1+\lambda}}{\tau_0} \,, \end{equation} with $1+\lambda \approx 2.1$. This strain rate vanishes rapidly near jamming, and packings must be sheared increasingly slowly to observe a stress-strain curve that obeys Hooke's law. As a practical consequence, experiments near jamming are unlikely to access the linear elastic regime. \begin{figure} \includegraphics[width= 0.7\columnwidth]{cartoon.pdf} \caption{In a flow start-up test, quasistatic linear response ($G \approx G_0$) occupies a strain window $\gamma^* < \gamma < \gamma^\dag$ (shaded regions). For smaller strains the response is rate-dependent, with a crossover strain $\gamma^*$ that depends on both pressure and strain rate. 
Softening sets in for higher strains, with a crossover $\gamma^\dag$ that depends only on the pressure. The intersection of the rate-dependent and softening crossovers defines a strain rate $\dot \gamma^\dag$ above which there is no quasistatic linear response, i.e.~the shaded region closes. } \label{fig:regimes} \end{figure} \section{Discussion} Using a combination of stress relaxation and flow start-up experiments, we have shown that soft solids near jamming are easily driven out of the linear elastic regime. There is, however, a narrow linear elastic window that survives the accumulation of an extensive number of contact changes. This window is bounded from below by rate-dependent viscoelasticity and bounded from above by the onset of strain softening. Close to the transition these two bounds collide and the linear elastic window closes. Finally, weakly jammed materials are generally rate-dependent and/or strain softening on scales relevant to the laboratory, because the strains and strain rates bounding the linear elastic window vanish rapidly near jamming. Fig.~\ref{fig:regimes} provides a qualitative summary of our results. While our simulations are in two dimensions, we expect the scaling relations we have identified to hold for $D>2$. To the best of our knowledge, all scaling exponents near jamming that have been measured in both 2D and 3D are the same. There is also numerical evidence that $D = 2$ is the transition's upper critical dimension \cite{goodrich12,goodrich14}. Our work provides a bridge between linear elasticity near jamming, viscoelasticity at finite strain rate, and nonlinearity at finite strain amplitude. The measured relaxation modulus $G_r$ is in good agreement with the linear viscoelasticity predicted by Tighe \cite{tighe11}. Consistent with the granular experiments of Coulais et al., we identify a crossover to nonlinear strain softening. 
Their crossover scales differently with the distance to jamming, possibly due to the presence of static friction. The emulsions of Knowlton et al.~also soften \cite{knowlton14}. They display a crossover strain that is roughly linear in $\Delta \phi$, consistent with both our $\gamma^\dag$ and the results of Otsuki and Hayakawa \cite{otsuki14}, who simulated large amplitude oscillatory shear at finite frequency. The agreement between the crossover strains in our quasistatic simulations and the oscillatory shear simulations of Ref.~\cite{otsuki14} is surprising, as most of their results are for frequencies higher than $\dot \gamma^\dag$, where viscous stresses dominate. There are also qualitative differences between the quasistatic shear modulus, which cannot be collapsed to a master curve (Fig.~\ref{fig:softening}), and the storage modulus in oscillatory shear, which can \cite{otsuki14,dagois-bohy14}. We speculate that there are corresponding microstructural differences between packings in steady state and transient shear \cite{regev13}, similar to those which produce memory effects \cite{keim13}. Soft sphere packings near jamming approach the isostatic state, which also governs the rigidity of closely related materials such as biopolymer and fiber networks \cite{heussinger06,heussinger06b,broedersz11,das12}. It is therefore remarkable to note that, whereas sphere packings soften under strain, quasistatically sheared amorphous networks are strain stiffening beyond a crossover strain that scales as $\Delta z$ \cite{wyart08}, which vanishes more slowly than $\gamma^\dag \sim \Delta z^2$ in packings. Hence nonlinearity sets in later and with opposite effect in networks \cite{tighe14}. We expect that this difference is attributable to contact changes, which are absent or controlled by slow binding/unbinding processes in networks. 
We have demonstrated that the onset of softening occurs when the system has accumulated a finite number of contact changes correlated with the system's initial distance from the isostatic state. This establishes an important link between microscopic and bulk response. Yet further work investigating the relationship between microscopic irreversibility, softening, and yielding is needed. The inter-cycle diffusivity in oscillatory shear, for example, jumps at yielding \cite{knowlton14,kawasaki15}, but its pressure dependence has not been studied. Shear reversal tests could also provide insight into the connection between jamming and plasticity. While the onset of softening can be probed with quasistatic simulation methods, rate dependent effects such as the strain scale $\gamma^*$ should be sensitive to the manner in which energy is dissipated. The dissipative contact forces considered here are most appropriate as a model for foams and emulsions. Hence useful extensions to the present work might consider systems with, e.g., lubrication forces or a thermostat. \section{Acknowledgments} We thank P.~Boukany, D.~J.~Koeze, M.~van Hecke, and S.~Vasudevan for valuable discussions. JB, DV and BPT were supported by the Dutch Organization for Scientific Research (NWO). ES was supported by the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences. This work was carried out on the Dutch national e-infrastructure with the support of SURF Cooperative.
\section{Introduction} The scattering transform is a wavelet-based model of convolutional neural networks (CNNs), introduced for signals defined on $\mathbb{R}^n$ by S. Mallat in \cite{mallat:scattering2012}. Like the front end of a CNN, the scattering transform produces a representation of an inputted signal through an alternating cascade of filter convolutions and pointwise nonlinearities. It differs from CNNs in two respects: i) it uses predesigned, wavelet filters rather than filters learned through training data, and ii) it uses the complex modulus $|\cdot|$ as its nonlinear activation function rather than more common choices such as the rectified linear unit (ReLU). These differences lead to a network which provably has desirable mathematical properties. In particular, the Euclidean scattering transform is: i) nonexpansive on $\mathbf{L}^2(\mathbb{R}^n)$, ii) invariant to translations up to a certain scale parameter, and iii) stable to certain diffeomorphisms. In addition to these theoretical properties, the scattering transform has also been used to achieve very good numerical results in fields such as audio processing \cite{anden:scatAudioClass2011}, medical signal processing \cite{talmon:scatManifoldHeart2014}, computer vision \cite{oyallon2015deep}, and quantum chemistry~\cite{hirn:waveletScatQuantum2016}. While CNNs have proven tremendously effective for a wide variety of machine learning tasks, they typically assume that inputted data has a Euclidean structure. For instance, an image is naturally modeled as a function on $\mathbb{R}^2.$ However, many data sets of interest such as social networks, molecules, or surfaces have an intrinsically non-Euclidean structure and are naturally modeled as graphs or manifolds. This has motivated the rise of geometric deep learning, a field which aims to generalize deep learning methods to non-Euclidean settings. 
In particular, a number of papers have produced versions of the scattering transform for graph \cite{gama:stabilityGraphScat2019,gama:diffScatGraphs2018,gao:graphScat2018,zou:graphScatGAN2019} and manifold \cite{perlmutter:geoScatCompactManifold2019} structured data. These constructions seek to provide a mathematical model of geometric deep learning architectures such as graph neural networks, in a manner analogous to the way that the Euclidean scattering transform models CNNs. In this paper, we will construct two new families of wavelet transforms on a graph $G$ from asymmetric matrices $\mathbf{K}$ and provide a theoretical analysis of both of these wavelet transforms, as well as of the windowed and non-windowed scattering transforms constructed from them. Because the matrices $\mathbf{K}$ are in general not symmetric, our wavelet transforms will not be nonexpansive frame analysis operators on the standard inner product space $\mathbf{L}^2(G).$ Instead, they will be nonexpansive on a certain weighted inner product space $\mathbf{L}^2(G,\mathbf{M}),$ where $\mathbf{M}$ is an invertible matrix. In important special cases, our matrix $\mathbf{K}$ will be either the lazy random walk matrix $\mathbf{P},$ its transpose $\mathbf{P}^T,$ or its symmetric counterpart given by $\mathbf{T}=\mathbf{D}^{-1/2}\mathbf{P}\mathbf{D}^{1/2}.$ In these cases, $\mathbf{L}^2(G,\mathbf{M})$ is a weighted $\mathbf{L}^2$ space with weights depending on the geometry of $G.$ We will use these wavelets to construct windowed and non-windowed versions of the scattering transform on $G.$ The windowed scattering transform inputs a signal $\mathbf{x}\in\mathbf{L}^2(G,\mathbf{M})$ and outputs a sequence of functions which we refer to as the scattering coefficients. The non-windowed scattering transform replaces the low-pass matrix used in the definition of the windowed scattering transform with an averaging operator $\bm{\mu}$ and instead outputs a sequence of scalar-valued coefficients. 
It can be viewed as the limit of the windowed scattering transform as the scale of the low-pass tends to infinity (evaluated at some fixed coordinate $0\leq i \leq n-1$). Analogously to the Euclidean scattering transform, we will show that the windowed graph scattering transform is: i) nonexpansive on $\mathbf{L}^2(G,\mathbf{M}),$ ii) invariant to permutations of the vertices, up to a factor depending on the scale of the low-pass (for certain choices of $\mathbf{K}$), and iii) stable to graph perturbations. Similarly, we will show that the non-windowed scattering transform is: i) Lipschitz continuous on $\mathbf{L}^2(G,\mathbf{M}),$ ii) fully invariant to permutations, and iii) stable to graph perturbations. \subsection{Notation and Preliminaries}\label{sec: notation} Let $G=(V,E,W)$ be a weighted, connected graph consisting of vertices $V$, edges $E$, and weights $W$, with $|V|=n$ the number of vertices. If $\mathbf{x}=(\mathbf{x}(0),\ldots,\mathbf{x}(n-1))^T$ is a signal in $\mathbf{L}^2(G),$ we will identify $\mathbf{x}$ with the corresponding point in $\mathbb{R}^n,$ so that if $\mathbf{B}$ is an $n\times n$ matrix, the multiplication $\mathbf{B}\mathbf{x}$ is well defined. 
Let $\mathbf{A}$ denote the {\it{weighted}} adjacency matrix of $G$, let $\mathbf{d}=(\mathbf{d}(0),\ldots,\mathbf{d}(n-1))^T$ be the corresponding weighted degree vector, and let $\mathbf{D}=\text{diag}(\mathbf{d}).$ We will let \begin{equation*} \mathbf{N}\coloneqq\mathbf{I}-\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2} \end{equation*} be the normalized graph Laplacian, let $0\leq\omega_0\leq\omega_1\leq \ldots\leq \omega_{n-1} \leq 2$ denote the eigenvalues of $\mathbf{N},$ and let $\mathbf{v}_0,\ldots,\mathbf{v}_{n-1}$ be an orthonormal eigenbasis for $\mathbf{L}^2(G)$ with $\mathbf{N}\mathbf{v}_i=\omega_i \mathbf{v}_i.$ $\mathbf{N}$ may be factored as \begin{equation*} \mathbf{N}=\mathbf{V}\Omega\mathbf{V}^T, \end{equation*} where $\Omega=\text{diag}(\omega_0,\ldots,\omega_{n-1}),$ and $\mathbf{V}$ is the unitary matrix whose $i$-th column is $\mathbf{v}_i.$ One may check that $\omega_0=0$ and that we may choose $\mathbf{v}_0=\frac{\mathbf{d}^{1/2}}{\|\mathbf{d}^{1/2}\|_2},$ where $\mathbf{d}^{1/2}=(\mathbf{d}(0)^{1/2},\ldots,\mathbf{d}(n-1)^{1/2})^T.$ We note that since we assume $G$ is connected, it has a positive spectral gap, i.e., \begin{equation}\label{eqn: spectral gap} 0=\omega_0<\omega_1. \end{equation} Our wavelet transforms will be constructed from the matrix $\mathbf{T}_g$ defined by \begin{equation*} \mathbf{T}_g \coloneqq \mathbf{V} g(\Omega)\mathbf{V}^T\coloneqq\mathbf{V}\Lambda_g\mathbf{V}^T, \end{equation*} where $g:[0,2]\rightarrow[0,1]$ is some strictly decreasing spectral function such that $g(0)=1$ and $g(2)=0$, and \begin{equation*}\Lambda_g\coloneqq \text{diag}(g(\omega_0),\ldots,g(\omega_{n-1}))\coloneqq \text{diag}(\lambda_0,\ldots,\lambda_{n-1}).\end{equation*} We note that $1= \lambda_0> \lambda_1\geq\ldots\geq\lambda_{n-1} \geq 0,$ where the fact that $\lambda_1<\lambda_0=1$ follows from \eqref{eqn: spectral gap}. 
When there is no potential for confusion, we will suppress the dependence on $g$ and write $\mathbf{T}$ and $\Lambda$ in place of $\mathbf{T}_g$ and $\Lambda_g.$ As our main example, we will choose $g(t)\coloneqq g_\star(t)\coloneqq1-\frac{t}{2},$ in which case \begin{equation*} \mathbf{T}_{g_\star} =\mathbf{I}-\frac{1}{2}\left(\mathbf{I}-\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\right)=\frac{1}{2}\left(\mathbf{I}+\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\right). \end{equation*} In \cite{gama:diffScatGraphs2018}, Gama et al. constructed a graph scattering transform using wavelets which are polynomials in $\mathbf{T}_{g_\star},$ and in \cite{gao:graphScat2018}, Gao et al. defined a different, but closely related, graph scattering transform from polynomials of the lazy random walk matrix \begin{equation*} \mathbf{P}\coloneqq\mathbf{D}^{1/2}\mathbf{T}_{g_\star}\mathbf{D}^{-1/2}=\frac{1}{2}\left(\mathbf{I}+\mathbf{A}\mathbf{D}^{-1}\right). \end{equation*} In order to unify and generalize these frameworks, we will let $\mathbf{M}$ be an invertible matrix and let $\mathbf{K}$ be the matrix defined by \begin{equation*} \mathbf{K}\coloneqq\mathbf{M}^{-1}\mathbf{T}\mathbf{M}. \end{equation*} Note that $\mathbf{K}$ depends on the choice of both $g$ and $\mathbf{M}$, and thus includes a very large family of matrices. As important special cases, we note that we may obtain $\mathbf{K} = \mathbf{T}$ by setting $\mathbf{M} = \mathbf{I}$, and we obtain $\mathbf{P}$ and $\mathbf{P}^T$ by setting $g(t)=g_\star(t)$ and letting $\mathbf{M} = \mathbf{D}^{-1/2}$ and $\mathbf{M} = \mathbf{D}^{1/2}$, respectively. In Section \ref{sec: wavelets}, we will construct two wavelet transforms $\mathcal{W}^{(1)}$ and $\mathcal{W}^{(2)}$ from functions of $\mathbf{K}$ and show that these wavelet transforms are nonexpansive frame analysis operators on the appropriate Hilbert space. 
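The relationships among $\mathbf{N}$, $\mathbf{T}_{g_\star}$, $\mathbf{P}$, and $\mathbf{K}$ are easy to sanity-check numerically. The sketch below is illustrative only: the particular graph, its edge weights, and the use of \texttt{numpy} are our own assumptions, not part of the construction above. It builds the matrices for a small weighted graph and verifies that choosing $\mathbf{M}=\mathbf{D}^{-1/2}$ recovers the lazy random walk matrix.

```python
import numpy as np

# Hypothetical 4-vertex weighted, connected graph; any symmetric adjacency
# with positive degrees would do (illustrative assumption).
A = np.array([[0., 1., 1., 0.],
              [1., 0., 2., 1.],
              [1., 2., 0., 1.],
              [0., 1., 1., 0.]])
d = A.sum(axis=1)                       # weighted degree vector d
D_ihalf = np.diag(d ** -0.5)            # D^{-1/2}

# Normalized Laplacian N = I - D^{-1/2} A D^{-1/2}, and T_{g*} for g*(t) = 1 - t/2.
N = np.eye(4) - D_ihalf @ A @ D_ihalf
T = np.eye(4) - 0.5 * N                 # = (I + D^{-1/2} A D^{-1/2}) / 2

# Lazy random walk matrix P = (I + A D^{-1}) / 2.
P = 0.5 * (np.eye(4) + A @ np.diag(1.0 / d))

# With M = D^{-1/2}, the matrix K = M^{-1} T M coincides with P.
M = D_ihalf
K = np.linalg.inv(M) @ T @ M
assert np.allclose(K, P)

# T (and hence the similar matrix K) has spectrum {g(omega_i)} in [0, 1],
# with top eigenvalue lambda_0 = 1 since omega_0 = 0.
lam = np.sort(np.linalg.eigvalsh(T))[::-1]
assert np.isclose(lam[0], 1.0) and lam[-1] >= -1e-12
```

Since $\mathbf{K}$ and $\mathbf{T}$ are similar, the check on the spectrum of $\mathbf{T}$ applies verbatim to $\mathbf{K}$.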
When $\mathbf{M}=\mathbf{I}$ (and therefore $\mathbf{K}=\mathbf{T}$), this Hilbert space will simply be the standard inner product space $\mathbf{L}^2(G).$ However, for general $\mathbf{M},$ the matrix $\mathbf{K}$ will not be self-adjoint on $\mathbf{L}^2(G)$. This motivates us to introduce the Hilbert space $\mathbf{L}^2(G,\mathbf{M})$ of signals defined on $G,$ with inner product defined by \begin{equation*} \langle \mathbf{x},\mathbf{y}\rangle_{\mathbf{M}} = \langle \mathbf{M}\mathbf{x},\mathbf{M}\mathbf{y}\rangle_2, \end{equation*} where $\langle\cdot,\cdot\rangle_2$ denotes the standard $\mathbf{L}^2(G)$ inner product. We note that the norms $\|\mathbf{x}\|_\mathbf{M}^2\coloneqq\langle\mathbf{x},\mathbf{x}\rangle_{\mathbf{M}}$ and $\|\mathbf{x}\|_2^2=\langle\mathbf{x},\mathbf{x}\rangle_2$ are equivalent and that \begin{equation*} \frac{1}{\|\mathbf{M}^{-1}\|_2}\|\mathbf{x}\|_2\leq \|\mathbf{x}\|_{\mathbf{M}}\leq \|\mathbf{M}\|_2\|\mathbf{x}\|_2, \end{equation*} where for any $n\times n$ matrix $\mathbf{B},$ we shall let $\|\mathbf{B}\|_2$ and $\|\mathbf{B}\|_{\mathbf{M}}$ denote its operator norms on $\mathbf{L}^2(G)$ and $\mathbf{L}^2(G,\mathbf{M})$ respectively. The following lemma, which shows that $\mathbf{K}$ is self-adjoint on $\mathbf{L}^2(G,\mathbf{M}),$ will be useful in studying the frame bounds of the wavelet transforms constructed from $\mathbf{K}.$ \begin{lemma}\label{lem: selfadjoint} $\mathbf{K}$ is self-adjoint on $\mathbf{L}^2(G,\mathbf{M}).$ \end{lemma} \begin{proof} By construction, $\mathbf{T}$ is self-adjoint with respect to the standard inner product. 
Therefore, for all $\mathbf{x}$ and $\mathbf{y}$ we have \begin{align*} \langle\mathbf{K}\mathbf{x},\mathbf{y}\rangle_\mathbf{M} &= \langle\mathbf{M}(\mathbf{M}^{-1}\mathbf{T}\mathbf{M})\mathbf{x},\mathbf{M}\mathbf{y}\rangle_2\\ &=\langle\mathbf{T}\mathbf{M}\mathbf{x},\mathbf{M}\mathbf{y}\rangle_2\\ &=\langle\mathbf{M} \mathbf{x},\mathbf{T}\mathbf{M}\mathbf{y}\rangle_2\\ &=\langle\mathbf{M}\mathbf{x},\mathbf{M}(\mathbf{M}^{-1}\mathbf{T}\mathbf{M})\mathbf{y}\rangle_2\\ &=\langle\mathbf{M}\mathbf{x},\mathbf{M}\mathbf{K}\mathbf{y}\rangle_2\\ &=\langle\mathbf{x},\mathbf{K}\mathbf{y}\rangle_\mathbf{M}. \end{align*} \end{proof} It will frequently be useful to consider the eigenvector decompositions of $\mathbf{T}$ and $\mathbf{K}.$ By definition, we have \begin{equation}\label{eqn: Tfactorization} \mathbf{T}=\mathbf{V}\Lambda\mathbf{V}^T \end{equation} where $\Lambda=g(\Omega)$ and $\{\mathbf{v}_0,\ldots,\mathbf{v}_{n-1}\}$ is an orthonormal eigenbasis for $\mathbf{L}^2(G)$ with $\mathbf{T}\mathbf{v}_i=\lambda_i\mathbf{v}_i.$ Since the matrices $\mathbf{T}$ and $\mathbf{K}$ are similar with $\mathbf{K}=\mathbf{M}^{-1}\mathbf{T}\mathbf{M},$ one may use the definition of $\langle\cdot,\cdot\rangle_\mathbf{M}$ to verify that the vectors $\{\mathbf{u}_0,\ldots,\mathbf{u}_{n-1}\}$ defined by \begin{equation*} \mathbf{u}_i\coloneqq\mathbf{M}^{-1}\mathbf{v}_i \end{equation*} form an orthonormal eigenbasis for $\mathbf{L}^2(G,\mathbf{M}),$ with $\mathbf{K}\mathbf{u}_i=\lambda_i\mathbf{u}_i.$ One may also verify that \begin{equation*} \mathbf{w}_i\coloneqq\mathbf{M}\mathbf{v}_i \end{equation*} is a left-eigenvector of $\mathbf{K}$ with $\mathbf{w}_i^T\mathbf{K}=\lambda_i\mathbf{w}_i^T$ for all $0\leq i \leq n-1.$ In the following section, we will construct wavelets from polynomials of $\mathbf{K}.$ For a polynomial $p(t)=a_kt^k+\ldots+a_1t+a_0$ and a matrix $\mathbf{B},$ we define $p(\mathbf{B})$ by \begin{equation*} 
p(\mathbf{B})=a_k\mathbf{B}^k+\ldots+a_1\mathbf{B}+a_0\mathbf{I}. \end{equation*} The following lemma uses \eqref{eqn: Tfactorization} to derive a formula for computing polynomials of $\mathbf{K}$ and $\mathbf{T}$ and relates the operator norms of polynomials of $\mathbf{K}$ to those of polynomials of $\mathbf{T}.$ It will be useful for studying the wavelet transforms introduced in the following section. \begin{lemma}\label{lem: polynomialproperties} For any polynomial $p,$ we have \begin{equation}\label{eqn: Tpolys} p(\mathbf{T})= \mathbf{V} p(\Lambda)\mathbf{V}^T \quad\text{and}\quad p(\mathbf{K}) = \mathbf{M}^{-1} p(\mathbf{T}) \mathbf{M} =\mathbf{M}^{-1} \mathbf{V} p(\Lambda)\mathbf{V}^T \mathbf{M}. \end{equation} Consequently, for all $\mathbf{x}\in\mathbf{L}^2(G,\mathbf{M}),$ \begin{equation}\label{polysequivpw} \|p(\mathbf{K})\mathbf{x}\|_\mathbf{M}=\|p(\mathbf{T})\mathbf{M}\mathbf{x}\|_2. \end{equation} \end{lemma} \begin{proof} Since $\mathbf{V}$ is unitary, $\mathbf{V}^{-1}=\mathbf{V}^T,$ and so it follows from \eqref{eqn: Tfactorization} that \begin{equation*} \mathbf{T}^r=\mathbf{V}\Lambda^r\mathbf{V}^T \end{equation*} for all $r\geq 0.$ Moreover, since $\mathbf{K}=\mathbf{M}^{-1}\mathbf{T}\mathbf{M},$ \begin{equation*} \mathbf{K}^r=\left(\mathbf{M}^{-1}\mathbf{T}\mathbf{M}\right)^r=\mathbf{M}^{-1}\mathbf{T}^r\mathbf{M}=\mathbf{M}^{-1}\mathbf{V}\Lambda^r\mathbf{V}^T\mathbf{M}. \end{equation*} Linearity now implies \eqref{eqn: Tpolys}. Equation \eqref{polysequivpw} then follows by recalling that $\|\mathbf{x}\|_\mathbf{M}=\|\mathbf{M}\mathbf{x}\|_2,$ and noting therefore that for all $\mathbf{x},$ \begin{equation*} \|p(\mathbf{K})\mathbf{x}\|_\mathbf{M}=\|\mathbf{M}(\mathbf{M}^{-1}p(\mathbf{T})\mathbf{M})\mathbf{x}\|_2=\|p(\mathbf{T})\mathbf{M}\mathbf{x}\|_2. 
\end{equation*} \end{proof} In light of Lemma \ref{lem: polynomialproperties}, for any polynomial $p,$ we may define $p(\mathbf{T})^{1/2}$ and $p(\mathbf{K})^{1/2}$ by \begin{equation}\label{eqn: defsquareroot} p(\mathbf{T})^{1/2}\coloneqq \mathbf{V}p(\Lambda)^{1/2}\mathbf{V}^T\quad\text{and}\quad p(\mathbf{K})^{1/2}\coloneqq\mathbf{M}^{-1}\mathbf{V}p(\Lambda)^{1/2}\mathbf{V}^T\mathbf{M}, \end{equation} where the square root of the diagonal matrix $p(\Lambda)$ is defined entrywise. We may readily verify that \begin{equation*} p(\mathbf{T})^{1/2}p(\mathbf{T})^{1/2}=p(\mathbf{T}) \quad\text{and}\quad p(\mathbf{K})^{1/2}p(\mathbf{K})^{1/2}=p(\mathbf{K}). \end{equation*} \subsection{Related Work}\label{sec: related} Graph scattering transforms have previously been introduced by Gama, Ribeiro, and Bruna in \cite{gama:stabilityGraphScat2019} and \cite{gama:diffScatGraphs2018}, by Gao, Wolf, and Hirn in \cite{gao:graphScat2018}, and by Zou and Lerman in \cite{zou:graphScatGAN2019}. In \cite{zou:graphScatGAN2019}, the authors construct a family of wavelet convolutions using the spectral decomposition of the unnormalized graph Laplacian and define a windowed scattering transform as an iterative series of wavelet convolutions and nonlinearities. They then prove results analogous to Theorems \ref{thm: nonexpansive}, \ref{thm: conservation of energy}, and \ref{thm: permuatation invariance} of this paper for their windowed scattering transform. They also introduce a notion of stability to graph perturbations. However, their notion of graph perturbations is significantly different from the one we consider in Section \ref{sec: stability}. In \cite{gama:diffScatGraphs2018}, the authors construct a family of wavelets from polynomials of $\mathbf{T},$ in the case where $g(t)=g_\star(t)=1-\frac{t}{2},$ and show that the resulting non-windowed scattering transform is stable to graph perturbations. 
These results were then generalized in \cite{gama:stabilityGraphScat2019}, where the authors introduced a more general class of graph convolutions, constructed from a class of symmetric matrices known as ``graph shift operators.'' The wavelet transform considered in \cite{gama:diffScatGraphs2018} is nearly identical to the $\mathcal{W}^{(2)}$ introduced in Section \ref{sec: wavelets}, in the special case where $g(t) = g_\star(t)$ and $\mathbf{M}=\mathbf{I},$ with the only difference being that our wavelet transform includes a low-pass filter. In \cite{gao:graphScat2018}, wavelets were constructed from the lazy random walk matrix $\mathbf{P}=\mathbf{D}^{1/2}\mathbf{T}\mathbf{D}^{-1/2}.$ These wavelets are essentially the same as the $\mathcal{W}^{(2)}$ in the case where $g(t) = g_\star(t)$ and $\mathbf{M}=\mathbf{D}^{-1/2},$ although similarly to \cite{gama:diffScatGraphs2018}, the wavelets in \cite{gao:graphScat2018} do not use a low-pass filter. In all of these previous works, the authors carry out substantial numerical experiments and demonstrate that scattering transforms are effective for a variety of graph deep learning tasks. Our work here is meant to unify and generalize the theory of these previous constructions. Our introduction of the matrix $\mathbf{M}$ allows us to obtain wavelets very similar to either \cite{gama:diffScatGraphs2018} or \cite{gao:graphScat2018} as special cases. Moreover, the introduction of the tight wavelet frame $\mathcal{W}^{(1)}$ allows us to produce a network with provable conservation of energy and nonexpansive properties analogous to \cite{zou:graphScatGAN2019}. To highlight the generality of our setup, we introduce both windowed and non-windowed versions of the scattering transform using general (wavelet) frames and provide a detailed theoretical analysis of both. In the case where $\mathbf{M}=\mathbf{I}$ (and therefore $\mathbf{K}=\mathbf{T}$) much of this analysis is quite similar to \cite{gama:diffScatGraphs2018}. 
However, for general $\mathbf{M},$ the matrix $\mathbf{K}$ is asymmetric, which introduces substantial challenges. While \cite{gao:graphScat2018} demonstrated that asymmetric wavelets are numerically effective in the case $\mathbf{K}=\mathbf{P}$, this work is the first to produce a theoretical analysis of graph scattering transforms constructed with asymmetric wavelets. We believe that the generality of our setup introduces several exciting new avenues for future research. In particular, we have introduced a large class of scattering transforms with provable stability and invariance guarantees. In the future, one might attempt to learn the matrix $\mathbf{M}$ or the spectral function $g$ from data for improved numerical performance on specific tasks. This could be an important step towards bridging the gap between scattering transforms, which act as a model of neural networks, and other deep learning architectures. We also note that a key difference between our work and \cite{zou:graphCNNScat2018} is that we use the normalized graph Laplacian whereas they use the unnormalized Laplacian. It is quite likely that asymmetric wavelet transforms similar to ours can be constructed from the spectral decomposition of the unnormalized Laplacian. However, we leave that to future work. \section{The Graph Wavelet Transform}\label{sec: wavelets} In this section, we will construct two graph wavelet transforms based on the matrix $\mathbf{K}=\mathbf{M}^{-1}\mathbf{T}\mathbf{M}$ introduced in Section \ref{sec: notation}. In the following sections, we will provide a theoretical analysis of the scattering transforms constructed from each of these wavelet transforms and of their stability properties. 
Let $J\geq 0,$ and for $0\leq j \leq J+1,$ let $p_j$ be the polynomial defined by \begin{equation*} p_j(t) = \begin{cases} 1-t & \text{if }j=0\\ t^{2^{j-1}}-t^{2^j} &\text{if }1\leq j \leq J\\ t^{2^J}&\text{if }j=J+1 \end{cases}, \end{equation*} and let $q_j(t)=p_j(t)^{1/2}.$ We note that by construction \begin{equation}\label{eqn: sum to 1} \sum_{j=0}^{J+1}p_j(t)=\sum_{j=0}^{J+1}q_j(t)^2=1\text{ for all }0\leq t\leq 1. \end{equation} Using these functions we define two wavelet transforms by \begin{equation*} \mathcal{W}^{(1)}_J=\left\{\Psi^{(1)}_j, \Phi^{(1)}_J\right\}_{0 \leq j \leq J}\quad\text{and}\quad\mathcal{W}^{(2)}_J=\left\{\Psi^{(2)}_j, \Phi^{(2)}_J\right\}_{0 \leq j \leq J}, \end{equation*} where \begin{equation*} \Psi^{(1)}_j=q_j(\mathbf{K}),\quad \Phi^{(1)}_J=q_{J+1}(\mathbf{K}),\quad\Psi^{(2)}_j=p_j(\mathbf{K}),\quad\text{and}\quad\Phi^{(2)}_J=p_{J+1}(\mathbf{K}), \end{equation*} and the $q_j(\mathbf{K})$ are defined as in \eqref{eqn: defsquareroot}. The next two propositions show $\mathcal{W}^{(1)}_J$ is an isometry and $\mathcal{W}^{(2)}_J$ is a nonexpansive frame analysis operator on $\mathbf{L}^2(G,\mathbf{M})$. \begin{proposition}\label{prop: waveletisometries} $\mathcal{W}^{(1)}_J$ is an isometry from $\mathbf{L}^2(G,\mathbf{M})$ to $\bm{\ell}^2(\mathbf{L}^2(G,\mathbf{M})).$ That is, for all $\mathbf{x} \in\mathbf{L}^2(G,\mathbf{M}),$ \begin{equation*} \left\|\mathcal{W}^{(1)}_J\mathbf{x}\right\|^2_{\bm{\ell}^2(\mathbf{L}^2(G,\mathbf{M}))}\coloneqq \sum_{j=0}^{J}\left\|\Psi^{(1)}_j\mathbf{x}\right\|^2_\mathbf{M} + \left\|\Phi^{(1)}_J\mathbf{x}\right\|^2_\mathbf{M} = \|\mathbf{x}\|^2_\mathbf{M}. 
\end{equation*} \end{proposition} \begin{proof} Lemma \ref{lem: selfadjoint} shows $\mathbf{K}$ is self-adjoint on $\mathbf{L}^2(G,\mathbf{M}).$ By Lemma \ref{lem: polynomialproperties} and by \eqref{eqn: defsquareroot} we have \begin{equation*} \Psi_j^{(1)}= q_j(\mathbf{K})=\mathbf{M}^{-1}\mathbf{V} q_j(\Lambda)\mathbf{V}^T\mathbf{M} \end{equation*} for $0\leq j \leq J,$ and \begin{equation*} \Phi_J^{(1)}=q_{J+1}(\mathbf{K})= \mathbf{M}^{-1}\mathbf{V} q_{J+1}(\Lambda)\mathbf{V}^T\mathbf{M}. \end{equation*} Thus, $\Psi^{(1)}_0,\ldots,\Psi^{(1)}_J,$ and $\Phi_J^{(1)}$ are all self-adjoint on $\mathbf{L}^2(G,\mathbf{M})$ and are diagonalized in the same basis. Therefore, the lower and upper frame bounds of $\mathcal{W}^{(1)}$ are given by computing \begin{equation*} \min_{0\leq i\leq n-1} Q(\lambda_i)\quad\text{and}\quad\max_{0\leq i\leq n-1} Q(\lambda_i), \end{equation*} where $Q(t)\coloneqq\sum_{j=0}^{J+1} q_j(t)^2.$ The proof follows by recalling that, by \eqref{eqn: sum to 1}, we have $Q(t)=1$ for all $0\leq t\leq 1,$ and therefore $\mathcal{W}^{(1)}$ is an isometry. \end{proof} \begin{proposition}\label{prop: nonexpansivewaveletframes} $\mathcal{W}^{(2)}_J$ is a nonexpansive frame analysis operator from $\mathbf{L}^2(G,\mathbf{M})$ to $\bm{\ell}^2(\mathbf{L}^2(G,\mathbf{M})).$ That is, there exists a constant $C_J>0,$ which depends only on $J,$ such that for all $\mathbf{x}\in\mathbf{L}^2(G,\mathbf{M}),$ \begin{equation*} C_J\|\mathbf{x}\|_\mathbf{M}^2\leq \left\|\mathcal{W}^{(2)}_J\mathbf{x}\right\|^2_{\bm{\ell}^2(\mathbf{L}^2(G,\mathbf{M}))}\coloneqq \sum_{j=0}^{J}\left\|\Psi^{(2)}_j\mathbf{x}\right\|^2_\mathbf{M} + \left\|\Phi^{(2)}_J\mathbf{x}\right\|^2_\mathbf{M} \leq \|\mathbf{x}\|^2_\mathbf{M}. 
\end{equation*} We note, in particular, that $C_J$ does not depend on $\mathbf{M}$ or on the eigenvalues of $\mathbf{T}.$ \begin{remark} If we restrict attention to $\mathbf{x}$ such that $\langle\mathbf{x},\mathbf{u}_0\rangle=0,$ then we may use an argument similar to Proposition 4.1 of \cite{gama:diffScatGraphs2018} to obtain a lower frame bound for $\mathcal{W}^{(2)}$ which does not depend on $J,$ but does depend on $\lambda_1.$ \end{remark} \begin{proof} By the same reasoning as in the proof of Proposition \ref{prop: waveletisometries}, the frame bounds of $\mathcal{W}^{(2)}$ are given by computing \begin{equation*} \min_{0\leq i\leq n-1} P(\lambda_i)\quad\text{and}\quad\max_{0\leq i\leq n-1} P(\lambda_i), \end{equation*} where $P(t)=\sum_{j=0}^{J+1} p_j(t)^2.$ Since $0\leq \lambda_i\leq 1$ for all $i,$ we have \begin{equation*} \max_i P(\lambda_i) \leq \max_{[0,1]}\sum_{j=0}^{J+1} p_j(t)^2\leq \max_{[0,1]}\left(\sum_{j=0}^{J+1} p_j(t)\right)^2=1, \end{equation*} with the middle inequality following from the fact that $p_j(t)\geq 0$ for all $t\in[0,1],$ and the last equality following from \eqref{eqn: sum to 1}. For the lower bound, we note that \begin{equation*} \min_{0\leq i\leq n-1} P(\lambda_i) \geq \min_{0\leq t\leq 1}\sum_{j=0}^{J+1} p_j(t)^2 \geq \min_{0\leq t\leq 1}\left[p_0(t)^2+p_{J+1}(t)^2\right] =\min_{0\leq t\leq 1}\left[ (1-t)^2+t^{2^{J+1}} \right] \coloneqq C_J> 0. \end{equation*} \end{proof} \section{The Scattering Transform}\label{sec: scattering} In this section, we will construct the scattering transform as a multilayered architecture built on a frame $\mathcal{W},$ such as the wavelet transforms $\mathcal{W}_J^{(1)}$ and $\mathcal{W}_J^{(2)}$ introduced in Section \ref{sec: wavelets}. We shall see that the resulting scattering transform is a continuous operator on $\mathbf{L}^2(G,\mathbf{M})$ whenever $\mathcal{W}$ is nonexpansive. 
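Before proceeding, the frame bounds established in Propositions \ref{prop: waveletisometries} and \ref{prop: nonexpansivewaveletframes} can be checked numerically. The sketch below is illustrative only (the graph, the choice $\mathbf{M}=\mathbf{D}^{-1/2},$ and the use of \texttt{numpy} are assumptions on our part): it builds $\Psi^{(1)}_j=q_j(\mathbf{K})$ and $\Psi^{(2)}_j=p_j(\mathbf{K})$ via the spectral formula of Lemma \ref{lem: polynomialproperties} and tests the isometry and nonexpansiveness on a random signal.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 5-vertex connected graph (illustrative assumption).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
d = A.sum(axis=1)
Dih = np.diag(d ** -0.5)
T = 0.5 * (np.eye(5) + Dih @ A @ Dih)   # T_{g*} for g*(t) = 1 - t/2
w, V = np.linalg.eigh(T)                # T = V diag(w) V^T

M = Dih                                 # so that K = M^{-1} T M = P
Minv = np.linalg.inv(M)
norm_M = lambda v: np.linalg.norm(M @ v)    # ||v||_M = ||Mv||_2

J = 3
def p(j, t):                            # the polynomials p_j on [0, 1]
    if j == 0:
        return 1 - t
    if j <= J:
        return t ** (2 ** (j - 1)) - t ** (2 ** j)
    return t ** (2 ** J)

def filt(vals):                         # M^{-1} V diag(vals) V^T M
    return Minv @ V @ np.diag(vals) @ V.T @ M

x = rng.standard_normal(5)
# W^(1)_J uses q_j = p_j^{1/2}: an isometry on L^2(G, M).
# (clip guards against tiny negative values from floating-point roundoff)
e1 = sum(norm_M(filt(np.sqrt(np.clip(p(j, w), 0, None))) @ x) ** 2
         for j in range(J + 2))
assert np.isclose(e1, norm_M(x) ** 2)

# W^(2)_J uses p_j directly: nonexpansive, with positive lower frame bound.
e2 = sum(norm_M(filt(p(j, w)) @ x) ** 2 for j in range(J + 2))
assert 0 < e2 <= norm_M(x) ** 2 + 1e-10
```

By \eqref{polysequivpw}, the same computation with $\mathbf{M}=\mathbf{I}$ reduces to the symmetric case $\mathbf{K}=\mathbf{T}.$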
We shall also see that it has desirable conservation of energy bounds when $\mathcal{W}=\mathcal{W}^{(1)}$ due to the fact that $\mathcal{W}^{(1)}$ is an isometry. On the other hand, we shall see in the following section that the scattering transform has much stronger stability guarantees when $\mathcal{W}=\mathcal{W}^{(2)}_J.$ \subsection{Definitions} Let $G=(V,E,W)$ be a connected weighted graph with $|V|=n,$ let $\mathbf{M}$ be an invertible matrix, and let $\mathcal{J}$ be some indexing set. Assume that \begin{equation*} \mathcal{W}=\{\Psi_j,\Phi\}_{j\in\mathcal{J}} \end{equation*} is a frame on $\mathbf{L}^2(G,\mathbf{M})$ such that \begin{equation} A\|\mathbf{x}\|^2_{\mathbf{M}}\leq\|\mathcal{W}\mathbf{x}\|^2_{\bm{\ell}^2(\mathbf{L}^2(G,\mathbf{M}))}\coloneqq \sum_{j\in\mathcal{J}}\|\Psi_j\mathbf{x}\|^2_{\mathbf{M}}+ \|\Phi\mathbf{x}\|^2_{\mathbf{M}}\leq B\|\mathbf{x}\|^2_{\mathbf{M}},\label{eqn: frameAB} \end{equation} for some $0<A\leq B<\infty.$ In this paper, we are primarily interested in the case where $\mathcal{J}=\{0,\ldots,J\}$ and $\mathcal{W}$ is either $\mathcal{W}^{(1)}_J$ or $\mathcal{W}^{(2)}_J$. Therefore, we will think of the matrices $\Psi_j$ as wavelets, and $\Phi$ as a low-pass filter. However, we will define the scattering transform for generic frames in order to highlight the relationship between properties of the scattering transform and of the underlying frame. Letting $M:\mathbf{L}^2(G,\mathbf{M})\rightarrow\mathbf{L}^2(G,\mathbf{M})$ be the pointwise modulus function $M\mathbf{x}= ( |\mathbf{x}(0)|, \ldots, |\mathbf{x}(n-1)|)^T$, we define $\mathbf{U}:\mathbf{L}^2(G,\mathbf{M})\rightarrow \bm{\ell}^2(\mathbf{L}^2(G,\mathbf{M}))$ by \begin{equation*} \mathbf{U}\mathbf{x} \coloneqq \{\mathbf{U}[\mathbf{j}]\mathbf{x}: m\geq 0, \mathbf{j}=(j_1,\ldots,j_m) \in\mathcal{J}^{m}\}. 
\end{equation*} Here, $\mathcal{J}^m$ is the $m$-fold Cartesian product of $\mathcal{J}$ with itself, the $\mathbf{U}[\mathbf{j}]\mathbf{x}$ are defined by \begin{equation*} \mathbf{U}[\mathbf{j}]\mathbf{x}=M\Psi_{j_m} \ldots M\Psi_{j_1}\mathbf{x},\end{equation*} for $m\geq 1,$ and we declare that $\mathbf{U}[\mathbf{j_e}]\mathbf{x}=\mathbf{x}$ when $m=0$ and $\mathbf{j_e}$ is the ``empty index.'' We then define the windowed and non-windowed scattering transforms, $\mathbf{S}:\mathbf{L}^2(G,\mathbf{M})\rightarrow\bm{\ell}^2(\mathbf{L}^2(G,\mathbf{M}))$ and $\overline{\mathbf{S}}:\mathbf{L}^2(G,\mathbf{M})\rightarrow\bm{\ell}^2,$ by \begin{equation*} \mathbf{S}\mathbf{x} = \{\mathbf{S}[\mathbf{j}]\mathbf{x}: m\geq 0, \mathbf{j}=(j_1,\ldots,j_m) \in\mathcal{J}^{m}\}\quad\text{and}\quad \overline{\mathbf{S}}\mathbf{x} = \{\overline{\mathbf{S}}[\mathbf{j}]\mathbf{x}: m\geq 0, \mathbf{j}=(j_1,\ldots,j_m) \in\mathcal{J}^{m}\}, \end{equation*} where the scattering coefficients $\mathbf{S}[\mathbf{j}]$ and $\overline{\mathbf{S}}[\mathbf{j}]$ are defined by \begin{equation*} \mathbf{S}[\mathbf{j}]\mathbf{x}=\Phi\mathbf{U}[\mathbf{j}]\mathbf{x} \quad\text{and}\quad \overline{\mathbf{S}}[\mathbf{j}]\mathbf{x}=\langle \bm{\mu},\mathbf{U}[\mathbf{j}]\mathbf{x}\rangle_{\mathbf{M}} \end{equation*} for some weighting vector $\bm{\mu}\in\mathbf{L}^2(G,\mathbf{M}).$ One natural choice is $\bm{\mu}=\left(\mathbf{M}^T\mathbf{M}\right)^{-1}\mathbf{1},$ where $\mathbf{1}$ is the vector of all ones. In this case, one may verify that $\overline{\mathbf{S}}[\mathbf{j}]\mathbf{x}=\|\mathbf{U}[\mathbf{j}]\mathbf{x}\|_1,$ and we recover a setup similar to \cite{gao:graphScat2018}. Another natural choice is $\bm{\mu}=\mathbf{u}_0,$ in which case we recover a setup similar to \cite{gama:diffScatGraphs2018} if we set $\mathbf{M}=\mathbf{I}.$ In practice, one only uses finitely many scattering coefficients. 
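Both transforms can be prototyped directly from these definitions. The sketch below is illustrative only (the small graph, the choice $\mathbf{K}=\mathbf{P}$ via $\mathbf{M}=\mathbf{D}^{-1/2},$ and the use of \texttt{numpy} are our own assumptions): it computes the cascade $\mathbf{U}[\mathbf{j}]\mathbf{x}$ and checks the identity $\overline{\mathbf{S}}[\mathbf{j}]\mathbf{x}=\|\mathbf{U}[\mathbf{j}]\mathbf{x}\|_1$ for the choice $\bm{\mu}=(\mathbf{M}^T\mathbf{M})^{-1}\mathbf{1}.$

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical triangle graph; M = D^{-1/2}, so K = P (illustrative assumption).
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
n = 3
d = A.sum(axis=1)
M = np.diag(d ** -0.5)
Minv = np.linalg.inv(M)
T = 0.5 * (np.eye(n) + M @ A @ M)      # D^{-1/2} A D^{-1/2} = M A M here
w, V = np.linalg.eigh(T)

J = 2
def p(j, t):                           # wavelet polynomials p_j
    if j == 0:
        return 1 - t
    if j <= J:
        return t ** (2 ** (j - 1)) - t ** (2 ** j)
    return t ** (2 ** J)

def filt(j):                           # p_j(K) = M^{-1} V p_j(Lam) V^T M
    return Minv @ V @ np.diag(p(j, w)) @ V.T @ M

Phi = filt(J + 1)                      # low-pass Phi_J

# U[(j_1, ..., j_m)]x = M Psi_{j_m} ... M Psi_{j_1} x, here truncated at depth 2.
x = rng.standard_normal(n)
U = {(): x}
for m in range(2):
    for jj in [k for k in U if len(k) == m]:
        for j in range(J + 1):
            U[jj + (j,)] = np.abs(filt(j) @ U[jj])

# Windowed coefficients S[j]x = Phi U[j]x; non-windowed coefficients with
# mu = (M^T M)^{-1} 1 reduce to the l^1 norm of U[j]x (for m >= 1).
S = {jj: Phi @ u for jj, u in U.items()}
mu = np.linalg.inv(M.T @ M) @ np.ones(n)
for jj, u in U.items():
    if len(jj) >= 1:
        Sbar = (M @ mu) @ (M @ u)      # <mu, u>_M
        assert np.isclose(Sbar, np.abs(u).sum())
```

In this sketch the cascade is truncated at depth two, so only finitely many coefficients are ever computed.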
This motivates us to consider the partial scattering transforms defined for $0\leq \ell\leq L \leq \infty$ by \begin{equation*} \mathbf{S}_\ell^{(L)}\mathbf{x} = \{\mathbf{S}[\mathbf{j}]\mathbf{x}: \mathbf{j}=(j_1,\ldots,j_m) \in\mathcal{J}^{m}, \ell\leq m\leq L\}\end{equation*} and \begin{equation*} \overline{\mathbf{S}}_\ell^{(L)}\mathbf{x} = \{\overline{\mathbf{S}}[\mathbf{j}]\mathbf{x}: \mathbf{j}=(j_1,\ldots,j_m) \in\mathcal{J}^{m}, \ell\leq m\leq L\}. \end{equation*} \subsection{Continuity and Conservation of Energy Properties} The following theorem shows that the windowed scattering transform $\mathbf{S}$ is nonexpansive and the non-windowed scattering transform $\overline{\mathbf{S}}$ is Lipschitz continuous when $\mathcal{W}$ is either $\mathcal{W}_J^{(1)}$ or $\mathcal{W}_J^{(2)}$ or, more generally, whenever $\mathcal{W}$ is nonexpansive. \begin{theorem}\label{thm: nonexpansive} If $B\leq 1$ in \eqref{eqn: frameAB}, then the windowed scattering transform $\mathbf{S}$ is a nonexpansive operator from $\mathbf{L}^2(G,\mathbf{M})$ to $\bm{\ell}^2(\mathbf{L}^2(G,\mathbf{M})),$ and the non-windowed scattering transform $\overline{\mathbf{S}}$ is a Lipschitz continuous operator from $\mathbf{L}^2(G,\mathbf{M})$ to $\bm{\ell}^2.$ Specifically, for all $\mathbf{x},\mathbf{y}\in\mathbf{L}^2(G,\mathbf{M}),$ \begin{equation}\label{eqn: nonexpanisvewindow} \|\mathbf{S}\mathbf{x}-\mathbf{S}\mathbf{y}\|_{\bm{\ell}^2(\mathbf{L}^2(G,\mathbf{M}))}\leq \|\mathbf{x}-\mathbf{y}\|_{\mathbf{M}}, \end{equation} and \begin{equation}\label{eqn: continuitynowindow} \|\overline{\mathbf{S}}\mathbf{x}-\overline{\mathbf{S}}\mathbf{y}\|_{\bm{\ell}^2}\leq \|\bm{\mu}\|_\mathbf{M}\|\Phi^{-1}\|_\mathbf{M}\|\mathbf{x}-\mathbf{y}\|_{\mathbf{M}}. \end{equation} \end{theorem} The proof of \eqref{eqn: nonexpanisvewindow} is very similar to analogous results in, e.g., \cite{mallat:scattering2012} and \cite{zou:graphScatGAN2019}. 
The proof of \eqref{eqn: continuitynowindow} uses the relationship $\mathbf{U}\mathbf{x}=\Phi^{-1}\mathbf{S}\mathbf{x}$ to show \begin{equation*} \|\overline{\mathbf{S}}\mathbf{x}-\overline{\mathbf{S}}\mathbf{y}\|_{\bm{\ell}^2}\leq \|\bm{\mu}\|_\mathbf{M}\|\Phi^{-1}\|_\mathbf{M}\|\mathbf{S}\mathbf{x}-\mathbf{S}\mathbf{y}\|_{\bm{\ell}^2(\mathbf{L}^2(G,\mathbf{M}))}. \end{equation*} Full details are provided in Appendix \ref{sec: The proof of thm: non expansive}. The next theorem shows that if $\mathcal{W}$ is either of the wavelet transforms constructed in Section \ref{sec: wavelets}, then $\mathbf{U}$ exhibits rapid energy decay. Our arguments use ideas similar to the proof of Proposition 3.3 of \cite{zou:graphScatGAN2019}, with minor modifications to account for the fact that our wavelet constructions are different. Please see Appendix \ref{sec: proof of energy decay} for a complete proof. \begin{theorem}\label{thm: energydecay} Let $J\geq 0,$ let $\mathcal{J}\coloneqq\{0,\ldots,J\},$ and let $ \mathcal{W}=\{\Psi_j,\Phi\}_{j\in\mathcal{J}} $ be either of the wavelet transforms, $\mathcal{W}_J^{(1)}$ or $\mathcal{W}_J^{(2)},$ constructed in Section \ref{sec: wavelets}. Then for all $\mathbf{x}\in\mathbf{L}^2(G, \mathbf{M})$ and all $m\geq 1,$ \begin{equation}\label{eqn: ratio} \sum_{\mathbf{j}\in\mathcal{J}^{m+1}}\|\mathbf{U}[\mathbf{j}]\mathbf{x}\|_{\mathbf{M}}^2\leq \left(1-\frac{\mathbf{d}_{\min}}{\|\mathbf{d}\|_1}\right) \sum_{\mathbf{j}\in\mathcal{J}^m}\|\mathbf{U}[\mathbf{j}]\mathbf{x}\|_{\mathbf{M}}^2. \end{equation} Therefore, for all $m\geq 0,$ \begin{equation}\label{eqn: decay} \sum_{\mathbf{j}\in\mathcal{J}^{m+1}}\|\mathbf{U}[\mathbf{j}]\mathbf{x}\|_{\mathbf{M}}^2\leq \left(1-\frac{\mathbf{d}_{\min}}{\|\mathbf{d}\|_1}\right)^{m} \| \mathbf{x} \|_\mathbf{M}^2. 
\end{equation} \end{theorem} The next theorem shows that if $\mathcal{W}=\mathcal{W}_J^{(1)},$ then the windowed graph scattering transform conserves energy on $\mathbf{L}^2(G,\mathbf{M}).$ Its proof, which relies on Proposition \ref{prop: waveletisometries}, Theorem \ref{thm: energydecay}, and Lemma \ref{lem: energylevels}, is nearly identical to the proof of Theorem 3.1 in \cite{zou:graphScatGAN2019}. We give a proof in Appendix \ref{sec: the proof of conservation of energy} for the sake of completeness. \begin{theorem}\label{thm: conservation of energy} Let $J\geq 0,$ let $\mathcal{J}\coloneqq\{0,\ldots,J\},$ and let $\mathcal{W}=\mathcal{W}_J^{(1)}.$ Then the non-windowed scattering transform is energy preserving, i.e., for all $\mathbf{x}\in\mathbf{L}^2(G,\mathbf{M}),$ \begin{equation*}\left\|\mathbf{S}\mathbf{x}\right\|_{\bm{\ell}^2(\mathbf{L}^2(G,\mathbf{M}))}=\|\mathbf{x}\|_{\mathbf{M}}. \end{equation*} \end{theorem} \subsection{Permutation Invariance and Equivariance} In this section, we will show that both $\mathbf{U}$ and the windowed graph scattering transform are permutation equivariant. As a consequence, we will be able to show that the non-windowed scattering transform is permutation invariant and that under certain assumptions the windowed-scattering transform is permutation invariant up to a factor depending on the scale of the low-pass filter. Let $S_n$ denote the permutation group on $n$ elements, and, for $\Pi\in S_n,$ let $G'=\Pi(G)$ be the graph obtained by permuting the vertices of $G.$ We define $\mathbf{M}',$ which we view as the analog of $\mathbf{M}$ associated to $G',$ by \begin{equation*} \mathbf{M}'=\Pi\mathbf{M}\Pi^T. 
\end{equation*} To motivate this definition, we note that if $\mathbf{M}$ is the identity, then $\mathbf{M}'$ is also the identity, and if $\mathbf{M}=\mathbf{D}^{1/2},$ the square-root degree matrix, then the square-root degree matrix on $G'$ is given by \begin{equation*} \Pi\mathbf{D}^{1/2}\Pi^T, \end{equation*} with a similar formula holding when $\mathbf{M}=\mathbf{D}^{-1/2}.$ We define $\mathcal{W}'$ and $\bm{\mu}'$ to be the frame and the weighting vector on $G'$ corresponding to $\mathcal{W}$ and $\bm{\mu},$ by \begin{equation}\label{eqn: dWprime} \mathcal{W}'\coloneqq\Pi\mathcal{W}\Pi^T\coloneqq\{\Pi\Psi_j\Pi^T,\Pi\Phi\Pi^T\}_{j\in\mathcal{J}}\quad\text{and}\quad \bm{\mu}'=\Pi\bm{\mu}, \end{equation} and we let $\mathbf{S}'$ and $\overline{\mathbf{S}}'$ denote the corresponding windowed and non-windowed scattering transforms on $G'.$ To understand $\mathcal{W}',$ we note that the natural analog of $\mathbf{T}$ on $G'$ is given by \begin{equation*} \mathbf{T}'=\Pi\mathbf{T}\Pi^T. \end{equation*} Therefore, Lemma \ref{lem: polynomialproperties} implies that for any polynomial $p,$ \begin{align*} p\left((\mathbf{M}')^{-1}\mathbf{T}'\mathbf{M}'\right)&=(\mathbf{M}')^{-1}p\left(\mathbf{T}'\right)\mathbf{M}'\\&=\left(\Pi\mathbf{M}\Pi^T\right)^{-1}p\left(\Pi\mathbf{T}\Pi^T\right)\left(\Pi\mathbf{M}\Pi^T\right)\\ &=\left(\Pi\mathbf{M}^{-1}\Pi^T\right)\Pi p(\mathbf{T})\Pi^T\left(\Pi\mathbf{M}\Pi^T\right)\\ &=\Pi \mathbf{M}^{-1}p(\mathbf{T})\mathbf{M}\Pi^T\\ &=\Pi p\left(\mathbf{M}^{-1}\mathbf{T}\mathbf{M}\right)\Pi^T.
\end{align*} A similar formula holds for $q\coloneqq p^{1/2}.$ Therefore, if $\mathcal{W}$ is either of the wavelet transforms $\mathcal{W}_J^{(1)}$ or $\mathcal{W}_J^{(2)},$ then $\mathcal{W}'$ is the analogous wavelet transform constructed from $\mathbf{K}'\coloneqq(\mathbf{M}')^{-1}\mathbf{T}'\mathbf{M}'.$ \begin{theorem}\label{thm: equivariance} Both $\mathbf{U}$ and the windowed scattering transform $\mathbf{S}$ are equivariant to permutations. That is, if $\Pi\in S_n$ is any permutation and $\mathcal{W}'$ is defined as in \eqref{eqn: dWprime}, then for all $\mathbf{x}\in\mathbf{L}^2(G,\mathbf{M}),$ \begin{equation*} \mathbf{U}'\Pi\mathbf{x}=\Pi\mathbf{U}\mathbf{x}\quad\text{and}\quad \mathbf{S}'\Pi\mathbf{x}=\Pi\mathbf{S}\mathbf{x}. \end{equation*} \end{theorem} \begin{proof} Let $\Pi$ be a permutation. Since $\Pi(M\mathbf{x})=M(\Pi\mathbf{x})$ and $\Pi^T=\Pi^{-1},$ it follows that for all $j\in\mathcal{J},$ \begin{equation*} \mathbf{U}'[j]\Pi\mathbf{x}=M\Psi'_j\Pi\mathbf{x}= M\Pi \Psi_j\Pi^T\Pi\mathbf{x}=M\Pi \Psi_j\mathbf{x}=\Pi M\Psi_j\mathbf{x}=\Pi \mathbf{U}[j]\mathbf{x}. \end{equation*} For $\mathbf{j}=(j_1,\ldots,j_m),$ we have $\mathbf{U}[\mathbf{j}]=\mathbf{U}[j_1]\ldots\mathbf{U}[j_m].$ Therefore, it follows inductively that $\mathbf{U}$ is equivariant to permutations. Since $\mathbf{S}=\Phi\mathbf{U},$ we have that \begin{equation*} \mathbf{S}'\Pi\mathbf{x}=\Phi'\mathbf{U}'\Pi\mathbf{x}=\Pi\Phi\Pi^T\Pi\mathbf{U}\mathbf{x}=\Pi\mathbf{S}\mathbf{x}. \end{equation*} Thus, the windowed scattering transform is permutation equivariant as well. \end{proof} \begin{theorem}\label{thm: permuationinvariancewindowed} The non-windowed scattering transform $\overline{\mathbf{S}}$ is fully permutation invariant, i.e., for all permutations $\Pi$ and all $\mathbf{x}\in\mathbf{L}^2(G,\mathbf{M}),$ \begin{equation*} \overline{\mathbf{S}}'\Pi\mathbf{x}=\overline{\mathbf{S}}\mathbf{x}.
\end{equation*} \end{theorem} \begin{proof} Since $\mathbf{U}$ is permutation equivariant by Theorem \ref{thm: equivariance} and $\bm{\mu}'=\Pi\bm{\mu},$ we may use the fact that $\mathbf{M}'=\Pi\mathbf{M}\Pi^T$ and that $\Pi^T=\Pi^{-1}$ to see that for any $\mathbf{x}$ and any $\mathbf{j},$ \begin{equation*} \overline{\mathbf{S}}'[\mathbf{j}]\Pi\mathbf{x}=\langle\bm{\mu}',\mathbf{U}'[\mathbf{j}]\Pi\mathbf{x}\rangle_{\mathbf{M}'}=\langle\mathbf{M}'\Pi\bm{\mu},\mathbf{M}'\Pi\mathbf{U}[\mathbf{j}]\mathbf{x}\rangle_2 =\langle\Pi\mathbf{M}\bm{\mu},\Pi\mathbf{M}\mathbf{U}[\mathbf{j}]\mathbf{x}\rangle_2=\langle\mathbf{M}\bm{\mu},\mathbf{M}\mathbf{U}[\mathbf{j}]\mathbf{x}\rangle_2=\overline{\mathbf{S}}[\mathbf{j}]\mathbf{x}. \end{equation*} \end{proof} Next, we will use Theorem \ref{thm: equivariance} to show that if $\mathcal{W}$ is either $\mathcal{W}_J^{(1)}$ or $\mathcal{W}_J^{(2)}$ and $\mathbf{M}=\mathbf{D}^{1/2},$ then the windowed scattering transform is invariant on $\mathbf{L}^2(G,\mathbf{M})$ up to a factor depending on the scale of the low-pass filter. We note that $0<\lambda_1<1.$ Therefore, $\lambda_1^t$ decays exponentially fast as $t\rightarrow\infty,$ and so if $J$ is large, the right-hand side of \eqref{eqn: partial invariance} will be nearly zero.
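The equivariance and invariance statements above are easy to verify numerically. The following sketch (Python/NumPy; the symmetric filter $\Psi,$ diagonal matrix $\mathbf{M},$ and weighting vector $\bm{\mu}$ are arbitrary placeholders rather than the wavelets of Section \ref{sec: wavelets}) checks one layer of Theorem \ref{thm: equivariance} and the invariance of the non-windowed coefficients from Theorem \ref{thm: permuationinvariancewindowed}:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Placeholder symmetric filter Psi, diagonal M, and weighting vector mu.
Psi = rng.standard_normal((n, n)); Psi = (Psi + Psi.T) / 2
M = np.diag(rng.uniform(1.0, 2.0, n))
mu = rng.uniform(0.0, 1.0, n)
x = rng.standard_normal(n)

Pi = np.eye(n)[rng.permutation(n)]           # permutation matrix
Psi_p, M_p, mu_p = Pi @ Psi @ Pi.T, Pi @ M @ Pi.T, Pi @ mu

U = np.abs(Psi @ x)                          # U[j]x = |Psi_j x| (pointwise modulus)
U_p = np.abs(Psi_p @ (Pi @ x))               # same layer on the permuted graph
assert np.allclose(U_p, Pi @ U)              # equivariance: U'[j] Pi x = Pi U[j] x

# Non-windowed coefficient <mu, U x>_M = <M mu, M U x>_2 is permutation invariant.
s = (M @ mu) @ (M @ U)
s_p = (M_p @ mu_p) @ (M_p @ U_p)
assert np.isclose(s, s_p)
```

The two assertions mirror the chains of equalities in the two proofs: the modulus commutes with permutations, and the permutations cancel inside the weighted inner product.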
We also recall that if our spectral function is given by $g(t)=g_\star(t),$ then this choice of $\mathbf{M}$ will imply that $\mathbf{K}=\mathbf{P}^T.$ \begin{theorem}\label{thm: permuatation invariance} Let $\mathbf{M}=\mathbf{D}^{1/2},$ and let $\mathcal{W}$ be either $\mathcal{W}^{(1)}_J$ or $\mathcal{W}^{(2)}_J.$ Then the windowed scattering transform is permutation invariant up to a factor depending on $J.$ Specifically, for all $\Pi\in S_n$ and for all $\mathbf{x}\in\mathbf{L}^2(G,\mathbf{M}),$ \begin{equation}\label{eqn: partial invariance} \|\mathbf{S}'\Pi\mathbf{x}-\mathbf{S}\mathbf{x}\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))}\leq \lambda_1^{t}\|\Pi-\mathbf{I}\|_\mathbf{M} \left(1+n\frac{\|\mathbf{d}\|_\infty}{\mathbf{d}_{\min}}\right)^{1/2} \|\mathbf{x}\|_\mathbf{M}, \end{equation} where $t=2^{J-1}$ if $\mathcal{W}=\mathcal{W}^{(1)}_J$ and $t=2^J$ if $\mathcal{W}=\mathcal{W}^{(2)}_J.$ \end{theorem} \begin{proof} By Theorem \ref{thm: equivariance} and the fact that $\mathbf{S}=\Phi\mathbf{U},$ we see that \begin{align} \|\mathbf{S}'\Pi\mathbf{x}-\mathbf{S}\mathbf{x}\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))}&= \|\Pi\mathbf{S}\mathbf{x}-\mathbf{S}\mathbf{x}\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))}\nonumber\\&= \|\Pi\Phi\mathbf{U}\mathbf{x}-\Phi\mathbf{U}\mathbf{x}\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))}\nonumber\\ &\leq \|\Pi\Phi-\Phi\|_\mathbf{M}\|\mathbf{U}\mathbf{x}\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))}.\label{eqn: reducetoU} \end{align} Let $t=2^{J-1}$ if $\mathcal{W}=\mathcal{W}^{(1)}_J,$ and let $t=2^{J}$ if $\mathcal{W}=\mathcal{W}^{(2)}_J,$ so that in either case $\Phi=\mathbf{K}^t.$ Let $\mathbf{x}\in\mathbf{L}^2(G,\mathbf{M}).$ Then \eqref{eqn: Tpolys} implies that for any $\mathbf{y}\in\mathbf{L}^2(G,\mathbf{M}),$ \begin{equation*} \mathbf{T}^t\mathbf{y}=\sum_{i=0}^{n-1}\lambda_i^t\langle\mathbf{v}_i,\mathbf{y}\rangle_2\mathbf{v}_i.
\end{equation*} Therefore, by Lemma \ref{lem: polynomialproperties} and the relationship $\mathbf{u}_i=\mathbf{M}^{-1}\mathbf{v}_i,$ we have \begin{equation*} \mathbf{K}^t\mathbf{x}=\mathbf{M}^{-1}\mathbf{T}^t(\mathbf{M} \mathbf{x})=\sum_{i=0}^{n-1}\lambda_i^t\langle\mathbf{v}_i,\mathbf{M}\mathbf{x}\rangle_2\mathbf{M}^{-1}\mathbf{v}_i=\sum_{i=0}^{n-1}\lambda_i^t\langle\mathbf{v}_i,\mathbf{M}\mathbf{x}\rangle_2\mathbf{u}_i. \end{equation*} Since $\mathbf{v}_0=\frac{\mathbf{d}^{1/2}}{\|\mathbf{d}^{1/2}\|_2},$ and $\mathbf{u}_i=\mathbf{M}^{-1}\mathbf{v}_i,$ the assumption that $\mathbf{M}=\mathbf{D}^{1/2}$ implies that $\mathbf{u}_0=\frac{1}{\|\mathbf{d}^{1/2}\|_2}\mathbf{1}.$ Therefore, $\Pi\mathbf{u}_0=\mathbf{u}_0,$ and so \begin{equation*} \Pi\mathbf{K}^t\mathbf{x}-\mathbf{K}^t\mathbf{x}=\sum_{i=0}^{n-1}\lambda_i^t\langle\mathbf{v}_i,\mathbf{M}\mathbf{x}\rangle_2(\Pi\mathbf{u}_i-\mathbf{u}_i)=\sum_{i=1}^{n-1}\lambda_i^t\langle\mathbf{v}_i,\mathbf{M}\mathbf{x}\rangle_2(\Pi\mathbf{u}_i-\mathbf{u}_i)=(\Pi-\mathbf{I})\left(\sum_{i=1}^{n-1}\lambda_i^t\langle\mathbf{v}_i,\mathbf{M}\mathbf{x}\rangle_2\mathbf{u}_i\right). 
\end{equation*} Therefore, since $\{\mathbf{u}_0,\ldots,\mathbf{u}_{n-1}\}$ forms an orthonormal basis for $\mathbf{L}^2(G,\mathbf{M}),$ we have that by Parseval's identity \begin{align} \|\Pi\mathbf{K}^t\mathbf{x}-\mathbf{K}^t\mathbf{x}\|_\mathbf{M}^2&\leq \|\Pi-\mathbf{I}\|^2_\mathbf{M} \left\|\sum_{i=1}^{n-1}\lambda_i^t\langle\mathbf{v}_i,\mathbf{M}\mathbf{x}\rangle_2\mathbf{u}_i\right\|_\mathbf{M}^2\nonumber\\ &= \|\Pi-\mathbf{I}\|_\mathbf{M}^2 \sum_{i=1}^{n-1}\lambda_i^{2t}|\langle\mathbf{v}_i,\mathbf{M}\mathbf{x}\rangle_2|^2\nonumber\\ &\leq \|\Pi-\mathbf{I}\|_\mathbf{M}^2 \lambda_1^{2t}\sum_{i=1}^{n-1}|\langle\mathbf{v}_i,\mathbf{M}\mathbf{x}\rangle_2|^2\nonumber\\ &\leq \|\Pi-\mathbf{I}\|_\mathbf{M}^2 \lambda_1^{2t}\|\mathbf{M}\mathbf{x}\|_2^2\nonumber\\ &= \|\Pi-\mathbf{I}\|_\mathbf{M}^2 \lambda_1^{2t}\|\mathbf{x}\|_\mathbf{M}^2.\label{eqn: PiKtbound} \end{align} To bound $\|\mathbf{U}\mathbf{x}\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))},$ we note that by Theorem \ref{thm: energydecay}, \begin{align*} \|\mathbf{U}\mathbf{x}\|^2_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))}&=\|\mathbf{x}\|_\mathbf{M}^2+ \left(\sum_{m=1}^\infty\sum_{\mathbf{j}\in\mathcal{J}^m}\|\mathbf{U}[\mathbf{j}]\mathbf{x}\|_\mathbf{M}^2\right)\\ &\leq \|\mathbf{x}\|_\mathbf{M}^2+\left(\sum_{m=1}^\infty\left(1-\frac{1}{n}\frac{\mathbf{d}_{\min}}{\|\mathbf{d}\|_\infty}\right)^{m-1}\sum_{\mathbf{j}\in\mathcal{J}^1}\|\mathbf{U}[\mathbf{j}]\mathbf{x}\|_\mathbf{M}^2\right)\\ &\leq \|\mathbf{x}\|_\mathbf{M}^2+n\frac{\|\mathbf{d}\|_\infty}{\mathbf{d}_{\min}}\sum_{\mathbf{j}\in\mathcal{J}^1}\|\mathbf{U}[\mathbf{j}]\mathbf{x}\|_\mathbf{M}^2\\ &\leq \left(1+n\frac{\|\mathbf{d}\|_\infty}{\mathbf{d}_{\min}}\right)\|\mathbf{x}\|_\mathbf{M}^2, \end{align*} where the last inequality uses the fact that the modulus operator is nonexpansive and that $B\leq 1$ in \eqref{eqn: frameAB}. Combining this with \eqref{eqn: reducetoU} and \eqref{eqn: PiKtbound} completes the proof.
\end{proof} \section{Stability to Graph Perturbations}\label{sec: stability} Let $G=(V,E,W)$ and $G'=(V',E',W')$ be weighted graphs with $|V|=|V'|=n,$ and let $\mathbf{M}$ and $\mathbf{M}'$ be invertible matrices. Throughout this section, for any object $X$ associated to $G,$ we will let $X'$ denote the analogous object on $G',$ so e.g., $\mathbf{D}'$ is the degree matrix of $G'.$ Recall that two important examples of our asymmetric matrix $\mathbf{K}$ arise when $g(t)=g_\star(t)=1-\frac{t}{2}$ and $\mathbf{M}=\mathbf{D}^{\pm1/2},$ in which case $\mathbf{K}$ is either the lazy random walk matrix $\mathbf{P}$ or its transpose $\mathbf{P}^{T}.$ In these cases, the matrix $\mathbf{M}$ encodes important geometric information about $G,$ which motivates us to let \begin{equation*} \mathbf{R}_1 \coloneqq \mathbf{M}^{-1}\mathbf{M}'\quad\text{and}\quad \mathbf{R}_2 \coloneqq \mathbf{M}'\mathbf{M}^{-1}, \end{equation*} and consider the quantity \begin{equation*} \kappa(G,G')\coloneqq \max_{i=1,2}\{\max\{\|\mathbf{I}-\mathbf{R}_i\|_{2},\|\mathbf{I}-\mathbf{R}_i^{-1}\|_{2}\}\} \end{equation*} as a measure of how poorly aligned the degree vectors of $G$ and $G'$ are. In the general case, $\kappa(G,G')$ measures how different the $\|\cdot\|_\mathbf{M}$ and $\|\cdot\|_{\mathbf{M}'}$ norms are. It will also be useful to consider \begin{equation*} R(G,G') \coloneqq \max_{i=1,2}\{\max\{\|\mathbf{R}_i\|_2,\|\mathbf{R}_i^{-1}\|_2\}\}. \end{equation*} We note that by construction we have $1\leq R(G,G')\leq \kappa(G,G')+1.$ Thus, if the norms $\|\cdot\|_\mathbf{M}$ and $\|\cdot\|_{\mathbf{M}'}$ are well-aligned, we will have $\kappa(G,G')\approx 0$ and consequently $R(G,G')\approx 1.$ We note that we will have $\kappa(G,G')=0$ and $R(G,G')=1$ if either $\mathbf{M}=\mathbf{I}$ (so that $\mathbf{K}=\mathbf{T}$) or if $\mathbf{M}=\mathbf{D}^{\pm1/2}$ and the graphs $G$ and $G'$ have the same degree vector. The latter situation occurs if e.g.
$G$ is a regular graph and $G'$ is obtained by permuting the vertices of $G.$ We also note that if $\mathbf{M}$ is diagonal, e.g. if $\mathbf{M}=\mathbf{D}^{\pm1/2},$ then $\mathbf{R}_1=\mathbf{R}_2.$ We may also measure how far apart two graphs are via their spectral properties. In particular, if we let $\mathbf{V}$ be the unitary matrix whose $i$-th column is given by $\mathbf{v}_i,$ an eigenvector of $\mathbf{T}$ with eigenvalue $\lambda_i,$ we see that the misalignment of the spectral properties of $G$ and $G'$ is naturally quantified by \begin{equation*} \max_{0\leq i\leq n-1}|\lambda_i-\lambda'_i|\quad\text{and}\quad\|\mathbf{V}-\mathbf{V}'\|_2. \end{equation*} Motivated by e.g., \cite{gama:diffScatGraphs2018}, we also consider the ``diffusion distances'' given by \begin{equation*} \|\mathbf{T}-\mathbf{T}'\|_\mathbf{M} \quad\text{and}\quad \|\mathbf{K}-\mathbf{K}'\|_\mathbf{M}. \end{equation*} \subsection{Stability of the Wavelet Transforms}\label{sec: Wavelet Stabilty} In this section, we analyze the stability of the wavelet transforms $\mathcal{W}_J^{(1)}$ and $\mathcal{W}_J^{(2)}$ constructed in Section \ref{sec: wavelets}. Our first two results provide stability bounds for $\mathcal{W}^{(1)}_J$ and $\mathcal{W}^{(2)}_J$ in the case where $\mathbf{K}=\mathbf{T}.$ These results will be extended to the general case by Theorem \ref{thm: transferTtoP}.
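For concreteness, the alignment quantities $\kappa(G,G')$ and $R(G,G')$ can be computed directly. The sketch below (Python/NumPy; the degree vectors are arbitrary placeholders) takes $\mathbf{M}=\mathbf{D}^{1/2}$ and $\mathbf{M}'=(\mathbf{D}')^{1/2}$ for a slightly perturbed degree vector and confirms the relation $1\leq R(G,G')\leq \kappa(G,G')+1$ noted above:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
d = rng.uniform(1.0, 4.0, n)                 # placeholder degree vector of G
dp = d * rng.uniform(0.9, 1.1, n)            # slightly perturbed degrees for G'
M, Mp = np.diag(np.sqrt(d)), np.diag(np.sqrt(dp))

inv = np.linalg.inv
R1, R2 = inv(M) @ Mp, Mp @ inv(M)            # R_1 = M^{-1} M',  R_2 = M' M^{-1}
Rs = (R1, R2, inv(R1), inv(R2))

kappa = max(np.linalg.norm(np.eye(n) - A, 2) for A in Rs)   # kappa(G, G')
R = max(np.linalg.norm(A, 2) for A in Rs)                   # R(G, G')
assert 1 <= R <= kappa + 1
```

Since $\mathbf{M}$ is diagonal here, $\mathbf{R}_1=\mathbf{R}_2,$ but the sketch keeps both factors to match the general definitions.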
\begin{theorem}\label{thm: wavelet stability1} Suppose $G=(V,E,W)$ and $G'=(V',E',W')$ are two graphs such that $|V|=|V'|=n,$ and let $\lambda_1^*=\max\{\lambda_1,\lambda_1'\}.$ Let $\mathbf{M}=\mathbf{I}$ so that $\mathbf{K}=\mathbf{T},$ let $\mathcal{W}$ be the wavelets $\mathcal{W}^{(1)}$ constructed from $\mathbf{T}$ in Section \ref{sec: wavelets}, and let $\mathcal{W}'$ be the corresponding wavelets constructed from $\mathbf{T}'.$ Then there exists a constant $C_{\lambda^*_1},$ depending only on $\lambda_1^*,$ such that for all $\mathbf{x}\in\mathbf{L}^2(G)$ with $\|\mathbf{x}\|_2=1,$ \begin{equation*} \|(\mathcal{W}-\mathcal{W}')\mathbf{x}\|_{\ell^2(\mathbf{L}^2(G))}^2\leq C_{\lambda_1^*}\left(2^J\sup_{1\leq i\leq n-1}|\lambda_i-\lambda_i'|^2+\|\mathbf{V}-\mathbf{V}'\|_2^2\right), \end{equation*} where as in \eqref{eqn: Tfactorization} \begin{equation*} \mathbf{T}=\mathbf{V}\Lambda \mathbf{V}^T\quad\text{and} \quad\mathbf{T}'=\mathbf{V}'\Lambda' (\mathbf{V}')^T. \end{equation*} \end{theorem} \begin{proof} By \eqref{eqn: defsquareroot} and the fact that $q_j(t)=p_j(t)^{1/2},$ we have that for all $0\leq j\leq J+1,$ \begin{align*} q_j(\mathbf{T})-q_j(\mathbf{T}')&=\mathbf{V} q_j(\Lambda)\mathbf{V}^T-\mathbf{V}'q_j(\Lambda')(\mathbf{V}')^T\\ &=\mathbf{V}(q_j(\Lambda)-q_j(\Lambda'))\mathbf{V}^T+(\mathbf{V}-\mathbf{V}')q_j(\Lambda')(\mathbf{V}')^T+\mathbf{V}'q_j(\Lambda')(\mathbf{V}-\mathbf{V}')^T.
\end{align*} Therefore, since $\mathbf{V}$ and $\mathbf{V}'$ are unitary, we have that for all $\mathbf{x}\in\mathbf{L}^2(G)$ \begin{equation*} \|q_j(\mathbf{T})\mathbf{x}-q_j(\mathbf{T}')\mathbf{x}\|_2\leq \|(q_j(\Lambda)-q_j(\Lambda'))\mathbf{V}^T\mathbf{x}\|_2+\|\mathbf{V}-\mathbf{V}'\|_2\|q_j(\Lambda')(\mathbf{V}')^T\mathbf{x}\|_2+\|q_j(\Lambda')(\mathbf{V}-\mathbf{V}')^T\mathbf{x}\|_2, \end{equation*} and so summing over $j$ yields \begin{equation*} \|(\mathcal{W}-\mathcal{W}')\mathbf{x}\|_{\ell^2(\mathbf{L}^2(G))}^2 \leq 3\left(\sum_{j=0}^{J+1}\left(\|(q_j(\Lambda)-q_j(\Lambda'))\mathbf{V}^T\mathbf{x}\|^2_2+\|\mathbf{V}-\mathbf{V}'\|^2_2\|q_j(\Lambda')(\mathbf{V}')^T\mathbf{x}\|^2_2+\|q_j(\Lambda')(\mathbf{V}-\mathbf{V}')^T\mathbf{x}\|^2_2 \right)\right). \end{equation*} For any sequence of diagonal matrices $\mathbf{B}_0,\ldots,\mathbf{B}_{J+1},$ one has that for any $\mathbf{y}\in\mathbf{L}^2(G)$ \begin{equation*} \sum_{j=0}^{J+1}\|\mathbf{B}_j\mathbf{y}\|^2_2=\left\|\left(\sum_{j=0}^{J+1}\mathbf{B}_j^2\right)^{1/2}\mathbf{y}\right\|_2^2. \end{equation*} Therefore, by \eqref{eqn: sum to 1}, \begin{equation*} \sum_{j=0}^{J+1}\|q_j(\Lambda')(\mathbf{V}')^T\mathbf{x}\|^2_2=\|(\mathbf{V}')^T\mathbf{x}\|_2^2=\|\mathbf{x}\|_2^2, \end{equation*} and \begin{equation*} \sum_{j=0}^{J+1}\|q_j(\Lambda')(\mathbf{V}-\mathbf{V}')^T\mathbf{x}\|^2_2=\|(\mathbf{V}-\mathbf{V}')^T\mathbf{x}\|^2_2\leq \|\mathbf{V}-\mathbf{V}'\|_2^2\|\mathbf{x}\|_2^2. \end{equation*} Now, since $\|\mathbf{x}\|_2=1$ and $\lambda_0=\lambda_0'=1,$ \begin{align*} \sum_{j=0}^{J+1}\|(q_j(\Lambda)-q_j(\Lambda'))\mathbf{V}^T\mathbf{x}\|_2^2&\leq \sup_{0\leq i \leq n-1} \sum_{j=0}^{J+1}|q_j(\lambda_i)-q_j(\lambda'_i)|^2\\ &=\sup_{1\leq i \leq n-1} \sum_{j=0}^{J+1}|q_j(\lambda_i)-q_j(\lambda'_i)|^2\\ &\leq\sup_{1\leq i \leq n-1}|\lambda_i-\lambda_i'|^2 \sum_{j=0}^{J+1}\sup_{0 \leq t\leq \lambda_1^*}|q'_j(t)|^2.
\end{align*} When $j=0,$ we have \begin{equation*} |q_0'(t)|=\left|\frac{d}{dt}\sqrt{1-t}\right|=\frac{1}{2}\frac{1}{\sqrt{1-t}}\leq C_{\lambda_1^*}\quad\text{for all } 0\leq t\leq \lambda_1^*. \end{equation*} Likewise, for $j=J+1,$ we have \begin{equation*} |q_{J+1}'(t)|=\left|\frac{d}{dt}t^{2^{J-1}}\right|\leq 2^{J-1}\quad\text{for all } 0\leq t\leq \lambda_1^*. \end{equation*} For $1\leq j\leq J,$ we may write $q_j(t)=q_1(u_j(t)),$ where $u_j(t)=t^{2^{j-1}},$ and use the fact that $|u_j(t)|\leq 1$ for all $0\leq t\leq 1$ to compute \begin{align*} |q_j'(t)|&=|q_1'(u_j(t))u_j'(t)|\\ &=\left|\frac{1-2u_j(t)}{2\sqrt{u_j(t)-u_j(t)^2}}2^{j-1}t^{2^{j-1}-1}\right|\\ &\leq 2^{j-1}\frac{t^{2^{j-1}}}{\sqrt{t-t^2}}\\ &\leq C_{\lambda_1^*}2^{j-1} \end{align*} for all $0\leq t\leq \lambda_1^*.$ Therefore, \begin{equation*} \sum_{j=0}^{J+1}\sup_{0 \leq t\leq \lambda_1^*}|q'_j(t)|^2 \leq C_{\lambda_1^*}\left(1+2^J+\sum_{k=1}^{J}2^k\right)\leq C_{\lambda_1^*}2^J. \end{equation*} \end{proof} Our next result provides stability bounds for $\mathcal{W}^{(2)}_J$ in the case where $\mathbf{M}=\mathbf{I}$ (i.e. when $\mathbf{K}=\mathbf{T}$). We note that while $\mathcal{W}^{(1)}$ has the advantage of being a tight frame, $\mathcal{W}^{(2)}$ has stronger stability guarantees, which in particular are independent of $J.$ Our proof, which is closely modeled after the proofs of Lemmas 5.1 and 5.2 in \cite{gama:diffScatGraphs2018}, is given in Appendix \ref{sec: The Proof of wavelet 2 stability}. Due to a small improvement in the derivation, our result appears in a slightly different form than the result stated there.
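The derivative estimates above can be complemented by a direct numerical check of the partition identity \eqref{eqn: sum to 1} used in the proof. The sketch below assumes the explicit formulas appearing there, namely $q_0(t)=\sqrt{1-t},$ $q_j(t)=\sqrt{t^{2^{j-1}}-t^{2^j}}$ for $1\leq j\leq J,$ and $q_{J+1}(t)=t^{2^{J-1}},$ and verifies that $\sum_{j=0}^{J+1}q_j(t)^2=1$ on $[0,1]$ (the sum telescopes):

```python
import numpy as np

J = 4
t = np.linspace(0.0, 1.0, 101)

q0 = np.sqrt(1 - t)                                   # q_0(t) = sqrt(1 - t)
# q_j(t) = sqrt(t^{2^{j-1}} - t^{2^j}); clamp tiny float negatives before sqrt
qj = [np.sqrt(np.maximum(t**(2**(j - 1)) - t**(2**j), 0.0)) for j in range(1, J + 1)]
qJ1 = t**(2**(J - 1))                                 # q_{J+1}(t) = t^{2^{J-1}}

total = q0**2 + sum(q**2 for q in qj) + qJ1**2        # telescoping sum
assert np.allclose(total, 1.0)
```

Indeed, $(1-t)+\sum_{j=1}^{J}\bigl(t^{2^{j-1}}-t^{2^j}\bigr)+t^{2^J}=1$ for every $t.$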
\begin{theorem}\label{thm: wavelet stability2} Suppose $G=(V,E,W)$ and $G'=(V',E',W')$ are two graphs such that $|V|=|V'|=n,$ and let $\lambda_1^*=\max\{\lambda_1,\lambda_1'\}.$ Let $\mathbf{M}=\mathbf{I}$ so that $\mathbf{K}=\mathbf{T},$ let $\mathcal{W}$ be the wavelets $\mathcal{W}^{(2)}$ constructed from $\mathbf{T}$ in Section \ref{sec: wavelets}, and let $\mathcal{W}'$ be the corresponding wavelets constructed from $\mathbf{T}'.$ Then \begin{equation*} \|\mathcal{W}- \mathcal{W}'\|_{\ell^2(\mathbf{L}^2(G))}^2 \leq C_{\lambda_1^*}\left(\|\mathbf{T}-\mathbf{T}'\|^2_2+ \|\mathbf{T}-\mathbf{T}'\|_2\right). \end{equation*} \end{theorem} Theorems \ref{thm: wavelet stability1} and \ref{thm: wavelet stability2} show that the wavelets $\mathcal{W}^{(1)}$ and $\mathcal{W}^{(2)}$ are stable on $\mathbf{L}^2(G)$ in the special case that $\mathbf{M}=\mathbf{I}.$ Our next theorem extends this analysis to general $\mathbf{M}.$ More generally, it can be applied to any situation where $\{r_i(\mathbf{T})\}_{i\in\mathcal{I}}$ and $\{r_i(\mathbf{T}')\}_{i\in\mathcal{I}}$ form frames on $\mathbf{L}^2(G)$ and $\mathbf{L}^2(G'),$ where $\mathcal{I}$ is some indexing set and each of the $r_i$ is either a polynomial or the square root of a polynomial. \begin{theorem}\label{thm: transferTtoP} Suppose $G=(V,E,W)$ and $G'=(V',E',W')$ are two graphs such that $|V|=|V'|=n,$ and let $\mathbf{M}$ and $\mathbf{M}'$ be invertible matrices. Let $\mathcal{I}$ be an indexing set, and for $i\in\mathcal{I},$ let $r_i(\cdot)$ be either a polynomial or the square root of a polynomial.
Suppose that $\mathcal{W}^{\mathbf{T}}=\{r_i(\mathbf{T})\}_{i\in\mathcal{I}}$ forms a frame analysis operator on $\mathbf{L}^2(G)$ and that $\mathcal{W}^{\mathbf{T}'} =\{r_i(\mathbf{T}')\}_{i\in\mathcal{I}}$ forms a frame analysis operator on $\mathbf{L}^2(G'),$ and assume $B\leq 1$ in \eqref{eqn: frameAB} for both $\mathcal{W}^{\mathbf{T}}$ and $\mathcal{W}^{\mathbf{T}'}.$ Let $\mathbf{K}=\mathbf{M}^{-1}\mathbf{T}\mathbf{M},$ and let $\mathcal{W}^{\mathbf{K}}$ and $\mathcal{W}^{\mathbf{K}'}$ be the frames defined by $\{r_i(\mathbf{K})\}_{i\in\mathcal{I}}$ and $\{r_i(\mathbf{K}')\}_{i\in\mathcal{I}}.$ Then, \begin{equation*} \left\|\mathcal{W}^{\mathbf{K}}-\mathcal{W}^{\mathbf{K}'}\right\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))}^2 \leq 6\left(\left\|\mathcal{W}^{\mathbf{T}}-\mathcal{W}^{\mathbf{T}'}\right\|_{\ell^2(\mathbf{L}^2(G))}^2+\kappa(G,G')^2\big(\kappa(G,G')+1\big)^2 \right). \end{equation*} \end{theorem} \begin{proof} Let $\|\mathbf{x}\|_\mathbf{M}=1,$ and let $\mathbf{y}=\mathbf{M}\mathbf{x}.$ Note that $\|\mathbf{y}\|_2=\|\mathbf{M}\mathbf{x}\|_2=\|\mathbf{x}\|_\mathbf{M}=1.$ By Lemma \ref{lem: polynomialproperties} and by \eqref{eqn: defsquareroot}, we have that for all $i\in\mathcal{I},$ \begin{equation*} r_i(\mathbf{K})=\mathbf{M}^{-1}r_i(\mathbf{T})\mathbf{M}\quad\text{and}\quad r_i(\mathbf{K}')=(\mathbf{M}')^{-1}r_i(\mathbf{T}')\mathbf{M}'.
\end{equation*} Therefore, \begin{align*} \|(r_i(\mathbf{K})-r_i(\mathbf{K}'))\mathbf{x}\|_\mathbf{M} &= \|\mathbf{M}\left(\mathbf{M}^{-1}r_i(\mathbf{T})\mathbf{M}-(\mathbf{M}')^{-1}r_i(\mathbf{T}')\mathbf{M}'\right)\mathbf{x}\|_2\\ &=\|r_i(\mathbf{T})\mathbf{M}\mathbf{x}-\mathbf{M}(\mathbf{M}')^{-1}r_i(\mathbf{T}')\mathbf{M}'\mathbf{M}^{-1}\mathbf{M}\mathbf{x}\|_2\\ &=\|r_i(\mathbf{T})\mathbf{y}-\mathbf{R}_2^{-1} r_i(\mathbf{T}')\mathbf{R}_2\mathbf{y}\|_2\\ &\leq \|r_i(\mathbf{T})\mathbf{y}-\mathbf{R}_2^{-1} r_i(\mathbf{T}')\mathbf{y}\|_2+\|\mathbf{R}_2^{-1} r_i(\mathbf{T}')\mathbf{y}-\mathbf{R}_2^{-1} r_i(\mathbf{T}')\mathbf{R}_2\mathbf{y}\|_2\\ &\leq\|(r_i(\mathbf{T})-r_i(\mathbf{T}'))\mathbf{y}\|_2+\|(\mathbf{I}-\mathbf{R}_2^{-1})r_i(\mathbf{T}')\mathbf{y}\|_2+\|\mathbf{R}^{-1}_2\|_2\|r_i(\mathbf{T}')(\mathbf{I}-\mathbf{R}_2)\mathbf{y}\|_2\\ &\leq\|(r_i(\mathbf{T})-r_i(\mathbf{T}'))\mathbf{y}\|_2+\kappa(G,G')\|r_i(\mathbf{T}')\mathbf{y}\|_2+R(G,G')\|r_i(\mathbf{T}')(\mathbf{I}-\mathbf{R}_2)\mathbf{y}\|_2.
\end{align*} Therefore, squaring both sides, summing over $i,$ and using the nonexpansiveness of $\mathcal{W}^{\mathbf{T}'}$ and the fact that $\|\mathbf{y}\|_2=1,$ we have \begin{align*} &\sum_{i\in\mathcal{I}}\|(r_i(\mathbf{K})-r_i(\mathbf{K}'))\mathbf{x}\|_\mathbf{M}^2\\ \leq& 3\left(\sum_{i\in\mathcal{I}}\|(r_i(\mathbf{T})-r_i(\mathbf{T}'))\mathbf{y}\|_2^2 + \kappa(G,G')^2\sum_{i\in\mathcal{I}}\|r_i(\mathbf{T}')\mathbf{y}\|^2_2 +R(G,G')^2\sum_{i\in\mathcal{I}}\|r_i(\mathbf{T}')(\mathbf{I}-\mathbf{R}_2)\mathbf{y}\|^2_2\right)\\ \leq& 3\left(\left\|\mathcal{W}^{\mathbf{T}}-\mathcal{W}^{\mathbf{T}'}\right\|_{\ell^2(\mathbf{L}^2(G))}^2+\kappa(G,G')^2+R(G,G')^2\|(\mathbf{I}-\mathbf{R}_2)\mathbf{y}\|_2^2 \right)\\ \leq& 3\left(\left\|\mathcal{W}^{\mathbf{T}}-\mathcal{W}^{\mathbf{T}'}\right\|_{\ell^2(\mathbf{L}^2(G))}^2+\kappa(G,G')^2+R(G,G')^2\kappa(G,G')^2 \right)\\ \leq& 6\left(\left\|\mathcal{W}^{\mathbf{T}}-\mathcal{W}^{\mathbf{T}'}\right\|_{\ell^2(\mathbf{L}^2(G))}^2+\kappa(G,G')^2\big(\kappa(G,G')+1\big)^2 \right), \end{align*} where the last inequality uses the fact that $R(G,G')\leq \kappa(G,G')+1.$ \end{proof} The following corollaries are immediate consequences of Theorem \ref{thm: transferTtoP} and of Theorems \ref{thm: wavelet stability1} and \ref{thm: wavelet stability2}.
\begin{corollary}\label{cor: stabilityK1} Suppose $G=(V,E,W)$ and $G'=(V',E',W')$ are two graphs such that $|V|=|V'|=n,$ let $\mathbf{M}$ and $\mathbf{M}'$ be invertible matrices, and let $\lambda_1^*=\max\{\lambda_1,\lambda_1'\}.$ Let $J\geq 0,$ let $\mathcal{W}$ be the wavelet transform $\mathcal{W}^{(1)}_J$ constructed from $\mathbf{K}$ in Section \ref{sec: wavelets}, and let $\mathcal{W}'$ be the corresponding wavelet transform constructed from $\mathbf{K}'.$ Then, \begin{align*} \|\mathcal{W}- \mathcal{W}'\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))}^2 &\leq C_{\lambda_1^*}\left(2^J\sup_{1\leq i\leq n-1}|\lambda_i-\lambda_i'|^2+\|\mathbf{V}-\mathbf{V}'\|_2^2 +\kappa(G,G')^2(\kappa(G,G')+1)^2\right). \end{align*} \end{corollary} \begin{corollary}\label{cor: stabilityK2} Suppose $G=(V,E,W)$ and $G'=(V',E',W')$ are two graphs such that $|V|=|V'|=n,$ let $\mathbf{M}$ and $\mathbf{M}'$ be invertible matrices, and let $\lambda_1^*=\max\{\lambda_1,\lambda_1'\}.$ Let $J\geq 0,$ let $\mathcal{W}$ be the wavelet transform $\mathcal{W}^{(2)}_J$ constructed from $\mathbf{K}$ in Section \ref{sec: wavelets}, and let $\mathcal{W}'$ be the corresponding wavelet transform constructed from $\mathbf{K}'.$ Then, \begin{align*} \|\mathcal{W}- \mathcal{W}'\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))}^2 &\leq C_{\lambda_1^*}\left(\|\mathbf{T}-\mathbf{T}'\|^2_2+ \|\mathbf{T}-\mathbf{T}'\|_2+\kappa(G,G')^2(\kappa(G,G')+1)^2\right). \end{align*} \end{corollary} One might also wish to replace Corollaries \ref{cor: stabilityK1} and \ref{cor: stabilityK2} with inequalities written in terms of $\|\mathbf{K}-\mathbf{K}'\|_\mathbf{M}$ rather than $\|\mathbf{T}-\mathbf{T}'\|_2.$ This can be done using the following proposition.
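As a numerical sanity check of the proposition below, the following sketch (Python/NumPy) draws diagonal $\mathbf{M}$ and $\mathbf{M}'$ and symmetric $\mathbf{T}$ and $\mathbf{T}'$ normalized so that $\|\mathbf{T}\|_2,\|\mathbf{T}'\|_2\leq 1,$ mirroring the facts used in its proof, and verifies the claimed inequality on a random instance:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
inv = np.linalg.inv

def sym_unit(A):                              # symmetric matrix with ||A||_2 <= 1
    A = (A + A.T) / 2
    return A / max(1.0, np.linalg.norm(A, 2))

T = sym_unit(rng.standard_normal((n, n)))
Tp = sym_unit(T + 0.05 * rng.standard_normal((n, n)))
M = np.diag(rng.uniform(1.0, 2.0, n))         # diagonal M, M' (e.g. D^{1/2})
Mp = np.diag(rng.uniform(1.0, 2.0, n))
K, Kp = inv(M) @ T @ M, inv(Mp) @ Tp @ Mp     # K = M^{-1} T M on each graph

def m_norm(A):                                # ||A||_M = ||M A M^{-1}||_2
    return np.linalg.norm(M @ A @ inv(M), 2)

R1, R2 = inv(M) @ Mp, Mp @ inv(M)
Rs = (R1, R2, inv(R1), inv(R2))
kappa = max(np.linalg.norm(np.eye(n) - A, 2) for A in Rs)
R = max(np.linalg.norm(A, 2) for A in Rs)

lhs = np.linalg.norm(T - Tp, 2)
rhs = kappa * (1 + R**3) + R * m_norm(K - Kp)
assert lhs <= rhs
```

The diagonal choice of $\mathbf{M}$ and $\mathbf{M}'$ ensures that the $\|\cdot\|_\mathbf{M}$-norms of the diagonal factors coincide with their spectral norms, as in the $\mathbf{M}=\mathbf{D}^{\pm1/2}$ examples.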
Recall that we think of the two Hilbert spaces $\mathbf{L}^2(G,\mathbf{M})$ and $\mathbf{L}^2(G,\mathbf{M}')$ as being well-aligned if $\kappa(G,G') \approx0$ and $R(G,G')\approx1.$ In this case, the right-hand side of \eqref{eqn: TdisttoPdist} is approximately $\|\mathbf{K}-\mathbf{K}'\|_\mathbf{M}.$ \begin{proposition} \label{prop: Tdist to Pdist} \begin{equation}\label{eqn: TdisttoPdist} \|\mathbf{T}-\mathbf{T}'\|_2\leq\kappa(G,G')\left(1+R(G,G')^{3}\right) + R(G,G')\|\mathbf{K}-\mathbf{K}'\|_\mathbf{M}. \end{equation} \end{proposition} \begin{proof} Let $\|\mathbf{x}\|_2=1.$ Then, since $\mathbf{T}=\mathbf{M}\mathbf{K}\mathbf{M}^{-1},$ \begin{align} \|(\mathbf{T}-\mathbf{T}')\mathbf{x}\|_2 &= \|\mathbf{M}\mathbf{K}\mathbf{M}^{-1} \mathbf{x} - \mathbf{M}'\mathbf{K}'(\mathbf{M}')^{-1} \mathbf{x}\|_2\nonumber\\ &= \|\mathbf{M}\mathbf{K}\mathbf{M}^{-1} \mathbf{x} - (\mathbf{M}\bM^{-1})\mathbf{M}'\mathbf{K}'(\mathbf{M}')^{-1}(\mathbf{M}\bM^{-1}) \mathbf{x}\|_2\nonumber\\ &= \|\mathbf{M}(\mathbf{K} - \mathbf{R}_1^{-1}\mathbf{K}'\mathbf{R}_1)\mathbf{M}^{-1} \mathbf{x}\|_2\nonumber\\ &= \|(\mathbf{K} - \mathbf{R}_1^{-1}\mathbf{K}'\mathbf{R}_1)\mathbf{M}^{-1} \mathbf{x}\|_\mathbf{M}\nonumber\\ &\leq \|\mathbf{K} - \mathbf{R}_1^{-1}\mathbf{K}'\mathbf{R}_1\|_\mathbf{M}\|\mathbf{M}^{-1} \mathbf{x}\|_\mathbf{M}\nonumber\\ &= \|\mathbf{K} - \mathbf{R}_1^{-1}\mathbf{K}'\mathbf{R}_1\|_\mathbf{M}, \label{eqn: middlebound} \end{align} since $\|\mathbf{M}^{-1} \mathbf{x}\|_\mathbf{M}=\|\mathbf{x}\|_2=1.$ By the triangle inequality, \begin{align*} \|\mathbf{K} - \mathbf{R}_1^{-1}\mathbf{K}'\mathbf{R}_1\|_\mathbf{M} & \leq \|\mathbf{K} - \mathbf{R}_1^{-1}\mathbf{K}\|_\mathbf{M} + \|\mathbf{R}_1^{-1}\mathbf{K}- \mathbf{R}_1^{-1}\mathbf{K}'\mathbf{R}_1\|_\mathbf{M}\\ &\leq \|\mathbf{K}\|_\mathbf{M}\|\mathbf{I}-\mathbf{R}_1^{-1}\|_\mathbf{M} + \|\mathbf{R}_1^{-1}\|_\mathbf{M}\|\mathbf{K}-\mathbf{K}'\mathbf{R}_1\|_\mathbf{M}\\ &\leq
\|\mathbf{K}\|_\mathbf{M}\|\mathbf{I}-\mathbf{R}_1^{-1}\|_\mathbf{M} + \|\mathbf{R}_1^{-1}\|_\mathbf{M}\|\mathbf{K}-\mathbf{K}'\|_\mathbf{M}+\|\mathbf{R}_1^{-1}\|_\mathbf{M}\|\mathbf{K}'(\mathbf{I}-\mathbf{R}_1)\|_\mathbf{M}\\ &\leq \|\mathbf{K}\|_\mathbf{M}\|\mathbf{I}-\mathbf{R}_1^{-1}\|_\mathbf{M} + \|\mathbf{R}_1^{-1}\|_\mathbf{M}\|\mathbf{K}-\mathbf{K}'\|_\mathbf{M}+\|\mathbf{R}_1^{-1}\|_\mathbf{M}\|\mathbf{K}'\|_\mathbf{M}\|\mathbf{I}-\mathbf{R}_1\|_\mathbf{M}\\ &\leq \kappa(G,G') + R(G,G')\|\mathbf{K}-\mathbf{K}'\|_\mathbf{M}+R(G,G')R(G,G')^2\kappa(G,G')\\ &=\kappa(G,G')\left(1+R(G,G')^{3}\right) + R(G,G')\|\mathbf{K}-\mathbf{K}'\|_\mathbf{M}, \end{align*} where we used the facts that $\|\mathbf{I}-\mathbf{R}_1^{\pm1}\|_\mathbf{M}\leq \kappa(G,G'),$ $\|\mathbf{R}_1^{\pm1}\|_\mathbf{M}\leq R(G,G'),$ $\|\mathbf{K}\|_\mathbf{M}=1,$ and $\|\mathbf{K}'\|_\mathbf{M}\leq \|\mathbf{R}_1\|_2\|\mathbf{R}_1^{-1}\|_2\leq R(G,G')^2.$ \end{proof} Our next theorem shows that if $G$ and $G'$ are well-aligned, then the upper frame bound for $\mathcal{W}$ can be used to produce an upper frame bound for $\mathcal{W}'$ on $\mathbf{L}^2(G,\mathbf{M}).$ This result will play a key role in proving the stability of the scattering transform. \begin{theorem}\label{thm: Cstability} Suppose $G=(V,E,W)$ and $G'=(V',E',W')$ are two graphs such that $|V|=|V'|=n,$ and let $\mathbf{M}$ and $\mathbf{M}'$ be invertible $n\times n$ matrices.
Let $\mathcal{J}\coloneqq\{0,\ldots,J\}$ for some $J\geq0,$ let \begin{equation*} \mathcal{W}=\{\Psi_j,\Phi_J\}_{j\in\mathcal{J}} \end{equation*} be either of the wavelet transforms, $\mathcal{W}^{(1)}$ or $\mathcal{W}^{(2)},$ constructed in Section \ref{sec: wavelets}, and let $\mathcal{W}'$ be the corresponding wavelet transform constructed from $\mathbf{K}'.$ Then $\mathcal{W}'$ is a bounded operator on $\mathbf{L}^2(G,\mathbf{M})$ and, for all $\mathbf{x}\in\mathbf{L}^2(G,\mathbf{M}),$ \begin{equation*} \sum_{j\in\mathcal{J}}\|\Psi'_j\mathbf{x}\|_\mathbf{M}^2+\|\Phi'_J\mathbf{x}\|_\mathbf{M}^2\leq R(G,G')^4\|\mathbf{x}\|^2_\mathbf{M}. \end{equation*} \end{theorem} \begin{proof} By Lemma \ref{lem: polynomialproperties}, we have that if $r$ is either a polynomial or the square root of a polynomial, then \begin{equation*} r(\mathbf{K}')=(\mathbf{M}')^{-1}r(\mathbf{T}')\mathbf{M}'. \end{equation*} Therefore, again applying Lemma \ref{lem: polynomialproperties}, we have \begin{align*} \|r(\mathbf{K}')\mathbf{x}\|_\mathbf{M}&=\|\mathbf{M}(\mathbf{M}')^{-1}r(\mathbf{T}')\mathbf{M}'\mathbf{M}^{-1}\mathbf{M}\mathbf{x}\|_2 \\ &=\|\mathbf{R}_2^{-1} r(\mathbf{T}')\mathbf{R}_2\mathbf{M}\mathbf{x}\|_2\\ &\leq \|\mathbf{R}_2^{-1}\|_2\|r(\mathbf{T}')\|_2\|\mathbf{R}_2\|_2\|\mathbf{M}\mathbf{x}\|_2\\ &\leq R(G,G')^2\|r(\mathbf{T}')\|_2\|\mathbf{x}\|_\mathbf{M}.
\end{align*} Since $\Phi'_J$ and each of the $\Psi'_j$ is either a polynomial in $\mathbf{K}'$ or the square root of a polynomial in $\mathbf{K}',$ the proof follows by observing that \begin{align*} \sum_{j\in\mathcal{J}}\|\Psi'_j\mathbf{x}\|_\mathbf{M}^2+\|\Phi'_J\mathbf{x}\|_\mathbf{M}^2&\le R(G,G')^4 \left(\sum_{j\in\mathcal{J}}\|\Psi_j\mathbf{x}\|_\mathbf{M}^2+\|\Phi_J\mathbf{x}\|_\mathbf{M}^2\right)\\ &\leq R(G,G')^4\|\mathbf{x}\|_\mathbf{M}^2, \end{align*} with the last inequality following from the fact that $B\leq 1$ in \eqref{eqn: frameAB} by Propositions \ref{prop: waveletisometries} and \ref{prop: nonexpansivewaveletframes}. \end{proof} \subsection{Stability of the Scattering Transform}\label{sec: scattering stability} In this section, we will prove a stability result for the scattering transform. We will state and prove our result in a high degree of generality, both to emphasize that the stability of the scattering transform is a consequence of the stability of the underlying frame and so that our result can be applied to other graph wavelet constructions. Towards this end, we will assume that $G=(V,E,W)$ and $G'=(V',E',W')$ are two weighted graphs such that $|V|=|V'|=n,$ let $\mathbf{M}$ and $\mathbf{M}'$ be $n\times n$ invertible matrices, and assume that $\mathcal{W}=\{\Psi_j,\Phi\}_{j\in\mathcal{J}}$ and $\mathcal{W}'=\{\Psi'_j,\Phi'\}_{j\in\mathcal{J}}$ are frames on $\mathbf{L}^2(G,\mathbf{M})$ and $\mathbf{L}^2(G',\mathbf{M}')$ such that $B\leq 1$ in \eqref{eqn: frameAB}. If $\Pi$ is a permutation, we will let $\mathcal{W}''\coloneqq\Pi\mathcal{W}'\Pi^T= \{\Pi\Psi'_j\Pi^T,\Pi\Phi'\Pi^T\}_{j\in\mathcal{J}}$ denote the corresponding permuted wavelet frame on $G''\coloneqq\Pi G'$. Our stability bound for the scattering transform will depend on choosing the optimal permutation $\Pi$ such that $\mathcal{W}''=\Pi\mathcal{W}'\Pi^T$ is well-aligned with $\mathcal{W}$ and has an upper frame bound on $\mathbf{L}^2(G,\mathbf{M})$ that is not too large.
For $\Pi\in S_n,$ we let \begin{equation*} \mathcal{A}_\Pi(G,G')\coloneqq\sup_{\|\mathbf{x}\|_{\mathbf{M}}=1}\|\mathcal{W}\mathbf{x}-\Pi\mathcal{W}'\Pi^T\mathbf{x}\|^2_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))} \end{equation*} and \begin{equation*} \mathcal{C}_\Pi(G,G')\coloneqq\sup_{\|\mathbf{x}\|_{\mathbf{M}}=1}\|\Pi\mathcal{W}'\Pi^T\mathbf{x}\|^2_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))}. \end{equation*} We will also let $\mathcal{A}(G,G')=\mathcal{A}_{\mathbf{I}}(G,G')$ and $\mathcal{C}(G,G')=\mathcal{C}_{\mathbf{I}}(G,G')$ when $\Pi$ is the identity. Theorem \ref{thm: scattering stability no perm} provides stability guarantees for the windowed and non-windowed scattering transforms with bounds that are functions of $\mathcal{A}(G,G')$ and $\mathcal{C}(G,G').$ Corollary \ref{cor: permutationstability} uses the permutation invariance results of Theorems \ref{thm: permuationinvariancewindowed} and \ref{thm: permuatation invariance} to extend these results by infimizing the same functions over all permutations. Since the non-windowed scattering transform is always fully permutation invariant, this corollary will always apply to it. By Theorem \ref{thm: permuatation invariance}, it will apply to the windowed scattering transform when $\mathbf{M}=\mathbf{D}^{1/2}$ (or any other case in which the windowed scattering transform has provable invariance guarantees).
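The quantities $\mathcal{A}(G,G')$ and $\mathcal{C}(G,G')$ are straightforward to evaluate for small examples. The sketch below (Python/NumPy; the frame elements are arbitrary placeholder matrices rather than the wavelets of Section \ref{sec: wavelets}) computes both suprema in the case $\Pi=\mathbf{I}$ by writing each as the largest eigenvalue of $\sum_j \mathbf{B}_j^T\mathbf{B}_j,$ where $\mathbf{B}_j=\mathbf{M}\mathbf{A}_j\mathbf{M}^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
M = np.diag(rng.uniform(1.0, 2.0, n))
frames = [0.3 * rng.standard_normal((n, n)) for _ in range(3)]       # toy {Psi_j}
frames_p = [F + 0.01 * rng.standard_normal((n, n)) for F in frames]  # perturbed {Psi'_j}

def sup_sq(ops):
    # sup over ||x||_M = 1 of sum_j ||A_j x||_M^2, i.e. the top eigenvalue
    # of sum_j B_j^T B_j with B_j = M A_j M^{-1}.
    Minv = np.linalg.inv(M)
    S = sum((M @ A @ Minv).T @ (M @ A @ Minv) for A in ops)
    return np.linalg.eigvalsh(S)[-1]

A_id = sup_sq([F - Fp for F, Fp in zip(frames, frames_p)])  # A(G, G') with Pi = I
C_id = sup_sq(frames_p)                                     # C(G, G') with Pi = I
assert 0 <= A_id < C_id
```

For a small perturbation of the frame, $\mathcal{A}(G,G')$ is close to zero while $\mathcal{C}(G,G')$ remains on the order of the upper frame bound, which is the regime in which the bounds below are informative.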
These results imply, by Theorems \ref{thm: wavelet stability1}, \ref{thm: wavelet stability2}, \ref{thm: transferTtoP}, and \ref{thm: Cstability}, that the scattering transforms constructed from $\mathcal{W}^{(1)}_J$ or $\mathcal{W}^{(2)}_J$ are stable in the sense that if the spectral properties of $G$ are similar to the spectral properties of $G'$ and the $\|\cdot\|_{\mathbf{M}}$ and $\|\cdot\|_{\mathbf{M}'}$ norms are well-aligned, then the scattering transforms $\mathbf{S}$ and $\mathbf{S}'$ will produce similar representations of an input signal $\mathbf{x}.$ Many of the ideas in the proof of Theorem \ref{thm: scattering stability no perm} are similar to those used to prove Theorem 5.3 in \cite{gama:diffScatGraphs2018}. The primary difference is Lemma \ref{lem: Ustability}, which is needed because $\mathcal{W}'$ is a non-expansive frame on $\mathbf{L}^2(G',\mathbf{M}'),$ but not in general a non-expansive frame on $\mathbf{L}^2(G,\mathbf{M}).$ \begin{theorem}\label{thm: scattering stability no perm} Let $G=(V,E,W)$ and $G'=(V',E',W')$ be two graphs such that $|V|=|V'|=n,$ let $\mathbf{M}$ and $\mathbf{M}'$ be invertible $n\times n$ matrices, and let $\mathcal{J}$ be an indexing set.
Let $\mathcal{W}=\{\Psi_j,\Phi\}_{j\in\mathcal{J}}$ and $\mathcal{W}'=\{\Psi'_j,\Phi'\}_{j\in\mathcal{J}}$ be frames on $\mathbf{L}^2(G,\mathbf{M})$ and $\mathbf{L}^2(G',\mathbf{M}')$ such that $B\leq 1$ in \eqref{eqn: frameAB}, and let $\bm{\mu}$ and $\bm{\mu}'$ be weighting vectors on $\mathbf{L}^2(G,\mathbf{M})$ and $\mathbf{L}^2(G',\mathbf{M}').$ Let $\mathbf{S}_{\ell}^{(L)},$ $\left(\mathbf{S}_{\ell}^{(L)}\right)',$ $\overline{\mathbf{S}}_{\ell}^{(L)},$ and $\left(\overline{\mathbf{S}}_{\ell}^{(L)}\right)'$ be the partial windowed and non-windowed scattering transforms on $G$ and $G'$ with coefficients from layers $\ell\leq m \leq L.$ Then, for all $\mathbf{x}\in\mathbf{L}^2(G,\mathbf{M}),$ \begin{equation}\label{eqn: scatteringstabilitynopermwindow} \left\|\mathbf{S}_\ell^{(L)}\mathbf{x}-\left(\mathbf{S}_\ell^{(L)}\right)'\mathbf{x}\right\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))}\leq \sqrt{2\mathcal{A}(G,G')}\left(\sum_{m=\ell}^L\sum_{k=0}^m\mathcal{C}(G,G')^{k/2}\right)\|\mathbf{x}\|_\mathbf{M}, \end{equation} and \begin{equation}\label{eqn: scatteringstabilitynopermnowindow} \left\|\overline{\mathbf{S}}_\ell^{(L)}\mathbf{x}-\left(\overline{\mathbf{S}}_\ell^{(L)}\right)'\mathbf{x}\right\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))}\leq \sqrt{2}\left((L-\ell)\|\bm{\mu}-\bm{\mu}'\|_\mathbf{M} +\|\bm{\mu}'\|_\mathbf{M} \sqrt{\mathcal{A}(G,G')}\cdot \sum_{m=\ell}^L\sum_{k=0}^{m-1}\mathcal{C}(G,G')^{k/2}\right)\|\mathbf{x}\|_\mathbf{M}.
\end{equation} \end{theorem} \begin{corollary}\label{cor: permutationstability} Under the assumptions of Theorem \ref{thm: scattering stability no perm}, the non-windowed scattering transform satisfies \begin{align}\label{eqn: scatteringstabilitypermnowindow} &\left\|\overline{\mathbf{S}}_\ell^{(L)}\mathbf{x}-\left(\overline{\mathbf{S}}_\ell^{(L)}\right)'\mathbf{x}\right\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))}\nonumber\\ \leq& \sqrt{2}\inf_{\Pi\in S_n} \left((L-\ell)\|\bm{\mu}-\bm{\mu}'\|_\mathbf{M} +\|\bm{\mu}'\|_\mathbf{M} \sqrt{\mathcal{A}_\Pi(G,G')}\cdot \sum_{m=\ell}^L\sum_{k=0}^{m-1}\mathcal{C}_\Pi(G,G')^{k/2}\right)\|\mathbf{x}\|_\mathbf{M}. \end{align} Moreover, if we further assume that the windowed scattering transform $\left(\mathbf{S}_\ell^{(L)}\right)'$ is permutation invariant up to a factor of $\mathcal{B}$ in the sense that for all $\Pi\in S_n$ and for all $\mathbf{x}\in\mathbf{L}^2(G,\mathbf{M}),$ \begin{equation}\label{eqn: permutation assumption} \left\|\left(\mathbf{S}_\ell^{(L)}\right)''\Pi\mathbf{x}-\left(\mathbf{S}_\ell^{(L)}\right)'\mathbf{x}\right\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))}\leq \mathcal{B}\|\mathbf{x}\|_{\mathbf{M}}, \end{equation} where $\left(\mathbf{S}_\ell^{(L)}\right)''$ is the partial windowed scattering transform on $G''=\Pi G',$ then \begin{equation}\label{eqn: scatteringstabilitypermwindow} \left\|\mathbf{S}_\ell^{(L)}\mathbf{x}-\left(\mathbf{S}_\ell^{(L)}\right)'\mathbf{x}\right\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))}\leq \left(\mathcal{B} + \inf_{\Pi\in S_n}\sqrt{2\mathcal{A}_{\Pi}(G,G')}\sum_{m=\ell}^L\sum_{k=0}^m\mathcal{C}_{\Pi}(G,G')^{k/2}\right)\|\mathbf{x}\|_\mathbf{M}.
\end{equation} \end{corollary} \begin{proof}[The Proof of Theorem \ref{thm: scattering stability no perm}] Let $\mathcal{A}\coloneqq\mathcal{A}(G,G')$ and $\mathcal{C}\coloneqq\mathcal{C}(G,G').$ By the triangle inequality, \begin{align*} \left\|\mathbf{S}_{\ell}^{(L)}\mathbf{x}-\left(\mathbf{S}_{\ell}^{(L)}\right)'\mathbf{x}\right\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))} &=\left(\sum_{m=\ell}^L\sum_{\mathbf{j}\in\mathcal{J}^m}\left\|\mathbf{S}[\mathbf{j}]\mathbf{x}-\mathbf{S}'[\mathbf{j}]\mathbf{x}\right\|^2_{\mathbf{M}}\right)^{1/2} \\ &\leq \sum_{m=\ell}^L\left(\sum_{\mathbf{j}\in\mathcal{J}^m}\left\|\mathbf{S}[\mathbf{j}]\mathbf{x}-\mathbf{S}'[\mathbf{j}]\mathbf{x}\right\|^2_{\mathbf{M}}\right)^{1/2}. \end{align*} Therefore, to prove \eqref{eqn: scatteringstabilitynopermwindow} it suffices to show \begin{equation}\label{eqn: korder} \sum_{\mathbf{j}\in\mathcal{J}^m}\left\|\mathbf{S}[\mathbf{j}]\mathbf{x}-\mathbf{S}'[\mathbf{j}]\mathbf{x}\right\|^2_{\mathbf{M}}\leq 2\mathcal{A}\cdot\left(\sum_{k=0}^m\mathcal{C}^{k/2}\right)^2\|\mathbf{x}\|_\mathbf{M}^2 \end{equation} for all $0\leq m \leq L.$ Similarly, to prove \eqref{eqn: scatteringstabilitynopermnowindow}, it suffices to show \begin{equation}\label{eqn: kordernowindow} \sum_{\mathbf{j}\in\mathcal{J}^m}\left\|\overline{\mathbf{S}}[\mathbf{j}]\mathbf{x}-\overline{\mathbf{S}}'[\mathbf{j}]\mathbf{x}\right\|^2_{\mathbf{M}}\leq 2\|\bm{\mu}-\bm{\mu}'\|^2_\mathbf{M} \|\mathbf{x}\|_\mathbf{M}^2 +2\|\bm{\mu}'\|^2_\mathbf{M} \mathcal{A}\cdot \left(\sum_{k=0}^{m-1}\mathcal{C}^{k/2}\right)^{2}\|\mathbf{x}\|_\mathbf{M}^2 \end{equation} for all $0\leq m \leq L,$ and then use the inequality $\sqrt{a^2+b^2}\leq |a|+|b|.$ Since the zeroth-order windowed scattering coefficient of $\mathbf{x}$ is given by \begin{equation*} \mathbf{S}[\mathbf{j}_{e}]\mathbf{x}=\Phi\mathbf{x}, \end{equation*} where $\mathbf{j}_{e}$ is the empty index, we see that by the definition of $\mathcal{A}$ we have \begin{equation*}
\sum_{\mathbf{j}\in\mathcal{J}^0}\|\mathbf{S}[\mathbf{j}]\mathbf{x}-\mathbf{S}'[\mathbf{j}]\mathbf{x}\|^2_{\mathbf{M}} =\|\Phi\mathbf{x}-\Phi'\mathbf{x}\|^2_{\mathbf{M}} \leq\|\mathcal{W}\mathbf{x}-\mathcal{W}'\mathbf{x}\|^2_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))}\leq \mathcal{A}\|\mathbf{x}\|_\mathbf{M}^2. \end{equation*} Therefore, \eqref{eqn: korder} holds when $m=0.$ Similarly, since $\overline{\mathbf{S}}[\mathbf{j_e}]\mathbf{x}=\langle\bm{\mu},\mathbf{x}\rangle_\mathbf{M},$ we see that \eqref{eqn: kordernowindow} holds when $m=0.$ The case where $1\leq m\leq L$ relies on the following two lemmas. They iteratively apply the assumption that $B\leq 1$ in \eqref{eqn: frameAB} and use the definitions of $\mathcal{A}$ and $\mathcal{C}$ to bound $ \{\mathbf{U}[\mathbf{j}]\mathbf{x}\}_{\mathbf{j}\in\mathcal{J}^m}$ and $\left(\sum_{\mathbf{j}\in\mathcal{J}^m}\|\mathbf{U}[\mathbf{j}]\mathbf{x}-\mathbf{U}'[\mathbf{j}]\mathbf{x}\|^2_\mathbf{M}\right)^{1/2}.$ Full details are provided in Appendix \ref{sec: the proof of lemmas for scattering stability}. \begin{lemma}\label{lem: nonexpansiveU} For all $m\geq 1,$ \begin{equation*} \sum_{\mathbf{j}\in\mathcal{J}^m}\|\mathbf{U}[\mathbf{j}]\mathbf{x}\|^2_{\mathbf{M}} \leq \|\mathbf{x}\|^2_\mathbf{M}. 
\end{equation*} \end{lemma} \begin{lemma}\label{lem: Ustability}For all $m\geq1,$ \begin{equation*} \sum_{\mathbf{j}\in\mathcal{J}^m}\|\mathbf{U}[\mathbf{j}]\mathbf{x}-\mathbf{U}'[\mathbf{j}]\mathbf{x}\|^2_{\mathbf{M}} \leq \mathcal{A} \left(\sum_{k=0}^{m-1}\mathcal{C}^{k/2}\right)^2\|\mathbf{x}\|^2_\mathbf{M}. \end{equation*} \end{lemma} For $\mathbf{j}\in\mathcal{J}^m,$ the triangle inequality implies that \begin{align*} \|\mathbf{S}[\mathbf{j}]\mathbf{x}-\mathbf{S}'[\mathbf{j}]\mathbf{x}\|_\mathbf{M}&= \|\Phi M\Psi_{j_m}\ldots M\Psi_{j_1}\mathbf{x}-\Phi' M\Psi'_{j_m}\ldots M\Psi'_{j_1}\mathbf{x}\|_\mathbf{M}\\ &\leq \|(\Phi-\Phi')M\Psi_{j_m}\ldots M\Psi_{j_1}\mathbf{x}\|_\mathbf{M} + \|\Phi'(M\Psi_{j_m}\ldots M\Psi_{j_1}-M\Psi'_{j_m}\ldots M\Psi'_{j_1})\mathbf{x}\|_\mathbf{M}\\ &\leq \|\Phi-\Phi'\|_\mathbf{M}\|M\Psi_{j_m}\ldots M\Psi_{j_1}\mathbf{x}\|_\mathbf{M} + \|\Phi'\|_\mathbf{M}\|M\Psi_{j_m}\ldots M\Psi_{j_1}\mathbf{x}-M\Psi'_{j_m}\ldots M\Psi'_{j_1}\mathbf{x}\|_\mathbf{M}. \end{align*} Therefore, by Lemmas \ref{lem: nonexpansiveU} and \ref{lem: Ustability}, \begin{align*} \sum_{\mathbf{j}\in\mathcal{J}^m}\|\mathbf{S}[\mathbf{j}]\mathbf{x}-\mathbf{S}'[\mathbf{j}]\mathbf{x}\|^2_{\mathbf{M}}&\leq 2\|\Phi-\Phi'\|_\mathbf{M}^2\sum_{\mathbf{j}\in\mathcal{J}^m}\|M\Psi_{j_m}\ldots M\Psi_{j_1}\mathbf{x}\|_\mathbf{M}^2\\ &\quad\quad\quad+2\|\Phi'\|_\mathbf{M}^2\sum_{\mathbf{j}\in\mathcal{J}^m} \|M\Psi_{j_m}\ldots M\Psi_{j_1}\mathbf{x}-M\Psi'_{j_m}\ldots M\Psi'_{j_1}\mathbf{x}\|_\mathbf{M}^2\\ &\leq 2\mathcal{A}\|\mathbf{x}\|_\mathbf{M}^2+ 2\mathcal{C}\left(\mathcal{A}^{1/2}\cdot \sum_{k=0}^{m-1}\mathcal{C}^{k/2}\|\mathbf{x}\|_\mathbf{M}\right)^2\\ &\leq 2\mathcal{A}\cdot\left(\sum_{k=0}^m\mathcal{C}^{k/2}\right)^2\|\mathbf{x}\|_\mathbf{M}^2, \end{align*} which completes the proof of \eqref{eqn: korder} and therefore of \eqref{eqn: scatteringstabilitynopermwindow}.
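As a sanity check, Lemma \ref{lem: nonexpansiveU} can also be verified numerically: any filter bank rescaled so that its upper frame bound equals one produces path sums that are non-expansive under the modulus nonlinearity. The following Python sketch (taking $\mathbf{M}=\mathbf{I}$ and randomly generated filters, purely for illustration) confirms this on a random signal.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, num_filters, depth = 6, 3, 2

# random filters, rescaled so that sum_j ||Psi_j x||^2 <= ||x||^2 (upper frame bound 1)
filters = [rng.standard_normal((n, n)) for _ in range(num_filters)]
Q = sum(F.T @ F for F in filters)
scale = 1.0 / np.sqrt(np.linalg.eigvalsh(Q)[-1])
filters = [scale * F for F in filters]

def U(path, x):
    # U[j]x applies one filter followed by the pointwise modulus per layer
    for j in path:
        x = np.abs(filters[j] @ x)
    return x

x = rng.standard_normal(n)
energy = np.linalg.norm(x) ** 2
total = sum(np.linalg.norm(U(p, x)) ** 2
            for p in product(range(num_filters), repeat=depth))
```

Since the modulus preserves norms, iterating the frame inequality layer by layer gives $\texttt{total} \leq \texttt{energy}$, exactly as in the lemma.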
Similarly, by the Cauchy-Schwarz inequality, \begin{align*} |\overline{\mathbf{S}}[\mathbf{j}]\mathbf{x}-\overline{\mathbf{S}}'[\mathbf{j}]\mathbf{x}|&= |\langle\bm{\mu}, M\Psi_{j_m}\ldots M\Psi_{j_1}\mathbf{x}\rangle_\mathbf{M}-\langle\bm{\mu}', M\Psi'_{j_m}\ldots M\Psi'_{j_1}\mathbf{x}\rangle_\mathbf{M}|\\ &\leq |\langle\bm{\mu}-\bm{\mu}', M\Psi_{j_m}\ldots M\Psi_{j_1}\mathbf{x}\rangle_\mathbf{M}| + |\langle\bm{\mu}',(M\Psi_{j_m}\ldots M\Psi_{j_1}-M\Psi'_{j_m}\ldots M\Psi'_{j_1})\mathbf{x}\rangle_\mathbf{M}|\\ &\leq \|\bm{\mu}-\bm{\mu}'\|_\mathbf{M}\|M\Psi_{j_m}\ldots M\Psi_{j_1}\mathbf{x}\|_\mathbf{M} + \|\bm{\mu}'\|_\mathbf{M}\|M\Psi_{j_m}\ldots M\Psi_{j_1}\mathbf{x}-M\Psi'_{j_m}\ldots M\Psi'_{j_1}\mathbf{x}\|_\mathbf{M}. \end{align*} Squaring both sides and summing over $\mathbf{j}$ implies \eqref{eqn: kordernowindow} and therefore \eqref{eqn: scatteringstabilitynopermnowindow}. \end{proof} \begin{proof}[The Proof of Corollary \ref{cor: permutationstability}] Since $S_n$ is finite, we may choose $\Pi_0\in S_n$ such that \begin{equation*} \sqrt{2\mathcal{A}_{\Pi_0}(G,G')}\sum_{m=\ell}^L\sum_{k=0}^m\mathcal{C}_{\Pi_0}(G,G')^{k/2}= \inf_{\Pi\in S_n}\sqrt{2\mathcal{A}_{\Pi}(G,G')}\sum_{m=\ell}^L\sum_{k=0}^m\mathcal{C}_{\Pi}(G,G')^{k/2}.
\end{equation*} Let $G''=\Pi_0 G',$ and let $\left(\mathbf{S}_\ell^{(L)}\right)''$ be the partial scattering transform on $G''$ constructed from the wavelets $\mathcal{W}''\coloneqq\Pi_0\mathcal{W}'\Pi_0^T.$ Then, under the assumption \eqref{eqn: permutation assumption}, we see \begin{align*} \left\|\mathbf{S}_\ell^{(L)}\mathbf{x}-\left(\mathbf{S}_\ell^{(L)}\right)'\mathbf{x}\right\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))}&\leq \left\|\mathbf{S}_\ell^{(L)}\mathbf{x}-\left(\mathbf{S}_\ell^{(L)}\right)''\mathbf{x}\right\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))} + \left\|\left(\mathbf{S}_\ell^{(L)}\right)''\mathbf{x}-\left(\mathbf{S}_\ell^{(L)}\right)'\mathbf{x}\right\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))}\\ &\leq \left\|\mathbf{S}_\ell^{(L)}\mathbf{x}-\left(\mathbf{S}_\ell^{(L)}\right)''\mathbf{x}\right\|_{\ell^2(\mathbf{L}^2(G,\mathbf{M}))} + \mathcal{B}\|\mathbf{x}\|_{\mathbf{M}}. \end{align*} Estimate \eqref{eqn: scatteringstabilitypermwindow} now follows from Theorem \ref{thm: scattering stability no perm}. The proof of \eqref{eqn: scatteringstabilitypermnowindow} is similar, using the fact that the non-windowed scattering transform is always fully permutation invariant by Theorem \ref{thm: permuatation invariance}. \end{proof} \section{Future Work} As alluded to in Section \ref{sec: related}, we believe that our work opens up several new lines of inquiry for future research. Graph scattering transforms typically achieve numerical results that are good, but not quite state of the art, in most situations. Our work has introduced a large class of scattering networks with provable guarantees. Therefore, one might attempt to learn the optimal choices of the matrix $\mathbf{M}$ and the spectral function $g$ based on training data and produce a network which retains the invariance and stability properties of the scattering transform but has superior numerical performance.
This would be an important step towards bridging the gap between theory and practice by producing an increasingly realistic model of graph neural networks with provable guarantees. Another possible extension would be to consider a construction similar to ours but which uses the spectral decomposition of the unnormalized graph Laplacian rather than the normalized Laplacian. Such a work would generalize \cite{zou:graphScatGAN2019} in a manner analogous to the way that this work generalizes \cite{gama:diffScatGraphs2018} and \cite{gao:graphScat2018}. Lastly, particularly in the case where $\mathbf{M}$ is a function of $\mathbf{D},$ e.g. when $\mathbf{K}=\mathbf{P},$ one might wish to study the behavior of the graph scattering transform on data-driven graphs obtained by subsampling a Riemannian manifold $\mathcal{M}.$ Such data-driven graphs typically arise in high-dimensional data analysis and in manifold learning. It can be shown that, under certain conditions, the normalized graph Laplacian of the data-driven graph converges pointwise \cite{coifman:diffusionMaps2006, singer:GraphToManifold2006} or in a spectral sense \cite{belkin2007convergence, Burago2013, Fujiwara1995EigenvaluesOL, Shi2015, Trillos2018} to the Laplace-Beltrami operator on $\mathcal{M}$ as the number of samples tends to infinity. It would be interesting to see if one could use these results to study the convergence of the graph scattering transforms constructed here to the manifold scattering transform constructed in \cite{perlmutter:geoScatCompactManifold2019}.
\section{Introduction} The main objective of this work is to perform a rigorous mathematical analysis of a system of nonlinear partial differential equations corresponding to a generalization of a mathematical model describing the growth of a tumor proposed in \cite{Fassoni}. To describe the model, let $\Omega \subset \mathrm{I\!R\!}^{2}$ be an open and bounded set; let also $0< T< \infty$ be a given final time of interest, and denote by $t \in [0,T]$ the time variable, by $Q=\Omega\times(0,T)$ the space-time cylinder and by $\Gamma=\partial \Omega\times(0,T)$ the space-time boundary. Then, the system of equations we are considering is the following:\begin{equation} \label{0riginalEquations} \left\{ \begin{array}{lcl} \displaystyle \frac{\partial N}{\partial t} = r_N - \mu_N N - \beta_1 N A - \alpha_N\gamma_N D N, & \textup{in}& Q, \vspace{0.2cm} \\ \displaystyle \frac{\partial A}{\partial t} = r_A A\left(1-\frac{A}{k_A}\right)-(\mu_A+\epsilon_A)A - \alpha_A\gamma_A D A, &\textup{in}& Q, \vspace{0.2cm} \\ \displaystyle \frac{\partial D}{\partial t} = \sigma \Delta D + \mu\chi_{\omega} - \gamma_A D A - \gamma_N D N - \tau D, &\textup{in}& Q, \vspace{0.2cm} \\ \displaystyle \frac{\partial D}{\partial \eta} =0, &\textup{on}& \Gamma, \vspace{0.2cm} \\ \displaystyle N(\cdot,0) = N_{0} (\cdot), A(\cdot,0) = A_{0}(\cdot), D(\cdot,0) = D_{0} (\cdot), &\textup{in}& \Omega. \end{array} \right. \end{equation} In \cite{Fassoni}, Fassoni studied an ODE system corresponding to system \eqref{0riginalEquations} in a spatially homogeneous setting. Such model describes the growth of a tumor and its effect on the normal tissue, the tissue response to the tumor and the application of chemotherapeutic treatments, without spatial heterogeneity. The aim of the authors was to understand the phenomena of cancer onset and treatment as transitions between different basins of attraction of the underlying ODE system.
The equations of the model that were studied in \cite{Fassoni} are \begin{equation} \label{início1} \left\{ \begin{array}{lcl} \displaystyle \frac{d N}{d t} = r_N - \mu_N N - \beta_1 N A - \alpha_N\gamma_N D N, \vspace{0.2cm} \\ \displaystyle \frac{d A}{d t} = r_A A\left(1-\frac{A}{k_A}\right)-(\mu_A+\epsilon_A)A - \beta_3 NA - \alpha_A\gamma_A D A, \vspace{0.2cm} \\ \displaystyle \frac{d D}{d t} = \mu - \gamma_A D A - \gamma_N D N - \tau D, \end{array} \right. \end{equation} where $N$ represents the number of normal cells in a given tissue of the human body, $A$ represents the number of tumor cells in the tissue and $D$ represents the concentration of a chemotherapeutic drug used to treat such a tumor. Parameter $r_N$ represents a constant influx of new normal cells produced by the tissue stem cells and $\mu_N$ represents the natural mortality of normal cells. A constant influx is considered because the imperative dynamics within a formed tissue is the maintenance of a homeostatic state through the natural replenishment of old and dead cells, see \cite{Simons}. On the other hand, tumor cells maintain their own growth program \cite{Fedi}. Thus, a density-dependent growth is considered for tumor cells. The logistic growth is chosen due to its simplicity. Parameter $\mu_A$ represents the natural mortality of tumor cells, and $\epsilon_A$ represents an extra mortality rate due to apoptosis \cite{Danial}. Parameters $\beta_1$ and $\beta_3$ encompass the many negative interactions exerted by tumor cells on normal cells and vice-versa, such as competition for nutrients and oxygen. Besides competition, parameter $\beta_3$ encompasses also the effects on tumor cells of anti-growth and death signals released by normal cells. In the same way, the parameter $\beta_1$ encompasses also mechanisms developed by tumor cells that damage normal tissue, such as increased local acidity, growth suppression, and release of death signals \cite{Hanahan}.
The third equation of (\ref{início1}) describes the dynamics of chemotherapeutic drug concentration according to the following assumptions. The drug has a constant infusion rate $\mu$ and a clearance rate $\tau$. Such constant infusion rate mimics a metronomic dosage, i.e., a near-continuous and long-term administration of the drug. The absorption and deactivation of the drug by normal and cancerous cells are described in terms of the law of mass action with rates $\gamma_N$ and $\gamma_A$. Following the log-linear hypothesis \cite{Andre}, it is assumed that the amounts of drug absorbed by normal ($\gamma_N N D$) and cancerous cells ($\gamma_A A D$) kill such cells with rates $\alpha_N$ and $\alpha_A$, respectively. Although many models of cancer treatment do not consider drug absorption explicitly, in \cite{Fassoni}, the authors believe that it is an important fact to be considered, since this phenomenon contributes to decreasing the drug concentration as time passes. System (\ref{início1}) is similar to the classical Lotka-Volterra competition model, frequently used in models for tumor growth and population dynamics. The fundamental difference here is the use of a constant flux for normal cells instead of a logistic growth. Such constant flux, also used in other well-known models of cancer \cite{Earn}, removes the symmetry observed in the Lotka-Volterra equations, so that there is no steady state with $N = 0$. Thus, it is impossible to observe the extinction of one of the populations (the normal cells in this case), as opposed to the Lotka-Volterra models. The authors of \cite{Fassoni} claim that this is a realistic result since, roughly speaking, cancer ``does not win'' by killing all the cells in the tissue, but by reaching a dangerous size that disrupts the proper functioning of the tissue and threatens the health of the individual.
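The qualitative behavior of system (\ref{início1}) is easy to explore numerically before any analysis. The following Python sketch integrates the ODE system with purely illustrative parameter values (they are not taken from \cite{Fassoni}); note that, since $D' \leq \mu - \tau D$, the drug concentration remains bounded by $\mu/\tau$ whenever $D(0) \leq \mu/\tau$, and this is visible in the computed trajectory.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameter values only; they are NOT taken from the cited works
r_N, mu_N, beta1, alpha_N, gamma_N = 1.0, 0.01, 1e-4, 0.1, 0.05
r_A, k_A, mu_A, eps_A, beta3 = 0.5, 100.0, 0.01, 0.05, 1e-5
alpha_A, gamma_A, mu, tau = 0.2, 0.05, 2.0, 0.3

def rhs(t, y):
    # right-hand side of the ODE system for (N, A, D)
    N, A, D = y
    dN = r_N - mu_N * N - beta1 * N * A - alpha_N * gamma_N * D * N
    dA = (r_A * A * (1.0 - A / k_A) - (mu_A + eps_A) * A
          - beta3 * N * A - alpha_A * gamma_A * D * A)
    dD = mu - gamma_A * D * A - gamma_N * D * N - tau * D
    return [dN, dA, dD]

# initial tissue with 100 normal cells, a small tumor, and no drug
sol = solve_ivp(rhs, (0.0, 200.0), [100.0, 1.0, 0.0], rtol=1e-8, atol=1e-10)
N_end, A_end, D_end = sol.y[:, -1]
```

Varying $\mu$ in this sketch mimics different metronomic dosages and reproduces the transition between tumor persistence and tumor control discussed above.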
In this work, we are not interested in analyzing the dynamics (stability, asymptotic behavior) of the model, as such a study has already been carried out in \cite{Fassoni}. Our objective is to study the existence and uniqueness of the solution of system (\ref{0riginalEquations}). This system extends the ODE model \eqref{início1} to a more realistic situation by considering spatial variation of normal and cancer cells and the diffusion of the chemotherapeutic drug through the tissue, with diffusion coefficient $\sigma$ \cite{Anderson}. Further, it is also assumed that the drug influx is restricted to a limited region of the tissue, corresponding to a blood vessel passing transversely through such a region. This is mathematically described in the model by the expression $\mu \chi_{\omega} $, where $\chi_{\omega} $ is the characteristic function of the subset $\omega \subset \Omega$. Finally, due to the mathematical necessity of simplifying the model, we set $\beta_3=0$. This corresponds to a situation where normal cells do not exert negative effects on tumor cells, and is a plausible biological assumption, since there are many tumors that develop resistance to the normal tissue's mechanisms that suppress tumor growth \cite{Hanahan}. The paper is organized as follows. In Section 2 we present the technical hypothesis and state our main result. In Section 3 we study an auxiliary problem. Using its solution, we prove our main result in Section 4. In Section 5 we present numerical simulations illustrating model behavior. \section{Technical hypotheses and main result} Let $\Omega \subset \mathrm{I\!R\!}^2$ be a domain with boundary $\partial\Omega$, $0< T <\infty$, and denote $Q = \Omega\times (0, T)$ and $\Gamma = \partial \Omega\times (0, T)$.
We will use standard notations for Sobolev spaces, i.e., given $1~\leq~p~\leq~+\infty $ and $k \in\mathbb{N}$, we denote $$W_{p}^{k}(\Omega)=\left\{ f \in L^{p}(\Omega) : D^{\alpha}f \in L^{p}(\Omega), |\alpha| \leq k \right\}; $$ when $p=2$, as usual we denote $W_{2}^{k}(\Omega) = H^k (\Omega)$; properties of these spaces can be found for instance in Adams~\cite[Theorem~5.4, p. 97]{Adams}. Problem~(\ref{0riginalEquations}) will be studied in the standard functional spaces denoted by \begin{eqnarray*} W_{q}^{2,1}(Q) & =& \left\{f\in L^{q}(Q):D^{\alpha}f\in L^{q}(Q), \, \forall 1\leq|\alpha|\leq 2, f_t \in L^{q}(Q)\right\}, \end{eqnarray*} \begin{eqnarray*} W &=& \left\{f\in L^\infty(Q): f_t \in L^\infty(Q)\right\} \end{eqnarray*} and \begin{eqnarray*} L^{p}(0,T;B) &=& \left\{f:(0,T)\rightarrow B: \|f(t)\|_{L^{p}(0,T;B)} <+\infty \right\}, \end{eqnarray*} where $B$ is a suitable Banach space, and the norm is given by $\|f(t)\|_{L^{p}(0,T;B)} = \|\ \|f(t)\|_{B}\ \|_{L^{p}((0,T))}$. We remark that $L^{p}(Q) = L^{p}((0,T);L^{p}(\Omega))$. Results concerning these spaces can be found for instance in Ladyzhenskaya~\cite{Ladyzhenskaya} and Mikhaylov~\cite{Mikhaylov}. \vspace{0.1cm} Next, we state some hypotheses that will be assumed throughout this article. \subsection{Technical Hypotheses:} \label{MainHypotheses} \begin{itemize} \item[{\bf (i)}] $\Omega\subset\mathbb{R}^2$ is a bounded $C^2$-domain; \item[{\bf (ii)}] $0< T < \infty$, and $Q=\Omega\times(0,T)$; \item[{\bf (iii)}] $N_0, A_0 \in L^{\infty}(\Omega)$ and $D_0 \in W^{\frac{3}{2}}_{4}(\Omega)$, satisfying $\frac{\partial D_0}{\partial \eta} (\cdot) =0, \textup{ on } \partial\Omega$; \item[{\bf (iv)}] $0 \leq D_0 \leq \frac{\mu}{\tau}$ and $N_0, A_0 \ge 0$ a.e. on $\Omega$. \end{itemize} \begin{remark} The constraints imposed in~{\bf (iv)} on the initial conditions are natural biological requirements.
\end{remark} \subsection{Main result:} \begin{theorem} \label{Teorema1} Assume that the Technical Hypotheses \ref{MainHypotheses} hold; then, there exists a unique nonnegative solution $(N,A,D) \in W \times W \times W^{2,1}_4(Q)$ of Problem (\ref{0riginalEquations}). Moreover, $N, A$ and $D$ are functions satisfying \begin{eqnarray*} N \leq ||N_0||_{L^\infty(\Omega)} + r_N T, \ A \leq C_{\lambda}||A_0||_{L^\infty(\Omega)} \ a.e. \ in \ Q \end{eqnarray*} and \begin{eqnarray*} ||N||_W + ||A||_{W} + ||D||_{ W^{2,1}_4(Q)} \leq C, \end{eqnarray*} where $C$ is a constant depending on $r_N$, $\mu_N$, $\beta_1$, $\alpha_N$, $\gamma_N$, $C_{\lambda}$, $r_A$, $k_A$, $\mu_A$, $\epsilon_A$, $\alpha_A$, $\gamma_A$, $\mu$, $\tau$, $T$, $\omega$, $||N_0||_{L^\infty (\Omega)}$, $||A_0||_{L^\infty(\Omega)}$ and $||D_0||_{W^{\frac{3}{2}}_4(\Omega)}$. \end{theorem} \begin{remark} The explicit knowledge of how the constant $C$ appearing in the above estimates depends on the given data is important for applications in related control problems. \end{remark} \subsection{Known technical results:} To ease the references, we also state some technical results to be used in this paper. The first one is sometimes called the Lions-Peetre embedding theorem (see Lions~\cite{Lions}, pp.15); it is also a particular case of Lemma~3.3, pp.80, in Ladyzhenskaya~\cite{Ladyzhenskaya} (obtained by taking $l = 1$ and $r = s = 0$). \begin{lemma} \label{icontLp01} Let $\Omega$ be a domain of $\mathrm{I\!R\!}^n$ with boundary $\partial \Omega$ satisfying the cone property. Then, the functional space $W^{2,1}_p(Q)$ is continuously embedded in $L^{q}(Q)$ for $q$ satisfying: {\bf (i)} $1 \leq q \leq \frac{p(n+2)}{n+2-2p}$, if $ p< \frac{n+2}{2}$; {\bf (ii)} $1 \leq q <\infty$, if $p= \frac{n+2}{2}$ and {\bf (iii)} $q=\infty$, if $p>\frac{n+2}{2}$.
\noindent In particular, for such $q$ and any function $u \in W^{2,1}_p(Q)$ we have that \begin{eqnarray*} \label{i.01} \displaystyle \|u\|_{L^{q}(Q)} \leq C\|u\|_{W^{2,1}_p(Q)}, \end{eqnarray*} \noindent with a constant $C$ depending only on $\Omega$, $T$, $p$, $q$, $n$. In the cases {\bf (ii)}, {\bf (iii)} or in {\bf (i)} when $\displaystyle 1 \leq q < \frac{p(n+2)}{n+2-2p}$, the referred embedding is compact. \end{lemma} \vspace{0.1cm} Next, we consider the following simple parabolic initial-boundary value problem: \begin{equation} \label{P_Newmman} \left\{ \begin{array}{lcl} \displaystyle \frac{\partial u}{\partial t} - \sum\limits_{i,j=1}^na_{ij}(x,t)\frac{\partial^2 u}{\partial x_i\partial x_j} + \sum\limits_{i=1}^na_i(x,t)\frac{\partial u}{\partial x_i} + a(x,t)u=f & \textup{in} & Q, \\ \displaystyle \sum\limits_{i=1}^n b_i(x,t)\frac{\partial u}{\partial x_i} + b(x,t)u =0 & \textup{on} & \Gamma, \\ \displaystyle u(\cdot,0)= u_0(\cdot) & \textup{in} & \Omega . \end{array} \right. \end{equation} Existence and uniqueness of solutions for this problem is a particular case of Theorem~$9.1$, pp.$341$, in Ladyzhenskaya~\cite{Ladyzhenskaya} for the case of Neumann boundary conditions, according to the remarks at the end of Chapter IV, Section 9, p. 351 in \cite{Ladyzhenskaya}. In the following, we state this particular result, stressing the dependencies on certain norms of the coefficients, which will be important in our future arguments. \begin{proposition} \label{sol. Neumann} Let $\Omega$ be a bounded domain in $\mathbb{R}^n$, with a $C^{2}$ boundary $\partial \Omega$, let $a_{ij}$ be bounded continuous functions in $Q$, and let $p > 1$.
Assume that \begin{enumerate} \item $a_{ij} \in C(\bar{Q})$, $i, j=1, \ldots, n$; $[a_{ij}]_{n \times n}$ is a real positive matrix such that for some positive constant $\beta$ we have $ \sum\limits_{i,j=1}^n a_{ij}(x,t)\xi_i\xi_j\geq \beta|\xi|^2$ for all $(x,t) \in Q$ and all $\xi \in \mathbb{R}^n$; \item $\displaystyle f \in L^p(Q)$; \item $\displaystyle a_i \in L^r(Q)$ with either $r = \max\big(p, n + 2\big)$ if $p \neq n + 2$ or $r = n + 2 + \varepsilon$, for any $\varepsilon>0$, if $p = n + 2$; \item $\displaystyle a \in L^s(Q)$ with either $s = \max\big(p, (n + 2)/2\big)$ if $\displaystyle p \neq (n + 2)/2$ or $s = (n + 2)/2 + \varepsilon$, for any $\varepsilon>0$, if $\displaystyle p = (n + 2)/2$. \item $b_i, b \in C^2 (\bar{\Gamma})$, $i=1, \ldots, n$, and the coefficients $b_i(x,t)$ satisfy the condition $\left| \sum\limits_{i=1}^n b_i(x,t)\eta_i(x) \right|\geq \delta >0$ a.e. on $\partial\Omega \times (0,T)$, where $\eta_i(x)$ is the $i^{th}$-component of the unitary outer normal vector to $\partial\Omega$ in $x \in \partial\Omega$; \item $u_0 \in \ W^{2 - \frac{2}{p}}_p(\Omega)$ with $p\neq 3$ and satisfying the compatibility condition \\ $\displaystyle \sum\limits_{i=1}^n b_i \frac{\partial u_0}{\partial x_i} + b \ u_0 =0$ on $\partial \Omega$ when $p > 3$. \end{enumerate} Then, there exists a unique solution $u \in W^{2,1}_p(Q)$ of Problem~(\ref{P_Newmman}); moreover, there is a positive constant $C_p$ such that the solution satisfies \begin{equation} \label{BasicParabolicEstimate} \|u\|_{W^{2,1}_p(Q)} \leq C_{p} \left(\|f\|_{L^p(Q)} + \|u_0\|_{W^{2 - \frac{2}{p}}_p(\Omega)}\right). \end{equation} Such constant $C_{p}$ depends only on $\Omega$, $T$, $p$, $r$, $s$, $\beta$, $\delta$ and on the norms $\|b_i\|_{C^2 (\bar{\Gamma})}$, $\|b\|_{C^2 (\bar{\Gamma})}$, $\|a_{ij}\|_{C(\bar{Q})}$, $\|a_i\|_{L^r(Q)}$ and $\|a\|_{L^s(Q)}$. Moreover, we may assume that the dependencies of $C_{p}$ on the stated norms are nondecreasing.
\end{proposition} \begin{remark} The result set out in Proposition \ref{sol. Neumann} can be formulated for the parabolic problem with Dirichlet conditions (see Ladyzhenskaya \cite[Theorem 9.1, pp.$341$]{Ladyzhenskaya}). In the problem with Dirichlet conditions, the compatibility condition in Proposition \ref{sol. Neumann}-($6$) can be replaced by $u_0=0$ on $\partial \Omega$ when $p > 3/2$. This way, all the results in this paper hold if we replace the Neumann conditions by Dirichlet conditions. \end{remark} \section{An auxiliary problem} In this section we will prove an auxiliary result to be used in the proof of Theorem~\ref{Teorema1}. To cope with difficulties with the signs of certain terms during the derivation of the estimates, we first consider the following modified problem: \begin{equation} \label{P01} \left\{ \begin{array}{lcl} \displaystyle \frac{\partial \hat{N}}{\partial t} = r_N - \mu_N \hat{N} - \beta_1 \hat{N} \hat{A} - \alpha_N\gamma_N |\hat{D}| \hat{N} , & \textup{in}& Q, \vspace{0.2cm} \\ \displaystyle \frac{\partial \hat{A}}{\partial t} = r_A\hat{A}\left(1-\frac{\hat{A}}{k_A}\right)-(\mu_A+\epsilon_A)\hat{A} - \alpha_A\gamma_A |\hat{D}| \hat{A}, &\textup{in}& Q, \vspace{0.2cm} \\ \displaystyle \frac{\partial \hat{D}}{\partial t} = \sigma \Delta\hat{D} + \mu\chi_{\omega} - \gamma_A\hat{D}\hat{A} - \gamma_N\hat{D}\hat{N} - \tau\hat{D}, &\textup{in}& Q, \vspace{0.2cm} \\ \displaystyle \frac{\partial \hat{D}}{\partial \eta} (\cdot) =0, &\textup{on}& \Gamma, \vspace{0.2cm} \\ \displaystyle \hat{N}(\cdot,0) = N_{0} (\cdot), \hat{A}(\cdot,0) = A_{0}(\cdot), \hat{D}(\cdot,0) = D_{0} (\cdot), &\textup{in}& \Omega. \end{array} \right. \end{equation} Now we observe that, since the equation for $\hat{N}$ in this last problem is, for each $x \in \Omega$, an ordinary differential equation which is linear in $\hat{N}$, we can find an explicit expression for it in terms of $\hat{A}$ and $|\hat{D}|$.
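This pointwise-in-$x$ reduction to ordinary differential equations can be tested numerically. For a fixed $x$ and a prescribed drug profile $\phi(t)$, the explicit Bernoulli solution for $\hat{A}$, given by formula \eqref{P77} below, must coincide with a direct numerical integration of the corresponding ODE. The following Python sketch (with illustrative parameter values and a hypothetical profile $\phi$) performs this comparison.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# illustrative parameters at a fixed point x; phi is a hypothetical drug profile
r_A, mu_A, eps_A, alpha_A, gamma_A, k_A = 0.5, 0.01, 0.05, 0.2, 0.05, 100.0
lam = r_A - (mu_A + eps_A)
A0 = 5.0
phi = lambda t: 1.0 + 0.5 * np.sin(t)

def I(s):
    # int_0^s |phi(u)| du
    return quad(lambda u: abs(phi(u)), 0.0, s)[0]

def Lambda(t):
    # explicit Bernoulli solution driven by |phi|, as in the formula for the operator
    num = A0 * k_A * np.exp(lam * t - alpha_A * gamma_A * I(t))
    den = k_A + A0 * r_A * quad(
        lambda s: np.exp(lam * s - alpha_A * gamma_A * I(s)), 0.0, t)[0]
    return num / den

# direct integration of A' = lam*A - (r_A/k_A)*A^2 - alpha_A*gamma_A*|phi|*A
ode = lambda t, A: lam * A - (r_A / k_A) * A ** 2 - alpha_A * gamma_A * abs(phi(t)) * A
sol = solve_ivp(ode, (0.0, 5.0), [A0], rtol=1e-10, atol=1e-12, dense_output=True)
```

The two trajectories agree to high accuracy, which is exactly the content of the change of variables behind Bernoulli's method.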
Similarly, the equation for $\hat{A}$ is, for each $x \in \Omega$, a Bernoulli-type nonlinear ordinary differential equation in $\hat{A}$, and we can determine its explicit solution in terms of $|\hat{D}|$ by Bernoulli's method. Using these observations and setting $\lambda = r_A - (\mu_A + \epsilon_A)$, we introduce operators $\Lambda: L^{\infty}(Q) \to L^{\infty}(Q)$ and $\Theta: L^{\infty}(Q) \to L^{\infty}(Q)$, defined respectively by \begin{equation} \label{P77} \displaystyle \Lambda(\phi)(x,t) = \frac{A_0(x)k_A e^{\lambda t} e^{-\alpha_A\gamma_A \int_{0}^{t}|\phi(x,\xi)| d\xi}}{k_A + A_0(x) r_A \int_{0}^{t} e^{\lambda s} e^{-\alpha_A\gamma_A \int_{0}^{s}|\phi(x,\xi)| d\xi} ds} \end{equation} and \begin{equation} \label{P7} \displaystyle \Theta(\phi)(x,t) = \frac{N_0(x) + r_N \int_{0}^{t} e^{\mu_N s} e^{\alpha_N\gamma_N\int_{0}^{s} |\phi(x,\xi)| d\xi}e^{\beta_1\int_{0}^{s}\Lambda(\phi)(x, \xi) d\xi} ds}{e^{\mu_N t} e^{\alpha_N\gamma_N\int_{0}^{t}|\phi(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi)(x, \xi) d\xi}}, \end{equation} \noindent where $0 \leq s \leq t \leq T$. \begin{remark} \label{obs1} Thus, $(\hat{N},\hat{A}, \hat{D})$ is a solution of (\ref{P01}) if, and only if, $\hat{N} = \Theta (\hat{D})$, $\hat{A} = \Lambda(\hat{D})$ and $\hat{D}$ satisfies the following integro-differential system: \begin{equation} \label{P3} \left\{ \begin{array}{lcl} \displaystyle \frac{\partial \hat{D}}{\partial t} = \sigma \Delta\hat{D} + \mu\chi_{\omega} - \gamma\hat{D}\Lambda(\hat{D}) - \gamma_N\hat{D} \Theta(\hat{D}) - \tau\hat{D}, &\textup{in}& Q, \vspace{0.2cm} \\ \displaystyle \frac{\partial \hat{D}}{\partial \eta} (\cdot) =0, &\textup{on}& \Gamma, \vspace{0.2cm} \\ \displaystyle \hat{D}(\cdot,0) = D_{0} (\cdot), &\textup{in}& \Omega. \end{array} \right.
\end{equation} \end{remark} \begin{remark} \label{obs2} Notice that, to guarantee that $(N, A, D)$, with $D = \hat{D}$, $N = \Theta(\hat{D})$ and $A = \Lambda(\hat{D})$, is also a solution of system~(\ref{0riginalEquations}), it is enough to prove that the solution $\hat{D}$ of Problem~(\ref{P3}) is nonnegative. \end{remark} For Problem~(\ref{P3}), we have the following existence result: \begin{proposition}\label{Prop1} Assuming that the Technical Hypotheses~\ref{MainHypotheses} hold, there exists at least one nonnegative solution $\hat{D} \in W^{2,1}_4(Q)$ of Problem \eqref{P3}. Moreover, $\hat{D} \leq \frac{\mu}{\tau}$ a.e. in $Q$ and \begin{eqnarray*} ||\hat{D}||_{W^{2,1}_4(Q)} \leq C, \end{eqnarray*} where $C$ is a constant depending on $\mu$, $T$, $\omega$ and $||D_0||_{W^{\frac{3}{2}}_4(\Omega)}$. \end{proposition} \begin{lemma} \label{base} Let $f:(0,T) \to \mathbb{R}$ be differentiable with $f(t) > 0$ and $f'(t) \ge 0$. If $g(t) = \frac{\int_{0}^{t} f(x) dx}{f(t)}$, then $g(t) \leq T$ for all $t \in (0, T)$. \end{lemma} \noindent {\bf Proof:} Since $f$ is differentiable in $(0, T)$, we compute \begin{eqnarray*} g'(t) &=& \frac{f(t)^2 - f'(t) \int_{0}^{t}f(x)dx}{f(t)^2} \\ &=& 1 - \frac{f'(t)}{f(t)} g(t). \end{eqnarray*} As $f(t) > 0$ we have $g(t) \ge 0$, and since $f'(t) \ge 0$ we obtain $\frac{f'(t)}{f(t)} g(t) \ge 0$. Therefore $g'(t) \leq 1$, which, together with $\lim_{t \to 0^+} g(t) = 0$, implies $g(t) \leq t$ for all $t \in (0, T)$. Thus $g(t) \leq T$, as intended.
\hfill$\Box$ Since the expressions of $\Lambda$ and $\Theta$ will play an important role in the proof of existence of solutions of (\ref{P3}), we collect some of their properties in the following lemma: \begin{lemma} \label{PropertiesEtcFirst} If $N_0, A_0 \in L^\infty(\Omega)$ and $C_{\lambda} = \max\{1, e^{\lambda T}\}$, then for any $\phi, \phi_1, \phi_2 \in L^\infty(Q)$ and for almost every $(x,t) \in Q$, there holds \[ \begin{array}{ll} \mbox{\bf (i)} & 0 \leq \Theta(\phi)(x,t) \leq ||N_0||_{L^\infty(\Omega)} + r_N T; \vspace{0.2cm} \\ \mbox{\bf (ii)} & 0 \leq \Lambda(\phi)(x,t) \leq C_{\lambda}||A_0||_{L^\infty(\Omega)}; \vspace{0.2cm} \\ \mbox{\bf (iii)} & \|\Lambda(\phi_1) - \Lambda(\phi_2) \|_{L^\infty {(Q)}} \leq C_1 \|\phi_1 - \phi_2\|_{L^\infty(Q)}, \\ & where \ C_1 \ is \ a \ constant \ depending \ on \ r_A, k_A, \alpha_A, \gamma_A, C_{\lambda}, T \ and \ ||A_0||_{L^\infty(\Omega)}; \vspace{0.2cm} \\ \mbox{\bf (iv)} & \|\Theta(\phi_1) - \Theta(\phi_2) \|_{L^\infty {(Q)}} \leq C_2 \|\phi_1 - \phi_2\|_{L^\infty(Q)}, \\ & where \ C_2 \ is \ a \ constant \ depending \ on \ r_N, \mu_N, \beta_1, \alpha_N, \gamma_N, C_{\lambda}, C_1, T , \\ & ||\phi_1||_{L^\infty(Q)}, ||\phi_2||_{L^\infty(Q)}, ||N_0||_{L^\infty(\Omega)} \ and \ ||A_0||_{L^\infty(\Omega)}. \end{array} \] \end{lemma} \noindent {\bf Proof (i) and (ii):} From the expressions (\ref{P77}) and (\ref{P7}) it is immediate that $\Lambda(\phi)(x,t), \Theta(\phi)(x,t) \ge 0$.
To prove that $\Theta(\phi)(x,t) \leq ||N_0||_{L^\infty(\Omega)} + r_N T$, we observe that $$ \begin{array}{rcl} \displaystyle \Theta(\phi)(x,t) = \frac{N_0(x) + r_N \int_{0}^{t} e^{\mu_N s} e^{\alpha_N\gamma_N\int_{0}^{s} |\phi(x,\xi)| d\xi}e^{\beta_1\int_{0}^{s}\Lambda(\phi)(x, \xi) d\xi} ds}{e^{\mu_N t} e^{\alpha_N\gamma_N\int_{0}^{t}|\phi(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi)(x, \xi) d\xi}} \\ \displaystyle \\ \leq N_0(x) + r_N \frac{\int_{0}^{t} e^{\mu_N s} e^{\alpha_N\gamma_N\int_{0}^{s} |\phi(x,\xi)| d\xi}e^{\beta_1\int_{0}^{s}\Lambda(\phi)(x, \xi) d\xi} ds}{e^{\mu_N t} e^{\alpha_N\gamma_N\int_{0}^{t}|\phi(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi)(x, \xi) d\xi}}. \end{array} $$ For fixed $x \in \Omega$, we define \begin{equation*} \displaystyle g(x,t) = \frac{\int_{0}^{t} e^{\mu_N s} e^{\alpha_N\gamma_N\int_{0}^{s} |\phi(x,\xi)| d\xi}e^{\beta_1\int_{0}^{s}\Lambda(\phi)(x, \xi) d\xi} ds}{e^{\mu_N t} e^{\alpha_N\gamma_N\int_{0}^{t}|\phi(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi)(x, \xi) d\xi}}, \end{equation*} and applying Lemma \ref{base} in the variable $t$, with $f(t) = e^{\mu_N t} e^{\alpha_N\gamma_N\int_{0}^{t}|\phi(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi)(x, \xi) d\xi}$, it follows that \begin{eqnarray*} \Theta(\phi)(x,t) \leq N_0(x) + r_N T \\ \leq ||N_0||_{L^\infty(\Omega)} + r_N T. \end{eqnarray*} To prove that $\Lambda(\phi)(x,t) \leq C_{\lambda}||A_0||_{L^\infty(\Omega)}$, note that $$ \begin{array}{rcl} \displaystyle \Lambda(\phi)(x,t) = \frac{A_0(x)k_A e^{\lambda t} e^{-\alpha_A \gamma_A \int_{0}^{t}|\phi(x,\xi)| d\xi}}{k_A + A_0(x) r_A \int_{0}^{t} e^{\lambda s} e^{-\alpha_A \gamma_A \int_{0}^{s}|\phi(x,\xi)| d\xi} ds} \\ \displaystyle \\ \leq \frac{1}{k_A} A_0(x)k_A e^{\lambda t}e^{-\alpha_A \gamma_A \int^{t}_{0}|\phi(x,\xi)|d\xi} \\ \\ \displaystyle \leq A_0(x) e^{\lambda t} \leq C_{\lambda} A_0(x) \leq C_{\lambda} ||A_0||_{L^{\infty}(\Omega)}.
\end{array} $$ {\bf Proof (iii):} We first need to observe that, by the mean value theorem, given any $z_1, z_2 \in \mathbb{R}$, there is $\theta = \theta(z_1, z_2) \in (0,1)$ such that $e^{z_2} - e^{z_1} = e^{(1-\theta) z_1 + \theta z_2 } (z_2 - z_1)$; in particular, for any $z_1, z_2 \leq 0$ we also have $(1-\theta) z_1 + \theta z_2 \leq 0$ and thus \begin{equation} \label{AlgebraicExponentialInequality} |e^{z_2} - e^{z_1} | \leq |z_2 - z_1|, \quad \forall z_1, z_2 \leq 0 . \end{equation} Secondly, we note that by the inequality (\ref{AlgebraicExponentialInequality}) and by $\phi_i \in L^\infty(Q)$, $i = 1,2$, we obtain \begin{equation}\label{i1} \begin{array}{rcl} \big|e^{-\alpha_A\gamma_A\int_{0}^{t}|\phi_1(x,\xi)| d\xi} - e^{-\alpha_A\gamma_A\int_{0}^{t}|\phi_2(x,\xi)| d\xi}\big| & \leq & \big|-\alpha_A\gamma_A\int_{0}^{t}(|\phi_1(x,\xi)| - |\phi_2(x,\xi)|) d\xi\big| \\ \\ &\leq & \alpha_A\gamma_A T ||\phi_1 - \phi_2||_{L^{\infty}(Q)}. \end{array} \end{equation} Thirdly, we observe that \begin{eqnarray*} \big|e^{-\alpha_A\gamma_A\int_{0}^{t}|\phi_1(x,\xi)| d\xi}\int_{0}^{t}e^{\lambda s} e^{-\alpha_A\gamma_A \int_{0}^{s}|\phi_2(x,\xi)| d\xi}ds &-& \\ \displaystyle e^{-\alpha_A\gamma_A\int_{0}^{t}|\phi_2(x,\xi)| d\xi}\int_{0}^{t}e^{\lambda s} e^{-\alpha_A\gamma_A \int_{0}^{s}|\phi_1(x,\xi)| d\xi}ds\big| &\leq& \\ \displaystyle \big|e^{-\alpha_A\gamma_A\int_{0}^{t}|\phi_1(x,\xi)| d\xi} - e^{-\alpha_A\gamma_A\int_{0}^{t}|\phi_2(x,\xi)| d\xi}\big| \int_{0}^{t}e^{\lambda s} e^{-\alpha_A\gamma_A \int_{0}^{s}|\phi_2(x,\xi)| d\xi} ds &+& \\ \displaystyle e^{-\alpha_A\gamma_A\int_{0}^{t}|\phi_2(x,\xi)| d\xi} \int_{0}^{t} e^{\lambda s} \big|e^{-\alpha_A\gamma_A \int_{0}^{s}|\phi_1(x,\xi)| d\xi} - e^{-\alpha_A\gamma_A \int_{0}^{s}|\phi_2(x,\xi)| d\xi}\big|ds.
\end{eqnarray*} Since $e^{\lambda s} \leq C_\lambda$ for all $s \in [0,T]$ and $e^{-\alpha_A\gamma_A\int_{0}^{s}|\phi_i(x,\xi)| d\xi} \leq 1$, an estimate analogous to (\ref{i1}) gives \begin{equation} \label{ineq1x} \begin{array}{rcl} \displaystyle \big|e^{-\alpha_A\gamma_A\int_{0}^{t}|\phi_1(x,\xi)| d\xi}\int_{0}^{t}e^{\lambda s} e^{-\alpha_A\gamma_A \int_{0}^{s}|\phi_2(x,\xi)| d\xi}ds &-& \\ \\ \displaystyle e^{-\alpha_A\gamma_A\int_{0}^{t}|\phi_2(x,\xi)| d\xi}\int_{0}^{t}e^{\lambda s} e^{-\alpha_A\gamma_A \int_{0}^{s}|\phi_1(x,\xi)| d\xi}ds\big| &\leq& \\ \\ \displaystyle 2 \alpha_A\gamma_A C_{\lambda} T^2 ||\phi_1 - \phi_2||_{L^{\infty}(Q)}. \end{array} \end{equation} Finally, the expression in (\ref{P77}) yields \begin{eqnarray*} |\Lambda(\phi_1)(x,t) - \Lambda(\phi_2)(x,t)| & \leq & \\ A_0(x) e^{\lambda t}\bigg|e^{-\alpha_A\gamma_A\int_{0}^{t}|\phi_1(x,\xi)| d\xi} - e^{-\alpha_A\gamma_A\int_{0}^{t}|\phi_2(x,\xi)| d\xi}\bigg| &+& \\ \frac{1}{k_A}{A_0(x)}^2 r_A e^{\lambda t}\bigg|e^{-\alpha_A\gamma_A\int_{0}^{t}|\phi_1(x,\xi)| d\xi}\int_{0}^{t}e^{\lambda s} e^{-\alpha_A\gamma_A \int_{0}^{s}|\phi_2(x,\xi)| d\xi}ds &-& \\ e^{-\alpha_A\gamma_A\int_{0}^{t}|\phi_2(x,\xi)| d\xi}\int_{0}^{t}e^{\lambda s} e^{-\alpha_A\gamma_A \int_{0}^{s}|\phi_1(x,\xi)| d\xi}ds\bigg|, \end{eqnarray*} and using the estimates obtained in (\ref{i1}) and (\ref{ineq1x}), after simplification we obtain $$ \begin{array}{rcl} |\Lambda(\phi_1)(x,t) - \Lambda(\phi_2)(x,t)| & \leq & \\ \\ ||A_0||_{L^{\infty}(\Omega)} C_{\lambda} \alpha_A\gamma_A T ||\phi_1 - \phi_2||_{L^{\infty}(Q)} &+& \\ \\ \frac{2}{k_A} ||A_0||_{L^{\infty}(\Omega)}^2 r_A {C_{\lambda}}^2 \alpha_A\gamma_A T^2 ||\phi_1 - \phi_2||_{L^{\infty}(Q)}, \end{array} $$ for almost every $(x,t) \in Q$, i.e., \begin{equation} \label{eqc1} ||\Lambda(\phi_1) - \Lambda(\phi_2)||_{L^{\infty}(Q)} \leq C_1 \ ||\phi_1 - \phi_2||_{L^{\infty}(Q)}.
\end{equation} {\bf Proof (iv):} First, note that \begin{eqnarray*} \big| e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_2(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi_2)(x, \xi) d\xi} - e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_1(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi_1)(x, \xi) d\xi}\big| &\leq& \\ e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_2(x,\xi)| d\xi}\big|e^{\beta_1\int_{0}^{t}\Lambda(\phi_2)(x, \xi) d\xi} - e^{\beta_1\int_{0}^{t}\Lambda(\phi_1)(x, \xi) d\xi} \big| &+& \\ e^{\beta_1\int_{0}^{t}\Lambda(\phi_1)(x, \xi) d\xi}\big|e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_2(x,\xi)| d\xi} - e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_1(x,\xi)| d\xi} \big|, \end{eqnarray*} and by the inequality (\ref{AlgebraicExponentialInequality}) and by $\Lambda(\phi_i), \phi_i \in L^\infty(Q)$, $i = 1,2$, we obtain \begin{equation} \label{i12} \begin{array}{rcl} \displaystyle \big| e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_2(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi_2)(x, \xi) d\xi} - e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_1(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi_1)(x, \xi) d\xi}\big| &\leq& \\ \\ \displaystyle e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_2(x,\xi)| d\xi} \beta_1 T ||\Lambda(\phi_1) - \Lambda(\phi_2)||_{L^\infty(Q)} &+& \\ \\ \displaystyle e^{\beta_1\int_{0}^{t}\Lambda(\phi_1)(x, \xi) d\xi} \alpha_N \gamma_N T ||\phi_1 - \phi_2||_{L^\infty(Q)}. 
\end{array} \end{equation} Since \begin{eqnarray*} \bigg|e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_2(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi_2)(x, \xi) d\xi}\int_{0}^{t}e^{\mu_Ns}e^{\alpha_N\gamma_N\int_{0}^{s}|\phi_1(x,\xi)| d\xi}e^{\beta_1\int_{0}^{s}\Lambda(\phi_1)(x, \xi) d\xi} ds &-& \\ \\ \displaystyle e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_1(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi_1)(x, \xi) d\xi}\int_{0}^{t}e^{\mu_Ns}e^{\alpha_N\gamma_N\int_{0}^{s}|\phi_2(x,\xi)| d\xi}e^{\beta_1\int_{0}^{s}\Lambda(\phi_2)(x, \xi) d\xi} ds\bigg| &\leq& \\ \\ \displaystyle e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_2(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi_2)(x, \xi) d\xi} &\times& \\ \\ \displaystyle \int_{0}^{t}e^{\mu_Ns}\bigg|e^{\alpha_N\gamma_N\int_{0}^{s}|\phi_1(x,\xi)| d\xi}e^{\beta_1\int_{0}^{s}\Lambda(\phi_1)(x, \xi) d\xi} - e^{\alpha_N\gamma_N\int_{0}^{s}|\phi_2(x,\xi)| d\xi}e^{\beta_1\int_{0}^{s}\Lambda(\phi_2)(x, \xi) d\xi}\bigg| ds &+& \\ \\ \displaystyle \bigg|e^{\alpha_N\gamma_N\int_{0}^{s}|\phi_2(x,\xi)| d\xi}e^{\beta_1\int_{0}^{s}\Lambda(\phi_2)(x, \xi) d\xi} - e^{\alpha_N\gamma_N\int_{0}^{s}|\phi_1(x,\xi)| d\xi}e^{\beta_1\int_{0}^{s}\Lambda(\phi_1)(x, \xi) d\xi}\bigg| &\times& \\ \\ \displaystyle \int_{0}^{t}e^{\mu_Ns}e^{\alpha_N\gamma_N\int_{0}^{s}|\phi_2(x,\xi)| d\xi}e^{\beta_1\int_{0}^{s}\Lambda(\phi_2)(x, \xi) d\xi} ds, \end{eqnarray*} setting $||\phi||_{L^\infty(Q)} = \max\{||\phi_1||_{L^\infty(Q)}, ||\phi_2||_{L^\infty(Q)}\}$ and arguing as in (\ref{i12}), we obtain \begin{equation} \label{ineq21} \begin{array}{rcl} \displaystyle \bigg|e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_2(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi_2)(x, \xi) d\xi}\int_{0}^{t}e^{\mu_Ns}e^{\alpha_N\gamma_N\int_{0}^{s}|\phi_1(x,\xi)| d\xi}e^{\beta_1\int_{0}^{s}\Lambda(\phi_1)(x, \xi) d\xi} ds &-& \\ \\ \displaystyle e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_1(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi_1)(x, \xi)
d\xi}\int_{0}^{t}e^{\mu_Ns}e^{\alpha_N\gamma_N\int_{0}^{s}|\phi_2(x,\xi)| d\xi}e^{\beta_1\int_{0}^{s}\Lambda(\phi_2)(x, \xi) d\xi} ds\bigg| &\leq& \\ \displaystyle \\ e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_2(x,\xi)| d\xi}e^{\beta_1 \int_{0}^{t}\Lambda(\phi_2)(x, \xi) d\xi}e^{\mu_N T} e^{\alpha_N\gamma_N T ||\phi||_{L^{\infty}(Q)}} \beta_1 T^2 &\times& \\ \displaystyle \\ ||\Lambda(\phi_1) - \Lambda(\phi_2)||_{L^\infty(Q)} &+& \\ \\ \displaystyle e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_2(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi_2)(x, \xi) d\xi}e^{\mu_N T} e^{\beta_1 T C_{\lambda} ||A_0||_{L^\infty(\Omega)}}\alpha_N\gamma_N T^2 &\times& \\ \\ \displaystyle ||\phi_1 - \phi_2||_{L^\infty(Q)} &+& \\ \\ \displaystyle e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_2(x,\xi)| d\xi} \beta_1 T^2 ||\Lambda(\phi_1) - \Lambda(\phi_2)||_{L^\infty(Q)} &\times& \\ \\ \displaystyle e^{\mu_N T}e^{\alpha_N\gamma_N T ||\phi||_{L^{\infty}(Q)}}e^{\beta_1 T C_{\lambda} ||A_0||_{L^\infty(\Omega)}} &+& \\ \\ \displaystyle e^{\beta_1\int_{0}^{t}\Lambda(\phi_1)(x, \xi) d\xi} \alpha_N \gamma_N T^2 ||\phi_1 - \phi_2||_{L^\infty(Q)} &\times& \\ \\ \displaystyle e^{\mu_N T}e^{\alpha_N\gamma_N T ||\phi||_{L^{\infty}(Q)}}e^{\beta_1 T C_{\lambda} ||A_0||_{L^\infty(\Omega)}}.
\end{array} \end{equation} Finally, the expression in (\ref{P7}) yields \begin{eqnarray*} |\Theta(\phi_1)(x,t) - \Theta(\phi_2)(x,t)| & \leq & \\ \frac{1}{e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_1(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi_1)(x, \xi) d\xi} e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_2(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi_2)(x, \xi) d\xi}} &\times& \\ \bigg(N_0(x)\big|e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_2(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi_2)(x, \xi) d\xi} - e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_1(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi_1)(x, \xi) d\xi} \big| &+& \\ r_N\bigg|e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_2(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi_2)(x, \xi) d\xi}\int_{0}^{t}e^{\mu_Ns}e^{\alpha_N\gamma_N\int_{0}^{s}|\phi_1(x,\xi)| d\xi}e^{\beta_1\int_{0}^{s}\Lambda(\phi_1)(x, \xi) d\xi} ds &-& \\ e^{\alpha_N\gamma_N\int_{0}^{t}|\phi_1(x,\xi)| d\xi}e^{\beta_1\int_{0}^{t}\Lambda(\phi_1)(x, \xi) d\xi}\int_{0}^{t}e^{\mu_Ns}e^{\alpha_N\gamma_N\int_{0}^{s}|\phi_2(x,\xi)| d\xi}e^{\beta_1\int_{0}^{s}\Lambda(\phi_2)(x, \xi) d\xi}\bigg|\bigg),
\end{eqnarray*} and using the estimates obtained in (\ref{eqc1}), (\ref{i12}) and (\ref{ineq21}), after simplification we obtain $$ \begin{array}{rcl} \displaystyle |\Theta(\phi_1)(x,t) - \Theta(\phi_2)(x,t)| & \leq & \\ \\ \displaystyle ||N_0||_{L^{\infty}(\Omega)} e^{\alpha_N\gamma_N T ||\phi||_{L^{\infty}(Q)}} \beta_1 T^2 C_1 ||\phi_1 - \phi_2||_{L^{\infty}(Q)} &+& \\ \\ \displaystyle ||N_0||_{L^{\infty}(\Omega)} e^{\beta_1 T C_{\lambda} ||A_0||_{L^\infty(\Omega)}}\alpha_N\gamma_N T^2 ||\phi_1 - \phi_2||_{L^{\infty}(Q)} &+& \\ \\ \displaystyle r_N e^{\mu_N T} e^{\alpha_N\gamma_N T ||\phi||_{L^{\infty}(Q)}} e^{\beta_1 T C_{\lambda} ||A_0||_{L^\infty(\Omega)}} \beta_1 T^2 C_1 ||\phi_1 - \phi_2||_{L^{\infty}(Q)} &+& \\ \\ \displaystyle r_N e^{\mu_N T} e^{\alpha_N\gamma_N T ||\phi||_{L^{\infty}(Q)}} e^{\beta_1 T C_{\lambda} ||A_0||_{L^\infty(\Omega)}} \alpha_N\gamma_N T^2 ||\phi_1 - \phi_2||_{L^{\infty}(Q)} \end{array} $$ for almost every $(x,t) \in Q$, i.e., \begin{eqnarray*} \displaystyle ||\Theta(\phi_1) - \Theta(\phi_2)||_{L^\infty(Q)} \leq C_2 ||\phi_1 - \phi_2||_{L^\infty(Q)}. \end{eqnarray*} \hfill$\Box$ \subsection{Proof of Proposition \ref{Prop1}} To avoid overburdening the notation, throughout this subsection $D$ denotes a generic solution of the equations that follow.
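For the reader's convenience, we recall the Leray--Schauder fixed point theorem in the form in which it will be applied (see Friedman \cite[Theorem 3, p.~189]{Friedman}): if $X$ is a Banach space and $\Psi: [0,1] \times X \to X$ is a mapping such that (a) $\Psi(l, \cdot)$ is compact for each $l \in [0,1]$; (b) $\Psi(\cdot, \phi)$ is uniformly continuous on $[0,1]$, uniformly with respect to $\phi$ in bounded subsets of $X$; (c) the possible fixed points of $\Psi(l, \cdot)$ are bounded in $X$ uniformly with respect to $l \in [0,1]$; and (d) $\Psi(0, \cdot)$ has a unique fixed point, then $\Psi(1, \cdot)$ has at least one fixed point. The lemmas below verify precisely these hypotheses.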
To get a solution of problem \eqref{P3}, we will apply the Leray-Schauder fixed point theorem to the mapping $\Psi$ defined as follows: \begin{equation} \label{oper} \begin{array}{rccl} \Psi: & [0,1]\times L^\infty(Q) & \rightarrow & L^\infty(Q) \\ &(l, \phi) & \mapsto & D, \end{array} \end{equation} \noindent where $D$ is the unique solution of \begin{equation} \label{P6} \left\{ \begin{array}{lcl} \displaystyle \frac{\partial D}{\partial t} = \sigma \Delta D + \mu\chi_{\omega} - l\gamma D\Lambda(\phi) - l\gamma_N D \Theta(\phi) - \tau D, &\textup{in}& Q, \vspace{0.2cm} \\ \displaystyle \frac{\partial D}{\partial \eta} (\cdot) =0, &\textup{on}& \Gamma, \vspace{0.2cm} \\ \displaystyle D(\cdot,0) = D_{0} (\cdot), &\textup{in}& \Omega, \end{array} \right. \end{equation} with $\Lambda(\phi)$ and $\Theta(\phi)$ given by (\ref{P77}) and (\ref{P7}), respectively. To apply this theorem, we establish the following sequence of lemmas. \begin{lemma} \label{lemaAi} Suppose $N_0, A_0 \in L^\infty(\Omega)$ and $D_0 \in W^{\frac{3}{2}}_4(\Omega)$. Then the mapping $\Psi :[0,1] \times L^\infty(Q) \rightarrow L^\infty(Q)$ is well defined. \end{lemma} \noindent{\bf Proof:} We claim that the coefficients of Problem~(\ref{P6}) satisfy the hypotheses of Proposition \ref{sol. Neumann}. For example, it is immediate that $- l \gamma \Lambda(\phi) - l \gamma_N \Theta(\phi) - \tau \in L^4(Q)$, because by Lemma \ref{PropertiesEtcFirst}, $\Lambda(\phi), \Theta(\phi) \in L^\infty(Q)$. Thus, we conclude that there is a unique solution $D \in W^{2,1}_4(Q)$ of Problem~(\ref{P6}). Moreover, $D$ satisfies the following estimate: \begin{equation}\label{AA} \begin{array}{cccc} ||D||_{W^{2,1}_4(Q)} \leq C_p \bigg( ||\mu \chi_{\omega}||_{L^4(Q)} + ||D_0||_{W^{\frac{3}{2}}_4(\Omega)}\bigg) \\ \leq C_p \bigg( \mu |\omega|^{\frac{1}{4}} T^{\frac{1}{4}} + ||D_0||_{W^{\frac{3}{2}}_4(\Omega)} \bigg).
\end{array} \end{equation} Finally, from Lemma \ref{icontLp01}, we have $W^{2,1}_4(Q) \hookrightarrow L^\infty(Q)$, and we conclude that the operator $\Psi$ is well defined. \hfill$\Box$ \\ \begin{lemma} \label{nãonegativa1} If $D$ is a solution of (\ref{P6}) and $0 \leq D_0 \leq \frac{\mu}{\tau}$ a.e. in $\Omega$, then $0 \leq D \leq \frac{\mu}{\tau}$ a.e. in $Q$. \end{lemma} \noindent {\bf Proof:} Multiplying the first equation in $(\ref{P6})$ by $D^-$ and integrating over $\Omega$, we get \begin{eqnarray*} \frac{1}{2} \frac{d}{dt} \int_{\Omega} (D^{-})^2 \ dx = -\sigma \int_{\Omega} |\nabla D^{-}|^2 \ dx - \mu \int_{\omega} D^{-} \ dx \\ - l\gamma \int_{\Omega} \Lambda(\phi) (D^{-})^2 \ dx - l\gamma_N \int_{\Omega} \Theta(\phi) (D^{-})^2 \ dx - \tau \int_{\Omega} (D^{-})^2 \ dx. \end{eqnarray*} Thus, \begin{eqnarray*} \frac{d}{dt} \int_{\Omega} (D^{-})^2 dx \leq 0, \end{eqnarray*} and using Gronwall's inequality and the fact that $D_0 \ge 0$ a.e. in $\Omega$, we obtain \begin{eqnarray*} \int_{\Omega} (D^{-})^2 dx \leq \int_{\Omega} ({D_0}^-)^2 dx = 0, \end{eqnarray*} that is, $||D^-(\cdot, t)||_{L^2(\Omega)} = 0$ for all $t \in (0,T)$, from which we conclude that $D^{-} = 0$ a.e. in $Q$ and therefore $D \ge 0$ a.e. in $Q$. Now, we observe that the first equation in (\ref{P6}) can be rewritten as \begin{eqnarray*} \frac{\partial}{\partial t} \big(D - \frac{\mu}{\tau}\big) = \sigma \Delta \big(D - \frac{\mu}{\tau}\big)- l\gamma \Lambda(\phi) D - l\gamma_N \Theta(\phi) D - \tau\big(D - \frac{\mu\chi_{\omega}}{\tau} \big).
\end{eqnarray*} Multiplying by $(D - \frac{\mu}{\tau})^+$ and integrating over $\Omega$, we obtain \begin{eqnarray*} \frac{1}{2} \frac{d}{dt} \int_{\Omega} \big(\big(D - \frac{\mu}{\tau}\big)^{+}\big)^2 \ dx = - \sigma \int_{\Omega} \big|\nabla \big(D - \frac{\mu}{\tau}\big)^{+} \big|^2 \ dx \\ - l\gamma \int_{\Omega} \Lambda(\phi) D \big(D - \frac{\mu}{\tau}\big)^{+} \ dx - l\gamma_N \int_{\Omega} \Theta(\phi) D \big(D - \frac{\mu}{\tau}\big)^{+} \ dx \\ - \tau \int_{\omega} \big(\big(D - \frac{\mu}{\tau} \big)^{+}\big)^2 \ dx - \tau \int_{\Omega \backslash \omega} D \big(D - \frac{\mu}{\tau} \big)^{+} \ dx, \end{eqnarray*} that is, \begin{eqnarray*} \frac{d}{dt} \int_{\Omega} \big(\big(D - \frac{\mu}{\tau}\big)^{+}\big)^2 \ dx \leq 0. \end{eqnarray*} Thus, using Gronwall's inequality and the fact that $D_0 \leq \frac{\mu}{\tau}$ a.e. in $\Omega$, it follows that \begin{eqnarray*} \int_{\Omega} \big(\big({D} -\frac{\mu}{\tau}\big)^{+}\big)^2 \ dx &\leq& \int_{\Omega} \big(\big(D_0 - \frac{\mu}{\tau} \big)^{+}\big)^2 \ dx = 0, \end{eqnarray*} that is, $||\big(D(\cdot, t) -\frac{\mu}{\tau}\big)^{+}||_{L^2(\Omega)} = 0$ for all $t \in (0,T)$, and therefore $\big(D -\frac{\mu}{\tau}\big)^{+} = 0$ a.e. in $Q$, and we conclude that $D \leq \frac{\mu}{\tau}$ a.e. in $Q$. \hfill$\Box$ \\ \begin{lemma} \label{con1} For each fixed $l \in [0,1]$, the mapping $\Psi(l, \cdot): L^\infty(Q) \rightarrow L^\infty(Q)$ is compact, i.e., it is continuous and maps bounded sets into relatively compact sets.
\end{lemma} \noindent{\bf Proof:} The functions $\Psi(l, \phi_1)= D_1$ and $\Psi(l, \phi_2) = D_2$ satisfy the system \begin{equation*} \label{Aij} \left\{ \begin{array}{lcl} \displaystyle \frac{\partial D_i}{\partial t} = \sigma \Delta D_i + \mu\chi_{\omega} - l\gamma D_i\Lambda(\phi_i) - l\gamma_N D_i \Theta(\phi_i) - \tau D_i, &\textup{in}& Q, \vspace{0.2cm} \\ \displaystyle \frac{\partial D_i}{\partial \eta} (\cdot) =0, &\textup{on}& \Gamma, \vspace{0.2cm} \\ \displaystyle D_i(\cdot,0) = D_{0} (\cdot), &\textup{in}& \Omega, \end{array} \right. \end{equation*} with $i=1,2$; letting $\tilde{D} = D_1 - D_2$, we have \begin{equation} \label{Ai1} \left\{ \begin{array}{lcl} \displaystyle \frac{\partial \tilde{D}}{\partial t} - \sigma \Delta \tilde{D} + l\gamma \tilde{D} \Lambda(\phi_2) + l\gamma_N \tilde{D} \Theta(\phi_2) + \tau \tilde{D} = \\ \displaystyle - l\gamma D_1 (\Lambda(\phi_1) - \Lambda(\phi_2)) - l\gamma_N D_1(\Theta(\phi_1) - \Theta(\phi_2)), &\textup{in}& Q, \vspace{0.2cm} \\ \displaystyle \frac{\partial \tilde{D}}{\partial \eta} (\cdot) =0, &\textup{on}& \Gamma, \vspace{0.2cm} \\ \displaystyle \tilde{D}(\cdot,0) = \tilde{D}_0 (\cdot) = 0, &\textup{in}& \Omega. \end{array} \right. \end{equation} Using Proposition \ref{sol. Neumann}, the embedding $L^{\infty}(Q) \hookrightarrow L^4(Q)$ and the bound $D_1 \leq \frac{\mu}{\tau}$, we get \begin{eqnarray*} ||\tilde{D}||_{W_{4}^{2, 1}(Q)} \leq C_p ||- l\gamma D_1 (\Lambda(\phi_1) - \Lambda(\phi_2)) - l\gamma_N D_1(\Theta(\phi_1) - \Theta(\phi_2)) ||_{L^4(Q)} \\ \leq \bar{C}_p ||- l\gamma D_1 (\Lambda(\phi_1) - \Lambda(\phi_2)) - l\gamma_N D_1(\Theta(\phi_1) - \Theta(\phi_2)) ||_{L^\infty(Q)} \\ \leq \bar{C}_p \gamma \frac{\mu}{\tau} ||\Lambda(\phi_1) - \Lambda(\phi_2)||_{L^\infty(Q)} + \bar{C}_p \gamma_N \frac{\mu}{\tau} ||\Theta(\phi_1) - \Theta(\phi_2)||_{L^\infty(Q)}.
\end{eqnarray*} Then, by Lemmas \ref{PropertiesEtcFirst} and \ref{icontLp01}, we finally have \begin{eqnarray*} ||\Psi(l, \phi_1) - \Psi(l, \phi_2)||_{L^\infty(Q)} \leq C ||\phi_1 - \phi_2||_{L^\infty(Q)}, \end{eqnarray*} where $C$ depends on $\bar{C}_p$, $C_1$, $C_2$, $\gamma$, $\gamma_N$, $\mu$, $\tau$ and the embedding constant. To show that $\Psi (l, \cdot) $ is compact, we use the fact that the embedding $W^{2,1}_4(Q) \hookrightarrow L^\infty(Q)$ is compact and that $\Psi(l, \cdot)$ is the composition of the solution operator with the inclusion operator, i.e., $\Psi(l, \cdot): L^\infty(Q) \rightarrow W^{2,1}_4(Q) \rightarrow L^\infty(Q)$. \hfill$\Box$ \\ \begin{lemma} \label{unifcon1} Given a bounded subset $B\subset L^\infty(Q)$, the mappings $\Psi(\cdot, \phi): [0, 1] \rightarrow L^\infty(Q)$, $\phi \in B$, are uniformly continuous, uniformly with respect to $\phi \in B$. \end{lemma} \noindent{\bf Proof:} Since $B \subset L^\infty(Q)$ is bounded, there is $r_B \ge 0$ such that, for any $\phi \in B$, we have $||\phi||_{L^\infty(Q)} \leq r_B$. Now, let us fix $\phi \in B$, consider $l_1, l_2 \in [0,1]$ and denote $\Psi(l_1, \phi) = D_1$, $\Psi(l_2, \phi) = D_2$ and $\tilde{D} = D_1 - D_2$. Then, $\tilde{D}$ satisfies \begin{equation} \label{B} \left\{ \begin{array}{lcl} \displaystyle \frac{\partial \tilde{D}}{\partial t} - \sigma \Delta \tilde{D} + \gamma l_2\Lambda(\phi) \tilde{D} + \gamma_N l_2 \Theta(\phi) \tilde{D} + \tau \tilde{D} = \\ - \gamma\Lambda(\phi)D_1 (l_1 - l_2) - \gamma_N \Theta(\phi)D_1(l_1 - l_2), &\textup{in}& Q, \vspace{0.2cm} \\ \displaystyle \frac{\partial \tilde{D}}{\partial \eta} = 0, &\textup{on}& \Gamma, \vspace{0.2cm} \\ \displaystyle \tilde{D}(\cdot, 0) = \tilde{D}_0(\cdot) = 0, &\textup{in}& \Omega. \end{array} \right. \end{equation} Using Proposition \ref{sol.
Neumann}, the embedding $L^{\infty}(Q) \hookrightarrow L^4(Q)$ and the bound $D_1 \leq \frac{\mu}{\tau}$, we get \begin{eqnarray*} ||\tilde{D}||_{W_{4}^{2,1}(Q)} \leq C_p |l_1 - l_2| \, ||\gamma\Lambda(\phi)D_1 + \gamma_N \Theta(\phi)D_1||_{L^4(Q)} \\ \leq \bar{C}_p \gamma \frac{\mu}{\tau} |l_1 - l_2| ||\Lambda(\phi)||_{L^\infty(Q)} + \bar{C}_p \gamma_N \frac{\mu}{\tau} |l_1 - l_2| ||\Theta(\phi)||_{L^\infty(Q)}. \end{eqnarray*} Then, by Lemmas \ref{PropertiesEtcFirst} and \ref{icontLp01}, we finally have \begin{eqnarray*} ||\Psi(l_1,\phi) - \Psi(l_2,\phi)||_{L^\infty(Q)} \leq C |l_1 - l_2|, \end{eqnarray*} where $C$ depends on $\bar{C}_p$, $\gamma$, $\gamma_N$, $\mu$, $\tau$, $r_N$, $T$, $C_{\lambda}$, $||N_0||_{L^\infty(\Omega)}$, $||A_0||_{L^\infty(\Omega)}$ and the embedding constant. \hfill$\Box$ \\ \begin{lemma} \label{estimativa1} Suppose $0 \leq D_0 \leq \frac{\mu}{\tau}$ a.e. in $\Omega$. Then there exists a number $\rho > 0$ such that, for any $l \in [0,1]$ and any possible fixed point $D \in L^\infty(Q)$ of $\Psi(l, \cdot)$, there holds $\|D\|_{L^\infty(Q)} < \rho$. \end{lemma} \noindent {\bf Proof:} Let $D \in L^\infty(Q)$ be such that $\Psi(l, D) = D$. An argument analogous to that in the proof of Lemma \ref{nãonegativa1} guarantees that $||D||_{L^\infty(Q)} \leq \frac{\mu}{\tau}$. Therefore, it suffices to take $\rho = \frac{\mu}{\tau} + 1$. \hfill$\Box$ \\ \begin{lemma} \label{fix1} The mapping $\Psi(0,\cdot): L^\infty(Q) \rightarrow L^\infty(Q)$ has a unique fixed point.
\end{lemma} \noindent {\bf Proof:} Indeed, letting $l =0$ in (\ref{P6}), $D$ is a fixed point of $\Psi(0, \cdot)$ if, and only if, $D$ is the unique solution to the problem \begin{equation*} \label{P0} \left\{ \begin{array}{lcl} \displaystyle \frac{\partial D}{\partial t} = \sigma \Delta D + \mu \chi_{\omega} - \tau D, &\textup{in}& Q, \vspace{0.2cm} \\ \displaystyle \frac{\partial D}{\partial \eta} (\cdot) =0, &\textup{on}& \Gamma, \vspace{0.2cm} \\ \displaystyle D(\cdot,0) = D_{0} (\cdot), &\textup{in}& \Omega. \end{array} \right. \end{equation*} But Proposition {\ref{sol. Neumann}} guarantees the existence of a unique solution $D \in W^{2,1}_{4}(Q)\hookrightarrow L^{\infty}(Q)$ of this last problem; therefore $\Psi(0, \cdot )$ has a unique fixed point in $L^\infty(Q)$. \hfill$\Box$ \\ \begin{proposition} \label{existenciaA} There is a nonnegative solution $\hat{D} \in W^{2,1}_4(Q)$ of problem (\ref{P3}). \end{proposition} \noindent {\bf Proof:} From Lemmas \ref{lemaAi}, \ref{con1}, \ref{unifcon1}, \ref{estimativa1} and \ref{fix1}, we conclude that the mapping $\Psi: [0,1] \times L^\infty(Q) \rightarrow L^\infty(Q)$ satisfies the hypotheses of the Leray-Schauder fixed point theorem (see Friedman \cite[Theorem 3, p.~189]{Friedman}). Thus, there exists $\hat{D} \in L^\infty(Q)$ such that $\Psi(1, \hat{D}) = \hat{D}$. Moreover, by Lemmas \ref{lemaAi} and \ref{nãonegativa1}, $\hat{D} \in W^{2,1}_4(Q)$ is nonnegative, and $\hat{D}$ is the required solution of (\ref{P3}). \hfill$\Box$ \\ \section{Proof of Theorem \ref{Teorema1}} \begin{proposition} \label{h2} There is a nonnegative solution $(\hat{N}, \hat{A}, \hat{D}) \in L^\infty(Q) \times L^\infty(Q) \times W^{2,1}_4(Q)$ of the modified problem (\ref{P01}). \end{proposition} \noindent {\bf Proof:} Combine Proposition \ref{existenciaA}, Remark \ref{obs1} and Lemma \ref{PropertiesEtcFirst}. \hfill$\Box$ \\ \begin{remark} \label{estimativaN} We claim that $\hat{N}, \hat{A} \in W$.
Indeed, by Lemma \ref{PropertiesEtcFirst} we know that $\hat{N} = \Theta(\hat{D}), \hat{A} = \Lambda(\hat{D}) \in L^\infty(Q)$. Moreover, returning to the first equation of (\ref{P01}) and using Lemmas \ref{PropertiesEtcFirst} and \ref{nãonegativa1}, it follows that \begin{equation}\label{Nt} \begin{array}{cccc} \displaystyle \bigg|\frac{\partial \hat{N}}{\partial t}\bigg| \leq r_N + \mu_N (||N_0||_{L^\infty(\Omega)} + r_N T) + \beta_1 (||N_0||_{L^\infty(\Omega)} + r_N T) C_{\lambda}||A_0||_{L^\infty(\Omega)} \\ \displaystyle + \alpha_N \gamma_N \frac{\mu}{\tau} (||N_0||_{L^\infty(\Omega)} + r_N T), \end{array} \end{equation} a.e. in $Q$, i.e., $\hat{N}_t \in L^\infty(Q)$. Moreover, returning to the second equation of (\ref{P01}) and using again Lemmas \ref{PropertiesEtcFirst} and \ref{nãonegativa1}, we get \begin{equation}\label{At} \begin{array}{cccc} \displaystyle \bigg|\frac{\partial \hat{A}}{\partial t}\bigg| \leq r_A C_{\lambda}||A_0||_{L^\infty(\Omega)} + \frac{r_A}{k_A}(C_{\lambda}||A_0||_{L^\infty(\Omega)})^2 + (\mu_A+\epsilon_A)C_{\lambda}||A_0||_{L^\infty(\Omega)} \\ \displaystyle + \alpha_A\gamma_A \frac{\mu}{\tau} C_{\lambda}||A_0||_{L^\infty(\Omega)}, \end{array} \end{equation} a.e. in $Q$, i.e., $\hat{A}_t \in L^\infty(Q)$. \end{remark} \begin{proposition} \label{j1} There is a nonnegative solution $(N, A, D) \in W \times W \times W^{2,1}_4(Q)$ of problem (\ref{0riginalEquations}). \end{proposition} \noindent {\bf Proof:} Combine Proposition \ref{h2} with Remarks \ref{obs2} and \ref{estimativaN}. \hfill$\Box$ \\ \begin{proposition} \label{j2} The solution $(N, A, D)$ of problem (\ref{0riginalEquations}) is unique.
\end{proposition} \noindent {\bf Proof:} Let $(N_1, A_1, D_1)$ and $(N_2, A_2, D_2)$ be solutions to problem (\ref{0riginalEquations}); if $\tilde{N} = N_1 - N_2, \tilde{A} = A_1 - A_2$ and $\tilde{D} = D_1 - D_2$, then $\tilde{N}$, $\tilde{A}$ and $\tilde{D}$ satisfy the following problems, respectively: \begin{equation} \label{original1} \left\{ \begin{array}{lcl} \displaystyle \frac{\partial \tilde{N}}{\partial t} = - \mu_N \tilde{N} - \beta_1 A_1 \tilde{N} -\beta_1 N_2\tilde{A} - \alpha_N\gamma_N N_1 \tilde{D}-\alpha_N\gamma_N D_2\tilde{N}, & \textup{in}& Q, \vspace{0.2cm} \\ \displaystyle \displaystyle \tilde{N}(\cdot,0) = \tilde{N}_0(\cdot) = 0, &\textup{in}& \Omega, \end{array} \right. \end{equation} \begin{equation} \label{original2} \left\{ \begin{array}{lcl} \displaystyle \frac{\partial \tilde{A}}{\partial t} = r_A \tilde{A} -\frac{r_A}{k_A}(A_1 + A_2)\tilde{A}-(\mu_A+\epsilon_A)\tilde{A} - \alpha_A\gamma_A A_1 \tilde{D}- \alpha_A\gamma_A D_2\tilde{A}, &\textup{in}& Q, \vspace{0.2cm} \\ \displaystyle \tilde{A}(\cdot,0) = \tilde{A}_0(\cdot) = 0, &\textup{in}& \Omega, \end{array} \right. \end{equation} \begin{equation} \label{original3} \left\{ \begin{array}{lcl} \displaystyle \frac{\partial \tilde{D}}{\partial t} = \sigma \Delta \tilde{D} - \gamma A_1 \tilde{D} -\gamma D_2\tilde{A} - \gamma_N N_1 \tilde{D} -\gamma_N D_2\tilde{N} - \tau \tilde{D}, &\textup{in}& Q, \vspace{0.2cm} \\ \displaystyle \frac{\partial \tilde{D}}{\partial \eta} (\cdot) =0, &\textup{on}& \Gamma, \vspace{0.2cm} \\ \displaystyle \tilde{D}(\cdot,0) = \tilde{D}_0(\cdot) = 0, &\textup{in}& \Omega. \end{array} \right.
\end{equation} Multiplying the first equation of (\ref{original1}) by $\tilde{N}$, integrating over $\Omega$, using the fact that $N_1, N_2 \leq ||N_0||_{L^\infty(\Omega)} + r_N T$ and Young's inequality, we have \begin{eqnarray*} \frac{1}{2}\frac{d}{dt} \int_{\Omega} \tilde{N}^2 dx &=& -\mu_N \int_{\Omega} \tilde{N}^2 dx -\beta_1 \int_{\Omega} N_2\tilde{A}\tilde{N} dx - \alpha_N\gamma_N \int_{\Omega} N_1\tilde{D}\tilde{N} dx \\ \\ &-& \alpha_N\gamma_N \int_{\Omega} D_2 \tilde{N}^2 dx \\ \\ &\leq& (||N_0||_{L^\infty(\Omega)} + r_N T) \bigg(\beta_1 \int_{\Omega}|\tilde{A}||\tilde{N}| dx + \alpha_N\gamma_N \int_{\Omega} |\tilde{D}||\tilde{N}| dx\bigg) \\ &\leq& C \int_{\Omega} (\tilde{A}^2 + \tilde{N}^2 + \tilde{D}^2) dx, \end{eqnarray*} where $C$ depends on $\beta_1$, $\alpha_N$, $\gamma_N$, $r_N$, $T$ and $||N_0||_{L^\infty(\Omega)}$. Now, multiplying the first equation of (\ref{original2}) by $\tilde{A}$, integrating over $\Omega$, using the fact that $A_1 \leq C_{\lambda}||A_0||_{L^\infty(\Omega)} $ and Young's inequality, we obtain \begin{eqnarray*} \frac{1}{2}\frac{d}{dt} \int_{\Omega} \tilde{A}^2 dx &=& r_A \int_{\Omega}\tilde{A}^2 dx - \frac{r_A}{k_A}\int_{\Omega}(A_1 + A_2)\tilde{A}^2 dx -(\mu_A+\epsilon_A)\int_{\Omega}\tilde{A}^2 dx \\ &-& \alpha_A \gamma_A \int_{\Omega} A_1 \tilde{D}\tilde{A} dx - \alpha_A \gamma_A \int_{\Omega}D_2\tilde{A}^2 dx \\ &\leq& r_A \int_{\Omega}|\tilde{A}|^2 dx + \alpha_A \gamma_A C_{\lambda}||A_0||_{L^\infty(\Omega)} \int_{\Omega} |\tilde{D}||\tilde{A}| dx \\ &\leq& C \int_{\Omega} (\tilde{A}^2 + \tilde{N}^2 + \tilde{D}^2) dx, \end{eqnarray*} where $C$ depends on $r_A$, $\alpha_A$, $\gamma_A$, $C_{\lambda}$ and $||A_0||_{L^\infty(\Omega)}$.
Lastly, multiplying the first equation of (\ref{original3}) by $\tilde{D}$, integrating over $\Omega$, using the fact that $D_2 \leq \frac{\mu}{\tau}$ and Young's inequality, we obtain \begin{eqnarray*} \frac{1}{2}\frac{d}{dt} \int_{\Omega} \tilde{D}^2 dx &=& -\sigma \int_{\Omega} |\nabla \tilde{D}|^2 dx - \gamma \int_{\Omega} A_1 {\tilde{D}}^2 dx - \gamma \int_{\Omega} D_2\tilde{A} \tilde{D} dx \\ &-& \gamma_N \int_{\Omega}N_1 \tilde{D}^2 dx - \gamma_N \int_{\Omega} D_2 \tilde{N}\tilde{D} dx - \tau \int_{\Omega}{\tilde{D}}^2 dx \\ &\leq& \gamma\frac{\mu}{\tau} \int_{\Omega} |\tilde{A}| |\tilde{D}| dx + \gamma_N \frac{\mu}{\tau} \int_{\Omega}|\tilde{N}||\tilde{D}| dx \\ &\leq& C \int_{\Omega} (\tilde{A}^2 + \tilde{N}^2 + \tilde{D}^2) dx, \end{eqnarray*} where $C$ depends on $\gamma$, $\gamma_N$, $\mu$ and $\tau$. Thus, \begin{eqnarray*} \frac{d}{dt}\bigg( \int_{\Omega} (|\tilde{N}|^2 + |\tilde{A}|^2 + |\tilde{D}|^2) dx \bigg) &\leq& C \int_{\Omega} (|\tilde{N}|^2 + |\tilde{A}|^2+ |\tilde{D}|^2) dx, \end{eqnarray*} and using Gronwall's inequality, we finally obtain \begin{equation*} \int_{\Omega} (|\tilde{N}|^2 + |\tilde{A}|^2 + |\tilde{D}|^2) dx \leq e^{CT} \int_{\Omega} (|\tilde{N}_0|^2 + |\tilde{A}_0|^2 + |\tilde{D}_0|^2) dx = 0, \end{equation*} that is, $||\tilde{N}(\cdot, t)||_{L^2(\Omega)}^2 + ||\tilde{A}(\cdot, t)||_{L^2(\Omega)}^2 + ||\tilde{D}(\cdot, t)||_{L^2(\Omega)}^2 = 0$, for all $t \in (0, T)$, from which we conclude that $\tilde{N} = \tilde{A} = \tilde{D} = 0$ a.e. in $Q$ and therefore $N_1 = N_2, A_1 = A_2$ and $D_1 = D_2$ a.e. in $Q$. \hfill$\Box$ \\ \section{Numerical simulations} In this section, we provide numerical simulations illustrating different model behaviors. The settings and methods used to implement the simulations are as follows. We consider the spatial domain as a square $\Omega=[0,L] \times [0,L]$, with $L=1$, discretized with $n=50$ steps $\Delta x = \Delta y = L/n=0.02$.
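One standard way to realize this discretization is sketched below. This is an illustrative Python snippet, not the authors' Mathematica implementation: it applies a second-order centered five-point Laplacian on the uniform grid, with the zero-flux (Neumann) boundary condition $\partial D/\partial\eta = 0$ enforced by mirrored ghost points. All names here are hypothetical.

```python
# Minimal sketch (not the paper's code) of a five-point centered-difference
# Laplacian on the uniform grid over Omega = [0,1] x [0,1], with homogeneous
# Neumann (zero-flux) boundary conditions enforced by mirrored indices.

L = 1.0
n = 50
h = L / n  # dx = dy = 0.02

def laplacian(D):
    """Apply the 5-point Laplacian to an (n+1) x (n+1) grid function D."""
    m = len(D)
    out = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            # Neumann BC: mirror neighbor indices at the boundary.
            ip = i + 1 if i + 1 < m else i - 1
            im = i - 1 if i - 1 >= 0 else i + 1
            jp = j + 1 if j + 1 < m else j - 1
            jm = j - 1 if j - 1 >= 0 else j + 1
            out[i][j] = (D[ip][j] + D[im][j] + D[i][jp] + D[i][jm]
                         - 4.0 * D[i][j]) / h**2
    return out
```

Each grid node then contributes one ordinary differential equation for $D$, yielding the coupled ODE system solved by the method of lines.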
The Laplacian $\Delta D$ is approximated by second order centered finite differences and the coupled ODE system arising from such discretization is solved with the method of lines in the software \textit{Mathematica}. The simulations run from time $t=0$ until $t=25$ (which is enough to achieve stationary behavior in all simulations). The initial conditions for numerical simulations are $N(x,0)=N_2$, $A(x,0)=A_2$, $D(x,0)=0$, where $(N_2,A_2,0)$ is a globally asymptotically stable equilibrium point for the ODE system \eqref{início1} without treatment ($\nu=0$). The expressions for $N_2$ and $A_2$ are: \[ N_2=\dfrac{r_N}{\mu_N+\beta_1 A_2}, \ \ \ A_2=\dfrac{r_A-\mu_A-\epsilon_A}{r_A}K_A. \] Such equilibrium is always globally asymptotically stable in system \eqref{início1} (see details in \cite{Fassoni}). From the biological point of view, these initial conditions correspond to the start of chemotherapy application when a tumor is already formed: the normal cells were not able to control tumor growth, and no chemotherapy was applied until the tumor reached a stationary state. To avoid large numbers and numerical instabilities, we re-scale the populations with respect to their possible maximum values, setting $N \leftarrow N/(r_N/\mu_N)$ and $A\leftarrow A/K_A$. Therefore, the population sizes range from $0$ to $1$. The re-scaled parameter values used in the model simulations were fixed to \[ r_N=1, \ \mu_N=1, \ r_A = 1, \ K_A = 1, \ \beta_1 = 1.5, \ \mu_A =0.05, \ \epsilon_A =0.05, \ \] \[ \tau_H=0.9, \ \gamma_N=0.1, \ \alpha_N =1, \ \gamma_A=1. \] These values were chosen to describe: normal cells that reach the equilibrium $N=r_N/\mu_N=1$ in the absence of tumor cells; a tumor with the same carrying capacity of normal cells ($K_A=r_N/\mu_N=1$) and a greater absorption of the chemotherapeutic drug by tumor cells in comparison with normal cells ($\gamma_A>\gamma_N$), due to the drug specificity.
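The equilibrium values used as initial conditions follow directly from the formulas above; the short snippet below (illustrative only, using the re-scaled parameter values just listed) evaluates them.

```python
# Sanity check (illustrative, not the paper's code) of the tumor-bearing
# equilibrium (N2, A2, 0) used as initial condition, with the re-scaled
# parameter values from the text.

r_N, mu_N = 1.0, 1.0
r_A, K_A = 1.0, 1.0
beta_1 = 1.5
mu_A, eps_A = 0.05, 0.05

A2 = (r_A - mu_A - eps_A) / r_A * K_A  # re-scaled tumor equilibrium
N2 = r_N / (mu_N + beta_1 * A2)        # suppressed normal-cell equilibrium
```

So the simulations start from a tissue dominated by the tumor ($A_2 = 0.9$) with a suppressed normal-cell population ($N_2 = 1/2.35 \approx 0.43$).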
In order to illustrate different biological outcomes in the model simulations, we allowed the following parameters to assume different values: the chemotherapeutic drug cytotoxicity against cancer cells $\alpha_A$, the diffusion coefficient of the chemotherapeutic drug $\sigma$ and the chemotherapy infusion rate $\mu$. We will show that these properties of the drug and the infusion rate are crucial for determining an effective treatment. We also simulated different positions for the subset $\omega$, which is a mathematical description of a blood vessel crossing the tissue, from where the chemotherapy enters the tissue. The values for parameters $\alpha_A$, $\sigma$, $\mu$ and the position of $\omega$ used in each simulation are indicated in Table \ref{tableSims}. We present the following results. \begin{center} \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Simulation & Figure & Outcome & $\alpha_A$ & $\mu$ & $\sigma$ & $\omega$ \\ \hline 1 & \ref{fig1} & tumor persistence & 5 & 3 & 0.1 & $[0.45,0.55] \times [0.45,0.55]$ \\ \hline 2 & \ref{fig2} & tumor persistence & 10 & 3 & 0.1 & $[0,0.1]$ \\ \hline 3 & \ref{fig3} & tumor extinction & 10 & 6 & 0.1 & $[0,0.1]$ \\ \hline 4 & \ref{fig4} & tumor extinction & 10 & 3 & 0.2 & $[0,0.1]$ \\ \hline 5 & \ref{fig5} & tumor extinction & 20 & 3 & 0.1 & $[0,0.1]$ \\ \hline \end{tabular} \caption{Set-up of the different simulations and their biological outcomes. Each row indicates the numerical values used for the chemotherapeutic parameters $\alpha_A$ (cytotoxicity), $\sigma$ (diffusion coefficient), $\mu$ (infusion rate), and the position of $\omega \subset \Omega \subset \mathbb{R}^2$.
Simulation 1 was performed in a two-dimensional domain $\Omega=[0,1]\times [0,1]$, while simulations 2-5 were performed in a one-dimensional domain $\Omega=[0,1]$.} \label{tableSims} \end{table} \end{center} In the first simulation of system \eqref{0riginalEquations}, we confirm that our model and numerical methods are able to reproduce the expected biological behavior (Figure \ref{fig1}). The blood vessel crosses the tissue at its center, i.e., $\omega =[0.45,0.55] \times [0.45,0.55]$. We use the following parameter values: $\alpha_A=5$, $\mu=3$, and $\sigma=0.1$. With such values, the chemotherapy is not able to lead to tumor extinction. We observe that tumor cells near the blood vessel are reduced but not eliminated by the chemotherapeutic effect, while those distant from the blood vessel persist (Figure \ref{fig1}). \begin{figure}[!htb] \centering \includegraphics[width=0.32\linewidth]{fig1a_1.pdf} \includegraphics[width=0.32\linewidth]{fig1a_2.pdf} \includegraphics[width=0.32\linewidth]{fig1a_3.pdf} \includegraphics[width=0.32\linewidth]{fig1a_4.pdf} \includegraphics[width=0.32\linewidth]{fig1a_5.pdf} \includegraphics[width=0.32\linewidth]{fig1a_6.pdf} \includegraphics[width=0.32\linewidth]{fig1a_7.pdf} \includegraphics[width=0.32\linewidth]{fig1a_8.pdf} \includegraphics[width=0.32\linewidth]{fig1a_9.pdf} \caption{Results of Simulation 1, for model (\ref{0riginalEquations}) within a two-dimensional domain $\Omega = [0,L] \times [0,L] =[0,1] \times [0,1]$. Plots of model solutions $N(x,y,t)$ (normal cells, blue, top row), $A(x,y,t)$ (cancer cells, red, middle row) and $D(x,y,t)$ (chemotherapeutic drug concentration, green, bottom row) at time points $t=0,1,15$ (columns 1,2 and 3, respectively). See Table \ref{tableSims} for parameter values used here.
At time $t=0$, the tumor is spread through the tissue, and as chemotherapy is applied ($t>0$), the tumor cells are reduced in the vicinity of the blood vessel, while the distant tumor cells persist along time (the shape of the solution at time $t=15$ is stationary). Within the vicinity of the blood vessel, the removal of tumor cells allows the normal tissue to recover and grow.} \label{fig1} \end{figure} To make the model dynamics easier to illustrate, we present the results of the next simulations in a one-dimensional domain $\Omega = [0, 1]$. In Simulation 2, we use the same parameter values as in Simulation 1 (see Table \ref{tableSims}), but increase the chemotherapy toxicity $\alpha_A$ and move the blood vessel to the left side of the tissue, $\omega=[0,0.1]$. Although the tumor cells in the vicinity of the blood vessel are extinct, the chemotherapy is still not able to eliminate the distant tumor cells (Figure \ref{fig2}). Thus, we observe tumor persistence in the long-term. In Simulation 3, we keep the parameters as in Simulation 2, but increase the chemotherapy infusion rate $\mu$ (mimicking a higher dose). We observe that the tumor cells are extinct in the entire tissue (Figure \ref{fig3}). In Simulation 4, we illustrate another mechanism to achieve tumor extinction: instead of increasing the drug dose, we adopt the parameter values of Simulation 2, but increase the drug diffusion $\sigma$, so that it is capable of spreading over the entire tissue and effectively eliminating all tumor cells (Figure \ref{fig4}). Finally, in Simulation 5, we also adopt the parameter values of Simulation 2, but increase the chemotherapy toxicity against tumor cells $\alpha_A$. This also leads to tumor extinction (Figure \ref{fig5}). An advantage of the strategies adopted in Simulations 4 and 5, in comparison with Simulation 3 (increasing dose), is that the former lead to fewer side effects.
Simulation 4 describes the use of a drug which spreads faster, while Simulation 5 illustrates the use of a more potent and specific drug, which targets more tumor cells but not more normal cells ($\alpha_N$ was not changed). Taken together, these simulations and the different outcomes observed for different parameter values confirm the ability of the model to consistently describe tumor chemotherapy and illustrate the potential of mathematical models to provide testable hypotheses that could be studied together with clinicians in order to achieve better results in the treatment of cancer. \begin{figure}[!htb] \centering \includegraphics[width=0.32\linewidth]{fig2a.pdf} \includegraphics[width=0.32\linewidth]{fig2b.pdf} \includegraphics[width=0.32\linewidth]{fig2c.pdf} \includegraphics[width=0.32\linewidth]{fig2d.pdf} \includegraphics[width=0.32\linewidth]{fig2e.pdf} \includegraphics[width=0.32\linewidth]{fig2f.pdf} \caption{Results of Simulation 2, with $\Omega = [0,L]= [0,1]$. Plots of model solutions $A(x,t)$ (cancer cells, red), $N(x,t)$ (normal cells, blue) and $D(x,t)$ (chemotherapeutic drug concentration, green) at time points $t=0,3,6,9,12,15$. See Table \ref{tableSims} for parameter values used here. At time $t=0$, the tumor is spread through the tissue, and as chemotherapy is applied ($t>0$), the tumor cells are reduced and extinct within a given distance from the blood vessel ($x<0.6$), but not in the entire tissue ($x>0.6$). Within the region of tumor extinction, the removal of tumor cells allows the normal tissue to recover and grow.} \label{fig2} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.32\linewidth]{fig3a.pdf} \includegraphics[width=0.32\linewidth]{fig3b.pdf} \includegraphics[width=0.32\linewidth]{fig3c.pdf} \includegraphics[width=0.32\linewidth]{fig3d.pdf} \includegraphics[width=0.32\linewidth]{fig3e.pdf} \includegraphics[width=0.32\linewidth]{fig3f.pdf} \caption{Results of Simulation 3, with $\Omega = [0,L]= [0,1]$.
Plots of model solutions $A(x,t)$ (cancer cells, red), $N(x,t)$ (normal cells, blue) and $D(x,t)$ (chemotherapeutic drug concentration, green) at time points $t=0,3,6,9,12,15$. See Table \ref{tableSims} for parameter values used here. At time $t=0$, the tumor is spread through the tissue, and as chemotherapy is applied ($t>0$), the tumor cells are reduced and extinct in the entire tissue. In comparison with Simulation 2, the tumor extinction is reached because the drug infusion rate $\mu$ is increased here. Within the entire tissue, the removal of tumor cells allows the normal tissue to recover and grow.} \label{fig3} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.32\linewidth]{fig4a.pdf} \includegraphics[width=0.32\linewidth]{fig4b.pdf} \includegraphics[width=0.32\linewidth]{fig4c.pdf} \includegraphics[width=0.32\linewidth]{fig4d.pdf} \includegraphics[width=0.32\linewidth]{fig4e.pdf} \includegraphics[width=0.32\linewidth]{fig4f.pdf} \caption{Results of Simulation 4, with $\Omega = [0,L]= [0,1]$. Plots of model solutions $A(x,t)$ (cancer cells, red), $N(x,t)$ (normal cells, blue) and $D(x,t)$ (chemotherapeutic drug concentration, green) at time points $t=0,3,6,9,12,25$. See Table \ref{tableSims} for parameter values used here. At time $t=0$, the tumor is spread through the tissue, and as chemotherapy is applied ($t>0$), the tumor cells are reduced and extinct in the entire tissue. In comparison with Simulation 2, the tumor extinction is reached because the drug diffusion coefficient $\sigma$ is increased here.} \label{fig4} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.32\linewidth]{fig5a.pdf} \includegraphics[width=0.32\linewidth]{fig5b.pdf} \includegraphics[width=0.32\linewidth]{fig5c.pdf} \includegraphics[width=0.32\linewidth]{fig5d.pdf} \includegraphics[width=0.32\linewidth]{fig5e.pdf} \includegraphics[width=0.32\linewidth]{fig5f.pdf} \caption{Results of Simulation 5, with $\Omega = [0,L]= [0,1]$.
Plots of model solutions $A(x,t)$ (cancer cells, red), $N(x,t)$ (normal cells, blue) and $D(x,t)$ (chemotherapeutic drug concentration, green) at time points $t=0,3,6,9,12,15$. See Table \ref{tableSims} for parameter values used here. At time $t=0$, the tumor is spread through the tissue, and as chemotherapy is applied ($t>0$), the tumor cells are reduced and extinct in the entire tissue. In comparison with Simulation 2, the tumor extinction is reached because the chemotherapy toxicity against tumor cells, $\alpha_A$, was increased.} \label{fig5} \end{figure} \bibliographystyle{amsplain}
\section{Introduction} In practice, any quantum system interacting with the environment (the bath) cannot be completely isolated from it. \cite{Quantum Ddecoherence} In quantum information and quantum computation, \cite{QInformation} the decay process of a quantum system induced by quantum fluctuations of the bath is very important for the qubit. In quantum optics, \cite{Quantum Optics} the Jaynes-Cummings model has been one of the most important models, \cite{JC model} describing the light-matter interaction of a two-level atom and a single mode of the quantized electromagnetic field. \cite{Cavity QED} Among these light-matter interaction issues, \cite{Atom-photon interaction} the revivals and collapses of the atomic population inversion (also named Rabi oscillation) have been studied in the literature. \cite{Rabi-1,Rabi-2,Rabi-3-experment} Decay of Rabi oscillation has also been used as a tool to characterize decoherence in superconducting qubits (charge, phase and flux qubits). \cite{JJ qubit,JJ qubit-Nori} Recently, in circuit QED systems, \cite{Circuit QED} researchers have performed spectroscopic measurements of a superconducting qubit dispersively coupled to a nonlinear resonator driven by a pump microwave field. \cite{Circuit QED with Nonlinear Resonator} Also, in nanomechanical QED systems, the integration of Josephson junction qubits and nanomechanical resonators is attracting considerable attention. \cite{Nanomechanical QED,Nanomechanical Resonator,Nanomechanical Resonator approaching quantum limit,Roukes13} The dynamics of all these qubit-resonator systems can be described by the Jaynes-Cummings Hamiltonian. When the intrinsic nonlinearity of the nanomechanical resonator \cite{Source of Nonlinearity} is considered in the coupled qubit-resonator system, a superconducting qubit can be used to probe quantum fluctuations of the nonlinear resonator.
\cite{Quantum Heating of a Nonlinear Resonator} Moreover, the nonlinearity can be used to create nonclassical states in mechanical systems \cite{nonlinearity to creast nonclassical state, Fock state in mechanical freedom} and to selectively address the nanomechanical qubit transitions in quantum information processing. \cite{Qinformation with NR Qubit} In previous studies, \cite{An Open System Approach to Quantum Optics} the master equation approach has been used to treat open quantum systems. In this paper, considering the influence of the environment on this nanomechanical QED system, we use the microscopic master equation approach \cite{Microscopic master equation} to solve for the time evolution of the density operator of the qubit-resonator system and study the temporal behavior of Rabi oscillation. The paper is structured as follows. In Sec. II, a nonlinear Jaynes-Cummings model \cite{Nonlinear JC model} is used to describe the dynamics of the coupled qubit-nanomechanical resonator system. In Sec. III, using the microscopic master equation approach, we solve for the time evolution of the density operator of the qubit-resonator system. The probability of finding the qubit in its excited state is calculated to show the temporal behavior of Rabi oscillation. Finally, the results are summarized.
\section{the qubit-resonator system} In nanomechanical QED systems, we can use a Jaynes-Cummings type Hamiltonian to describe the dynamics of the qubit-resonator system consisting of a charge qubit and a nanomechanical resonator, \begin{equation} H_{JC}=\frac{\omega_{q}}{2}\sigma_{z}+g\left(a\sigma_{+}+a^{\dagger}\sigma_{-}\right)+\omega_{c}a^{\dagger}a.\label{JC Ham} \end{equation} Considering the nonlinearity of the nanomechanical resonator, the Hamiltonian for this qubit-resonator system reads \cite{Nonlinear Quantum Decoherence} \begin{equation} H_{S}=H_{JC}+\chi a^{\dagger}a+\chi\left(a^{\dagger}a\right)^{2}.\label{Nonlinear JC Ham} \end{equation} Here the rotating-wave approximation at resonance ($\omega_{q}=\omega_{c}=\omega$) and $\hbar=1$ are adopted. Corresponding to the charge qubit and the nanomechanical resonator, the lowering (raising) operator $\sigma_{-}$ ($\sigma_{+}$) and the annihilation (creation) operator $a$ ($a^{\dagger}$) satisfy the commutation relations $[\sigma_{-},\sigma_{+}]=\sigma_{z}$ and $[a,a^{\dagger}]=1$. The Hamiltonian in Eq. (\ref{Nonlinear JC Ham}) describes the dynamics of a nonlinear Jaynes-Cummings model, \cite{Nonlinear JC model} and a quartic potential $x^{4}$ \cite{Source of Nonlinearity} gives the nonlinear part $\chi\left(a^{\dagger}a\right)^{2}$, which leads to phonon-phonon interaction in nanomechanical QED systems. Here $g$ is the coupling constant and $\chi$ is the nonlinearity parameter ($\chi\ll g$).
Solving the Hamiltonian $H_{S}$, we get the ground state $\vert E_{0}\rangle=\vert00\rangle$ with energy $E_{0}=-\omega/2$ and the excited-state doublet \begin{eqnarray*} \left\vert E_{1+}\right\rangle & = & \cos\left(\frac{\theta}{2}\right)\left\vert 10\right\rangle +\sin\left(\frac{\theta}{2}\right)\left\vert 01\right\rangle ,\\ \left\vert E_{1-}\right\rangle & = & -\sin\left(\frac{\theta}{2}\right)\left\vert 10\right\rangle +\cos\left(\frac{\theta}{2}\right)\left\vert 01\right\rangle , \end{eqnarray*} with energies \[ E_{1\pm}=\left(\frac{\omega}{2}+\chi\right)\pm\Omega. \] Here the parameters $\Omega=\sqrt{g^{2}+\chi^{2}}$ and $\theta=\arcsin\left(g/\Omega\right)$ are defined. Including the loss of the nanomechanical resonator, the total Hamiltonian \[ H_{\text{total}}=H_{S}+H_{I}+H_{B} \] consists of three parts, i.e., the system part $H_{S}$, the interaction part \[ H_{I}=\sum_{j}\xi_{j}\left(ab_{j}^{\dagger}+a^{\dagger}b_{j}\right) \] and the bath part \[ H_{B}=\sum_{j}\omega_{j}b_{j}^{\dagger}b_{j}, \] where $b_{j}$ and $b_{j}^{\dagger}$ are bosonic annihilation and creation operators for the bath oscillators with mode frequencies $\omega_{j}$ ($j=1,2,...$). In this paper, we adopt the microscopic master equation approach \cite{Microscopic master equation} to solve for the time evolution of the density operator $\rho$ of the qubit-resonator system; our master equation is \begin{equation} \dot{\rho}=\mathcal{L}\rho\label{master equation-1} \end{equation} where $\mathcal{L}$ is a time-independent linear superoperator. Using the microscopic master equation approach, \cite{Microscopic master equation} we obtain the eigen-equations \begin{equation} \mathcal{L}\rho_{k}=\lambda_{k}\rho_{k}.\label{eigen operator} \end{equation} Here $\left\{ \rho_{k}\right\} $ is the set of eigenoperators of the superoperator $\mathcal{L}$ with eigenvalues $\left\{ \lambda_{k}\right\} $, indexed by $k$.
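The doublet energies can be checked by hand: in the single-excitation subspace spanned by $\vert 10\rangle$ and $\vert 01\rangle$, $H_S$ reduces to a $2\times 2$ matrix whose eigenvalues reproduce $E_{1\pm}$. The snippet below performs this check; the parameter values are illustrative only.

```python
# Check of the excited-doublet energies E_{1±} = ω/2 + χ ± Ω, Ω = sqrt(g²+χ²).
# In the single-excitation subspace {|10>, |01>}, H_S reduces to the 2x2
# matrix [[ω/2, g], [g, ω/2 + 2χ]]; its eigenvalues are computed analytically.
import math

omega, g, chi = 1.0, 0.1, 0.04  # illustrative values

a = omega / 2.0            # <10| H_S |10>
b = omega / 2.0 + 2 * chi  # <01| H_S |01>  (includes χn + χn² with n = 1)

mean = (a + b) / 2.0
disc = math.sqrt(((a - b) / 2.0) ** 2 + g ** 2)
E_minus, E_plus = mean - disc, mean + disc

Omega = math.sqrt(g ** 2 + chi ** 2)
# mean = ω/2 + χ and disc = Ω, so E_{1±} = ω/2 + χ ± Ω as in the text.
```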
Given the initial state of the qubit-resonator system, the initial reduced density operator $\rho\left(0\right)$ is expanded in terms of $\rho_{k}$, \begin{equation} \rho\left(0\right)=\sum_{k}C_{k}\rho_{k}\label{Ck} \end{equation} where the $C_{k}$ are time-independent coefficients. The results in Ref. \cite{Microscopic master equation} tell us that the time evolution of the reduced density operator $\rho$ will be \begin{equation} \rho\left(t\right)=\sum_{k}C_{k}e^{\lambda_{k}t}\rho_{k}.\label{time rho} \end{equation} Since only one excitation is of interest, our truncated basis $\left\{ \vert E_{1+}\rangle,\vert E_{1-}\rangle,\vert E_{0}\rangle\right\} $ consists of the three lowest eigenstates of the Hamiltonian $H_{S}$. We can now rewrite the master equation in Eq. (\ref{master equation-1}) as \begin{equation} \dot{\rho}=-i\left[H_{S},\rho\right]+\mathcal{L}_{+}\rho+\mathcal{L}_{-}\rho.\label{master equation} \end{equation} Here the non-unitary parts of the dissipative dynamics are described by $\mathcal{L}_{+}\rho$ and $\mathcal{L}_{-}\rho$, \begin{eqnarray*} \mathcal{L_{\pm}\rho} & = & \frac{\gamma_{1\pm}}{2}\left\vert E_{0}\right\rangle \left\langle E_{1\pm}\right\vert \rho\left\vert E_{1\pm}\right\rangle \left\langle E_{0}\right\vert \\ & & -\frac{\gamma_{1\pm}}{4}\left(\left\vert E_{1\pm}\right\rangle \left\langle E_{1\pm}\right\vert \rho+\rho\left\vert E_{1\pm}\right\rangle \left\langle E_{1\pm}\right\vert \right). \end{eqnarray*} The superoperators $\mathcal{L}_{\pm}$ describe the transitions between the excited states $\vert E_{1\pm}\rangle$ and the ground state $\vert E_{0}\rangle$ induced by the environment. The decay rate $\gamma_{1+}$ ($\gamma_{1-}$) corresponds to the transition from the excited state $\vert E_{1+}\rangle$ ($\vert E_{1-}\rangle$) to the ground state $\vert E_{0}\rangle$; these transitions are induced by the interaction between the system and the environment. With respect to the system Hamiltonian $H_{S}$ in Eq.
(\ref{Nonlinear JC Ham}), the eigenoperators $\rho_{k}$ are obtained: \begin{eqnarray*} \rho_{1} & = & \left\vert E_{0}\right\rangle \left\langle E_{0}\right\vert ,\\ \rho_{2} & = & \left\vert E_{1-}\right\rangle \left\langle E_{1-}\right\vert -\left\vert E_{0}\right\rangle \left\langle E_{0}\right\vert ,\\ \rho_{3} & = & \left\vert E_{1+}\right\rangle \left\langle E_{1+}\right\vert -\left\vert E_{0}\right\rangle \left\langle E_{0}\right\vert ,\\ \rho_{4} & = & \left\vert E_{0}\right\rangle \left\langle E_{1-}\right\vert ,\\ \rho_{5} & = & \left\vert E_{0}\right\rangle \left\langle E_{1+}\right\vert ,\\ \rho_{6} & = & \left\vert E_{1-}\right\rangle \left\langle E_{1+}\right\vert ,\\ \rho_{7} & = & \rho_{4}^{\dagger},\ \rho_{8}=\rho_{5}^{\dagger},\ \rho_{9}=\rho_{6}^{\dagger}. \end{eqnarray*} The corresponding eigenvalues $\lambda_{k}$ (for $k=1,2,3,...,9$) are \begin{eqnarray*} \lambda_{1} & = & 0,\ \lambda_{2}=-\frac{\gamma_{1-}}{2},\ \lambda_{3}=-\frac{\gamma_{1+}}{2}, \end{eqnarray*} \begin{eqnarray*} \lambda_{4} & = & i\left(\omega+\chi-\Omega\right)-\frac{1}{4}\gamma_{1-},\\ \lambda_{5} & = & i\left(\omega+\chi+\Omega\right)-\frac{1}{4}\gamma_{1+},\\ \lambda_{6} & = & i(2\Omega)-\frac{1}{4}\left(\gamma_{1+}+\gamma_{1-}\right), \end{eqnarray*} and \[ \lambda_{7}=\lambda_{4}^{*},\ \lambda_{8}=\lambda_{5}^{*},\ \lambda_{9}=\lambda_{6}^{*}. \] \section{Rabi Oscillation} In traditional cavity QED theory, \cite{Cavity QED} Rabi oscillation refers to the coherent exchange of a single photon between a two-level atom and a single quantized field mode in the cavity. Considering the nonlinearity of the nanomechanical resonator, we study the decay process of Rabi oscillation in the nonlinear Jaynes-Cummings model described by the Hamiltonian in Eq. (\ref{Nonlinear JC Ham}).
Given the initial state of the qubit-resonator system $\vert\psi\left(0\right)\rangle=\vert e\rangle\otimes\vert0\rangle$, i.e., the qubit is in the excited state $\vert e\rangle$ and the resonator is in the vacuum state $\vert0\rangle$, the initial reduced density operator reads \[ \rho\left(0\right)=\left\vert \psi\left(0\right)\right\rangle \left\langle \psi\left(0\right)\right\vert . \] Expanding $\rho\left(0\right)$ in terms of the eigenoperators $\rho_{k}$, we obtain the coefficients $C_{k}$, \[ C_{1}=1,\ C_{2}=\frac{1}{2}\left(1-\cos\theta\right),\ C_{3}=\frac{1}{2}\left(1+\cos\theta\right), \] \[ C_{6}=C_{9}=-\frac{1}{2}\sin\theta, \] and \[ C_{4}=C_{5}=C_{7}=C_{8}=0. \] According to Eq. (\ref{time rho}), the time evolution of the density operator of the qubit-resonator system is \begin{eqnarray*} \rho\left(t\right) & = & C_{1}e^{\lambda_{1}t}\rho_{1}+C_{2}e^{\lambda_{2}t}\rho_{2}\\ & & +C_{3}e^{\lambda_{3}t}\rho_{3}+C_{6}e^{\lambda_{6}t}\rho_{6}+C_{9}e^{\lambda_{9}t}\rho_{9}. \end{eqnarray*} The probability of the qubit being in the excited (upper) state $\left\vert e\right\rangle $ is \begin{eqnarray} P_{e}(t) & = & \left\langle e0\right\vert \rho\left(t\right)\left\vert e0\right\rangle \nonumber \\ & = & \left[\frac{1}{2}\left(1-\cos\theta\right)e^{-\frac{\gamma_{1-}}{4}t}-\frac{1}{2}\left(1+\cos\theta\right)e^{-\frac{\gamma_{1+}}{4}t}\right]^{2}\nonumber \\ & & +\sin^{2}\theta e^{-\frac{\gamma_{1+}+\gamma_{1-}}{4}t}\cos^{2}\left(\Omega t\right).\label{probability of excited state} \end{eqnarray} This characterizes the temporal behavior of the Rabi oscillation in the qubit-resonator system: the decay process retains a periodic oscillatory structure in time.
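The expression for $P_e(t)$ is straightforward to evaluate numerically. The sketch below (an illustration with a hypothetical function name, not code from the paper) checks that $P_e(0)=1$ and that the formula reduces to the standard damped Rabi oscillation $e^{-\gamma t/2}\cos^2(gt)$ when $\chi=0$ and $\gamma_{1+}=\gamma_{1-}=\gamma$.

```python
# Numerical evaluation (illustrative sketch) of the excited-state
# probability P_e(t) from the expression above.
import math

def P_e(t, g, chi, gamma_m, gamma_p):
    Omega = math.sqrt(g ** 2 + chi ** 2)
    theta = math.asin(g / Omega)
    c = math.cos(theta)
    # square-bracket term: interference of the two decaying doublet states
    term = (0.5 * (1 - c) * math.exp(-gamma_m * t / 4)
            - 0.5 * (1 + c) * math.exp(-gamma_p * t / 4))
    # oscillatory term at frequency Omega with the mean decay rate
    osc = (math.sin(theta) ** 2 * math.exp(-(gamma_p + gamma_m) * t / 4)
           * math.cos(Omega * t) ** 2)
    return term ** 2 + osc
```

At $t=0$ the two terms sum to $\cos^2\theta + \sin^2\theta = 1$, as required for the initial state $\vert e\rangle\otimes\vert 0\rangle$.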
If the nanomechanical resonator is assumed to be ideal ($\chi=0$) and the difference of the decay rates is ignored ($\gamma_{1+}=\gamma_{1-}=\gamma$), the probability $P_{e}(t)$ becomes \begin{equation} P_{e}(t)=e^{-\frac{\gamma}{2}t}\cos^{2}\left(gt\right),\label{probability at excited state in Rabi} \end{equation} which describes the well known Rabi oscillation in the Jaynes-Cummings model.\cite{JC model} Comparing the results in Eq. (\ref{probability of excited state}) with Eq. (\ref{probability at excited state in Rabi}), we find that the nonlinearity parameter $\chi$ and unequal decay rates $\gamma_{1+}\neq\gamma_{1-}$ modify the periodic oscillatory structure of the Rabi oscillation. To clarify the dependence of the probability $P_{e}(t)$ on the nonlinearity parameter and the decay rates, some figures are plotted with parameters $\omega=1.0,\ g=0.1$. Here, we take the frequency $\omega$ as the unit for all these parameters. In Fig.$\,1$, the probability $P_{e}(t)$ versus time $t$ is plotted with parameters $\chi=0$ and $\gamma_{1+}=\gamma_{1-}=0.004$. Figure 1 shows the well known Rabi oscillation, which verifies the result in Eq. (\ref{probability at excited state in Rabi}). \begin{figure}[ht] \includegraphics[width=8cm]{weixiang}\label{Rabi in JC model} \caption{Rabi oscillation in the Jaynes-Cummings model. The parameters are $\chi=0$ and $\gamma=0.004$.} \end{figure} \begin{figure}[ht] \includegraphics[width=8cm]{all}\label{different decay rate and nonlinearity} \caption{The probability $P_{e}\left(t\right)$ vs the time $t$, where the excited states ($\vert1+\rangle$ and $\vert1-\rangle$) have different decay rates ($\gamma_{1-}=0.001$ and $\gamma_{1+}=0.007$) and $\chi=0.04$.} \end{figure} In Fig.$\,2$, the probability $P_{e}(t)$ versus time $t$ is plotted with parameters $\chi=0.04$, $\gamma_{1+}=0.007$ and $\gamma_{1-}=0.001$.
Figure 2 shows that the nonlinearity parameter $\chi$ and the unequal decay rates ($\gamma_{1+}\neq\gamma_{1-}$) modify the periodic oscillatory structure of the Rabi oscillation, which is clearly different from Figure 1. Based on the results in Eqs. (\ref{probability of excited state}) and (\ref{probability at excited state in Rabi}), we find that the nonlinearity parameter $\chi$ shortens the oscillation period $T\propto\left(2\Omega\right)^{-1}$ of the Rabi oscillation, since $\Omega=\sqrt{g^{2}+\chi^{2}}$ increases with $\chi$. In the following, we study how the nonlinearity parameter $\chi$ and the unequal decay rates ($\gamma_{1+}\neq\gamma_{1-}$) separately affect the temporal behavior of the Rabi oscillation. Firstly, assuming the same decay rates $\gamma_{1-}=\gamma_{1+}=\gamma$ and a nonlinearity parameter $\chi\neq0$, the probability in Eq. (\ref{probability of excited state}) becomes \begin{equation} P_{e}(t)=e^{-\frac{\gamma}{2}t}\left(\cos^{2}\theta+\sin^{2}\theta\cos^{2}\left(\Omega t\right)\right).\label{Pe of same decay rate} \end{equation} The dependence of the probability $P_{e}(t)$ on the nonlinearity parameter $\chi$ is plotted in Fig.$\,3$. When $\cos^{2}\left(\Omega t\right)=0$, the minimum of the probability \begin{equation} P_{e}(t)=e^{-\frac{\gamma}{2}t}\cos^{2}\theta\label{Pe of same decay rate-1} \end{equation} decays exponentially, which is different from the well known Rabi oscillation in Fig.$\,1$. Secondly, assuming no nonlinearity ($\chi=0$) and different decay rates ($\gamma_{1+}\neq\gamma_{1-}$), the probability in Eq. (\ref{probability of excited state}) becomes \[ P_{e}(t)=\frac{1}{4}\left(e^{-\frac{\gamma_{1-}}{4}t}-e^{-\frac{\gamma_{1+}}{4}t}\right)^{2}+e^{-\frac{\gamma_{1+}+\gamma_{1-}}{4}t}\cos^{2}\left(gt\right). \] The dependence of the probability $P_{e}(t)$ on the different decay rates is plotted in Fig.$\,4$.
When $\cos^{2}\left(gt\right)=0$, the minimum of the probability \[ P_{e}(t)=\frac{1}{4}\left(e^{-\frac{\gamma_{1-}}{4}t}-e^{-\frac{\gamma_{1+}}{4}t}\right)^{2} \] shows that the difference between the decay rates $\gamma_{1-}$ and $\gamma_{1+}$ does not noticeably affect the short-time or long-time behavior of the Rabi oscillation, as seen in Fig.$\,4$. \begin{figure}[ht] \includegraphics[width=8cm]{samedecay}\label{nonlinearity fig1} \caption{The probability $P_{e}\left(t\right)$ vs the time $t$, where the excited states ($\vert1+\rangle$ and $\vert1-\rangle$) have the same decay rate ($\gamma_{1+}=\gamma_{1-}=\gamma=0.004$) and the nonlinearity parameter is $\chi=0.04$.} \end{figure} \begin{figure}[ht] \includegraphics[width=8cm]{2decay0nonlinear}\label{pure JC with microscopic master} \caption{The probability $P_{e}\left(t\right)$ vs the time $t$, where the excited states ($\vert1+\rangle$ and $\vert1-\rangle$) of the qubit-resonator system have different decay rates ($\gamma_{1-}=0.001$ and $\gamma_{1+}=0.007$) and the nonlinearity parameter is $\chi=0$.} \end{figure} Analytically, in the short-time limit the probability $P_{e}\left(t\right)$ reduces to \begin{equation} P_{e}(t)=\exp\left\{ -\left[\cos\theta\left(\frac{\gamma_{1+}}{4}-\frac{\gamma_{1-}}{4}\right)+\frac{\gamma_{1-}+\gamma_{1+}}{4}\right]t\right\} .\label{short-time probability} \end{equation} Ignoring the nonlinearity of the nanomechanical resonator or the difference of the decay rates, i.e., $\chi=0$ or $\gamma_{1-}=\gamma_{1+}$, the probability $P_{e}\left(t\right)$ becomes \begin{equation} P_{e}(t)=\exp\left\{ -\frac{\gamma_{1-}+\gamma_{1+}}{4}t\right\} .\label{short-time decay rate} \end{equation} According to the results in Eq. (\ref{short-time probability}) and Eq.
(\ref{short-time decay rate}), we find that the nonlinearity parameter $\chi$ and the difference between the decay rates ($\gamma_{1+}\neq\gamma_{1-}$) jointly dominate the short-time behavior of the Rabi oscillation. Moreover, both factors speed up the decay of the Rabi oscillation in the short-time limit. \section{Conclusions} In summary, we have studied the dynamics of a nanomechanical QED system consisting of a charge qubit and a nanomechanical resonator. The temporal behavior of the Rabi oscillations is studied analytically while the intrinsic nonlinearity of the nanomechanical resonator is taken into account. Including the loss of the nanomechanical resonator, the microscopic master-equation approach is used to calculate the excited-state probability of the charge qubit in this nonlinear Jaynes-Cummings model. These results show that the nonlinearity parameter and the decay rates can affect the oscillation and the decay of the Rabi oscillation separately or jointly. \begin{acknowledgments} We thank Professor Peng Zhang and Dr. Ming Hua for helpful discussions. \end{acknowledgments}
\subsection{Multiple saddle-node bifurcations, hysteresis, and isolas} \label{sec:hysteresis and isolas} We conduct here a numerical investigation into the qualitative nature of bistability for the Toggle Switch model. We first recall the simplest explanation for the attractors computed with DSGRN in Table \ref{tab:parameter_regions}. Specifically, for generic $\xi^* \in \Xi^*$ with $\pi_{\Xi^*}(\xi^*) \in R^*(5)$ and ``large'' $d$, we expect $f^*$ to have three equilibria: two stable and one saddle. For all other parameters we expect to find a single globally stable equilibrium for all Hill coefficients. In the previous section we demonstrated that with overwhelming probability this simple picture is correct. The most parsimonious situation for a generic parameter to exhibit such global dynamics would be the following: \begin{itemize} \item If $\pi_{\Xi^*}(\xi^*) \in R^*(5)$, then along a Hill path through $\xi^*$, $f^*$ undergoes a single saddle-node bifurcation. At this point the saddle and one of the stable equilibria collide and disappear. \item If $\pi_{\Xi^*}(\xi^*) \notin R^*(5)$, then $f^*$ has a single globally stable equilibrium for all $d \in [1, \infty)$. In particular, $f^*$ does not undergo any saddle-node bifurcations. \end{itemize} While the first part of this assumption was discussed in the previous Section, we now want to investigate the behaviour of parameters outside $R^*(5)$. The second part of the assumption seems reasonable based not only on our previous statistical result but also on the following intuition. For $d > 2$ both Hill functions are sigmoidal, with $d$ controlling the incline of the steepest part of the sigmoid. Therefore, for the Toggle Switch with identified Hill coefficients, it seems reasonable to guess that the nullclines of $f^*$ move ``toward each other'' monotonically as $d \to \infty$.
This intuition is shown in Figure \ref{fig:nullclines} and suggests that $f^*$ may undergo at most one saddle-node bifurcation with respect to $d$. \begin{figure}[t] \begin{center} \includegraphics[width = \textwidth, keepaspectratio=true, trim={0 0 0 1.2cm},clip]{nullclines.png} \caption{Toggle Switch nullclines for varying Hill coefficient along a Hill path. As the response of both Hill functions becomes steeper, the nullclines move closer to one another until the saddle-node bifurcation, at which point the nullclines intersect tangentially. Further increasing $d$ increases the area bounded between the nullclines, and the three equilibria appear to move further apart from each other. } \label{fig:nullclines} \end{center} \end{figure} However, the techniques described in this paper provide the means to efficiently test this conjecture. Despite the fact that the parameter space is ``high'' dimensional, the combinatorial analysis suggests that if a counterexample exists, one might look for it near the boundary of $R^*(5)$. With this search strategy in mind and using the efficient numerical methods described in this work, we readily find parameters which numerically exhibit multiple bifurcations, and surprisingly the conjecture appears to be false. Indeed, the numerical investigation suggests that parameters outside of $R^*(5)$ may undergo (at least) two saddle-node bifurcations along a Hill path at some Hill coefficients $1 < d_1 < d_2 < \infty$. The system is monostable for Hill coefficients in $[1, d_1)$ and $(d_2, \infty)$, and bistable in $(d_1, d_2)$. This behavior is shown for two parameters in Figure \ref{fig:numerical_isolas}. Moreover, the transition between monostability and bistability need not be hysteretic, despite the fact that this model is a sort of ``canonical'' network motif associated with hysteretic switching.
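The qualitative transition from monostability to bistability along a Hill path is easy to reproduce in a toy computation. The sketch below uses a symmetric Toggle Switch $\dot x = -x + \beta/(1+y^{d})$, $\dot y = -y + \beta/(1+x^{d})$ with the illustrative value $\beta = 4$ (an assumption for the example, not a parameter from our experiments); eliminating $y$ reduces the equilibrium problem to the scalar equation $x = G(x)$, and equilibria are counted via sign changes of $x - G(x)$ on a grid:

```python
def count_equilibria(beta, d, xmax=6.0, n_grid=4000):
    """Count equilibria of the symmetric toggle switch
       x' = -x + beta/(1 + y^d),  y' = -y + beta/(1 + x^d)
    by counting sign changes of h(x) = x - G(x), where G substitutes
    the y-nullcline into the x-nullcline."""
    def G(x):
        y = beta / (1.0 + x ** d)
        return beta / (1.0 + y ** d)
    xs = [xmax * i / n_grid for i in range(n_grid + 1)]
    hs = [x - G(x) for x in xs]
    return sum(1 for h0, h1 in zip(hs, hs[1:]) if h0 * h1 < 0)

counts = {d: count_equilibria(4.0, d) for d in (1, 2, 4)}
print(counts)  # -> {1: 1, 2: 3, 4: 3} for these illustrative values
```

For this symmetric choice the system remains bistable for all larger $d$; exhibiting a second saddle-node, and hence an isola, requires parameters outside $R^*(5)$ as discussed above.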
Indeed, pairs of saddle-node bifurcations occurring along a single Hill path may be hysteretic (i.e.~connected by a single equilibrium branch) or may form isolas which are disconnected from a ``main'' branch, associated to a stable equilibrium, which persists for all Hill coefficients. The combination of these dynamic behaviors in this model suggests one possible cause for failures in the construction of synthetic biological switches. \if false This means that, for any parameter outside $R(5)$ for which we were able to find one saddle node, we should be able to find a second one, where the two additional branches of equilibria disappear. Considering the equilibrium branches involved in these two saddle node bifurcations, the two possible behaviour of the equilibria are either to be hysteretic, or to define an isola. The graphical difference between the two behaviours can be seen in the Figure \ref{fig:isolas}. \begin{figure}[h] \begin{center} \includegraphics[width = 0.7\textwidth, trim= 1.5cm 14cm 4.25cm 1.7cm, clip]{sketch_isolas} \caption{On the left, expected behavior of a saddle-node in the center region $\mathcal{R}$. On the right, an isola, expected behavior of equilibria when a saddle-node is detected outside the center region. Vertical axes in this case is $\hill$, while the saddle-nodes are indicated by a red diamond. Unstable equilibria are drawn as dotted lines.} \label{fig:isolas} \end{center} \end{figure} To the best of our knowledge, only hysteretic behaviour is expected to be found in the toggle switch \cite{kuznetsov2004synchrony}, but we were able to identify parameters showcasing both behaviours. \fi In Figure \ref{fig:numerical_isolas}, we present a plot of the Hill coefficient with respect to the $x_1$ coordinate of numerically found equilibria for two different parameters outside of the center region.
The two parameters differ in that, for the first, the equilibria trace out a single continuous branch over the Hill coefficient, while for the second the Hill coefficient is not a function of $x_1$ and the additional equilibria form an isola. In both cases, the saddle-nodes were first found numerically, and the equilibria were then numerically continued to produce the figures. \begin{figure}[th] \begin{center} \includegraphics[width = 0.45\textwidth]{figure_hysteresis.png} \includegraphics[width = 0.45\textwidth]{figure_isolas.png} \caption{ Branches of equilibria computed for the reduced Toggle Switch for varying $d$ via pseudo-arclength continuation, depicting the two mechanisms which cause multiple saddle-node bifurcations. (Left) Hysteretic behaviour for the Toggle Switch found at parameter value $\lambda^*=[0.9243, 0.0506, 0.8125, 0.0779, 0.8161] $. (Right) An isola found at $\lambda^* = [0.6470, 0.3279, 0.9445, 0.5301, 0.3908]$, indicating that multiple saddle-node bifurcations are possible without hysteretic switching.} \label{fig:numerical_isolas} \end{center} \end{figure} \subsection{Degenerate saddle-node bifurcations} \label{sec:degenerate saddle-node bifurcations} In this section we numerically investigate and compare the saddle-node bifurcations in the Toggle Switch occurring near the ``corners'' of $R(5)$. We note that the description of the attractors given by DSGRN includes not only a count of the stable equilibria but also their relative position in a simplified phase space. This richer information can be used to further investigate the dynamical behaviour supported by a system by applying our combined combinatorial-numerical approach. Below we demonstrate how to deduce additional dynamical information for the Toggle Switch model.
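The first step above, locating a saddle-node numerically, can be mimicked by a crude bisection in the Hill coefficient. The sketch below reuses the toy symmetric Toggle Switch with the illustrative value $\beta = 4$ (an assumption, not one of the parameter values quoted in the figure caption) and bisects on the computed number of equilibria to bracket the Hill coefficient at which the equilibrium count changes:

```python
def num_equilibria(d, beta=4.0, xmax=6.0, n=4000):
    # Count equilibria of the symmetric toggle switch by counting
    # sign changes of h(x) = x - G(x) on a uniform grid.
    def G(x):
        y = beta / (1.0 + x ** d)
        return beta / (1.0 + y ** d)
    xs = [xmax * i / n for i in range(n + 1)]
    hs = [x - G(x) for x in xs]
    return sum(1 for h0, h1 in zip(hs, hs[1:]) if h0 * h1 < 0)

def saddle_node_bisection(dlo=1.0, dhi=2.0, tol=1e-6):
    # Assumes the system is monostable at dlo and bistable at dhi;
    # bisects on the equilibrium count to bracket the saddle-node.
    while dhi - dlo > tol:
        mid = 0.5 * (dlo + dhi)
        if num_equilibria(mid) >= 3:
            dhi = mid
        else:
            dlo = mid
    return 0.5 * (dlo + dhi)

print(round(saddle_node_bisection(), 4))
```

The returned value only approximates the fold, since equilibria closer together than the grid spacing are not resolved; this is the kind of heuristic that a subsequent continuation or validation step would refine.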
Consider the simplest situation, in which all of the attractors in the Morse decomposition identified for each region of the combinatorial parameter space in Table \ref{tab:parameter_regions} are associated with a single stable equilibrium for some smooth vector field $f : (0, \infty)^2 \to \rr^2$ located in the indicated quadrant of the state space. We consider first a path through the continuous parameter space which visits the parameter regions in the order $R(5) \mapsto R(8) \mapsto R(7) \mapsto R(4) \mapsto R(5)$. In the combinatorial system, a continuous branch of equilibria cannot cross the hyperplanes defined by the equations $x_1 = \gamma_1 \theta_{2,1}$ and $x_2 = \gamma_2 \theta_{1,2}$. Therefore, if $f$ is a faithful continuous representation of the combinatorial dynamics, then the most parsimonious explanation for the sequence of Morse decompositions along this path is as follows. For parameters in $R(5)$, $f$ has (at least) one unstable equilibrium. When passing between regions $R(5)$ and $R(8)$, this unstable equilibrium and the stable equilibrium in $(1,0)$ disappear in a saddle-node bifurcation. Similarly, when passing between regions $R(5)$ and $R(4)$, this unstable equilibrium and the stable equilibrium in $(0,1)$ disappear in a saddle-node bifurcation. Therefore, these two saddle-node bifurcations collide in a codimension-2 cusp bifurcation at the ``corner'' shared by $R(5)$ and $R(7)$, where the boundaries separating $R(4)$ from $R(7)$ and $R(8)$ from $R(7)$ meet. An analogous analysis of a path visiting the regions (in order) $R(5) \mapsto R(2) \mapsto R(3) \mapsto R(6)\mapsto R(5)$ suggests another cusp bifurcation at the ``corner'' shared by $R(5)$ and $R(3)$. To contrast this behavior, consider a path visiting the parameter regions $R(5) \mapsto R(8) \mapsto R(9) \mapsto R(6) \mapsto R(5)$.
For all regions along this path, the unique stable equilibrium is located in quadrant $(0,1)$, which is most naturally modelled by a common smooth branch of equilibria remaining in this quadrant throughout the path. On the boundary between $R(5)$ and $R(6)$, the unstable equilibrium and the stable equilibrium in $(1,0)$ disappear in a saddle-node bifurcation, much like on the boundary between $R(5)$ and $R(8)$. A similar situation occurs for paths visiting regions $R(5) \mapsto R(2) \mapsto R(1) \mapsto R(4) \mapsto R(5)$. This behaviour is not likely compatible with a cusp bifurcation at the ``corner'' between $R(5)$ and $R(1)$ or $R(9)$. Thus, our conjecture is that the bifurcations for parameters at the ``corners'' shared by region $R(5)$ and regions $R(1), R(9)$ and those at the ``corners'' shared by $R(5)$ and regions $R(3), R(7)$ are caused by very different mechanisms. To investigate our conjecture, suppose once again that $f^*$ is the nondimensionalized Toggle Switch Hill model defined in Section \ref{sec:reducing the number} with $6$ dimensionless parameters. For visualisation purposes, we define a suitable dimension reduction map from $\Xi^*$ onto the rectangle $[0, 3]^2 \subset \rr^2$ which preserves the $9$ DSGRN parameter regions and the relative distances between parameters and the region boundaries. To start, suppose $\xi^* \in \Xi^*$ and define the related parameters $\setof*{a_1, b_1, a_2, b_2}$ by the formulas \[ a_1 := \ell^*_{1,2} \qquad b_1 := \ell_{1,2}^* + \delta_{1,2}^* \qquad a_2 := \frac{\ell_{2,1}^*}{\gamma_2^*} \qquad b_2 := \frac{\ell_{2,1}^* + \delta_{2,1}^*}{\gamma_2^*}. \] Observe that if the reduced polynomial inequalities in Table \ref{tab:parameter_regions} are expressed in terms of these new parameters, then the $9$ DSGRN parameter regions as well as their boundaries are defined by linear manifolds.
For example, region $R(5)$ is given by \[ R(5) = \setof*{(a, b) \in \rr^2 \times \rr^2 : 0 < a_1 < 1 < b_1 \ \text{and} \ 0 < a_2 < 1 < b_2 } \] and the boundary separating $R(5)$ and $R(6)$ is given by \[ \partial R(5) \cap \partial R(6) = \setof*{(a, b) \in \rr^2 \times \rr^2 : 0 < a_1 < 1 < b_1, \ a_2 = 1, \ \text{and} \ 1 < b_2 }. \] Implicitly this defines a nonlinear transformation $\psi : \Xi^* \to \rr^2 \times \rr^2$, given by the formula $\psi(\xi^*) = (a, b)$, which maps each DSGRN region to a unique region in $\rr^2 \times \rr^2$. To complete the construction, fix positive constants $\overbar{a}, \overbar{b}$ and define another map $g : \rr^2 \times \rr^2 \to \rr^2$ by the formulas \begin{equation}\label{eq:2dmap} g_1(a,b) = \begin{cases} b_2 & \text{if} \ b_2 \leq 1 \\ 1+\frac{ 1 - a_2}{b_2 - a_2} & \text{if} \ a_2 < 1 < b_2 \\ 2 + \frac{a_2-1}{\overbar{a} - 1} & \text{if} \ 1 \leq a_2 \end{cases} \qquad g_2(a,b) = \begin{cases} b_1 & \text{if} \ b_1 \leq 1 \\ 1+\frac{ 1 - a_1}{b_1 - a_1} & \text{if} \ a_1 < 1 < b_1 \\ 2 + \frac{a_1-1}{\overbar{a} - 1} & \text{if} \ 1 \leq a_1 \end{cases} \end{equation} Observe that for the bounded subset $K_{\overbar{a}, \overbar{b}} \subset \image \psi$ defined by the formula \[ K_{\overbar{a}, \overbar{b}} := \setof*{(a, b) \in \image \psi : \norm{a}_\infty \leq \overbar{a}, \norm{b}_\infty \leq \overbar{b}}, \] we have that $g(K_{\overbar{a}, \overbar{b}} ) \subset [0,3]^2$. We use the previous constructions to visualize the relative position of parameters as follows. Given a fixed collection of parameters $\setof*{\xi^*_1, \dotsc, \xi^*_M} \subset \Xi^*$, we choose $\overbar{a}, \overbar{b}$ sufficiently large so that $\psi(\xi^*_j) \in K_{\overbar{a}, \overbar{b}}$ for $1 \leq j \leq M$. Therefore, the mapping $g \circ \psi : \Xi^* \to [0,3]^2$ satisfies the following properties.
\begin{enumerate} \item Each of the $9$ parameter regions is mapped to a distinct unit square in $[0,3]^2$ and their relative positions are preserved. In fact, $g \circ \psi$ has been constructed so that the images of these regions are simply obtained by superimposing the graph in Figure \ref{fig:TS}(b) onto $[0, 3]^2$. We let $S(i) := g \circ \psi (R^*(i))$ denote these unit squares for $1 \leq i \leq 9$. \item Each boundary separating a pair of parameter regions is mapped into a line of the form $g_i = j$ with $i \in \setof*{1,2}$ and $j \in \setof*{1,2,3}$. In other words, boundaries of parameter regions are mapped onto boundaries of the corresponding unit squares, and the relative positions of the boundaries are also preserved. \item Relative proximity to boundaries is preserved. Specifically, if $\xi_1, \xi_2 \in R^*(j)$ and $\operatorname{dist}(\xi_1, \partial R(j)) < \operatorname{dist}(\xi_2, \partial R(j))$, then $\operatorname{dist}(g \circ \psi ( \xi_1), \partial S(j)) < \operatorname{dist}(g \circ \psi ( \xi_2), \partial S(j))$. \end{enumerate} Given this projection of the parameter space into a two-dimensional Euclidean space, we now visually investigate the behavior of saddle-node bifurcations along the boundary of $R^*(5)$. To generate random parameters for a statistical analysis, we construct an unbounded distribution $\mathcal{F}$ such that samples taken from $\mathcal{F}$ span the parameter regions $R(i)\subset \Xi^*, i = 1,\dots, 9$ of the Toggle Switch in such a way that no region is significantly over- or under-represented. Thus, we create large samples of parameters knowing that $R(5)$ will be represented in the sample roughly as often as the other parameter regions. Having constructed a large sample of parameters, we can use the techniques presented in this Section to project each parameter to the square $[0,3]^2$.
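For concreteness, the piecewise map $g$ of Equation \eqref{eq:2dmap} is straightforward to implement. In the sketch below the cutoff $\overbar{a}$ is set to an arbitrary illustrative value:

```python
def g_project(a, b, abar=10.0):
    """Sketch of the map g in Eq. (2dmap): each pair (a_i, b_i) is sent
    into [0, 3], with region boundaries landing on the integer lines."""
    def component(ai, bi):
        if bi <= 1:
            return bi
        if ai < 1 < bi:
            return 1 + (1 - ai) / (bi - ai)
        return 2 + (ai - 1) / (abar - 1)   # case 1 <= ai
    a1, a2 = a
    b1, b2 = b
    # g_1 depends on (a_2, b_2), g_2 on (a_1, b_1), as in Eq. (2dmap)
    return (component(a2, b2), component(a1, b1))

# A parameter with 0 < a_i < 1 < b_i, i.e. one lying in R*(5),
# lands in the central unit square (1, 2) x (1, 2):
print(g_project((0.5, 0.5), (2.0, 2.0)))  # -> (1.333..., 1.333...)
```

Property (1) above is visible directly: the three branches of each component send the three coarse positions of $(a_i, b_i)$ relative to $1$ into $[0,1]$, $(1,2)$, and $[2,3]$ respectively.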
For each parameter sample $\lambda \in \Xi^*$, we are interested in finding any saddle-node bifurcation that occurs along the path $\curve(s)$ as presented in Section \ref{sec:the toggle switch saddle}. We restrict ourselves to Hill coefficients satisfying $1 \leq d \leq 100$, since higher Hill coefficients would give rise to numerical instability. The first visual result we present is an overview of the parameters that undergo saddle-node bifurcations at different Hill coefficients. For this, we detect saddle-nodes with respect to the Hill coefficient, project each parameter onto the square $[0,3]^2$, and represent the Hill coefficient as a heat map. In the left panel of Figure \ref{fig:heat_map}, the projection \ref{eq:2dmap} is used, and the color indicates the lowest Hill coefficient $d$ for which we could find a saddle-node at the given parameter value. The right panel shows a scatter plot of all parameters considered. This includes information on parameters that never undergo a saddle-node, but also on parameters that undergo more than one saddle-node. We investigate this class of parameters further in Section \ref{sec:applications to bifurcation}. \begin{figure}[h] \begin{center} \includegraphics[width = 0.475\textwidth]{dsgrn_heat_plot.pdf} \includegraphics[width = 0.475\textwidth]{all_results.pdf} \caption{Left: Using the projection presented in Equation \ref{eq:2dmap}, a heat map is plotted, indicating the smallest Hill coefficient at which a saddle-node occurs.
Right: Using the same projection, parameters are plotted in blue if they do not undergo any saddle-node, in green if they undergo a single saddle-node, in orange if they undergo multiple saddle-nodes, and in red if the bisection algorithm found a saddle-node that was not numerically confirmed with Equation \eqref{eq:num_saddle_node}.} \label{fig:heat_map} \end{center} \end{figure} At first glance, Figure \ref{fig:heat_map} gives an intuition for what we showed in Section \ref{sec:statistical analysis of}: choosing parameters in $R^*(5)$ gives the highest likelihood of finding a saddle-node at a relatively low Hill coefficient. Looking at this map, we observe that the bottom left of the center region seems to be the best location for a practical bistable switch, since most parameters in that section undergo a saddle-node at a low Hill coefficient, and thus support bistability for most larger Hill coefficients. We notice that saddle-nodes also take place outside the center region, but most of the corresponding parameters undergo two saddle-nodes, so for a high enough Hill coefficient they again present a unique stable fixed point, as discussed in Section \ref{sec:hysteresis and isolas}. Crucially, this plot supports the claim that saddle-node bifurcations disappear into a cusp (or other codimension-2) bifurcation along a loop around the points $(1,1)$ or $(2,2)$. However, along a loop around $(1,2)$ or $(2,1)$, saddle-node bifurcations seem to disappear ``at infinity''. This is consistent with the observations we obtained from the combinatorial dynamics alone using DSGRN, which we presented at the beginning of this Section. \subsection{A high dimensional example: the EMT model} \label{sec:emt} As a final example, we demonstrate the ability to reliably identify bistability and find saddle-node bifurcations even in ``very high'' dimensional parameter models.
In \cite{Xin2020MultistabilityIT} the authors study a model for the Epithelial-Mesenchymal Transition network shown in Figure \ref{fig:EMT_model}. We identify the $6$ genes in this network with state variables defined by \begin{eqnarray*} x_1: \text{TGF} \beta \qquad x_2: \text{miR200} \qquad x_3: \text{Snail1} \\ x_4: \text{Ovol2} \qquad x_5: \text{Zeb1} \qquad x_6: \text{miR35a} \end{eqnarray*} The associated Hill model for this network, denoted by $f$, is defined on the state space $X := [0, \infty)^6$. As expected, the linear part of this model has parameters $\gamma_1, \dotsc, \gamma_6$ representing the linear decay rate of each of the $6$ state variables. Under the assumption that the Hill functions of all incoming edges are multiplied in the interaction function, the nonlinear term denoted by $\cH(x)$ is defined by \begin{equation}\label{eq:EMT_Hill} \cH(x) := \begin{pmatrix} H_{1,2}^-(x_2) H_{1,4}^-(x_4) \\ H_{2,3}^-(x_3) H_{2,5}^-(x_5) \\ H_{3,1}^+(x_1) H_{3,6}^-(x_6) \\ H_{4,5}^-(x_5) \\ H_{5,2}^-(x_2) H_{5,3}^+(x_3) H_{5,4}^-(x_4) \\ H_{6,3}^-(x_3) H_{6,5}^-(x_5) \end{pmatrix} \end{equation} Here $H^{\pm}_{i,j}$ denotes a Hill function associated to the edge from node $j$ to node $i$, and each such Hill function contributes $4$ non-negative parameters. Therefore the associated parameter space for this Hill model is $\Lambda = (0, \infty)^{54}$. \begin{figure} \centering \includegraphics[width = 0.7\textwidth]{EMT_model.pdf} \caption{The network structure for the EMT model associated with the Hill model given in Equation \eqref{eq:EMT_Hill}} \label{fig:EMT_model} \end{figure} In order to analyze this model using the tools discussed in this work, we first implement the network in DSGRN. The combinatorial parameter space has 10,368,000,000 regions, and we begin by finding a parameter region $P_M$ which exhibits combinatorial monostability and has a neighboring region $P_B$ exhibiting combinatorial bistability.
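The structure of Equation \eqref{eq:EMT_Hill} is easy to mirror in code. The sketch below evaluates $\cH(x)$ with every Hill parameter set to illustrative values ($\ell = 0$, $\delta = 1$, $\theta = 1$, common coefficient $d = 2$); these are assumptions for the example, not parameters taken from $P_M$ or $P_B$:

```python
def hill_plus(x, ell=0.0, delta=1.0, theta=1.0, d=2.0):
    # activating Hill function H^+
    return ell + delta * x ** d / (theta ** d + x ** d)

def hill_minus(x, ell=0.0, delta=1.0, theta=1.0, d=2.0):
    # repressing Hill function H^-
    return ell + delta * theta ** d / (theta ** d + x ** d)

def H(x, d=2.0):
    """Nonlinear term of the EMT Hill model, Eq. (EMT_Hill): the Hill
    functions of all incoming edges are multiplied component-wise."""
    x1, x2, x3, x4, x5, x6 = x
    Hp = lambda s: hill_plus(s, d=d)
    Hm = lambda s: hill_minus(s, d=d)
    return [
        Hm(x2) * Hm(x4),           # TGFbeta: repressed by miR200, Ovol2
        Hm(x3) * Hm(x5),           # miR200:  repressed by Snail1, Zeb1
        Hp(x1) * Hm(x6),           # Snail1:  activated by TGFbeta, repressed by miR35a
        Hm(x5),                    # Ovol2:   repressed by Zeb1
        Hm(x2) * Hp(x3) * Hm(x4),  # Zeb1
        Hm(x3) * Hm(x5),           # miR35a
    ]

print(H([1.0] * 6))  # at x = theta every Hill factor equals 1/2
```

With these values every Hill factor is exactly $1/2$, so the output is $[0.25, 0.25, 0.25, 0.5, 0.125, 0.25]$, reflecting how many edges feed each node.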
Considering how widespread such regions seem to be, we do not check all possible regions, but stop within the first 100. Since $P_M$ and $P_B$ are semi-algebraic subsets of $\rr^{42}$ described by hundreds of polynomial inequalities, we do not provide a list of these inequalities. However, these parameter regions have indices 302 and 284 in DSGRN, and the interested reader can inspect the inequalities using the code available at \libraryLink. Due to the dimension of the parameter space and the number of parameter regions in this example, it is not feasible to attempt any sort of statistical analysis of our accuracy. In fact, the purpose of this example is to demonstrate that we can find saddle-node bifurcations in this model at all! With this in mind, we consider the reduced Hill model in which we identify the Hill coefficients for all $12$ edges of the network. This is an ODE with $43$ parameters, one of which is the common Hill coefficient, which we denote by $d$. From the two regions we selected, we are interested in creating a large enough sample of parameters. One of DSGRN's functionalities allows us to assign the appropriate region to a parameter vector; another allows us to find one (non-random) parameter within any given region. To sample regions $P_M$ and $P_B$, we first query DSGRN to provide us with one point in each region, $p_M$ and $p_B$. We then create a multivariate Gaussian distribution around the segment connecting these two parameters. By sampling from this Gaussian distribution we create a point cloud mostly distributed in $P_M$ and $P_B$ and overlapping onto neighbouring regions. We could choose a smaller variance to ensure that all our samples belong to the selected regions, but then we would not span much of $P_M$ and $P_B$ either. Thus, we randomly sampled $10,000$ parameters from this Gaussian distribution. DSGRN computed that 7772 lie in $P_M$ and 1231 lie in $P_B$; the remaining parameters fell outside both regions.
We can then numerically search for saddle-nodes along the curve of increasing Hill coefficients for a subsample of these parameters. Due to the computational cost of this search, a smaller subsample was considered, in which the same number of random parameters was selected from the monostable and the bistable region. In this case, we can build a contingency matrix as discussed in Section \ref{sec:statistical analysis of}, where we replace $R(5)$ with the bistable region and its complement with the monostable region. No sample outside of these two regions is considered. Due to the increased size of the problem, a longer computational time is necessary, but we computed the following contingency matrix for a subsample of 200 parameters: $$ M = \begin{pmatrix} 5 & 0\\ 194 & 100 \end{pmatrix}, $$ where a single parameter could not be determined. Here we can see that roughly $2.5$\% of the parameters in the bistable region undergo a saddle-node, while none of the parameters in the monostable region do. The $\chi^2$-test for this contingency matrix returns a $p$-value of 0.07. While this does not fall below the traditional threshold of 0.05, we take it as a good indication in support of our method. The code is presented in {\texttt{EMT\_chitest.py}}. The parameters and computations for both $P_M$ and $P_B$ are available at \libraryLink.
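For reference, the $\chi^2$ statistic and $p$-value of a $2 \times 2$ contingency matrix can be computed in a few lines. The sketch below applies Pearson's test without continuity correction; since the quoted $p$-value depends on the exact test variant used (e.g.~whether a Yates correction is applied), this computation need not reproduce it exactly:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic and p-value (1 degree of freedom,
    no continuity correction) for the contingency matrix [[a, b], [c, d]]."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 degree of freedom the chi-square survival function
    # reduces to the complementary error function.
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

stat, p = chi2_2x2(5, 0, 194, 100)
print(stat, p)
```

A balanced table such as $[[10,10],[10,10]]$ gives statistic $0$ and $p = 1$, which is a convenient smoke test for the formula.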
\section{Introduction} \label{sec:introduction} \input{intro_dynamics.tex} \section{Hill models} \label{sec:hill models} \input{hill_models_dyn2.tex} \section{DSGRN} \label{sec:dsgrn} \input{dsgrn.tex} \section{Equilibria and Saddle-node Bifurcations} \label{sec:equilibria} \input{equilibria.tex} \section{The Toggle Switch} \label{sec:toggle_switch} \input{toggle.tex} \section{Applications to bifurcation analysis} \label{sec:applications to bifurcation} \input{applications.tex} \if False \section{A high dimensional example} \label{sec:emt} \input{emt.tex} \fi \section{Conclusion} \label{sec:conclusion} \input{conclusion} \bibliographystyle{plain} \subsection{Equilibria from DSGRN} \subsection{Overview of equilibrium finding} Within Step 2, we are asked to numerically find all equilibria for a given parameter value. In this Section, we give an initial overview of our method. We first assume implementations of two black-box algorithms. The first is a root-finding implementation, which we denote by {\tt FindRoot}, which takes a function $f : \rr^N \to \rr^N$ and an initial guess $x_0 \in \rr^N$ as input and attempts to identify a root of $f$ near $x_0$. When successful, it returns $\hat x \in \rr^N$ satisfying $\norm{f(\hat x)} \approx 0$. For the computations in this work we used a Newton-based root finder, but of course other options exist. The second algorithm, denoted by {\tt Unique}, identifies pairs of distinct vectors in $\rr^N$ which approximate the same root of $f$. That is, if $\hat x_1, \hat x_2$ satisfy \[ \norm{f(\hat x_1)} \approx 0 \approx \norm{f(\hat x_2)} \qquad \text{and} \qquad \norm{\hat x_1 - \hat x_2} \approx 0, \] then {\tt Unique}$(f,\hat x_1, \hat x_2) = \hat x_1$. If {\tt Unique} does not identify $\hat x_1$ and $\hat x_2$ as approximations of the same zero of $f$, then we say that they are \emph{approximately distinct} with respect to $f$.
If $\hat x$ is an array of vectors in $\rr^N$ and $f$ a function from $\rr^N$ to itself, then {\tt Unique}$(f,\hat x)$ returns a new array of vectors in $\rr^N$, of size at most that of $\hat x$, in which each pair of vectors is approximately distinct. It is worth noting that Hill functions are trivially bounded; thus for any Hill model there exists a rectangular subset of $X$ of the form \begin{equation} \label{eq:Rectangle} R := \prod_{i = 1}^{N} [a_i, b_i], \qquad [a_i, b_i] \subset (0, \infty) \quad \forall \, 1 \leq i \leq N, \end{equation} such that all the zeros of the Hill model lie within this rectangle. Furthermore, computing the bounds of $R$ is trivial. The details are presented in Section \ref{sec:the bootstrap algorithm}. With {\tt FindRoot}, {\tt Unique}, and $R$ in hand, we can present the algorithm for the computation of all (up to numerical error) zeros of the Hill model $f$. Each interval in the product \eqref{eq:Rectangle} is partitioned into $k$ subintervals bounded by $k+1$ uniformly spaced nodes. The product of these nodes forms a grid of points in $\rr^N$ which covers $R$. Each of the $(k+1)^N$ points in this grid is taken as an initial condition for {\tt FindRoot}, which attempts to return a candidate equilibrium nearby. The algorithm returns an array containing those candidates which are not identified as equivalent by {\tt Unique}. The pseudocode is described in Algorithm \ref{alg:general_equilibria}.
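A minimal version of this grid-multistart strategy for the two-dimensional Toggle Switch can be sketched as follows, with a hand-rolled Newton iteration playing the role of {\tt FindRoot} and a pairwise distance check playing the role of {\tt Unique}. The parameters $\beta = 4$ and $d = 2$ are illustrative assumptions, chosen so that the system is bistable:

```python
def toggle_f(x, y, beta=4.0, d=2.0):
    # Vector field of the symmetric toggle switch and its Jacobian
    f1 = -x + beta / (1.0 + y ** d)
    f2 = -y + beta / (1.0 + x ** d)
    j12 = -beta * d * y ** (d - 1) / (1.0 + y ** d) ** 2
    j21 = -beta * d * x ** (d - 1) / (1.0 + x ** d) ** 2
    return (f1, f2), ((-1.0, j12), (j21, -1.0))

def find_root(x, y, tol=1e-10, max_iter=50):
    # FindRoot: plain Newton iteration from an initial guess
    for _ in range(max_iter):
        (f1, f2), ((a, b), (c, e)) = toggle_f(x, y)
        det = a * e - b * c
        if det == 0.0:
            return None
        x += (-f1 * e + f2 * b) / det
        y += (-f2 * a + f1 * c) / det
        if abs(f1) < tol and abs(f2) < tol:
            return (x, y)
    return None   # no convergence from this start

def hill_equilibria(k=5, lo=0.0, hi=5.0, sep=1e-4):
    roots = []
    for i in range(k + 1):
        for j in range(k + 1):
            r = find_root(lo + (hi - lo) * i / k, lo + (hi - lo) * j / k)
            # Unique: keep only candidates pairwise separated by sep
            if r and all(abs(r[0] - s[0]) + abs(r[1] - s[1]) > sep for s in roots):
                roots.append(r)
    return roots

print(sorted(hill_equilibria()))  # three equilibria for beta = 4, d = 2
```

The returned list contains the two stable equilibria near the axes and the symmetric saddle on the diagonal, matching the combinatorial prediction for the bistable region.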
\begin{algorithm} \caption{General algorithm} \label{alg:general_equilibria} \begin{algorithmic}[1] \Function{{\tt HillEquilibria}}{$f, R, k$} \State $\hat{x} \gets ()$ \Comment{Initialize equilibrium array} \State $\Delta_i \gets \frac{b_i - a_i}{k}$ \State $u_i \gets (a_i, a_i + \Delta_i, \dotsc, a_i + (k-1) \Delta_i, b_i)$ \Comment{Discretize factors} \For{$\kappa \in \setof{1,\dotsc, k+1}^N$} \State $x_0 \gets (u_{1, \kappa_1}, \dotsc, u_{N, \kappa_N})$ \State $r_\kappa \gets {\tt FindRoot}(f, x_0)$ \Comment {Returns a candidate when it converges} \If{$r_\kappa$} $\hat x.{\tt Append}(r_\kappa) $ \Comment {Append candidate equilibrium} \EndIf \EndFor \State \textbf{return} {\tt Unique}$(f, \hat x) $ \EndFunction \end{algorithmic} \end{algorithm} \subsection{The bootstrap algorithm} \label{sec:the bootstrap algorithm} In this section we define a second algorithm which exploits the structure of Hill models in specific cases to localize equilibria more reliably and efficiently than the Newton-based algorithm described above. The main idea is to begin with an initial rectangular subset of $(0,\infty)^N$ which encloses all equilibria, and then to iteratively obtain tighter rectangular enclosures by ``bootstrapping''. \begin{definition} \label{def:monotone_factorization} We say that a continuous function $g\colon [0,\infty)^N\to (0,\infty)^N$ has a {\em monotone factorization} if for each $i=1,\ldots, N$, $g_i$ factors as \[ g_i(x) = g_i^+(x) g_i^-(x) \quad \forall x \in [0, \infty)^N, \] where $g_i^+\colon [0,\infty)^N\to (0,\infty)$ is bounded and strictly increasing with respect to $x_1,\dotsc, x_N$ and similarly, $g_i^-\colon [0,\infty)^N\to (0,\infty)$ is bounded and strictly decreasing with respect to $x_1, \dotsc, x_N$.
\end{definition} Consider a continuous function $f\colon [0,\infty)^N\to\rr^N$ of the form $f(x) = -\Gamma x +g(x)$, where $g\colon [0,\infty)^N\to(0,\infty)^N$ has a monotone factorization and for all $i=1,\ldots, N$, $f_i(x) = -\gamma_i x_i + g_i(x)$ with $\gamma_i >0$. Define $\Phi\colon \rr^{2N}\to \rr^{2N}$ coordinate-wise by the formulas \[ \Phi_i(\alpha, \beta) = \frac{1}{\gamma_i} g_i^+(\alpha) g_i^-({\beta}) \quad\text{and}\quad \Phi_{N+i}(\alpha, \beta) = \frac{1}{\gamma_i} g_i^+(\beta) g_i^-(\alpha), \qquad i = 1, \ldots, N \] where $\alpha, \beta \in \rr^N$. \begin{theorem} \label{thm:bootstrap_eqbounds} Consider $f$ and $\Phi$ as defined above. Assume that $\liminf\limits_{\|x\|\to \infty}g_i^-({x})>0$ for all $i=1,\ldots,N$. Then, the following are true. \begin{enumerate}[align = Center, label=\roman*] \item $x\in [0,\infty)^N$ is a zero of $f$ if and only if $(x,x) \in [0, \infty)^{2N}$ is a fixed point of $\Phi$. \item Define $\paren*{\alpha^{(0)},\beta^{(0)}} \in \rr^{2N}$ coordinate-wise by \begin{equation} \alpha^{(0)}_i := \frac{1}{\gamma_i} g_i^+(0) \liminf_{\|x\|\to \infty}g_i^-({x}) \quad\text{and}\quad \beta^{(0)}_i := \frac{1}{\gamma_i} \limsup_{\|x\|\to \infty}g_i^+(x) g_i^-(0) \end{equation} and iteratively define $(\alpha^{(n+1)},\beta^{(n+1)}) = \Phi(\alpha^{(n)},\beta^{(n)})$ for $n \geq 0$. Then, $(\hat{\alpha},\hat{\beta}) := \lim_{n\to\infty}(\alpha^{(n)},\beta^{(n)})$ exists. \item If $f(\hat x)=0$, then \[ \hat{\alpha}_i \leq \hat{x}_i\leq \hat{\beta}_i, \quad \forall i=1,\ldots,N. \] \end{enumerate} \end{theorem} \begin{proof} We leave it to the reader to check ({\it{i}}). ({\it{ii}}) follows from the boundedness and strict monotonicity of $g_i^+$ and $g_i^-$. To be more specific, we prove inductively that for $1 \leq i \leq N$, $\alpha_i^{(n)}$ and $\beta_i^{(n)}$ are monotonically increasing and decreasing sequences, respectively.
The base case follows from the easy estimates \[ \alpha_i^{(1)} = \Phi_i\paren*{\alpha^{(0)},\beta^{(0)}} = \frac{1}{\gamma_i} g_i^+\paren*{\alpha^{(0)}} g_i^-\paren*{\beta^{(0)}} > \frac{1}{\gamma_i} g_i^+\paren*{0} \liminf_{\|x\|\to \infty}g_i^-({x}) = \alpha^{(0)}_i \] and \[ \beta_i^{(1)} = \Phi_{N+i}\paren*{\alpha^{(0)},\beta^{(0)}} = \frac{1}{\gamma_i} g_i^+\paren*{\beta^{(0)}} g_i^-\paren*{\alpha^{(0)}} < \frac{1}{\gamma_i} \limsup_{\|x\|\to \infty}g_i^+\paren*{x} g_i^-\paren*{0} = \beta^{(0)}_i. \] Now assume that $\alpha_i^{(n)} > \alpha_i^{(n-1)}$ and $\beta_i^{(n)} < \beta_i^{(n-1)}$. The strict monotonicity of $g_i^+$ and $g_i^-$ implies that \[ \alpha_i^{(n+1)} = \Phi_i\paren*{\alpha^{(n)},\beta^{(n)}} = \frac{1}{\gamma_i} g_i^+\paren*{\alpha^{(n)}} g_i^-\paren*{\beta^{(n)}} > \frac{1}{\gamma_i} g_i^+\paren*{\alpha^{(n-1)}} g_i^-(\beta^{(n-1)}) = \alpha^{(n)}_i \] and \[ \beta_i^{(n+1)} = \Phi_{N+i}\paren*{\alpha^{(n)},\beta^{(n)}} = \frac{1}{\gamma_i} g_i^+\paren*{\beta^{(n)}} g_i^-\paren*{\alpha^{(n)}} < \frac{1}{\gamma_i} g_i^+\paren*{\beta^{(n-1)}} g_i^-\paren*{\alpha^{(n-1)}} = \beta^{(n)}_i. \] The proof of ({\it{iii}}) is also done inductively. Define \[ \cR^{(n)} := \prod_{i = 1}^{N} [\alpha^{(n)}_i, \beta^{(n)}_i]. \] By the proof of ({\it{ii}}), $\cR^{(n+1)}\subset \cR^{(n)}$. Define $F : [0,\infty)^N \to [0,\infty)^N$ by the formula \[ F_i(x) = \frac{1}{\gamma_i }g_i(x) \qquad 1 \leq i \leq N. \] Observe that if $f(\hat{x})=0$, then $F(\hat x)=\hat{x}$. Therefore, it suffices to prove that if $F(\hat x)=\hat{x}$ then $\hat{x}\in \cR^{(n)}$ for all $n \in \nn$. Observe that from the definitions of $F$ and $\paren*{\alpha^{(0)},\beta^{(0)}}$, \[ F\paren*{[0,\infty)^N} \subset \cR^{(0)} \] and therefore $\hat{x} \in \cR^{(0)}$. Inductively, suppose that $n \in \nn$ is fixed and $\hat{x} \in \cR^{(n-1)}$, i.e. \begin{equation} \label{eq:induction_bound} \alpha_i^{(n-1)} \leq \hat{x}_i \leq \beta_i^{(n-1)} \qquad \text{for} \ 1 \leq i \leq N.
\end{equation} The inequalities of \eqref{eq:induction_bound} combined with the definition of $\Phi$ imply that for all $1 \leq i \leq N$ \[ \alpha_i^{(n)} = \frac{1}{\gamma_i} g_i^+\paren*{\alpha^{(n-1)}} g_i^-\paren*{\beta^{(n-1)}} \leq F_i(\hat x)=\hat{x}_i \leq \frac{1}{\gamma_i} g_i^+\paren*{\beta^{(n-1)}} g_i^-\paren*{\alpha^{(n-1)}} = \beta_i^{(n)} \] where the inequalities are obtained from the fact that $g_i^+$ and $g_i^-$ are strictly monotonically increasing and decreasing, respectively. Therefore, $\hat{x} \in \cR^{(n)}$. \end{proof} Observe that $F\paren*{\cR^{(n-1)}} \subseteq \cR^{(n)}$. This motivates the following algorithm for bounding equilibria. \begin{algorithm} \caption{Bootstrap algorithm} \label{alg:bootstrap_equilibria} \begin{algorithmic}[1] \Function{\tt RootEnclosure}{$f$} \State $u \gets (\alpha^{(0)}, \beta^{(0)})$ \Comment{Initialize orbit as described in Theorem \ref{thm:bootstrap_eqbounds}} \State $v \gets \Phi(u)$ \While{$\norm{u - v} > \epsilon$} \State $u \gets v$ \State $v \gets \Phi(u)$ \EndWhile \State $R \gets \prod_{i = 1}^N [v_i, v_{N+i}]$ \State \textbf{return} $R$ \EndFunction \Function{\tt MonotoneHillEquilibria}{$f, k$} \State $R \gets ${\tt RootEnclosure($f$)} \State \textbf{return} {\tt HillEquilibria$(f, R, k)$} \EndFunction \end{algorithmic} \end{algorithm} \subsection{Isolating equilibria} \label{sec:isolating equilibria} Let us consider the situation in which the equilibrium-finding numerical algorithms of the previous sections return two solutions $\overbar x_1$ and $\overbar x_2$. It is possible that they are two numerical representations of the same analytic solution, or that they represent two different analytic solutions. The numerical radii polynomial approach provides a heuristic for making this distinction. The radii polynomial approach, as presented in~\cite{radiipolynomailapproach, radiipolynomailapproach2}, is usually implemented rigorously for the validation of zero finding problems.
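For concreteness, the iteration behind {\tt RootEnclosure} above is straightforward to implement. The following Python sketch applies the iteration of Theorem \ref{thm:bootstrap_eqbounds} to a hypothetical two-node cross-repressive model with the (illustrative) monotone factorization $g_i^+ \equiv 1$ and $g_i^-$ a decreasing Hill-type response; the parameter values are chosen for illustration only and are not taken from the text.

```python
import numpy as np

# Hypothetical two-node repressive network ("toggle switch" type):
#   f_i(x) = -gamma_i * x_i + Hminus(x_j),  j != i,
# with illustrative monotone factorization g_i^+ = 1 (constant) and
# g_i^-(x) = Hminus(x_j).  Parameter values are illustrative only.
ell, delta, theta, d = 1.0, 4.0, 3.0, 2.0
gamma = np.array([1.0, 1.0])

def H_minus(x):
    return ell + delta * theta**d / (theta**d + x**d)

def g_plus(x):           # increasing factor (trivial in this toy model)
    return np.ones(2)

def g_minus(x):          # decreasing factor: cross-repression
    return np.array([H_minus(x[1]), H_minus(x[0])])

def Phi(u):
    # The map Phi of Theorem thm:bootstrap_eqbounds, u = (alpha, beta)
    alpha, beta = u[:2], u[2:]
    lower = g_plus(alpha) * g_minus(beta) / gamma
    upper = g_plus(beta) * g_minus(alpha) / gamma
    return np.concatenate([lower, upper])

def root_enclosure(eps=1e-10, max_iter=1000):
    # alpha^(0), beta^(0) from the limiting values of g^+ and g^-:
    # here liminf g^- = ell and g^-(0) = ell + delta.
    alpha0 = g_plus(np.zeros(2)) * ell / gamma
    beta0 = g_plus(np.zeros(2)) * (ell + delta) / gamma
    u = np.concatenate([alpha0, beta0])
    for _ in range(max_iter):
        v = Phi(u)
        if np.max(np.abs(u - v)) < eps:
            break
        u = v
    return v[:2], v[2:]  # componentwise lower/upper bounds on equilibria

alpha, beta = root_enclosure()
```

For these particular (illustrative) parameter values the enclosure collapses numerically to the single point $(3,3)$, which is indeed a zero of $f$ since $H^-(3) = 3$; in multistable regimes the returned box strictly contains all equilibria.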
In this section, we are only interested in an approximate implementation, thus our results do not have the status of proofs, contrary to the references given. While it is possible to attempt a validation of any of the equilibria or saddle-node bifurcations found in this work, this is not the focus of the present work and we have not implemented it here. Validation of a single equilibrium or saddle-node bifurcation using the radii polynomial approach requires a given high precision numerical solution. It does not provide a means for producing such a solution, which is the focus of this paper. The radii polynomial approach is built around the contraction mapping theorem. In an abstract setting, we consider a zero-finding problem $F(x) = 0$. In our setting, $F = f(\cdot, \lambda)$ is the vector field associated with a Hill model, but the discussion in this section is presented in a more general way. Suppose $\overbar x$ is an approximate root of $F$ and $A$ is an approximate inverse of the Jacobian of $F$ at $\overbar{x}$, i.e.~informally we assume \[ \norm {F(\overbar x)} \approx 0 \qquad \norm{A - DF(\overbar{x})^{-1}} \approx 0. \] Then we define a ``Newton-like'' operator by the formula \begin{equation} \label{eq:Newton like operator} T(x) = x - AF(x), \end{equation} with the idea that if $\overbar{x}$ and $A$ are sufficiently good approximations, then $T$ is expected to be a contraction in some neighborhood of $\overbar x$. To estimate the radius of contraction of $T$, we make use of the following theorem. \begin{theorem} Let $\overbar{x}, A$ be given and let $T$ be as defined in Equation \eqref{eq:Newton like operator}. Suppose $Y > 0$ and $Z : (0, \infty) \to (0, \infty)$ satisfy the bounds $$ Y \geq \| T(\overbar x) - \overbar x \|, \qquad Z(r) \geq \max_{b\in B_1(0)} \| DT(\overbar x+ rb)\|, $$ and define the radii polynomial as \begin{equation}\label{eq:radii_pol} p(r) = Y + (Z(r) - 1) r.
\end{equation} If there exists an $r^*$ such that $p(r^*)<0$, then there exists a unique $\hat{x}$ such that $F(\hat{x}) = 0$ and $\|\overbar x - \hat{x}\|<r^*$. \end{theorem} For the proof we refer to \cite{radiipolynomailapproach}. It is useful to first simplify $$ \| T(\bar x) - \bar x \| = \| A F(\bar x)\|, $$ so we may take $Y = \| A F(\bar x)\|$, which can be computed straightforwardly. In our case, we notice $DT(x) = I - ADF(x)$, thus $$ DT(\overbar x+ rb)=I - ADF(\overbar x+rb) = I - ADF(\overbar x) + A\left(DF(\overbar x) - DF(\overbar x + rb)\right) $$ and we split the computation of $Z(r)$ into the computation of $ Z_0 \geq \|I - ADF(\overbar x)\| $ and $ Z_1(r) \geq \max_{b\in B_1(0)} \|A\left(DF(\overbar x) - DF(\overbar x + rb)\right)\|$. It is worth noting that $$ \max_{b\in B_1(0)} \|A\left(DF(\overbar x) - DF(\overbar x + rb)\right) \| \leq \max_{z,b\in B_1(0)}\|AD^2F(\overbar x + rz)b\|r. $$ We then approximate $ \max_{z,b\in B_1(0)}\|AD^2F(\overbar x + rz)b\| r\approx \|AD^2F(\overbar x)\|r := Z_1r$, by assuming that the second derivative of $F$ is almost constant. While the approximation is crude, it has proven to be sufficiently precise for the task of identifying equivalent equilibria. We then have the radii polynomial $$ p(r) = Y + (Z_0 + Z_1 r - 1) r, $$ and the largest existence radius is $$r^* = \frac{1-Z_0+ \sqrt{(1-Z_0)^2 - 4 Z_1 Y}}{2Z_1}.$$ With this approach in mind, we can distinguish between numerical duplicates of equilibria and multiple equilibria. Considering the numerical approximations $\bar x_1, \bar x_2$ of equilibria, we can compute their associated (approximate) existence radii $r^*_1, r^*_2$. If $\|\bar x_1-\bar x_2\| < \max(r^*_1, r^*_2)$, then we consider them to be numerical duplicates; otherwise, they represent different analytical solutions. We can then expand this algorithm to a set of approximate equilibria $\{\bar x_1, \dots, \bar x_n\}$.
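As a toy illustration (a hypothetical scalar zero-finding problem, not one of the Hill systems considered in the text), the approximate radius computation and the resulting duplicate test can be sketched as follows.

```python
import numpy as np

def radii_pol(F, DF, D2F, xbar):
    """Approximate largest existence radius r* for T(x) = x - A F(x),
    using the heuristic bounds Y = |A F(xbar)|, Z0 = |1 - A DF(xbar)|,
    Z1 ~ |A D2F(xbar)|.  Scalar sketch of the vector-valued procedure."""
    A = 1.0 / DF(xbar)               # approximate inverse Jacobian
    Y = abs(A * F(xbar))
    Z0 = abs(1.0 - A * DF(xbar))     # zero up to rounding here
    Z1 = abs(A * D2F(xbar))
    disc = (1.0 - Z0) ** 2 - 4.0 * Z1 * Y
    if disc < 0:
        return 0.0                   # no radius obtained
    return (1.0 - Z0 + np.sqrt(disc)) / (2.0 * Z1)

# Hypothetical toy problem F(x) = x^2 - 2 with approximate roots.
F = lambda x: x ** 2 - 2.0
DF = lambda x: 2.0 * x
D2F = lambda x: 2.0

def unique(X):
    """Discard numerical duplicates: a candidate is dropped when some
    previously kept candidate lies within max of the two radii."""
    kept, radii = [], []
    for x in X:
        r = radii_pol(F, DF, D2F, x)
        if any(abs(x - y) < max(r, ry) for y, ry in zip(kept, radii)):
            continue
        kept.append(x)
        radii.append(r)
    return kept

# Two numerical representations of sqrt(2), plus -sqrt(2).
roots = unique([1.414, 1.4142136, -1.4142])
```

In this example the two nearby approximations of $\sqrt{2}$ are merged, while the approximation of $-\sqrt{2}$ is kept as a distinct solution.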
First, all radii $r^*_i, i=1,\dots, n$ are computed, then we delete any $\bar x_i$ such that there is a $\bar x_j$ with $\| \bar x_i - \bar x_j\| < \max( r^*_i, r^*_j)$. To reduce the number of comparisons, we restrict ourselves to $j<i$. We implement \texttt{RadiiPol} as a function that computes $r^*$ for any function $f$ and approximate equilibrium $\bar x$. \begin{algorithm} \caption{Unique algorithm} \label{alg:unique_equilibria} \begin{algorithmic}[1] \Function{\tt Unique}{$f$, $X$} \For{$\bar x\in X$} \State $r \gets ${\tt RadiiPol}$(f, \bar x)$ \If{$\exists\, \bar y\in X$, $\bar y \neq \bar x$: $\|\bar x-\bar y\|<r$} \State delete $\bar y$ from $X$ \EndIf \EndFor \State \textbf{return} $X$ \EndFunction \end{algorithmic} \end{algorithm} \subsection{The Lagrangian formulation} When confronted with high dimensional parameter spaces, we choose the approach of looking for ``special'' parameters in this space. The easiest mathematical definition of ``special'' in this context is the solution to an optimization problem. We therefore present here an approach to numerical optimization. First of all, a disclaimer: this section is purely for review and completeness of the paper and does not present any new results. The problem we want to solve numerically is a generalised minimization problem with equality constraints of the form \begin{align}\label{e:minimization_problem} \min g(x) : \quad F(x)= 0 \end{align} with $x\in\rr^k$, $g:\rr^k \rightarrow \rr$ and $F : \rr^k \rightarrow \rr^l$ with $1\leq l < k$. Let us remark here that we approach this problem from a numerical perspective. We will therefore consider this problem only locally, and we will not be looking for any global minimum.
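As a minimal numerical sketch of \eqref{e:minimization_problem}, consider a hypothetical toy problem: minimize $g(x) = x_1^2 + x_2^2$ subject to $F(x) = x_1 + x_2 - 1 = 0$. The first-order stationarity conditions form a square system in $(x, \mu)$, which we solve here with a plain Newton iteration (the solver used in our code is different; this only illustrates the structure of the problem).

```python
import numpy as np

# Toy equality-constrained problem (illustrative, not from the text):
#   minimize g(x) = x1^2 + x2^2   subject to   F(x) = x1 + x2 - 1 = 0.
# Stationarity of L(x, mu) = g(x) - mu^T F(x) gives the square system
#   Dg(x) - mu^T DF(x) = 0,   F(x) = 0,
# in the k + l = 3 unknowns (x1, x2, mu).

def kkt(z):
    x, mu = z[:2], z[2]
    return np.array([2.0 * x[0] - mu,      # d/dx1 of the Lagrangian
                     2.0 * x[1] - mu,      # d/dx2 of the Lagrangian
                     x[0] + x[1] - 1.0])   # the constraint F(x) = 0

def kkt_jac(z):
    # Jacobian of the stationarity system (constant for this toy problem)
    return np.array([[2.0, 0.0, -1.0],
                     [0.0, 2.0, -1.0],
                     [1.0, 1.0,  0.0]])

def newton(z, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        r = kkt(z)
        if np.max(np.abs(r)) < tol:
            break
        z = z - np.linalg.solve(kkt_jac(z), r)
    return z

z = newton(np.array([0.3, 0.9, 0.0]))
x_opt, mu_opt = z[:2], z[2]
```

The computed stationary point is $x = (1/2, 1/2)$ with multiplier $\mu = 1$, as the symmetry of the toy problem suggests.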
We might consider doing so many local searches that we could be fairly sure of having converged to the global minimum, but this will only ever be a strong hint, never a proof. In most of the cases we will discuss, the function $g$ will be trivial, while the constraints encapsulated in $F$ will be fairly complicated and require most of our work. Analytically, the \textit{Lagrangian formulation} allows us to rewrite problem \eqref{e:minimization_problem} as a minimization problem without constraints. This formulation works as presented in the following theorem. \begin{theorem} With the notation just introduced, let $\mu \in \rr^l$ be a vector of unknown multipliers and define the Lagrangian $$ \mathcal{L}(x,\mu) = g(x)- \mu^T F(x). $$ Then, for every local solution $x^*$ of \eqref{e:minimization_problem} at which $D_xF(x^*)$ has full rank, there exists $\mu^*\in\rr^l$ such that $(x^*,\mu^*)$ is a stationary point of $\mathcal{L}$. \end{theorem} \begin{proof} See, for example, de la Fuente, \emph{Mathematical Methods and Models for Economists}, Cambridge University Press, 2000, p.~285. \end{proof} We are therefore left to solve the equivalent problem \begin{align*} &D_xg(x) - \mu^T D_xF(x) = 0,\\ &F(x) = 0, \end{align*} which is incidentally a problem with $k+ l $ unknowns and $k+l$ equations. In the cases considered in this article, $F$ encodes all dynamical constraints, such as being a saddle node bifurcation, and $g$ is the optimization objective. In the following, multiple examples with this structure will be presented. On the numerical level, this problem is solved following the trust region constrained algorithm, first presented in [Powell, M.J.D., Yuan, Y. A trust region algorithm for equality constrained optimization. Mathematical Programming 49, 189--211 (1990)] (\red{is this the right reference?}) \red{OLD version below} There are two problems with trying to minimize $\hill$ along the surface of saddle-node bifurcations. The first is that for fixed $\lambda$, $\hill := \hill(x, \lambda)$ implicitly depends on the coordinates of the equilibrium as well as the other parameters, $\lambda$.
The bifurcation point is only known implicitly, as a root of the map $g(x,\lambda, \hill, v)$ defined above. Therefore, it is not easy to get our hands on the derivative of $\hill$ with respect to $x$ and $\lambda$. The second concern is that we want to optimize (minimize or maximize) other scalars along this surface, not just $\hill$. For these reasons we reformulate the saddle-node problem as an appropriate Lagrangian optimization problem. For simplicity, we will continue to separate the parameters of the Hill model as $\hill$ and $\lambda$, but this does not restrict the generality of the discussion. We suppose we have a function, $h(x,\lambda, \hill, v)$, which we would like to optimize along the bifurcation surface. Recall that we have the following dimensions: \[ \hill \in \rr, \quad x \in \rr^n, \quad \lambda \in \rr^m, \quad v \in \rr^n \] and thus, we can consider $h$ to be of the form $h : \rr^{m + 2n + 1} \to \rr$. We write $M = 2n+1$ and recall that the saddle node zero finding problem is a function of the form $g : \rr^{M} \to \rr^{M}$. We look for optimizers of $h$ at stationary points of the Lagrangian, \[ \cL(h, x, \lambda, \hill, v, \mu) := h - \mu^T g(x, \lambda, \hill, v) \qquad \mu \in \rr^{M} \] where $\mu$ is a vector of unknown multipliers. A stationary point satisfies $D \cL = 0$, which amounts to a solution of the nonlinear system of equations \begin{align*} Dh(x, \lambda, \hill, v) & = \mu^T Dg(x, \lambda, \hill, v) \\ g(x, \lambda, \hill, v) & = 0. \end{align*} Counting dimensions we see that $\cL : \rr^{m + 2M} \to \rr$ but $g = 0$ ``uses up'' $M$ of these free variables as constraints. So we are still left with an optimization over $m + M$ free variables as expected. Now, we compute $D \cL$ by writing $\mu \in \rr^M$ in components, \[ \mu = (\mu_1, \mu_2, \mu_3) \in \rr^n \times \rr^n \times \rr. \] This computation is easiest to work out for particular examples so we consider the example $h = \hill$.
Then, we can write $\cL$ as \[ \cL(\hill, x, \lambda, v, \mu) = \hill - \mu_1^T f(x,\lambda, \hill) - \mu_2^T Df(x, \lambda, \hill) \cdot v - \mu_3 \paren*{\norm{v} - 1}. \] Then taking the derivative of $\cL$ we have \[ D \cL(\hill, x, \lambda, v, \mu) = \begin{pmatrix} 1 - \mu_1^T D_\hill f(x, \lambda, \hill) - \mu_2^T D_\hill \paren*{D_xf(x, \lambda, \hill) \cdot v} \\ -\mu_1^T D_x f(x,\lambda, \hill) - \mu_2^T D_x \paren*{D_x f(x, \lambda, \hill) \cdot v} \\ -\mu_1^T D_\lambda f(x, \lambda, \hill) - \mu_2^T D_\lambda \paren*{D_x f(x, \lambda, \hill) \cdot v} \\ -\mu_2^T D_x f(x, \lambda, \hill) - \frac{\mu_3}{\norm{v}} v \\ g(x, \lambda, \hill, v) \end{pmatrix} \] Observe that with the exception of $D_\lambda f$ and $D_\lambda (Df)$, we have already computed each of these quantities in the original zero finding problem for the saddle-node. \subsection{Hysteresis} Connected to the problem of finding a saddle node given some function to minimize is what we will call the hysteresis problem. In its abstract formulation, we consider the system $$ \dot x = f(\lambda, \beta, x) $$ that is known to undergo two saddle node bifurcations with respect to $\lambda$ for some choices of $\beta$, that is, the system undergoes saddle node bifurcations at $(\lambda_0,\beta_0,x_0)$ and $(\lambda_1,\beta_1,x_1)$ with $\beta_0=\beta_1$. The problem is to choose $\beta$ so as to maximise the parameter distance between the two saddle nodes, that is $| \lambda_0-\lambda_1|$. This problem stems from biological concerns. It models the situation in which one parameter is responsible for a change in expression levels, but these levels depend on the history of the cell. Mathematically, this is modelled by an equilibrium branch that undergoes two saddle node bifurcations as depicted in Figure \ref{f:hysteresis}.
\red{references to hysteresis papers} \begin{figure} \begin{center} \includegraphics[width = 0.7\textwidth, trim= 2cm 13cm 8.25cm 1.7cm, clip]{hysteresis} \caption{Hysteresis is associated to a branch of equilibria undergoing two saddle node bifurcations as depicted. This creates a central region of bistability that ``remembers'' the direction the system is coming from upon changing $\lambda$. The saddle nodes are indicated by a red diamond. Unstable equilibria are drawn as dotted lines.}\label{f:hysteresis} \end{center} \end{figure} The problem we want to solve is to find $(\lambda_1, \lambda_2, \beta, x_1, x_2, v_1, v_2)$ such that \begin{align*} f(\lambda_1, \beta, x_1) = 0,\\ f(\lambda_2, \beta, x_2) = 0,\\ Df(\lambda_1, \beta, x_1)v_1 = 0,\\ Df(\lambda_2, \beta, x_2)v_2 = 0,\\ |\lambda_1 - \lambda_2| \text{ is maximal.} \end{align*} The algorithm is as follows: \begin{enumerate} \item set $\beta$ and find an initial $(\lambda_1, \lambda_2, x_1, x_2, v_1, v_2)$ using a modification of what is now \textit{find\_saddle\_node}, with the understanding that $\lambda_1<\lambda_2$, \item run constrained minimization with SLSQP with objective function $g(\lambda_1, \lambda_2, \beta, x_1, x_2, v_1, v_2) = \lambda_1 - \lambda_2$ and constraints $$ \begin{cases} f(\lambda_1, \beta, x_1) = 0,\\ f(\lambda_2, \beta, x_2) = 0,\\ Df(\lambda_1, \beta, x_1)v_1 = 0,\\ Df(\lambda_2, \beta, x_2)v_2 = 0. \end{cases} $$ \end{enumerate} \begin{remark} For the toggle switch, and in more generality, we may also want to add the constraint $$ \hill_1 = \hill_2, \text{ i.e. } \hill_1 - \hill_2 =0, $$ and possibly the constraint $$ \| (\lambda_1, \lambda_2, x_1, x_2, v_1, v_2)\|_p = k $$ with a reasonable $k$, possibly $k = 3$ or even 4, and $p$ either 1 or $\infty$. \end{remark} \subsection{Hill models} \label{sec:hill_models} In this section we define a Hill model in terms of Equation \eqref{eq:ODE_model}.
Each state variable measures a non-negative real quantity and therefore we start by taking our state space to be $X = [0, \infty)^N$, and we define the vector field on the tangent space at $x\in X$, which is isomorphic to $\rr^N$. For the moment we will assume that $\Lambda$ is given and that $\lambda \in \Lambda$ is fixed. Consequently, we will suppress dependence on this parameter for the remainder of this section by writing $f, \cL,$ and $\cH$ in place of $f_\lambda, \cL_\lambda,$ and $ \cH_\lambda$ respectively. We will discuss our choice for $\Lambda$ in detail in Section \ref{sec:parameter_space}. Next, we describe the vector field corresponding to a Hill model which amounts to defining $\cL$ and $ \cH$. Our first assumption is that for $1 \leq i \leq N$, $x_i$ decays exponentially with constant rate $\gamma_i > 0$. Specifically, the decay of each protein/gene depends on the parameter, $\gamma_i$, but does not depend on the state. Therefore, $\cL$ has the form \[ \cL(x) = -\Gamma x \] where $\Gamma$ is a diagonal $N \times N$ matrix with diagonal entries $\Gamma_{ii} := \gamma_i$. \begin{remark} Our assumptions on $\cL$ preclude modeling, among other phenomena, targeted degradation effects due to ubiquitination or phosphorylation. However, these assumptions are motivated by the applications of interest in this work and are not a limitation of our analysis. Indeed, the numerical techniques proposed in this work apply with almost no modification to the case that $\cL(x) = -\Gamma(x)x$ where $\Gamma(x)$ is a state dependent, positive-definite matrix. The combinatorial techniques on the other hand do inherently require understanding the structure of the linear term to a greater extent. \end{remark} The description of the nonlinear part $\cH$ is a bit more technical. We begin with several definitions.
\begin{definition} \label{def:interaction_function} A polynomial, $p \in \rr[z_1, \dotsc, z_N]$ is called an {\em interaction function} if it has the form \[ p = \prod_{m = 1}^{q} p_m, \qquad \text{where} \quad p_m = \sum_{j \in I_m} z_j, \] and $\setof{I_1, \dotsc, I_q}$ is a partition of the integers, $\setof{1, \dotsc, N}$. For $1 \leq m \leq q$ we refer to the linear polynomial, $p_m$, as the $m^{\rm th}$ {\em summand} of $p$. \end{definition} \begin{definition} \label{def:hill_function_response} Let $H^+ : [0, \infty) \to [0, \infty)$ denote the {\em positive Hill response} function defined by the formula \[ H^+\paren*{x} := \ell + \delta \frac{x^\hill}{\theta^\hill + x^\hill} \] and let $H^- : [0, \infty) \to [0, \infty)$ denote the {\em negative Hill response} function defined by the formula \[ H^{-}\paren*{x} := \ell + \delta \frac{\theta^\hill}{\theta^\hill + x^\hill} \] where $\setof{\ell, \delta, \theta, \hill}$ are non-negative real parameters. When convenient, we let $H^*$ denote an arbitrary Hill function of either sign. The {\em Hill coefficient}\footnote{ The Hill coefficient is also called the {\em Hill exponent} in the literature.} is the parameter denoted by $\hill$ which plays a central role in our analysis. Observe that $H^+$ is monotonically increasing and satisfies \[ H^+(0) = \ell \qquad \lim\limits_{x \to \infty} H^+(x) = \ell + \delta. \] Similarly, $H^-$ is monotonically decreasing and satisfies \[ H^{-}(0) = \ell + \delta \qquad \lim\limits_{x \to \infty} H^-(x) = \ell. \] Observe that the zero function is also a Hill function which can be obtained by setting $\ell = 0 = \delta$ in a Hill function of either sign. We refer to this as the {\em trivial} Hill function. \end{definition} \corrc SK: We might want to modify the parameter restrictions to enforce $d \geq 1$. <<>> With these terms defined we turn to explicitly describing $\cH$ for a Hill model. We require each coordinate of $\cH$ to have the following structure.
Fix $i \in \setof{1, \dotsc, N}$ and consider a single coordinate of $f$ which is a scalar of the form, $f_i : X \to \rr$, describing the time derivative of the state variable, $x_i$. This state variable satisfies the scalar differential equation \begin{equation} \label{eq:hill_model_coordinate} \dot x_i = f_i(x) = -\gamma_i x_i + \cH_i(x) \end{equation} where $\cH_i$ is the $i^{\rm th}$ coordinate of $\cH$. We say that $f$ is a {\em Hill model} if the coordinates of $\cH$ have the form \begin{equation} \label{eq:nonlinear_coordinate} \cH_i(x) = p_i \paren*{H^*_{i,1}(x_1), \dotsc, H^*_{i,N}(x_N)} \end{equation} where $p_i$ is an interaction function and $H^*_{i,j}$ is a Hill function for $1 \leq j \leq N$. \corrc I am not sure how much of the following discussion belongs to this paper. I have left it in because we might want to make use of some of it in the concluding remarks. KM <<>> Our use of Hill functions in ODE models for gene regulation is not novel \cite{ }. Hill functions arise naturally when analyzing slow-fast dynamics of enzymatic reactions. Specifically, in the case that $p_i$ is linear for all $1 \leq i \leq N$, the associated Hill model can be semi-rigorously derived as a reasonable approximation for an ODE modeling mass action kinetics satisfying a quasi-steady state condition \cite{ }. However, it is important to note that arbitrary ODEs comprised of compositions of Hill functions may not fit our definition of a Hill model (see Remark \ref{rem:other_Hill_models}). Additionally, observe that our definition of a Hill model is used in other contexts with slightly different names. The response functions of Definition \ref{def:hill_function_response} are equivalent to the Holling type 3 response function which is widely used in ecological modeling of trophic networks. Additionally, a Hill function with Hill coefficient, $\hill =1$, is also known as the Michaelis-Menten response function or the Holling type 2 response function in other contexts.
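The positive and negative Hill responses of Definition \ref{def:hill_function_response} are straightforward to implement; the following sketch (illustrative parameter values only) also checks numerically the monotonicity and limiting values noted above, together with the elementary identity $H^+(x) + H^-(x) = 2\ell + \delta$.

```python
import numpy as np

# Hill response functions of Definition def:hill_function_response.
# The parameter values below are illustrative, not taken from the text.
def hill_plus(x, ell, delta, theta, d):
    return ell + delta * x**d / (theta**d + x**d)

def hill_minus(x, ell, delta, theta, d):
    return ell + delta * theta**d / (theta**d + x**d)

ell, delta, theta, d = 1.0, 2.0, 3.0, 4.0
x = np.linspace(0.0, 50.0, 501)   # sample grid on [0, 50]
```

On the grid, $H^+$ is strictly increasing from $\ell$ toward $\ell + \delta$, and $H^-$ is strictly decreasing from $\ell + \delta$ toward $\ell$.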
There is a natural way to associate a Hill model with a given GRN topology, specified as a digraph with edges specified as up or down regulation. If there is a directed edge, $(j,i)$, in the GRN, then a nontrivial Hill function, $H^*_{i,j}$, depending on the state variable $x_j$, appears in the expression which defines $\cH_i$. If gene $i$ is up-regulated by gene $j$, then $H^*_{i,j}$ is a positive Hill function and consequently, $\cH_i$ increases monotonically as a function of $x_j$. Similarly, if gene $i$ is down-regulated by gene $j$, we take $H^*_{i,j}$ to be a negative Hill function and $\cH_i$ is a decreasing function of $x_j$ as expected. Finally, if gene $i$ is not regulated by gene $j$, then we define $H^*_{i,j}$ to be a trivial Hill function. Definition \ref{def:interaction_function} implies that $p_i$ is monotonically increasing in all variables and therefore, $\cH_i$ is an increasing function of $x_j$ in case of up-regulation, and a decreasing function of $x_j$ in case of down-regulation. Observe that this framework does not exclude the possibility that gene $i$ regulates itself via a nonlinear feedback mechanism. In this case, $\cH_i$ is still a monotone function of $x_i$; however, $f_i$ need not be monotone with respect to $x_i$ due to contributions from $\cL$. By construction, every nonlinear term in a Hill model is a monotone function with respect to each state variable. Despite this assumption, the definition of an interaction function and the parameters associated to each individual Hill function provide a great deal of flexibility in tuning a Hill model. However, with this flexibility comes complexity, as there are many choices to be made for both the Hill function parameters as well as the interaction types. This complexity typically prohibits any rigorous global study of the dynamics, including equilibria and saddle-node bifurcations, for nontrivial networks.
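As a sketch of the construction just described, consider a hypothetical three-node network in which gene 1 is up-regulated by gene 2 and down-regulated by gene 3, with interaction function $p_1(z_2, z_3) = z_2 \cdot z_3$ (two singleton summands); all parameter values are illustrative.

```python
# Assembling one coordinate of the nonlinearity for a hypothetical
# 3-node network: gene 1 activated by gene 2, repressed by gene 3,
# combined with the interaction function p(z2, z3) = z2 * z3.
# Parameter values are illustrative only.
def hill_plus(x, ell, delta, theta, d):
    return ell + delta * x**d / (theta**d + x**d)

def hill_minus(x, ell, delta, theta, d):
    return ell + delta * theta**d / (theta**d + x**d)

par = dict(ell=0.5, delta=2.0, theta=1.0, d=3.0)

def H1(x):
    # H_1(x) = p_1(H^+_{1,2}(x_2), H^-_{1,3}(x_3))
    return hill_plus(x[1], **par) * hill_minus(x[2], **par)

# The full coordinate f_1(x) = -gamma_1 x_1 + H_1(x)
gamma1 = 1.0
f1 = lambda x: -gamma1 * x[0] + H1(x)
```

By construction $H_1$ is increasing in $x_2$ (up-regulation) and decreasing in $x_3$ (down-regulation), while $f_1$ additionally decays linearly in $x_1$.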
\begin{remark} \label{rem:other_Hill_models} It is important to distinguish our definition of a Hill model from another class of models which also appears in the literature and utilizes Hill functions. These models are defined by first restricting the response functions in Definition \ref{def:hill_function_response} to the case that $\ell = 0$. We will refer to this case as a {\em type 2} Hill function which depends only on the parameters $\setof{\delta, \theta, \hill}$. Now, for $1 \leq i \leq N$, define the {\em basal production rate} of $x_i$ to be $b_i >0$. One now considers an ODE model for a GRN given by a system of ODEs of the form \[ \dot x = -\Gamma x + b + \cH(x), \qquad b = (b_1, \dotsc, b_N) \in \rr^N \] where we now assume that $\cH$ is a composition of an interaction function with type 2 Hill functions. We refer to this as a type 2 Hill model and the definition presented in this work as a type 1 Hill model. Observe that the appearance of the inhomogeneous term in this ODE is not more general than the nonlinearity appearing in a type 1 Hill model. In particular, we can extract the inhomogeneous term from Equation \eqref{eq:hill_model_coordinate} by defining $b_i = p_i(\ell_{i,1}, \dotsc, \ell_{i,N})$ and revising the definition of $\cH$ appropriately. We refer to this as a {\em generalized basal production} as it allows each gene which regulates gene $i$ to contribute a term independently to the basal production of gene $i$. However, it is important to note that type 1 Hill models are also not more general than type 2 Hill models. In particular, suppose $U_1, U_2$ are the collections of Hill models of type $1$ and $2$ respectively which are considered as subsets of $C(\rr^N, \rr^N)$ with the uniform topology. Then one can show that \[ \inf \setof*{\norm{f_1 - f_2}_\infty : f_1 \in U_1, f_2 \in U_2} = 0 \quad \text{if and only if} \ N = 1. \] In other words, models in $U_1$ do not uniformly approximate models in $U_2$ and vice versa.
With this in mind we make two observations. The first is that determining whether or not a particular dynamical phenotype is exhibited by one type of model but not the other does not appear to be tractable. Even finding examples which have different dynamics for networks of interest seems to be difficult. The purpose of this discussion is only to clearly distinguish between these two types of models, both of which are rightfully referred to as Hill models in the literature. However, comparing them is outside the scope of this work. The second observation is that the methods employed in this work apply equally well in principle to type 2 Hill models. In this work our restriction to type 1 Hill models is based only on the fact that both DSGRN and HillCont are implemented with these models. In fact, the methodology itself can be more broadly applied to other models based on monotone response functions such as Boolean, soft Heaviside, or leaky ReLU models. Comparing the dynamics of these models to those produced by DSGRN is an active area of research, as is generalizing the DSGRN methodology to non-monotone models. \corrc SK: Maybe I'm saying way too much in this remark. Alternatively, if we want to include all of this discussion then some of it probably belongs in the introduction or it might be an entire subsubsection instead. \\ EQ: it is a very long remark, I would indeed move most either in the introduction or in a separate subsection. I don't think we should cut it though <<>> \end{remark} \subsection{The parameter space of a Hill model} \label{sec:parameter_space} In this section we define the parameter space for a Hill model. Specifically, if $f : X \times \Lambda \to T_X$ is a Hill model associated with a gene regulatory network comprised of $N$ genes, we consider the question of choosing an appropriate definition for $\Lambda$. We start by letting $G_i \subseteq \setof{1,\dotsc, N}$ denote the set of genes which regulate gene $i$.
In terms of the gene regulatory network, $G_i$ is the set of genes with a directed edge targeting $i$. From the definition of a Hill model, for each $1 \leq i \leq N$ and for each $j \in G_i$, $f_i$ depends on an associated Hill function, $H^{*}_{i,j}$, which depends on $x_j$ as well as $4$ positive parameters, $\setof{\ell_{i,j}, \delta_{i,j}, \theta_{i,j}, \hill_{i,j}}$. \corrc SK: If we change any constraints e.g.~$d$ in Definition \ref{def:hill_function_response}, we need to change this sentence accordingly.<<>> Observe that if $j \notin G_i$, then $H^*_{i,j}$ is the trivial Hill function which still technically contributes $4$ additional parameters. However, in this case $\ell_{i,j} = 0 = \delta_{i,j}$ and $\theta_{i,j}, \hill_{i,j}$ are ``inert''. Moreover, if we assign $j$ to $I_1$ (i.e. we assume $H^*_{i,j}$ appears in the first summand of $p_i$), then $\cH_i$ has the form required by Equation \eqref{eq:nonlinear_coordinate} while also having no dependence on $x_j$. Therefore, without loss of generality we are free to ignore the parameters contributed by each of these trivial Hill functions. Consequently, interactions arising from the nonlinear regulation of gene $i$ contribute exactly $4\#G_i$ parameters to our model. Let \[ M := \sum_{i = 1}^{N} \# G_i \] denote the total number of nonlinear regulatory interactions (i.e.~the number of edges in the gene regulatory network). Recalling that $\cL$ contributes $N$ linear decay parameters, we obtain a final count of $N + 4M$ real parameters in this Hill model. Under the biologically relevant assumption that each nontrivial Hill function has strictly positive parameters, we may simply define $\Lambda = (0, \infty)^{N + 4M}$. Specifically, we consider $\Lambda$ as a subset of $\rr^{N + 4M}$ and write parameters as vectors, $\lambda \in \rr^{N+4M}$, with respect to some fixed (yet unspecified), ordered basis.
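The count $N + 4M$ is easily computed from the network topology alone; a small sketch for a hypothetical two-node network (the toggle switch, in which each gene represses the other):

```python
# Parameter count N + 4M for a hypothetical GRN given as an adjacency
# list: edges[i] is the set of regulators G_i of gene i.
edges = {0: {1}, 1: {0}}                 # toggle switch: mutual repression
N = len(edges)                           # number of genes
M = sum(len(g) for g in edges.values())  # number of regulatory edges
dim_Lambda = N + 4 * M                   # N decay rates + 4 per edge
```

Even for this minimal network the parameter space is $10$-dimensional, and the dimension grows linearly with the number of edges.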
Occasionally, the order of these parameters or their role in the Hill model is relevant and we may also express parameter vectors in the form $\lambda = \paren*{\gamma, \ell, \delta, \theta, \hill}$ where $\gamma \in \rr^N$ and $\ell, \delta, \theta, \hill \in \rr^M$, again, with respect to some fixed but unspecified ordered basis of $\rr^N \times \rr^M \times \rr^M \times \rr^M \times \rr^M$. Having defined the parameter space, our first observation is that $\Lambda$ is typically ``high'' dimensional, even for ``simple'' gene regulatory network models, alluding to the complexity previously mentioned. The so-called ``curse of dimensionality'' presents an immediate barrier to studying the global dynamics numerically, because the sample size for the parameter space must increase exponentially with the number of dimensions to maintain the same resolution. Given a particular GRN of interest, there are two common approaches to overcoming this barrier. Both approaches are relatively common in the literature when studying GRNs, so we briefly summarize them. \longcorrl EQ: let's make it shorter and clearer, too much unecessary explanations - maybe even in the introduction << The first method is to reduce the dimension by fixing some parameters. The main drawback in this approach is due to the fact that oftentimes parameters represent (either directly or by a proxy relationship) physical constants of interest which are either difficult or practically impossible to estimate; at other times the relationship between these physical constants and the parameters is itself unknown. The main exception to this is the Hill coefficients (i.e.~the components of $\hill$). In the context of gene regulation, this parameter is often associated with the required number of ligand binding sites which must be occupied in order to initiate transcription. In particular, components of $\hill$ are often assumed to be integer valued. Moreover, these integers are assumed to be ``small'' (e.g.
the bounds $1 \leq \hill_i \leq 10$ are considered quite generous in applications). However, even if these Hill coefficients can be estimated or even assumed, this often leaves an enormous number of parameters which cannot be fixed. This in turn raises questions about how reliably the observed dynamics of the reduced Hill model aligns with reality. \red{EQ: how is this connected to the curse of dimensionality?} The second technique is to randomly sample parameters. Assume that for a given parameter, $\lambda \in \Lambda$, it is feasible to study the dynamics of $f_\lambda$. Then one considers a compact subset, $K \subset \Lambda$, and repeats this study for a ``dense'' collection of parameters ``covering'' $K$.\red{Many quote marks} One hopes that the dynamics observed at these samples is representative of the dynamics for $f$ across all of $\Lambda$. In this case the main difficulty is sampling effectively and efficiently. {\em Effective} sampling is achieved, first, by choosing $K$ large enough to cover the parameter regions which are biologically relevant for the GRN and, then, by sampling at a density which is sufficient to observe all possible dynamic phenotypes of interest at some scale. On the other hand, we consider sampling to be {\em efficient} if it is computationally feasible to study specific questions regarding the dynamics of a GRN. Evidently, these are competing goals and a tight balance must be struck between effectiveness and efficiency. As a heuristic demonstration of the difficulty in sampling both effectively and efficiently, consider the simple case where we define $K := [0, r]^{N+4M}$ for a single parameter, $r > 0$. The difficulty estimating parameters mentioned previously suggests that we must choose $r$ ``large'' to be effective.
Suppose we wish to sample each interval factor uniformly with a mesh size of $\epsilon$; then the corresponding uniform covering of $K$ consists of approximately $(\frac{r}{\epsilon})^{N + 4M}$ parameters for which we must study the dynamics of $f_\lambda$. To further illustrate the problem, suppose we choose sampling parameters so that $\frac{r}{\epsilon} = 10$, then effectively sampling the GRN in Figure {some huge GRN in the introduction} would require studying the dynamics of $f_\lambda$ for [AN ABSURDLY INFEASIBLE NUMBER OF PARAMETERS]. || The first approach is based on reducing the dimension of the parameter space by fixing some of the parameter values. Considering the difficulty in determining parameter values, this method clearly restricts the predictive ability of the model and its generality. The second competing approach is to accept the high dimensionality of the parameter space and sample on all of it. While this method does not restrict the model to any specific cases, concerns arise about how representative the observed dynamics are, since the sample size cannot grow exponentially with the dimension of the parameter space. [more?] >> Roughly speaking, our approach is a development of the sampling approach: we employ combinatorial techniques in order to reduce the subset of $\Lambda$ in which we sample, making our approach more computationally accessible than the standard sampling method. \subsection{Combinatorial dynamics and DSGRN} \label{sec:switching_systems_and_DSGRN} \longcorrl SK: Need to add review of DSGRN basics.\\ SK: I removed all discussion of switching systems << Next, we define the relationship between switching systems and Hill models. \begin{definition} \label{def:switching_model} Consider a Hill function with parameters, $\setof{\ell, \delta, \theta, \hill}$.
We define the positive and negative {\em switch} response functions by the pointwise limits \[ S^\pm := \lim\limits_{\hill \to \infty} H^\pm \] which depend on the remaining parameters, $\setof{\ell, \delta, \theta}$. In particular, we have the explicit formula \begin{equation} \label{eq:switch_reponse_formulas} S^\pm(x) = \begin{cases} \ell & \pm x > \pm \theta \\ \ell + \delta \qquad & \text{otherwise}. \end{cases} \end{equation} If $f : X \times \Lambda \to X$ is a given Hill model, there is a corresponding {\em switching} model, denoted by $\tilde f : X \times \Lambda \to X$, defined by \[ \tilde f_\lambda(x) = \cL_\lambda(x) + \cS_\lambda(x) \] where the linear part is as defined for $f$, and the nonlinear part is defined by the formula \begin{equation} \label{eq:nonlinear_part_switching} \cS_i(x) = p_i \paren*{S^*_{i,1}(x_1), \dotsc, S^*_{i,N}(x_N)}, \end{equation} where $S^*_{i,j}$ is the switching response function obtained from $H^*_{i,j}$. By comparing Equations \eqref{eq:nonlinear_coordinate} and \eqref{eq:nonlinear_part_switching} we observe that a switching model is the vector field obtained from a Hill model as all Hill coefficients tend to infinity. \end{definition} Switching models were first studied in \cite{glass_kauffman} and have since been widely applied to the study of gene regulation \cite{,}. Observe that $\tilde f$ is still a nonlinear model; however, for fixed $\lambda$, $\cS_\lambda$ is a simple function and thus there are efficient techniques for studying the global dynamics of such a model. We briefly review these techniques below. || Hill models represent a class of ODEs which are appropriate for modeling many gene regulation networks. From Definition \ref{def:hill_function_response}, we notice that, for the Hill coefficient $d$ tending to infinity, the Hill response $H^\pm(x)$ tends to a step function $S^\pm(x)$. Note that the step function still depends on the parameters $\{\ell, \delta, \theta\}$.
It is then of interest to consider the {\em switching system}, the ODE system defined by replacing each Hill response $H^\pm(x; \ell, \delta, \theta, d)$ by its discontinuous limit $S^\pm(x;\ell, \delta, \theta)$. DSGRN considers the dynamical behaviour of the discontinuous step function system and returns a subdivision of parameter space into parameter regions, where each parameter region is associated with a Morse graph describing the dynamics of the switching system for all parameters in that region. Each parameter region is given as a semialgebraic set, that is, the subset of $\rr^N$ of all parameters satisfying a set of algebraic constraints. This representation of the dynamics is purely combinatorial, and detecting the bifurcations occurring at the boundaries between parameter regions is the subject of ongoing study. Assuming there is a given dynamical behaviour we are particularly interested in, and that DSGRN finds such behaviour for a given parameter region, it is of interest to ask if it is possible to extract information about the smooth system, that is, about $d<\infty$, from the DSGRN results. In particular, in biological applications $d$ is expected to be quite small, often $d<5$. In this paper, we present a technique to extract information about the dynamical behaviour of the smooth dynamical system \eqref{eq:ODE_model} from the combinatorial information given by DSGRN. In this context, our main result is that DSGRN parameter regions remain relevant for low values of the Hill coefficient $d$, and for almost all values of $d$ they present a good approximation of the parameter regions that sustain the dynamics of interest. >> \section{Introduction} In applications, it is often of interest to generate data that samples multiple regions of interest. We assume the union of these regions covers $\rr^N$.
In particular, we want our sampling to satisfy two conditions: \begin{itemize} \item the points are iid, \item each region is covered by roughly the same number of points. \end{itemize} We will make these requirements more precise with the introduction of some definitions. This problem appears in a variety of situations, but we will concentrate in particular on the output of DSGRN. This program takes as input a dynamical system with parameters in $\rr^N_+$ and returns a tiling of $\rr^N_+$ by semi-algebraic sets such that all parameters within each semi-algebraic set sustain the same dynamics. It is now of interest to sample the dynamics, that is, to create a large number of parameters that covers the given semi-algebraic sets. To be able to sample all dynamics detected by DSGRN, we require the dataset of parameters to cover each parameter region with (almost) equal likelihood. \section{Background} Let $\rr^N_+$ be tiled by semi-algebraic sets $\mathcal{P} = \{p_1,p_2,\dots, p_n\}$. We expect that each semi-algebraic set is non-empty, following the results by [Shane and Lun]. Then, we want to create a distribution $\mathcal{F}$ such that if $x$ is sampled from $\mathcal{F}$, written $x\sim \mathcal{F}$, then $\mathbb{P}(x\in p_i) \approx \frac{1}{n}$ for all $i$. We define $R: \rr^N_+ \rightarrow \{1, \dots, n\}$ as the function that associates with each point $x\in \rr^N_+$ the unique index $i$ such that $x \in p_i$. Since $\mathcal{P}$ is a tiling, $i$ is unique, as $p_i \cap p_j = \emptyset$ for any $i\neq j$. \section{Idea} We assume that, given any value $x\in \rr^N_+$, it is computationally possible to compute $R(x)$. We expect this computation to be efficient.
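As a toy illustration of such a function $R$ (the threshold values below are arbitrary, not DSGRN output), consider a tiling of $\rr^2_+$ into the $9$ rectangles cut out by two thresholds per coordinate; computing $R(x)$ then reduces to one binary search per coordinate:

```python
import bisect

# Hypothetical thresholds per coordinate; they cut R^2_+ into a 3x3 grid
# of 9 rectangular regions, indexed 0..8.
THETAS = [[1.0, 2.0], [1.0, 2.0]]

def region_index(x):
    """R(x): the unique index of the rectangle containing x (points on a
    boundary, a measure-zero set, are assigned to the adjacent upper region)."""
    idx = 0
    for coord, thetas in zip(x, THETAS):
        idx = idx * (len(thetas) + 1) + bisect.bisect_right(thetas, coord)
    return idx

assert region_index((0.5, 0.5)) == 0   # bottom-left rectangle
assert region_index((2.5, 2.5)) == 8   # top-right rectangle
```

For genuine DSGRN regions the membership test is a set of algebraic inequalities rather than coordinate thresholds, but the interface, a cheap map from a point to a region index, is the same.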
Given a distribution $\mathcal{F}$ and an iid sample $x_1, \dots, x_k$ drawn from $\mathcal{F}$, we can count how many of these samples belong to each semi-algebraic set, first defining the boolean function $$ b(x,i) = \begin{cases} 1 \quad & \text{if } R(x) = i,\\ 0 & \text{otherwise,} \end{cases} $$ then defining the counting function $$ c_i(x_1, \dots, x_k) = \sum_{j=1}^k b(x_j, i). $$ For ease of writing, we combine all counting functions in an array $C(x_1, \dots, x_k) = ( c_1, c_2, \dots, c_n)$, and we define the \emph{scoring} of the distribution $\mathcal{F}$ as \begin{equation}\label{eq:scoring} S(\mathcal{F}) = \lim_{k\rightarrow\infty} \frac{\min C(x_1,\dots, x_k)}{\max C(x_1,\dots, x_k)}. \end{equation} It is possible to view this scoring in another way. Since $x$ is random, $R(x)$ is itself a random variable, distributed according to an unknown distribution over $\{1, \dots, n\}$. Then, we can consider the discrete probability distribution $\mathcal{F}_n$ of $R$ over $\{1, \dots, n\}$, that is, a list of positive real values $v_1, v_2, \dots, v_n$ summing to 1. The scoring is then the ratio of the smallest of these probabilities to the largest. Note that $\mathcal{F}_n$ depends both on the probability distribution $\mathcal{F}$ and on the tiling $\mathcal{P}$. It is possible to approximate the scoring of the distribution $\mathcal{F}$ by truncating the limit presented in \eqref{eq:scoring} and thus considering a finite sample $x_1, \dots, x_k$. We call this the \emph{computable score} of size $k$, using the notation $S_k(\mathcal{F})$. We have thus constructed a map from a tiling and a distribution to a computable quantity $S_k \in [0,1]$. Note that the computable score of size $k$ is itself a random variable, whose variance tends to 0 as $k$ tends to infinity.
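A minimal sketch of the counting functions and the computable score (the names and the one-dimensional toy tiling are ours, for illustration only):

```python
import random

def counts(samples, region_index, n):
    """The array C(x_1, ..., x_k): how many samples land in each of n regions."""
    c = [0] * n
    for x in samples:
        c[region_index(x)] += 1
    return c

def score(samples, region_index, n):
    """Computable score S_k: smallest count over largest count, in [0, 1]."""
    c = counts(samples, region_index, n)
    return min(c) / max(c) if max(c) > 0 else 0.0

# Toy example: [0, 1) tiled into n = 4 equal intervals, sampled uniformly;
# the score should then be close to 1.
random.seed(0)
n = 4
region = lambda x: min(int(x * n), n - 1)
xs = [random.random() for _ in range(20000)]
assert 0.9 < score(xs, region, n) <= 1.0
```

Skewing either the tiling or the distribution drives the min/max ratio toward 0, which is exactly the behaviour the scoring is meant to penalize.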
Thus, to have a reliable representation of the scoring of $\mathcal{F}$, one needs to choose $k$ large enough. Now, the problem of finding the ``best'' distribution, as discussed in the introduction, becomes the problem of maximizing the scoring of the distribution: we have at hand a method to rank distributions. Consider now how the distribution $\mathcal{F}$ could itself be determined by an array of coefficients, such as mean and variance for the Gaussian distribution. If that is the case, we can reformulate the problem of finding the best distribution into the problem of finding the best coefficients. Let us formulate this idea in a more rigorous fashion. To span $\rr_+$, let us consider the Fisher distribution, also called the $F$-distribution. The Fisher distribution depends on two coefficients, usually denoted by $d_1$ and $d_2$. Then, to sample $\rr^N_+$, we will use $N$ independent Fisher distributions, each with different coefficients $(d_1^j,d_2^j)$, for $j = 1,\dots, N$. The coefficients $d = \left((d_1^j,d_2^j)_{j = 1,\dots, N}\right)\in \rr^{2N}$ fully determine the probability distribution on $\rr^N_+$. Thus, we associate each $d\in \rr^{2N}$ with the computable score of size $k$ of the Fisher distribution defined by $d$. We have successfully created a function \begin{align*} f : \rr^{2N} &\rightarrow [0,1]\\ d&\mapsto \frac{\min C(x_1,\dots, x_k)}{\max C(x_1,\dots, x_k)}, \quad x_1, \dots, x_k \sim F(d_1, d_2) \times F(d_3, d_4) \times \dots \times F(d_{2N-1}, d_{2N}) \end{align*} that ranks distributions. The final element of our method consists of applying a minimization algorithm to $1-f$ on $\mathbb{R}^{2N}$ from a random starting point. Note that, for $k$ tending to infinity, $f$ is a smooth function, but for finite $k$ it is a random function and thus has no expected smoothness. Hence, when considering which minimization algorithm to apply to $1-f$, we must take into account the limitations imposed by the form of $f$ itself.
In particular, $f$ lacks derivatives of any order. Assuming $k$ is large enough that the variance of $f$ is small, we can use the Nelder-Mead algorithm. This algorithm has been developed specifically for $C^0$ functions and does not require knowledge of a derivative to run. In our implementation, we used the Nelder-Mead algorithm built into \texttt{scipy.optimize} to run our computations. With the idea of shifting the problem from finding a random distribution to finding optimal coefficients defining a random distribution, it is then possible to consider a variety of different distributions. In our case, the Fisher distribution is interesting because it is defined over the unbounded interval $[0,\infty)$, which is the space our unknown parameters live in. The main constraint of the Fisher distribution is that it is a one-dimensional distribution. To include correlation within our generated variables, we need to consider some other distribution. In this paper, we consider the square of a multivariate Gaussian distribution. This distribution takes into consideration the correlation between the various dimensions and is properly defined over $\mathbb{R}^N_+$, but the unknowns now need to include the correlation matrix, making the number of unknowns $N^2+N$. This has clear computational implications, but allows us to avoid ``outliers'', that is, generated parameters with entries of different orders of magnitude. \section{Tests and conclusion} \subsection{DSGRN - again!} Why are we interested in semi-algebraic sets? \subsection{The 9 regions of the Toggle Switch} Why the toggle switch?
\subsection{Extensive testing on the Toggle Switch} At least: \begin{itemize} \item comparison with the uniform distribution \item possible use in determining the relative areas of the regions \item different starting points \item discuss lack of convergence \end{itemize} \subsection{Another example: the Soggle Twitch and a fast overview of the results} \subsection{Extreme result} Consider a situation where finding even just one sample in each region would take thousands of samples: then it is important to choose $k$ wisely. Show off a bit! \subsection{Equilibria} The equilibria of this system are denoted by $(\hat x, \hat y)$ and must satisfy \begin{equation} \gamma_1 \hat x = \ell_1 + \delta_1 \frac{\theta_2^{n_1}}{\theta_2^{n_1} + \hat{y}^{n_1}} \qquad \gamma_2 \hat{y} = \ell_2 + \delta_2 \frac{\theta_1^{n_2}}{\theta_1^{n_2} + \hat{x}^{n_2}} . \end{equation} If $(\hat x, \hat y)$ is hyperbolic, then its stability is determined by the eigenvalues of the linearization given by \begin{equation} Df(\hat x, \hat y) = \begin{pmatrix} -\gamma_1 & -\frac{n_1 \delta_1 \theta_2^{n_1} \hat{y}^{n_1 -1}}{\paren*{\theta_2^{n_1} + \hat{y}^{n_1}}^2} \\ -\frac{n_2 \delta_2 \theta_1^{n_2} \hat{x}^{n_2 - 1}}{\paren*{\theta_1^{n_2} + \hat{x}^{n_2}}^2} & -\gamma_2 \end{pmatrix} \end{equation} Equivalently, $(\hat x, \hat y)$ are the positive real roots of the map $F : \rr^2 \to \rr^2$ defined by \begin{align} F(x) & = \begin{pmatrix} \gamma_1 x_1 x_2^{n_1} - \ell_1 x_2^{n_1} + \gamma_1 \theta_2^{n_1} x_1 - \ell_1 \theta_2^{n_1} - \delta_1\theta_2^{n_1} \\ \gamma_2 x_1^{n_2} x_2 - \ell_2 x_1^{n_2} + \gamma_2 \theta_1^{n_2} x_2 - \ell_2 \theta_1^{n_2} - \delta_2\theta_1^{n_2} \\ \end{pmatrix} \\ & = \begin{pmatrix} a_3 x_1 x_2^n - a_2 x_2^n + a_1 x_1 - a_0 \\ b_3 x_1^m x_2 - b_2 x_1^m + b_1 x_2 - b_0 \end{pmatrix} \end{align} where the coefficients are determined uniquely by the parameters.
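For a concrete instance, the equilibrium equations can be solved numerically with Newton's method. The sketch below uses symmetric, purely illustrative parameter values ($\gamma_i = \theta_i = 1$, $\ell_i = 1/2$, $\delta_i = 1$, $n_1 = n_2 = 2$), chosen so that $(\hat x, \hat y) = (1,1)$ is an equilibrium; the $2 \times 2$ Jacobian mirrors the linearization written above:

```python
def hill_minus(u, ell, delta, theta, n):
    """Negative Hill response ell + delta * theta^n / (theta^n + u^n)."""
    return ell + delta * theta**n / (theta**n + u**n)

def hill_minus_prime(u, ell, delta, theta, n):
    """Derivative of the negative Hill response with respect to u."""
    return -delta * n * theta**n * u**(n - 1) / (theta**n + u**n) ** 2

# Symmetric illustrative parameters; (1, 1) solves the equilibrium equations.
g1 = g2 = 1.0
ell, delta, theta, n = 0.5, 1.0, 1.0, 2

def newton(x, y, steps=30):
    """Newton's method for G(x, y) = (g1*x - H^-(y), g2*y - H^-(x)) = 0,
    solving the 2x2 linear system by Cramer's rule at each step."""
    for _ in range(steps):
        gx = g1 * x - hill_minus(y, ell, delta, theta, n)
        gy = g2 * y - hill_minus(x, ell, delta, theta, n)
        a, b = g1, -hill_minus_prime(y, ell, delta, theta, n)
        c, e = -hill_minus_prime(x, ell, delta, theta, n), g2
        det = a * e - b * c
        x, y = x - (e * gx - b * gy) / det, y - (a * gy - c * gx) / det
    return x, y

xeq, yeq = newton(1.5, 1.5)
assert abs(xeq - 1.0) < 1e-10 and abs(yeq - 1.0) < 1e-10
```

For asymmetric parameters the same loop applies; one would simply restart from several initial points to capture all (up to three) equilibria of the toggle switch.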
We compute the derivative of $F$ at a point by \begin{equation} DF(x) = \begin{pmatrix} a_3x_2^n + a_1 & nx_2^{n-1} \paren*{a_3x_1 - a_2} \\ mx_1^{m-1}\paren*{b_3x_2-b_2} & b_3x_1^m + b_1 \end{pmatrix} \end{equation} \subsection{Hill models} \label{sec:hill_models} In this section we define a Hill model in terms of Equation \eqref{eq:ODE_model}. Each state variable measures a non-negative real quantity and therefore we start by taking our state space to be $X = [0, \infty)^N$, and we define the vector field on the tangent space at $x\in X$, which is isomorphic to $\rr^N$. For the moment we will assume that $\Lambda$ is given and that $\lambda \in \Lambda$ is fixed. Consequently, we will suppress dependence on this parameter for the remainder of this section by writing $f, \cL,$ and $\cH$ in place of $f_\lambda, \cL_\lambda,$ and $ \cH_\lambda$ respectively. We will discuss our choice for $\Lambda$ in detail in Section \ref{sec:parameter_space}. Next, we describe the vector field corresponding to a Hill model, which amounts to defining $\cL$ and $ \cH$. Our first assumption is that for $1 \leq i \leq N$, $x_i$ decays exponentially with constant rate $\gamma_i > 0$. Specifically, the decay of each protein/gene depends on the parameter, $\gamma_i$, but does not depend on the state. Therefore, $\cL$ has the form \[ \cL(x) = -\Gamma x \] where $\Gamma$ is a diagonal $N \times N$ matrix with diagonal entries $\Gamma_{ii} := \gamma_i$. \begin{remark} Our assumptions on $\cL$ preclude modeling, among other phenomena, targeted degradation effects due to ubiquitination or phosphorylation. However, these assumptions are motivated by the applications of interest in this work and are not a limitation of our analysis. Indeed, the numerical techniques proposed in this work apply with almost no modification to the case that $\cL(x) = -\Gamma(x)x$ where $\Gamma(x)$ is a state dependent, positive-definite matrix.
The combinatorial techniques on the other hand do inherently require understanding the structure of the linear term to a greater extent. \end{remark} The description of the nonlinear part of $f$ is a bit more technical. We begin with several definitions. \begin{definition} \label{def:interaction_function} A polynomial, $p \in \rr[z_1, \dotsc, z_N]$ is called an {\em interaction function} if it has the form \[ p = \prod_{m = 1}^{q} p_m, \qquad \text{where} \quad p_m = \sum_{j \in I_m} z_j, \] and $\setof{I_1, \dotsc, I_q}$ is a partition of the integers, $\setof{1, \dotsc, N}$. For $1 \leq m \leq q$ we refer to the linear polynomial, $p_m$, as the $m^{\rm th}$ {\em summand} of $p$. \end{definition} \begin{definition} \label{def:hill_function_response} Let $H^+ : [0, \infty) \to [0, \infty)$ denote the {\em positive Hill response} function defined by the formula \[ H^+\paren*{x} := \ell + \delta \frac{x^\hill}{\theta^\hill + x^\hill} \] and let $H^- : [0, \infty) \to [0, \infty)$ denote the {\em negative Hill response} function defined by the formula \[ H^{-}\paren*{x} := \ell + \delta \frac{\theta^\hill}{\theta^\hill + x^\hill} \] where $\setof{\ell, \delta, \theta, \hill}$ are non-negative real parameters. When convenient, we let $H^*$ denote an arbitrary Hill function of either sign. The {\em Hill coefficient}\footnote{ The Hill coefficient is also called the {\em Hill exponent} in the literature.} is the parameter denoted by $\hill$ which has a central role in our analysis. Observe that $H^+$ is monotonically increasing and satisfies \[ H^+(0) = \ell \qquad \lim\limits_{x \to \infty} H^+(x) = \ell + \delta. \] Similarly, $H^-$ is monotonically decreasing and satisfies \[ H^{-}(0) = \ell + \delta \qquad \lim\limits_{x \to \infty} H^-(x) = \ell. \] Observe that the zero function is also a Hill function which can be obtained by setting $\ell = 0 = \delta$ in a Hill function of either sign. We refer to this as the {\em trivial} Hill function.
\end{definition} \corrc SK: We might want to modify the parameter restrictions to enforce $d \geq 1$. <<>> With these terms defined we turn to explicitly describing $\cH$ for a Hill model. We require each coordinate of $\cH$ to have the following structure. Fix $i \in \setof{1, \dotsc, N}$ and consider a single coordinate of $f$ which is a scalar of the form, $f_i : X \to \rr$, describing the time derivative of the state variable, $x_i$. This state variable satisfies the scalar differential equation \begin{equation} \label{eq:hill_model_coordinate} \dot x_i = f_i(x) = -\gamma_i x_i + \cH_i(x) \end{equation} where $\cH_i$ is the $i^{\rm th}$ coordinate of $\cH$. We say that $f$ is a {\em Hill model} if the coordinates of $\cH$ have the form \begin{equation} \label{eq:nonlinear_coordinate} \cH_i(x) = p_i \paren*{H^*_{i,1}(x_1), \dotsc, H^*_{i,N}(x_N)} \end{equation} where $p_i$ is an interaction function and $H^*_{i,j}$ is a Hill function for $1 \leq j \leq N$. \corrc I am not sure how much of the following discussion belongs to this paper. I have left it in because we might want to make use of some of it in the concluding remarks. KM <<>> Our use of Hill functions in ODE models for gene regulation is not novel \cite{ }. Hill functions arise naturally when analyzing slow-fast dynamics of enzymatic reactions. Specifically, in the case that $p_i$ is linear for all $1 \leq i \leq N$, the associated Hill model can be semi-rigorously derived as a reasonable approximation for an ODE modeling mass action kinetics satisfying a quasi-steady state condition \cite{ }. However, it is important to note that arbitrary ODEs comprised of compositions of Hill functions may not fit our definition of a Hill model (see Remark \ref{rem:other_Hill_models}). Additionally, observe that our definition of a Hill model is used in other contexts with slightly different names. 
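The boundary values and monotonicity stated in Definition \ref{def:hill_function_response} are easy to verify numerically; a small sketch with arbitrary parameter values:

```python
def hill_plus(x, ell, delta, theta, d):
    """Positive Hill response H^+(x) = ell + delta * x^d / (theta^d + x^d)."""
    return ell + delta * x**d / (theta**d + x**d)

def hill_minus(x, ell, delta, theta, d):
    """Negative Hill response H^-(x) = ell + delta * theta^d / (theta^d + x^d)."""
    return ell + delta * theta**d / (theta**d + x**d)

ell, delta, theta, d = 0.2, 1.3, 2.0, 4
assert hill_plus(0.0, ell, delta, theta, d) == ell            # H^+(0) = ell
assert hill_minus(0.0, ell, delta, theta, d) == ell + delta   # H^-(0) = ell + delta
# Monotonicity on a coarse grid, and the limit of H^+ at infinity:
xs = [0.1 * k for k in range(1, 100)]
vals = [hill_plus(x, ell, delta, theta, d) for x in xs]
assert all(a < b for a, b in zip(vals, vals[1:]))
assert abs(hill_plus(1e6, ell, delta, theta, d) - (ell + delta)) < 1e-6
```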
The response functions in Definition \ref{def:hill_function_response} are equivalent to the Holling type 3 response function which is widely used in ecological modeling of trophic networks. Additionally, a Hill function with Hill coefficient, $d =1$, is also known as the Michaelis-Menten response function or the Holling type 2 response function in other contexts. There is a natural way to associate a Hill model with a given GRN topology, specified as a digraph whose edges are labeled as up or down regulation. If there is a directed edge, $(j,i)$ in the GRN, then a nontrivial Hill function, $H^*_{i,j}$, depending on the state variable $x_j$, appears in the expression which defines $\cH_i$. If gene $i$ is up-regulated by gene $j$, then $H^*_{i,j}$ is a positive Hill function and consequently, $\cH_i$ increases monotonically as a function of $x_j$. Similarly, if gene $i$ is down-regulated by gene $j$, we take $H^*_{i,j}$ to be a negative Hill function and $\cH_i$ is a decreasing function of $x_j$ as expected. Finally, if gene $i$ is not regulated by gene $j$, then we define $H^*_{i,j}$ to be a trivial Hill function. Definition \ref{def:interaction_function} implies that $p_i$ is monotonically increasing in all variables and therefore, $\cH_i$ is an increasing function of $x_j$ in case of up-regulation, and a decreasing function of $x_j$ in case of down-regulation. Observe that this framework does not exclude the possibility that gene $i$ regulates itself via a nonlinear feedback mechanism. In this case, $\cH_i$ is still a monotone function of $x_i$; however, $f_i$ need not be monotone with respect to $x_i$ due to contributions from $\cL$. By construction, every nonlinear term in a Hill model is a monotone function with respect to each state variable. Despite this assumption, the definition of an interaction function and the parameters associated with each individual Hill function provide a great deal of flexibility in tuning a Hill model.
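To make the association concrete, the sketch below assembles $\cH_i$ for one hypothetical node with three regulators (the names, parameter values, and partition are ours, purely for illustration; trivial Hill functions are simply omitted from the regulator list):

```python
from math import prod

def hill(x, ell, delta, theta, d, sign):
    """Hill response of either sign: sign=+1 for up-regulation (H^+),
    sign=-1 for down-regulation (H^-)."""
    frac = x**d / (theta**d + x**d)
    return ell + delta * (frac if sign > 0 else 1.0 - frac)

def nonlinearity(x, regulators, partition):
    """H_i(x) = p_i(H*_{i,1}(x_1), ..., H*_{i,N}(x_N)) for a single node i.

    regulators: dict j -> (ell, delta, theta, d, sign) for each j in G_i
    partition:  the summands I_m of the interaction function p_i
    """
    return prod(sum(hill(x[j], *regulators[j]) for j in block)
                for block in partition)

# Hypothetical node regulated by genes 0, 2 (activation) and 1 (repression),
# with interaction function p(z) = (z_0 + z_2) * z_1:
regulators = {0: (0.1, 1.0, 0.5, 3, +1),
              1: (0.2, 0.8, 1.0, 2, -1),
              2: (0.0, 0.5, 2.0, 4, +1)}
value = nonlinearity((1.0, 1.0, 1.0), regulators, [[0, 2], [1]])
assert value > 0.0
```

Because every factor of the product is a sum of monotone responses, the assembled $\cH_i$ inherits the monotonicity in each coordinate discussed above.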
However, with this flexibility comes complexity as there are many choices to be made for both the Hill function parameters as well as the interaction types. This complexity typically prohibits any rigorous global study of the dynamics, including equilibria and saddle-node bifurcations, for nontrivial networks. \begin{remark} \label{rem:other_Hill_models} It is important to distinguish our definition of a Hill model from another class of models which also appears in the literature and utilizes Hill functions. These models are defined by first restricting the response functions in Definition \ref{def:hill_function_response} to the case that $\ell = 0$. We will refer to this case as a {\em type 2} Hill function which depends only on the parameters $\setof{\delta, \theta, \hill}$. Now, for $1 \leq i \leq N$, define the {\em basal production rate} of $x_i$ to be $b_i >0$. One then considers an ODE model for a GRN given by a system of ODEs of the form \[ \dot x = -\Gamma x + b + \cH(x), \qquad b = (b_1, \dotsc, b_N) \in \rr^N \] where we now assume that $\cH$ is a composition of an interaction function with type 2 Hill functions. We refer to this as a type 2 Hill model and the definition presented in this work as a type 1 Hill model. Observe that the appearance of the inhomogeneous term in this ODE is not more general than the nonlinearity appearing in a type 1 Hill model. In particular, we can extract the inhomogeneous term from Equation \eqref{eq:hill_model_coordinate} by defining $b_i = p_i(\ell_{i,1}, \dotsc, \ell_{i,N})$ and revising the definition of $\cH$ appropriately. We refer to this as a {\em generalized basal production} as it allows each gene which regulates gene $i$ to contribute a term independently to the basal production of gene $i$. However, it is important to note that type 1 Hill models are also not more general than type 2 Hill models.
In particular, suppose $U_1, U_2$ are the collections of Hill models of type $1$ and $2$ respectively which are considered as subsets of $C(\rr^N, \rr^N)$ with the uniform topology. Then one can show that \[ \inf \setof*{\norm{f_1 - f_2}_\infty : f_1 \in U_1, f_2 \in U_2} = 0 \quad \text{if and only if} \ N = 1. \] In other words, models in $U_1$ do not uniformly approximate models in $U_2$ and vice versa. With this in mind we make two observations. The first is that determining whether or not a particular dynamical phenotype is exhibited by one type of model but not the other does not appear to be tractable. Even finding examples which have different dynamics for networks of interest seems to be difficult. The purpose of this discussion is only to clearly distinguish between these two types of models, both of which are rightfully referred to as Hill models in the literature. However, comparing them is outside the scope of this work. The second observation is that the methods employed in this work apply equally well in principle to type 2 Hill models. In this work our restriction to type 1 Hill models is based only on the fact that both DSGRN and HillCont are implemented with these models. In fact, the methodology itself can be applied more broadly to other models based on monotone response functions such as Boolean, soft Heaviside, or leaky ReLU models. Comparing the dynamics of these models to those produced by DSGRN is an active area of research, as is generalizing the DSGRN methodology to non-monotone models. \corrc SK: Maybe I'm saying way too much in this remark. Alternatively, if we want to include all of this discussion then some of it probably belongs in the introduction or it might be an entire subsubsection instead. \\ EQ: it is a very long remark, I would indeed move most either in the introduction or in a separate subsection.
I don't think we should cut it though <<>> \end{remark} \subsection{The parameter space of a Hill model} \label{sec:parameter_space} In this section we define the parameter space for a Hill model. Specifically, if $f : X \times \Lambda \to T_X$ is a Hill model associated with a gene regulatory network comprised of $N$ genes, we consider the question of choosing an appropriate definition for $\Lambda$. We start by letting $G_i \subseteq \setof{1,\dotsc, N}$ denote the set of genes which regulate gene $i$. In terms of the gene regulatory network, $G_i$ is the set of genes with a directed edge targeting $i$. From the definition of a Hill model, for each $1 \leq i \leq N$ and for each $j \in G_i$, $f_i$ depends on an associated Hill function, $H^{*}_{i,j}$, which depends on $x_j$ as well as $4$ positive parameters, $\setof{\ell_{i,j}, \delta_{i,j}, \theta_{i,j}, \hill_{i,j}}$. \corrc SK: If we change any constraints e.g.~$d$ in Definition \ref{def:hill_function_response}, we need to change this sentence accordingly.<<>> Observe that if $j \notin G_i$, then $H^*_{i,j}$ is the trivial Hill function which still technically contributes $4$ additional parameters. However, in this case $\ell_{i,j} = 0 = \delta_{i,j}$ and $\theta_{i,j}, d_{i,j}$ are ``inert''. Moreover, if we assign $j$ to $I_1$ (i.e. we assume $H^*_{i,j}$ appears in the first summand of $p_i$), then $\cH_i$ has the form required by Equation \eqref{eq:nonlinear_coordinate} while also having no dependence on $x_j$. Therefore, without loss of generality we are free to ignore the parameters contributed by each of these trivial Hill functions. Consequently, interactions arising from the nonlinear regulation of gene $i$ contribute exactly $4\#G_i$ parameters to our model. Let \[ M := \sum_{i = 1}^{N} \# G_i \] denote the total number of nonlinear regulatory interactions (i.e.~the number of edges in the gene regulatory network). 
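To make this bookkeeping concrete, the count can be tallied directly from an edge list (a hypothetical helper, not part of any existing package):

```python
def parameter_count(N, edges):
    """Number of Hill-model parameters: N decay rates plus 4 parameters
    (ell, delta, theta, d) for each of the M = len(edges) regulatory edges."""
    return N + 4 * len(edges)

# Toggle switch: two genes repressing each other, so N = 2, M = 2,
# giving N + 4M = 10 parameters.
assert parameter_count(2, [(1, 2), (2, 1)]) == 10
```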
Recalling that $\cL$ contributes $N$ linear decay parameters, we obtain a final count of $N + 4M$ real parameters in this Hill model. Under the biologically relevant assumption that each nontrivial Hill function has strictly positive parameters, we may simply define $\Lambda = (0, \infty)^{N + 4M}$. Specifically, we consider $\Lambda$ as a subset of $\rr^{N + 4M}$ and write parameters as vectors, $\lambda \in \rr^{N+4M}$, with respect to some fixed (yet unspecified), ordered basis. Occasionally, the order of these parameters or their role in the Hill model is relevant and we may also express parameter vectors in the form $\lambda = \paren*{\gamma, \ell, \delta, \theta, \hill}$ where $\gamma \in \rr^N$ and $\ell, \delta, \theta, \hill \in \rr^M$, again, with respect to some fixed but unspecified ordered basis of $\rr^N \times \rr^M \times \rr^M \times \rr^M \times \rr^M$. Having defined the parameter space, our first observation is that $\Lambda$ is typically ``high'' dimensional, even for ``simple'' gene regulatory network models, alluding to the complexity previously mentioned. The so called ``curse of dimension'' presents an immediate barrier to studying the global dynamics numerically, because the sample size for the parameter space must increase exponentially with the number of dimensions to maintain the same resolution. Given a particular GRN of interest, there are two common approaches to overcoming this barrier. Both approaches are relatively common in the literature when studying GRNs, so we briefly summarize them. \longcorrl EQ: let's make it shorter and clearer, too much unecessary explanations - maybe even in the introduction << The first method is to reduce the dimension by fixing some parameters. 
The main drawback in this approach is due to the fact that oftentimes parameters represent (either directly or by a proxy relationship) physical constants of interest which are either difficult or practically impossible to estimate, at other times the relationship between these physical constants and the parameters is itself unknown. The main exception to this are the Hill coefficients (i.e.~the components of $\hill$). In the context of gene regulation, this parameter is often associated with the required number of ligand binding sites which must be occupied in order to initiate transcription. In particular, components of $\hill$ are often assumed to be integer valued. Moreover, these integers are assumed to be ``small'' (e.g. the bounds $1 \leq d_i \leq 10$ are considered quite generous in applications). However, even if these Hill coefficients can be estimated or even assumed, this often leaves an enormous number of parameters which can not be fixed. This in turn raises questions about how reliably the observed dynamics of the reduced Hill model aligns with reality. \red{EQ: how is this connected to the curse of dimensionality?} The second technique is to randomly sample parameters. Assume that for a given parameter, $\lambda \in \Lambda$, it is feasible to study the dynamics of $f_\lambda$. Then one considers a compact subset, $K \subset \Lambda$, and repeats this study for a ``dense'' collection of parameters ``covering'' $K$.\red{Many quote marks} One hopes that the dynamics observed at these samples is representative of the dynamics for $f$ across all of $\Lambda$. In this case the main difficulty is sampling effectively and efficiently. {\em Effective} sampling is achieved,first, by choosing $K$ large enough to cover the parameter regions which are biologically relevant for the GRN and, then, by sampling at a density which is sufficient to observe all possible dynamic phenotypes of interest at some scale. 
On the other hand, we consider sampling to be {\em efficient} if it is computationally feasible to study specific questions regarding the dynamics of a GRN. Evidently, these are competing goals and a tight balance must be struck between effectiveness and efficiency. As a heuristic demonstration of the difficulty in sampling both effectively and efficiently, consider the simple case where we define $K := [0, r]^{N+4M}$ for a single parameter, $r > 0$. The difficulty estimating parameters mentioned previously suggests that we must choose $r$ ``large'' to be effective. Suppose we wish to sample each interval factor uniformly with a mesh size of $\epsilon$, then the corresponding uniform covering of $K$ is comprised of approximately $(\frac{r}{\epsilon})^{N + 4M}$ parameters for which we must study the dynamics of $f_\lambda$. To further illustrate the problem, suppose we choose sampling parameters so that $\frac{r}{\epsilon} = 10$, then effectively sampling the GRN in Figure {some huge GRN in the introduction} would require studying the dynamics of $f_\lambda$ for [AN ABSURDLY INFEASIBLE NUMBER OF PARAMETERS]. || The first approach is based on reducing the dimension of the parameter space by fixing some of the parameter values. Considering the difficulty in determining parameter values, this method clearly restrict the predictive ability of the model and its generality. The second competing approach is to accept the high dimensionality of the parameter space and sample on all of it. While this method does not restrict the model to any specific cases, concerns are raised on how representative is the found dynamics, considering the sample size cannot grow exponentially with the growth of the parameter space. [more?] 
>> Roughly speaking, our approach is a development of the sampling approach: we employ combinatorial techniques in order to reduce the subset of $\Lambda$ in which we sample, making our approach more computationally accessible than the standard sampling method. \subsection{Combinatorial dynamics and DSGRN} \label{sec:switching_systems_and_DSGRN} \longcorrl SK: Need to add review of DSGRN basics.\\ SK: I removed all discussion of switching systems << Next, we define the relationship between switching systems and Hill models. \begin{definition} \label{def:switching_model} Consider a Hill function with parameters, $\setof{\ell, \delta, \theta, \hill}$. We define the positive and negative {\em switch} response functions by the pointwise limits \[ S^\pm := \lim\limits_{\hill \to \infty} H^\pm \] which depend on the remaining parameters, $\setof{\ell, \delta, \theta}$. In particular, we have the explicit formula \begin{equation} \label{eq:switch_reponse_formulas} S^\pm(x) = \begin{cases} \ell + \delta & \pm x > \pm \theta \\ \ell \qquad & \text{otherwise}. \end{cases} \end{equation} If $f : X \times \Lambda \to X$ is a given Hill model, there is a corresponding {\em switching} model, denoted by $\tilde f : X \times \Lambda \to X$, defined by \[ \tilde f_\lambda(x) = \cL_\lambda(x) + \cS_\lambda(x) \] where the linear part is as defined for $f$, and the nonlinear part is defined by the formula \begin{equation} \label{eq:nonlinear_part_switching} \cS_i(x) = p_i \paren*{S^*_{i,1}(x_1), \dotsc, S^*_{i,N}(x_N)}, \end{equation} where $S^*_{i,j}$ is the switching response function obtained from $H^*_{i,j}$. By comparing Equations \eqref{eq:nonlinear_coordinate} and \eqref{eq:nonlinear_part_switching} we observe that a switching model is the vector field obtained from a Hill model as all Hill coefficients tend to infinity. \end{definition} Switching models were first studied in \cite{glass_kauffman} and have since been widely applied to the study of gene regulation \cite{,}.
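The pointwise limit in Definition \ref{def:switching_model} is easy to observe numerically. The following Python sketch is illustrative only; the parameter values are hypothetical, and the functions simply transcribe the negative Hill response and its switch limit.

```python
# Illustration of the pointwise limit defining the switching model: as
# d -> infinity, the negative Hill response H^- approaches the switch
# response S^-. All parameter values below are hypothetical.
def hill_minus(x, ell, delta, theta, d):
    """Negative Hill response: H^-(x) = ell + delta * theta^d / (theta^d + x^d)."""
    return ell + delta * theta**d / (theta**d + x**d)

def switch_minus(x, ell, delta, theta):
    """Negative switch response: ell + delta below the threshold, ell above it."""
    return ell if x > theta else ell + delta

ell, delta, theta = 0.5, 1.0, 1.0
for x in (0.5, 2.0):  # one point on each side of the threshold theta
    gap = abs(hill_minus(x, ell, delta, theta, d=200)
              - switch_minus(x, ell, delta, theta))
    print(gap < 1e-6)  # True on both sides of theta
```

Note that the convergence is pointwise but not uniform: near $x = \theta$ arbitrarily large $d$ is required to approach the discontinuous limit.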
Observe that $\tilde f$ is still a nonlinear model; however, for fixed $\lambda$, $\cS_\lambda$ is a simple function and thus there are efficient techniques for studying the global dynamics of such a model. We briefly review these techniques below. || Hill models represent a class of ODEs which are appropriate for modeling many gene regulation networks. From Definition \ref{def:hill_function_response}, we notice that, for the Hill coefficient $d$ tending to infinity, the Hill response $H^\pm(x)$ tends to a step function $S^\pm(x)$. Note that the step function still depends on the parameters $\{\ell, \delta, \theta\}$. It is then of interest to consider the {\em switching system}, the ODE system defined by replacing each Hill response $H^\pm(x; \ell, \delta, \theta, d)$ by its discontinuous limit $S^\pm(x;\ell, \delta, \theta)$. DSGRN considers the dynamical behaviour of the discontinuous step function system and returns a subdivision of parameter space into parameter regions, where each parameter region is associated to a Morse graph describing the dynamics of the switching system for all parameters in the parameter region. Each parameter region is a semialgebraic set, that is, the subset of all parameters satisfying a set of algebraic constraints. This representation of the dynamics is purely combinatorial, and detecting the bifurcations which occur at the boundaries between parameter regions is a topic of ongoing study. Assuming there is a given dynamical behaviour of particular interest, and that DSGRN finds this behaviour in a given parameter region, it is natural to ask whether it is possible to extract information about the smooth system, that is, about $d<\infty$, from the DSGRN results. In particular, in biological applications $d$ is expected to be quite small, often $d<5$.
In this paper, we present a technique to extract information about the dynamical behaviour of the smooth dynamical system \eqref{eq:ODE_model} \red{ or \eqref{eq:hill_model_coordinate}?} from the combinatorial information given by DSGRN. In this context, our main result is that DSGRN parameter regions remain relevant for low values of the Hill coefficient $d$, and for almost all values of $d$ they present a good approximation of the parameter regions that sustain the dynamics of interest. >> \subsection*{Ideas which may belong in this paper} \begin{itemize} \item Add Network 12 as an example. I will fix the HillModel implementation and we will demonstrate the techniques from this paper by showing that we can find saddle-nodes in regions where DSGRN says to look. No other analysis. \item Explore Hill coefficient level sets for the Toggle Switch. By fixing the Hill coefficient and using existing saddle-node points as an initial guess, we can try to continue along the SN manifold to a DSGRN parameter which has the fixed Hill coefficient and then visualize the projection of these level sets onto our planar coordinates.
\item study saddle nodes happening outside the center region (we have proof of existence) \item a more talkative explanation of why we even try the chi test by showing a heat map \item\textit{later} the chi test \end{itemize} \section{Introduction} \label{sec:introduction} \input{intro_dynamics.tex} \clearpage \section{Hill models} \label{sec:Hill_models} \input{hill_models_dyn2.tex} \clearpage \section{DSGRN} \label{sec:dsgrn} \input{dsgrn.tex} \clearpage \section{Equilibria} \label{sec:equilibria} \input{equilibria.tex} \clearpage \section{Saddle-node bifurcations} \label{sec:SN} \input{saddle_node.tex} \clearpage \section{Equilibria and saddle-node bifurcations} \label{sec:eq_and_SN} \input{bifs_dyn.tex} \clearpage \section{The toggle switch} \label{sec:toggle_switch} \input{toggle.tex} \clearpage \section{Conclusion and further work} \label{sec:conclusion} Maybe include this section, maybe not. \red{EQ: I think an overview section is always nice} \subsection{Analysis of the Toggle Switch Hill model} \longcorrl SK: I want to define the Hill model without any reduction first and then carry out the nondimensionalization and identification of Hill coefficients before doing any discussion of equilibria/saddle-nodes. << Following the Hill model construction defined in Section \ref{sec:Hill_models} for the Toggle Switch yields the corresponding Hill function model defined by \begin{equation} \label{eq:hill_toggle_switch} f(x) = -\Gamma x + \cH(x), \quad x \in X:= (0, \infty)^2 \end{equation} where \begin{equation} \Gamma := \begin{pmatrix} \gamma_1 & 0 \\ 0 & \gamma_2 \end{pmatrix}, \qquad \gamma = (\gamma_1, \gamma_2) \in (0, \infty)^2.
\end{equation} The nonlinear term is \begin{equation} \cH(x) := \begin{pmatrix} H_{1,2}^-(x_2; \ell_{1,2}, \delta_{1,2}, \theta_{1,2}, \hill) \\ H_{2,1}^-(x_1; \ell_{2,1}, \delta_{2,1}, \theta_{2,1}, \hill) \end{pmatrix} \\ = \begin{pmatrix} \ell_{1,2} + \delta_{1,2} \frac{\theta_{1,2}^\hill}{\theta_{1,2}^{\hill} + x_2^\hill} \\ \ell_{2,1} + \delta_{2,1} \frac{\theta_{2,1}^\hill}{\theta_{2,1}^{\hill} + x_1^\hill} \end{pmatrix}, \label{eq:hill_toggle_nonlinearity} \end{equation} where we have explicitly expressed the dependence of each Hill function on both the state variable and its associated parameters. We have also identified both Hill coefficients, i.e.\ we have imposed the constraint $d = d_{1,2} = d_{2,1}$. When convenient, we will collect the parameters associated to $f$ into a vector as in previous sections. We define the parameter vector \[ \lambda := \paren*{\gamma_1, \ell_{1,2}, \delta_{1,2}, \theta_{1,2}, \gamma_2, \ell_{2,1}, \delta_{2,1}, \theta_{2,1}, d} \in \rr^9 \] and we let $\Lambda := (0, \infty)^9 \subset \rr^9$ denote the parameter space of this Hill model. || Following the Hill model construction defined in Section \ref{sec:Hill_models} for the Toggle Switch yields the following Hill function model. We first observe that the Toggle Switch has two state variables of interest, denoted by $x_1, x_2$, and thus the appropriate phase space is $X := (0, \infty)^2$. For $i = 1,2$, the state variable $x_i$ is assigned a linear decay parameter, $\gamma_i$. In addition, each edge of the GRN is assigned a Hill function, which is negative in both cases since both edges of the Toggle Switch model are repressing. These Hill functions, denoted by $H^-_{1,2}$ and $H^-_{2,1}$, contribute the additional 8 parameters to the model, $\setof*{\ell_{1,2}, \delta_{1,2}, \theta_{1,2}, d_{1,2}, \ell_{2,1}, \delta_{2,1}, \theta_{2,1}, d_{2,1}}$.
Thus, we define our parameter space for the Hill function Toggle Switch to be $\Lambda := (0, \infty)^{10}$ and, as described in Section \ref{sec:Hill_models}, we collect the parameters into a vector defined by \[ \lambda := \paren*{\gamma_1, \ell_{1,2}, \delta_{1,2}, \theta_{1,2}, d_{1,2}, \gamma_2, \ell_{2,1}, \delta_{2,1}, \theta_{2,1}, d_{2,1}} \in \Lambda. \] Observe that since each node of the Toggle Switch has only a single incoming edge we associate each coordinate of $\cH$ with the same trivial interaction function defined by $p(z) = z$. Therefore, for fixed $\lambda \in \Lambda$, the linear and nonlinear terms defining the Hill model, $f_\lambda : X \to TX$, are given by \begin{equation} \cL_{\lambda}(x) = -\begin{pmatrix} \gamma_1 & 0 \\ 0 & \gamma_2 \end{pmatrix} x \qquad \cH_{\lambda}(x) = \begin{pmatrix} \ell_{1,2} + \delta_{1,2} \frac{\theta_{1,2}^{\hill_{1,2}}}{\theta_{1,2}^{\hill_{1,2}} + x_2^{\hill_{1,2}}} \\ \ell_{2,1} + \delta_{2,1} \frac{\theta_{2,1}^{\hill_{2,1}}}{\theta_{2,1}^{\hill_{2,1}} + x_1^{\hill_{2,1}}} \end{pmatrix} = \begin{pmatrix} H^-_{1,2}(x_2) \\ H^-_{2,1}(x_1) \end{pmatrix} \end{equation} Expressing these together yields an explicit formula for the Toggle Switch Hill model, given by \begin{equation} \label{eq:hill_toggle_switch} f_{\lambda}(x) = \begin{pmatrix} -\gamma_1 x_1 + \ell_{1,2} + \delta_{1,2} \frac{\theta_{1,2}^{\hill_{1,2}}}{\theta_{1,2}^{\hill_{1,2}} + x_2^{\hill_{1,2}}} \\ -\gamma_2 x_2 + \ell_{2,1} + \delta_{2,1} \frac{\theta_{2,1}^{\hill_{2,1}}}{\theta_{2,1}^{\hill_{2,1}} + x_1^{\hill_{2,1}}} \end{pmatrix} \end{equation} >> Observe that the identification of the Hill coefficients is neither biologically justified nor necessary in our mathematical analysis and software implementations.
It is inserted in this context for ease of exposition since we aim to compare the combinatorial dynamics of the Hill model for small Hill coefficients with the dynamics when Hill coefficients are ``near infinity'', that is, the dynamics studied by DSGRN. This identification could be replaced by other constraints, but having only one Hill coefficient makes our exposition clearer. The toggle switch produces one of the simplest examples of a Hill model and has been widely studied \cite{} \cite{}. Observe that even this simple model already has a $10$-dimensional parameter space, which is reduced to $9$ dimensions after identifying the Hill coefficients. Consequently, studying the global dynamics is a challenge despite its simple digraph representation. In fact, we show in Section \ref{sec:toggle_switch_results} that even this model exhibits dynamics which are far from completely understood. The toggle switch is known to have only two modes of behaviour: either there is a unique stable equilibrium, whose basin of attraction is the full plane, or there are three equilibria, two stable and one unstable. The first case is called {\em monostable}, while the second is called {\em bistable}. Clearly, the difference between the two regimes is determined by the choice of the parameter $\lambda$. In this article, we want to present a method to find bistable parameters. \corrc EQ: how much should we say about possible configurations in the toggle switch? I think we should at least point out how we *know* that there are at most 3 equilibria, but maybe here is not the right place. \\ SK: I added this to the introductory paragraphs of Section 4. <<>> \subsection{Reducing the number of parameters} \label{sec:TS_reducing_parameters} To make the analysis and discussion of the Toggle Switch example simpler, we will make several changes to the model in order to reduce the dimension of the parameter space. However, none of these changes are required for the methods discussed in this work.
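For reference, the right-hand side of the Toggle Switch model (with identified Hill coefficient $d$) can be evaluated directly. The following Python sketch is illustrative only; the function name and all sample parameter values are ours and hypothetical.

```python
# A minimal sketch of the Toggle Switch Hill vector field with identified
# Hill coefficient d; parameter values used below are hypothetical.
def toggle_switch_field(x1, x2, gamma1, ell12, delta12, theta12,
                        gamma2, ell21, delta21, theta21, d):
    """Evaluate f_lambda(x) = -Gamma x + H(x) for the Toggle Switch."""
    dx1 = -gamma1 * x1 + ell12 + delta12 * theta12**d / (theta12**d + x2**d)
    dx2 = -gamma2 * x2 + ell21 + delta21 * theta21**d / (theta21**d + x1**d)
    return dx1, dx2

# With gamma = theta = 1, ell = 0.5, delta = 1, the point (1, 1) is an
# equilibrium for every d, since 0.5 + 1/(1 + 1) = 1:
print(toggle_switch_field(1.0, 1.0, 1.0, 0.5, 1.0, 1.0, 1.0, 0.5, 1.0, 1.0, 5.0))
# (0.0, 0.0)
```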
We will reduce the number of parameters via two mechanisms. First we will equate the Hill coefficients associated to both edges in the Toggle Switch. This amounts to assuming that $d := \hill_{1,2} = \hill_{2,1}$ is the common value of both Hill coefficients, which reduces the number of parameters in the model by $1$. \corrc SK: Probably should make a comment describing the impact (or lack thereof) of this choice on the algorithms in the paper. <<>> Second, we will further reduce the dimension of the parameter space by 3 via non-dimensionalization of the parameters as follows. Begin by letting $f_\lambda$ denote the Hill model for the Toggle Switch previously defined in Equation \eqref{eq:hill_toggle_switch}. Suppose $x : (-\epsilon, \epsilon) \to X$ parameterizes a trajectory segment for $f_\lambda$ and a typical point along this trajectory is denoted by $x(t) = (x_1(t), x_2(t))$. We consider the rescaled state and time variables defined by \[ x_1 := k_1 y_1, \quad x_2 := k_2 y_2, \quad t := k_t \tau, \] for some positive constants, $\setof{k_1, k_2, k_t}$. The differential equation satisfied by $x_1$ can be rewritten as \[ \frac{k_1}{k_t} \ddx{y_1}{\tau} = \ddx{x_1}{t} = -\gamma_1 k_1 y_1 + \ell_{1,2} + \delta_{1,2} \frac{\theta_{1,2}^{\hill}}{\theta_{1,2}^{\hill} + \paren{k_2 y_2}^{\hill}}. \] After a similar computation for $x_2$ and multiplying through by $\frac{k_t}{k_1}$ and $\frac{k_t}{k_2}$ respectively, we obtain the system of differential equations in the new variables given by \begin{align*} \ddx{y_1}{\tau} & = -\gamma_1 k_t y_1 + \frac{\ell_{1,2} k_t}{k_1} + \frac{\delta_{1,2} k_t}{k_1} \frac{\paren{\frac{\theta_{1,2}}{k_2}}^{\hill}}{\paren{\frac{\theta_{1,2}}{k_2}}^{\hill} + y_2^{\hill}} \\ \ddx{y_2}{\tau} & = -\gamma_2 k_t y_2 + \frac{\ell_{2,1} k_t}{k_2} + \frac{\delta_{2,1} k_t}{k_2} \frac{\paren{\frac{\theta_{2,1}}{k_1}}^{\hill}}{\paren{\frac{\theta_{2,1}}{k_1}}^{\hill} + y_1^{\hill}}.
\end{align*} Observe that for any choice of $k_1,k_2,k_t$, these differential equations still have the form of a Hill model and the coordinate change, $(x_1,x_2,t) \mapsto (y_1,y_2, \tau)$, is a dynamical conjugacy between these Hill models. In particular, we choose the rescaling parameters: \[ k_1 = \theta_{2,1}, \quad k_2 = \theta_{1,2}, \quad k_t = \frac{1}{\gamma_1}, \] and define the {\em reduced parameters} \begin{align*} \ell_{1,2}^* & := \frac{\ell_{1,2}}{\gamma_1 \theta_{2,1}} \\ \delta_{1,2}^* & := \frac{\delta_{1,2}}{\gamma_1 \theta_{2,1}} \\ \ell_{2,1}^* & := \frac{\ell_{2,1}}{\gamma_1 \theta_{1,2}} \\ \delta_{2,1}^* & := \frac{\delta_{2,1}}{\gamma_1 \theta_{1,2}} \\ \gamma_2^* & := \frac{\gamma_2}{\gamma_1}, \end{align*} so that the non-dimensionalized version of Equation \eqref{eq:hill_toggle_switch} with identical Hill coefficients is the Hill model defined by the formula \begin{equation} f^*(y) := \begin{pmatrix} - y_1 + \ell_{1,2}^* + \delta_{1,2}^* \frac{1}{1 + y_2^\hill} \\ - \gamma_2^* y_2 + \ell_{2,1}^* + \delta_{2,1}^* \frac{1}{1 + y_1^\hill} \end{pmatrix}. \end{equation} We refer to $f^*$ as the {\em reduced} Toggle Switch Hill model and observe that $f^*$ has only $6$ free parameters (one of which is the common value of both Hill coefficients) and 3 fixed parameters (both threshold parameters and the linear decay parameter on $y_1$). Thus, we define the reduced parameter space associated to $f^*$ as the subspace, $\Lambda^* \subset \Lambda$, defined by \[ \Lambda^* := \setof*{\lambda \in \Lambda : \gamma_1 = \theta_{2,1} = \theta_{1,2} = 1, \ \hill_{1,2} = \hill_{2,1} = d} \cong (0, \infty)^6. \] Consequently, we denote a typical reduced parameter in $\Lambda^*$ by \[ \lambda^* := \paren*{\ell_{1,2}^*, \delta_{1,2}^*, \gamma_2^*, \ell_{2,1}^*, \delta_{2,1}^*, d}.
\] \corrl SK: We can also impose the following additional constraint if we like << || In addition, observe that the dimensionless parameter, $\gamma_2^*$, is the ratio of the linear decay rates for $x_1$ and $x_2$ and since the model is symmetric with respect to these state variables, we can assume without loss of generality that $\gamma_2^* \geq 1$. >> \corrc EQ: I would state it later, to ``double'' the info we have <<>> Since the dynamics generated by $f$ and $f^*$ are conjugate when restricted to the subset \\ $\setof*{\lambda \in \Lambda : d_{1,2} = d_{2,1}}$, we have performed all computations described in this Section using the reduced Toggle Switch model or, equivalently, under the assumption that the Hill model in Equation \eqref{eq:hill_toggle_switch} has fixed parameters $\theta_{2,1} = \theta_{1,2} = \gamma_1 = 1$ and equal Hill coefficients. We also note that the HillCont library has been written to allow these sorts of constraints to be implemented just as easily as a general Hill model, and takes advantage of the reduced number of parameters for faster computation. However, none of the algorithms in this paper rely on either of the reductions performed on this example. \corrl SK: Moved all of this to the DSGRN section. Also Figure \ref{fig:projected_param_reg} should be removed if Table 1 is accepted. << After imposing similar constraints for the switching system we obtain a non-dimensional parameter space $\Xi \subset \rr^5$.
Observe that the $9$ DSGRN parameter nodes in Figure \ref{fig:DSGRN_toggle} now represent the semi-algebraic subsets of $\rr^5$ defined by the three possible orderings of the sets $\{ 1, \ell_{1,2}, \ell_{1,2} + \delta_{1,2} \}$ and $\{\gamma_2 , \ell_{2,1} ,\ell_{2,1} + \delta_{2,1}\}$, with the restriction that, due to positivity of all parameters, $\ell_{i,j}< \ell_{i,j} + \delta_{i,j}$ for $i, j = 1,2$, $i\neq j$, as shown in Figure \ref{fig:projected_param_reg}. \begin{figure}[h] \begin{center} \includegraphics[width = 0.5\textwidth]{DSGRN_parameters_toggle_projected} \caption{The projected combinatorial parameter space for the toggle switch produced by DSGRN. } \label{fig:projected_param_reg} \end{center} \end{figure} Analogous to the discussion in Section \ref{}, we define $\pi_\Xi : \Lambda \to \Xi$ to be the projection onto the first $5$ coordinates and, in this context, our claim from the previous section can be restated as follows. With high probability, $\lambda \in \Lambda \subset \rr^6$ is a bistable parameter if and only if $\pi_\Xi \lambda \in U_5$. \red{EQ: I got a bit lost in the definitions... maybe we should define the projection here only? or define them properly again? I don't know} || >> \subsection{Analysis of the Toggle Switch in DSGRN} \label{sec:TS_DSGRN_analysis} \longcorrl << We collect the parameters into an associated vector, \[ \xi := \paren*{\gamma_1, \ell_{1,2}, \delta_{1,2}, \theta_{1,2}, \gamma_2, \ell_{2,1}, \delta_{2,1}, \theta_{2,1}} \in \rr^8 \] and let $\Xi := (0, \infty)^8 \subset \rr^8$ and $\tilde{f}_\xi$ denote the associated parameter space and switching system respectively. Despite the fact that $\Xi$ is $8$-dimensional, DSGRN computes the combinatorial dynamics for the equivalence class containing $\tilde{f}_\xi$ in less than a second on a basic laptop. There are $9$ parameter regions which are represented as combinatorial parameter nodes shown in Figure \ref{fig:DSGRN_toggle}.
Each node represents a semi-algebraic subset of $\rr^8$. We define a bistable parameter for the switching system to be a parameter for which the Morse graph has exactly two minimal nodes of FP type. Looking at Figure \ref{fig:DSGRN_toggle} we see that this occurs only for the {\em critical parameter node} in the center (labeled as region $5$). Specifically, the critical node represents the semi-algebraic set \[ C := \setof{\xi \in \rr^8 : 0 < \ell_{1,2} < \gamma_1 \theta_{2,1} < \ell_{1,2} + \delta_{1,2} \ \text{and} \ 0 < \ell_{2,1} < \gamma_2 \theta_{1,2} < \ell_{2,1} + \delta_{2,1}}. \] For the remaining $8$ parameter regions, there is only one minimal node of FP type in the Morse graph, and these regions are the combinatorial analog of monostable parameters. Given the efficiency of DSGRN and the fact that the computations are rigorous, we explore two questions in this example. || {\em In this Section, we introduce the output of DSGRN and consider DSGRN itself as a black box. Furthermore, we introduce the output without talking about switching systems and the like, speaking only of parameter regions.} The software package DSGRN \cite{} \cite{} takes as input a digraph and returns a partition of parameter space, each element of the partition annotated with some information on the dynamics associated to the digraph. In particular, the dynamics studied is the one associated to the digraph through a Hill model, such as the one introduced in \eqref{eq:hill_model_coordinate}\red{which one to reference here?}, with the additional constraint that all Hill coefficients $d_{ij}$ are set to infinity. While it is beyond the scope of this article to discuss the mathematical background of such a feat, we want to highlight how such a dynamical description is easily understood in the context of Hill systems as a limiting behaviour when all Hill coefficients in the model tend to infinity.
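The nine regions are cut out by pairwise inequalities among $\setof{\ell, \gamma\theta, \ell+\delta}$ (cf.\ Table \ref{tab:parameter_regions}). As an illustration only (this classifier is not part of DSGRN; the function name and sample values are ours), region membership can be checked directly:

```python
# Illustrative classifier for the 9 Toggle Switch parameter regions, following
# the inequalities tabulated in the text; not part of DSGRN itself.
def toggle_switch_region(gamma1, ell12, delta12, theta12,
                         gamma2, ell21, delta21, theta21):
    """Return (region number 1-9, phenotype), or None on a region boundary."""
    def band(low, high, value):
        # 0, 1, 2 according to value < low, low < value < high, value > high
        if value < low:
            return 0
        if low < value < high:
            return 1
        if value > high:
            return 2
        return None  # value lies exactly on a boundary between regions

    row = band(ell12, ell12 + delta12, gamma1 * theta21)  # rows 1-3, 4-6, 7-9
    col = band(ell21, ell21 + delta21, gamma2 * theta12)  # column within a row
    if row is None or col is None:
        return None
    region = 3 * row + (3 - col)  # col = 2, 1, 0 maps to columns 1, 2, 3
    return region, ("Bistable" if region == 5 else "Monostable")

# The critical (bistable) region 5 and a monostable neighbor:
print(toggle_switch_region(1.0, 0.5, 1.0, 1.0, 1.0, 0.5, 1.0, 1.0))  # (5, 'Bistable')
print(toggle_switch_region(1.0, 0.5, 1.0, 1.0, 3.0, 0.5, 1.0, 1.0))  # (4, 'Monostable')
```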
\red{EQ: I think this past paragraph has already been discussed in Section 2.3 (Combinatorial dynamics and DSGRN), so I don't think we should re-introduce DSGRN again.} For the toggle switch, DSGRN returns a subdivision of parameter space into 9 regions, as presented in Figure \ref{fig:DSGRN_toggle}. From this figure we read off the definition of the 9 parameter regions in terms of algebraic inequalities, together with the stable dynamical behaviour on each region. In Figure \ref{fig:DSGRN_toggle}, we can read that all regions except the center one exhibit a unique stable fixed point, while the center region has two stable fixed points. The center region, or {\em critical parameter node}, is the semi-algebraic set \[ C := \setof{\xi \in \rr^8 : 0 < \ell_{1,2} < \gamma_1 \theta_{2,1} < \ell_{1,2} + \delta_{1,2} \ \text{and} \ 0 < \ell_{2,1} < \gamma_2 \theta_{1,2} < \ell_{2,1} + \delta_{2,1}}. \] >> Our first interest is to discuss how well DSGRN predicts the dynamics of the associated Hill model. Specifically, let $\pi_\Xi : \Lambda \to \Xi$ be defined by projection onto the first $8$ coordinates, that is, all coordinates except the identified Hill coefficient. We aim to provide numerical evidence that, with high probability, \red{this high probability is unclear, and what is the choice for $d$? $d$ needs to be higher than 1? 2? small enough? details missing} \begin{itemize} \item If $\pi_\Xi \lambda \in C$, then $\lambda$ is a bistable parameter for $f$. \item If $\pi_\Xi \lambda \notin C$, then $\lambda$ is not a bistable parameter for $f$. \end{itemize} This analysis strongly motivates our choice to identify the Hill coefficients in the toggle switch. To see why, observe that if $d < 2$, it can be shown that $f_\lambda$ admits exactly $1$ equilibrium which is stable. \red{for any choice of the other parameters?
is there a reference?} However, if $\pi_\Xi \lambda \in C$, then for some $2 < \hat{d} < \infty$, $f$ undergoes a bifurcation in which an additional pair of equilibria (one stable, one unstable) appears for $d>\hat{d}$. \corrc SK: Should we prove this? <<>> \corrc EQ: I think we should spend more time explaining this, because it's a big stone of the whole construction<<>> Thus, if we seek to determine whether or not a given $\lambda \in \Lambda$ is a bistable parameter, it is natural to look for saddle-node bifurcations with respect to the Hill coefficient. Since we have identified the Hill coefficients we obtain a $1$-dimensional saddle-node bifurcation problem.\red{ too dense} \longcorrl EQ << Second, we study the values of the Hill coefficients at which saddle-node bifurcations occur over fibers of $\pi_\Xi$. This remains a high-dimensional algebraic geometry problem which is quite complicated, but we can still analyze fibers over the boundary of the center region. Our aim is to understand how this center region deforms as a function of decreasing Hill coefficients and also to study the values of the Hill coefficient parameter where saddle-node bifurcations occur as a function of the location on the boundary where the fiber is attached. || Our second aim in this example is to study the deformation undergone by the bistability region when the Hill coefficient drops from infinity to the application regime, $d<10$. In this context, we consider fibers of $\pi_\Xi$ and we study the values of the Hill coefficients at which saddle-node bifurcations occur over such fibers. The difficulty in such an approach lies in the high dimensionality of the problem.
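To illustrate the role of $d$ concretely, the following heuristic Python sketch (not one of the paper's algorithms; all parameter values are hypothetical and lie in the critical region) counts equilibria of the reduced Toggle Switch by scanning sign changes of the scalar equilibrium equation, exhibiting the transition from one to three equilibria as $d$ grows:

```python
# Heuristic sketch: count equilibria of the reduced Toggle Switch by scanning
# sign changes of y1 - H1(H2(y1)/gamma2) over a grid. Not one of the paper's
# algorithms; parameter values below are hypothetical and lie in region 5.
def equilibrium_count(ell1, delta1, gamma2, ell2, delta2, d, n_grid=10_000):
    """Number of transversal zeros of the scalar equilibrium equation."""
    def g(y1):
        y2 = (ell2 + delta2 / (1.0 + y1**d)) / gamma2   # second nullcline
        return y1 - (ell1 + delta1 / (1.0 + y2**d))     # first nullcline residual
    lo, hi = ell1, ell1 + delta1        # all equilibria have y1 in this band
    count, prev = 0, g(lo)
    for k in range(1, n_grid + 1):
        cur = g(lo + (hi - lo) * k / n_grid)
        if prev * cur < 0.0:
            count += 1
        if cur != 0.0:                  # skip exact zeros at grid points
            prev = cur
    return count

params = dict(ell1=0.5, delta1=1.0, gamma2=1.0, ell2=0.5, delta2=1.0)
print(equilibrium_count(**params, d=1.5))   # 1 (monostable)
print(equilibrium_count(**params, d=10.0))  # 3 (bistable)
```

The value of $d$ at which the count jumps from $1$ to $3$ is exactly the bifurcation value $\hat{d}$ discussed above for this fiber.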
>> \longcorrl SK: << The understanding of these Morse graphs is that a small smooth perturbation of the switching system \eqref{eq:switching_toggle_switch} would result in a central parameter node with three fixed points, two stable and one unstable, and a saddle node occurs when passing from the center parameter node to any of the neighboring parameter nodes, i.e. the parameter nodes numbered 2, 4, 6 and 8. In addition, we expect a pitchfork bifurcation to happen at the top right and bottom left ``corners'' of the center parameter node. The difficulty in understanding Figure \ref{fig:DSGRN_toggle} is that the space represented is $8$-dimensional, that is, the boundaries between parameter nodes are codimension $1$ surfaces in $\rr^8$ and the ``corners'' are codimension $2$ surfaces. The additional problem is that it is unclear how and how far we can transport the information harvested at $\hill \rightarrow \infty$ towards low values of $\hill$. This paper tries to answer many such questions. || When considering DSGRN outputs, and in particular referring to Figure \ref{fig:DSGRN_toggle}, the expectation is that a small smooth perturbation of the switching system \eqref{eq:switching_toggle_switch} would result in a central parameter node \red{would ``region'' be better} with three fixed points, two stable and one unstable. In the other parameter regions, such a perturbation would change neither the number nor the stability of the fixed point. Thus, we expect a saddle node to occur when passing from the center parameter node to any of the neighboring, monostable, parameter nodes, i.e. the parameter nodes numbered 2, 4, 6 and 8. Extending this understanding to the corners of region 5, we expect a pitchfork bifurcation to happen at the top right and bottom left ``corners'' of the center parameter node. Let us remind ourselves that Figure \ref{fig:DSGRN_toggle} is a representation of an $8$-dimensional space.
This means that the boundaries between parameter nodes are codimension $1$ surfaces in $\rr^8$ and the ``corners'' are codimension $2$ surfaces. We are interested in discussing how and how far we can transport the information harvested at $\hill \rightarrow \infty$ towards low values of $\hill$. >> \newcommand{\ra}[1]{\renewcommand{\arraystretch}{#1}} \begin{table*}\centering \begin{tabular}{@{}clll@{}} \toprule Region & Phenotype & Inequalities & Reduced Inequalities \\ \midrule \multirow{2}{*}{$1$} &\multirow{2}{*}{Monostable} & $\gamma_1 \theta_{2,1} < \ell_{1,2}$ & $1 < \ell_{1,2}$ \\ {} & {} & $\ell_{2,1} + \delta_{2,1} < \gamma_2 \theta_{1,2}$ & $\ell_{2,1} + \delta_{2,1} < \gamma_2$ \\ \midrule \multirow{2}{*}{$2$} &\multirow{2}{*}{Monostable} & $\gamma_1 \theta_{2,1} < \ell_{1,2}$ & $1 < \ell_{1,2}$ \\ {} & {} & $\ell_{2,1} < \gamma_2 \theta_{1,2} < \ell_{2,1} + \delta_{2,1}$ & $\ell_{2,1} < \gamma_2 < \ell_{2,1} + \delta_{2,1}$ \\ \midrule \multirow{2}{*}{$3$} &\multirow{2}{*}{Monostable} & $\gamma_1 \theta_{2,1} < \ell_{1,2}$ & $1 < \ell_{1,2}$ \\ {} & {} & $\gamma_2 \theta_{1,2} < \ell_{2,1}$ & $\gamma_2 < \ell_{2,1}$ \\ \midrule \multirow{2}{*}{$4$} &\multirow{2}{*}{Monostable} & $\ell_{1,2} < \gamma_1 \theta_{2,1} < \ell_{1,2} + \delta_{1,2}$ & $\ell_{1,2} < 1 < \ell_{1,2} + \delta_{1,2}$ \\ {} & {} & $\ell_{2,1} + \delta_{2,1} < \gamma_2 \theta_{1,2}$ & $\ell_{2,1} + \delta_{2,1} < \gamma_2$ \\ \midrule \multirow{2}{*}{$5$} &\multirow{2}{*}{Bistable} & $\ell_{1,2} < \gamma_1 \theta_{2,1} < \ell_{1,2} + \delta_{1,2}$ & $\ell_{1,2} < 1 < \ell_{1,2} + \delta_{1,2}$ \\ {} & {} & $\ell_{2,1} < \gamma_2 \theta_{1,2} < \ell_{2,1} + \delta_{2,1}$ & $\ell_{2,1} < \gamma_2 < \ell_{2,1} + \delta_{2,1}$ \\\midrule \multirow{2}{*}{$6$} &\multirow{2}{*}{Monostable} & $\ell_{1,2} < \gamma_1 \theta_{2,1} < \ell_{1,2} + \delta_{1,2}$ & $\ell_{1,2} < 1 < \ell_{1,2} + \delta_{1,2}$ \\ {} & {} & $\gamma_2 \theta_{1,2} < \ell_{2,1}$ & $\gamma_2 < \ell_{2,1}$ \\ \midrule
\multirow{2}{*}{$7$} &\multirow{2}{*}{Monostable} & $\ell_{1,2} + \delta_{1,2} < \gamma_1 \theta_{2,1}$ & $\ell_{1,2} + \delta_{1,2} < 1$ \\ {} & {} & $\ell_{2,1} + \delta_{2,1} < \gamma_2 \theta_{1,2}$ & $\ell_{2,1} + \delta_{2,1} < \gamma_2$ \\ \midrule \multirow{2}{*}{$8$} &\multirow{2}{*}{Monostable} & $\ell_{1,2} + \delta_{1,2} < \gamma_1 \theta_{2,1}$ & $\ell_{1,2} + \delta_{1,2} < 1$ \\ {} & {} & $\ell_{2,1} < \gamma_2 \theta_{1,2} < \ell_{2,1} + \delta_{2,1}$ & $\ell_{2,1} < \gamma_2 < \ell_{2,1} + \delta_{2,1}$ \\ \midrule \multirow{2}{*}{$9$} &\multirow{2}{*}{Monostable} & $\ell_{1,2} + \delta_{1,2} < \gamma_1 \theta_{2,1}$ & $\ell_{1,2} + \delta_{1,2} < 1$ \\ {} & {} & $\gamma_2 \theta_{1,2} < \ell_{2,1}$ & $\gamma_2 < \ell_{2,1}$ \\ \bottomrule \end{tabular} \caption{The result of the DSGRN analysis for the Toggle Switch. $\Xi$ is subdivided into $9$ distinguished parameter regions on which the combinatorial dynamic phenotype is known and constant. These regions are the semi-algebraic subsets of $\Xi$ satisfying the inequalities in the third column. For the reduced parameter space defined in Section \ref{sec:TS_reducing_parameters}, the corresponding parameter regions are the semi-algebraic subsets of $\Xi^*$ described by the inequalities in column four.} \label{tab:parameter_regions} \end{table*} \begin{figure}[h] \begin{center} \includegraphics[width = 0.5\textwidth]{DSGRN_parameters_toggle} \caption{The combinatorial parameter space for the toggle switch produced by DSGRN. Each of the $9$ cells represents a parameter region and contains a summary of the combinatorial dynamics for that region.} \label{fig:DSGRN_toggle} \end{center} \end{figure} \corrc SK: Change the $l,u$ parameters in Figure \ref{fig:DSGRN_toggle} to $\ell, \delta$. Additionally, the subscripts on the first inequality of region 3 are wrong so all subscripts should be checked.
<<>> \corrc EQ: I changed the figure, but I am not sure of the indices, so I copied them from the previous figure. The previous figure was copied from the original DSGRN article... if it's wrong, I don't know how to fix it. <<>> \corrc SK: I moved the information from this Table into a new version (Table \ref{tab:parameter_regions}) with no vertical or double rules. I also added the reduced inequalities so we don't need a second table in the nondimensionalization section. I like it better this way but let me know if you don't agree. <<>> \corrc EQ: I'm not sure... on one hand, it looks very neat like this, on the other it is harder to notice that the inequalities repeat... <<>> \subsection{The toggle switch saddle-node bifurcation problem} \label{sec:toggle_SN_problem} In this section we demonstrate how the saddle-node bifurcation problem described in Section \ref{sec:eq_and_SN} is defined and solved for the toggle switch. Recalling our previous discussion, the parameter of interest is the shared Hill coefficient $\hill$ for values $\hill \geq 1$ and we write the remaining parameters as a vector, $\mu := \paren*{\ell_{1,2}, \delta_{1,2}, \gamma_2, \ell_{2,1}, \delta_{2,1}} \in \image \pi_{\Xi}$. Observe that if $\mu$ is fixed, then the toggle switch Hill model is a one parameter family of the form, $f_{\mu} : [0, \infty)^2 \times [1, \infty] \to \rr^2$, given explicitly by the formula \begin{equation} f_{\mu}(x, \hill) = - \begin{pmatrix} 1 & 0 \\ 0 & \gamma_2 \end{pmatrix} x + \begin{pmatrix} H_{1,2}^- (x_2, \hill) \\ H_{2,1}^- (x_1, \hill) \end{pmatrix} = \begin{pmatrix} -x_1 + \ell_{1,2} + \delta_{1,2} \frac{1}{1 + x_2^\hill} \\ -\gamma_2 x_2 + \ell_{2,1} + \delta_{2,1} \frac{1}{1 + x_1^\hill} \\ \end{pmatrix} \end{equation} To define the saddle-node bifurcation problem we first consider the algorithm for finding equilibria. Both $\cH_1$ and $\cH_2$ satisfy Definition \ref{def:monotone_factorization} so the bootstrap algorithm can be applied.
The bootstrap map is $\Phi : \rr^4 \to \rr^4$ defined by \[ \Phi(u) = \begin{pmatrix} H_{1,2}(u_4) \\ \frac{1}{\gamma_2} H_{2,1}(u_3) \\ H_{1,2}(u_2) \\ \frac{1}{\gamma_2} H_{2,1}(u_1) \end{pmatrix}, \qquad u \in \rr^4. \] Following Algorithm \ref{alg:bootstrap_equilibria} we compute the iterates of $\Phi$ defined by \[ u^{(0)} := \begin{pmatrix} \ell_{1,2} \\ \frac{\ell_{2,1}}{\gamma_2} \\ \ell_{1,2} + \delta_{1,2} \\ \frac{\ell_{2,1} + \delta_{2,1}}{\gamma_2} \\ \end{pmatrix}, \qquad u^{(n)} := \Phi(u^{(n-1)}) \quad \forall n \geq 1. \] Theorem \ref{thm:bootstrap_eqbounds} ensures that $u^{(n)}$ converges to a fixed point of $\Phi$, $\hat{u} = (\hat{\alpha}, \hat{\beta})$, and that all equilibria for $f_\mu$ are contained in the subset $[\hat{\alpha}_1, \hat{\beta}_1] \times [\hat{\alpha}_2, \hat{\beta}_2] \subset X$. However, in the case of the toggle switch we can prove a stronger version of Theorem \ref{thm:bootstrap_eqbounds}. \begin{theorem} \label{thm:toggle_bootstrap_eqbounds} Suppose $\hill \geq 1$ and $\mu \in \image \pi_\Xi$ is a parameter such that the nullclines of $f_\mu$ only intersect transversally. Let $\Phi : \rr^4 \to \rr^4$ be the associated bootstrap map, suppose the orbit through $u^{(0)}$ converges to $\hat{u} := (\hat{\alpha}, \hat{\beta}) \in \rr^2 \times \rr^2$, and define $ \hat R := [\hat{\alpha}_1, \hat{\beta}_1] \times [\hat{\alpha}_2, \hat{\beta}_2] \subset X$. Then $\hat{u}$ is asymptotically stable and exactly one of the following is true. \begin{enumerate} \item $\hat{R}$ is a degenerate rectangle (i.e.~for $i = 1,2$, $\hat{\alpha}_i = \hat{\beta}_i$) and $f_\mu$ has a unique equilibrium, $\hat{x} = (\hat{\alpha}_1, \hat{\alpha}_2)$. Moreover, $\hat{x}$ is stable. \item $f_\mu$ has exactly three equilibria. Two equilibria are stable and lie at the corners of $\hat{R}$. Specifically, \[ \hat{x}_1 = (\hat{\alpha}_1, \hat{\beta}_2), \qquad \hat{x}_2 = (\hat{\beta}_1, \hat{\alpha}_2) \] are stable equilibria.
The third equilibrium, denoted by $\hat{x}_3$, is unstable and lies in the interior of $\hat{R}$. \end{enumerate} \corrc SK: I will probably separate this Theorem into 2 pieces. The first proving that the corners of the rectangle are stable equilibria and the second proving that the TS has (generically) 1 stable eq or 2 stable/1 saddle equilibria. <<>> \begin{proof} The stability of $\hat{u}$ follows directly from the computation \[ D\Phi(\hat{u}) = \begin{pmatrix} 0 & 0 & 0 & H_1'(\hat{\beta}_2) \\ 0 & 0 & \frac{1}{\gamma_2} H_2'(\hat{\beta}_1) & 0 \\ 0 & H_1'(\hat{\alpha}_2) & 0 & 0 \\ \frac{1}{\gamma_2} H_2'(\hat{\alpha}_1) & 0 & 0 & 0 \end{pmatrix} \] and since $H_1, H_2$ are negative Hill functions, every nonzero entry of this matrix is negative. The eigenvalues of $D\Phi(\hat{u})$ are $\pm\sqrt{\frac{1}{\gamma_2} H_1'(\hat{\beta}_2) H_2'(\hat{\alpha}_1)}$ and $\pm\sqrt{\frac{1}{\gamma_2} H_1'(\hat{\alpha}_2) H_2'(\hat{\beta}_1)}$, which are real since each radicand is the product of two negative entries. To prove the second claim, define $\hat{x}_1 := (\hat{\alpha}_1, \hat{\beta}_2)$ and $\hat{x}_2 := (\hat{\beta}_1, \hat{\alpha}_2)$. Observe that since $(\hat{\alpha}, \hat{\beta})$ is a fixed point of $\Phi$ we have by direct computation \begin{align*} H_1(\hat{\beta}_2) & = \hat{\alpha}_1 \\ H_2(\hat{\beta}_1) & = \gamma_2 \hat{\alpha}_2 \\ H_1(\hat{\alpha}_2) & = \hat{\beta}_1 \\ H_2(\hat{\alpha}_1) & = \gamma_2 \hat{\beta}_2. \end{align*} It follows that \[ f_\mu(\hat{x}_1) = f_\mu(\hat{\alpha}_1, \hat{\beta}_2) = \begin{pmatrix} H_1(\hat{\beta}_2) - \hat{\alpha}_1 \\ H_2(\hat{\alpha}_1) - \gamma_2 \hat{\beta}_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \] \[ f_\mu(\hat{x}_2) = f_\mu(\hat{\beta}_1, \hat{\alpha}_2) = \begin{pmatrix} H_1(\hat{\alpha}_2) - \hat{\beta}_1 \\ H_2(\hat{\beta}_1) - \gamma_2 \hat{\alpha}_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \] so that $\hat{x}_1, \hat{x}_2$ are equilibria for $f_\mu$. Evidently, if $\hat{R}$ is degenerate, then $\hat{x}_1 = \hat{x}_2$ and by Theorem \ref{thm:bootstrap_eqbounds}, it follows that this is the unique equilibrium for $f_\mu$.
On the other hand, suppose $\hat{R}$ is nondegenerate so that $\hat{x}_1 \neq \hat{x}_2$ and $f_\mu$ has at least two equilibria. Note that either $\hat{\alpha}_1 \neq \hat{\beta}_1$ or $\hat{\alpha}_2 \neq \hat{\beta}_2$. We first prove that in fact both of these inequalities must hold and in particular, $\hat{\alpha}_i < \hat{\beta}_i$ for $i = 1,2$. Denote the nullclines of $f_\mu$ by \[ \cN_1 := \setof*{x \in X : H_1(x_2) - x_1 = 0} \qquad \cN_2 := \setof*{x \in X : \frac{1}{\gamma_2} H_2(x_1) - x_2 = 0}. \] Observe that at a point $x \in \cN_2$ we have $T_x \cN_2 = \inspan \setof*{(1, \frac{1}{\gamma_2} H_2'(x_1))}$ implying that $\cN_2$ is a smooth $1$-dimensional manifold and the mapping $x \mapsto (1, \frac{1}{\gamma_2} H_2'(x_1))$ is a smooth parameterization of the tangent bundle of $\cN_2$. The second coordinate of this parameterization is negative since $H_2$ is a negative Hill function. \corrc SK: Need to finish this proof. Somehow it is more difficult than expected. <<>> \end{proof} \end{theorem} \longcorrl SK: << For $\hill \rightarrow \infty$, we know that the center parameter region $\mathcal{R}$, satisfying the inequalities $\ell_i < \gamma_i \theta_i < \ell_i + \delta_i$, has two stable fixed points. We also know that, for $\hill \rightarrow 0$, the system \eqref{eq:hill_toggle_switch} trivially has a unique fixed point. We expect therefore that for all parameters in the region $\mathcal{R}$ we can find $x^*,\hill^*, v$ such that the toggle switch undergoes a saddle-node bifurcation at those values. Furthermore, we don't expect to find saddle-node bifurcations outside of this parameter region, and if we do we expect them to be ``isolas'', that is, saddle-nodes generating a set of equilibria that will undergo another saddle-node bifurcation for higher values of $\hill$, as sketched in Figure \ref{fig:isolas}. || >> \corrc EQ: why don't you like this? seems a bit out of context here...
<<>> \subsection{Comparing bistable parameters} \label{sec:comparing_bistable_parameters} \corrc EQ: let's not use $\alpha$ and $\beta$ again, it seems we are talking about the map $\Phi$ again \\ SK: Agreed. <<>> \corrc EQ: the indices convention here is a bit messy: indices ``1'' relate to alpha and beta 1, that work on the y-axis. Or am I completely wrong?<<>> In this section we consider our claim more carefully. We first describe a method of sampling fibers of $\pi_\Xi$ consisting of two steps. In the first step, we introduce new parameters, $\setof{\alpha_1, \beta_1, \alpha_2, \beta_2}$, defined by \[ \alpha_1 := \ell_{1,2} \quad \beta_1 := \ell_{1,2} + \delta_{1,2} \quad \alpha_2 := \frac{\ell_{2,1}}{\gamma_2} \quad \beta_2 := \frac{\ell_{2,1} + \delta_{2,1}}{\gamma_2}. \] It is often convenient to express this as a nonlinear transformation, $\psi : \Xi \to \rr^2 \times \rr^2$, defined by $\psi(\xi) = (\alpha, \beta)$. The motivation for this transformation is as follows. Observe that the restriction $\Xi \subseteq (0, \infty)^5$ implies that \[ \image \psi \subseteq \setof{(\alpha, \beta) \in \rr^2 \times \rr^2 : \alpha_i < \beta_i, i = 1,2}. \] Upon rewriting each of the $9$ parameter regions in terms of these parameters, we see that each region is defined by linear inequalities. Moreover, the boundaries between each pair of adjacent regions are linear manifolds. For example, the region $U_5$ is given by \[ U_5 = \setof{(\alpha, \beta) \in \rr^2 \times \rr^2 : 0 < \alpha_1 < 1 < \beta_1 \ \text{and} \ 0 < \alpha_2 < 1 < \beta_2 } \] and the boundary separating $U_5$ and $U_6$ is given by \[ \partial U_5 \cap \partial U_6 = \setof{(\alpha, \beta) \in \rr^2 \times \rr^2 : 0 < \alpha_1 < 1 < \beta_1, \ \alpha_2 = 1, \ \text{and} \ 1 < \beta_2 }.
\] Next, fix positive constants $\overbar{\alpha}, \overbar{\beta}$ and define $K \subset \image \psi$ by \[ K = \setof{(\alpha, \beta) \in \image \psi : \norm{\alpha}_\infty \leq \overbar{\alpha}, \norm{\beta}_\infty \leq \overbar{\beta}}. \] Define a map $K \to [0,3]^2$ by $(\alpha, \beta) \mapsto (u,v)$ where $u,v$ are defined by the formulas \[ u = \begin{cases} \beta_2 & \text{if} \ \beta_2 \leq 1 \\ 1+\frac{ 1 - \alpha_2}{\beta_2 - \alpha_2} & \text{if} \ \alpha_2 < 1 < \beta_2 \\ 2 + \frac{\alpha_2-1}{\overbar{\alpha} - 1} & \text{if} \ 1 \leq \alpha_2 \end{cases} \qquad v = \begin{cases} \beta_1 & \text{if} \ \beta_1 \leq 1 \\ 1+\frac{ 1 - \alpha_1}{\beta_1 - \alpha_1} & \text{if} \ \alpha_1 < 1 < \beta_1 \\ 2 + \frac{\alpha_1-1}{\overbar{\alpha} - 1} & \text{if} \ 1 \leq \alpha_1 \end{cases} \] \corrc EQ: I think this is the complete formula, copied from the code. <<>> This is, under the hood, a projection from the full parameter space to what we may call the DSGRN coordinates. \subsubsection{``Inversion'' of this projection} \red{EQ: should we call DSGRN parameter regions in matrix-way? By this I mean (1,1), (1,2), (1,3), (2,1), (2,2), and (2,3)... instead of 1,2..9? I think it would make some discussions here easier.} Consider fixing $(u,v) \in [0,3]\times [0,3]$ in DSGRN coordinates. The corresponding full parameter associated to $(u,v)$ depends on the region $(u,v)$ lies in. \red{MAGICAL NOTATIONS!} We start with $u$ and determine the parameter values $(\gamma_2, \ell_2, \delta_2, \theta_2)$. First, $\theta_2 =1$.
It must hold that \[ \begin{cases} \frac{\ell_2+\delta_2}{\gamma_2} = u \qquad &\text{ if } u<1,\\ \frac{\gamma_2 - \ell_2}{\delta_2}= u - 1 \qquad &\text{ if } u\in [1,2),\\ \frac{\ell_2}{\gamma_2} = ( u - 2) (\overbar{\alpha} -1) + 1 \qquad &\text{ if } u\geq 2. \end{cases} \] Notice that the third relationship does not involve $\delta_2$, while the second requires $\gamma_2 > \ell_2$. For $v$, the situation is simpler, because $\gamma_1$ has already been set to 1, so the equations are \[ \begin{cases} \ell_1+\delta_1 = v \qquad &\text{ if } v<1,\\ \frac{1 - \ell_1}{\delta_1}= v - 1 \qquad &\text{ if } v\in [1,2),\\ \ell_1= ( v - 2) (\overbar{\alpha} -1) + 1 \qquad &\text{ if } v\geq 2. \end{cases} \] \red{ if there is a mistake here, the code needs to be updated too. Code in ``fiber\texttt{\_}sample()'' in file \texttt{toggle\_switch\_heat\_functionalities}} \subsection{Results}\label{sec:toggle_switch_results} \subsubsection{Initial results} Using the coordinate system we have introduced, we can sample the square $S = [0,3]\times [0,3]$ uniformly, project each point $(x,y) \in S$ into the appropriate fiber, according to a randomization of \eqref{}, and, letting the Hill exponent vary, find the value of the Hill exponent at which the system undergoes a saddle-node bifurcation with respect to the exponent. We plot the results in a heat map, Fig. \ref{fig:heat_map}a.
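Both the sampling just described and the inversion formulas rest on the piecewise projection onto DSGRN coordinates, whose branches are easy to get wrong in code. The following is a minimal sketch of one coordinate of that projection (an illustration, not the project code; the function name and the choice $\overbar{\alpha} = 3$ are assumptions made for the example):

```python
def dsgrn_coordinate(alpha, beta, alpha_bar=3.0):
    """One coordinate of the projection onto DSGRN coordinates.

    Identity below the threshold, a convex interpolation while the threshold
    sits between alpha and beta, and a rescaled affine map once alpha exceeds
    the threshold (alpha_bar is the cutoff constant from the definition of K).
    """
    if beta <= 1.0:
        return beta
    if alpha < 1.0 < beta:
        return 1.0 + (1.0 - alpha) / (beta - alpha)
    return 2.0 + (alpha - 1.0) / (alpha_bar - 1.0)

# One check per branch, with illustrative inputs:
assert dsgrn_coordinate(0.2, 0.8) == 0.8    # beta <= 1
assert dsgrn_coordinate(0.5, 1.5) == 1.5    # alpha < 1 < beta: 1 + 0.5/1.0
assert dsgrn_coordinate(2.0, 2.5) == 2.5    # 1 <= alpha: 2 + 1.0/2.0
```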
\begin{figure}[h] \begin{center} \includegraphics[width = 0.45\textwidth]{HD_heatmap.png} \includegraphics[width = 0.45\textwidth]{location_good_par.png} \caption{ On the left, the heat map of the Hill coefficients projected on DSGRN coordinates; on the right, the projection of the parameters themselves.\red{this computation needs to be re-run with random samples over all of the square}} \label{fig:heat_map} \end{center} \end{figure} This figure gives an intuition of what we would like to prove: choosing parameters in the ``center region'' has the highest likelihood of giving us a saddle node for some value of the Hill coefficient. Looking at this map, we observe that the bottom left of the center region seems to be the best location for a practical bistable switch, because there the Hill coefficient needed to undergo the saddle node is the lowest. We also notice that there are saddle nodes taking place outside the center region. To make this clearer, we refer to Figure \ref{fig:heat_map}b. These parameters will be studied further in Section \ref{sec:isolas}. As additional testing, we fix the value $y = 1.5$ and choose the value of $x$ in a uniformly spaced segment between 0.5 and 2.5. We then sample the fiber of each of these points multiple times, and store the value of the Hill coefficient for which each random parameter undergoes a saddle-node bifurcation. This experiment results in Figure \ref{fig:n_wrt_gamma}, where we clearly see how the likelihood of finding a saddle node (and thus bistability) drops as soon as the $x$-value is lower than 1 or higher than 2.
\begin{figure}[h] \begin{center} \includegraphics[width = 0.30\textwidth]{n_wrt_gamma.png} \includegraphics[width = 0.30\textwidth]{average_n_wrt_gamma.png} \includegraphics[width = 0.30\textwidth]{number_n_wrt_gamma.png} \caption{ Starting from the left: the value of the Hill coefficient for the saddle nodes we found depending on $x$, the average of the Hill coefficient, and the number of saddle nodes per value of $x$ out of a sample size of 25.} \label{fig:n_wrt_gamma} \end{center} \end{figure} With this justification, we proceed to the statistical analysis. \subsubsection{Some statistics magically appear} \longcorrq SK: I consulted a friend who knows a bit of stats. Based on our specific problem he suggested that the ``correct'' analysis is a $\chi^2$ test where we write our samples as observations of two random variables, $Z_1, Z_2$, defined by \[ Z_1 = \frac{\# \text{SN parameters in } U_5}{\# \text{parameters in } U_5} \qquad Z_2 = \frac{\# \text{SN parameters in } U_5^C}{\# \text{parameters in } U_5^C} \] <<>> \subsection*{What to do in gruesome detail} Following the Wikipedia pages https://en.wikipedia.org/wiki/Pearson\%27s\_chi-squared\_test\#Testing\_for\_statistical\_independence and https://en.wikipedia.org/wiki/Chi-squared\_test: \begin{center} \begin{tabular}{l|c|c|c} regions: & $U_5$ & $U_5^C$ & all\\ \hline n samples: & $N_5$ & $T - N_5$ & $T$ \\ n saddles : & $n_5$ & $\tilde n$ & $n_5 + \tilde n$\\ no saddle: & $N_5 - n_5$ & $T - N_5 - \tilde n$ & $T - n_5 - \tilde n$ \end{tabular} \end{center} The ``expected'' number of saddles in region $U_5$ if everything were equally distributed (the null hypothesis) is $ E = N_5 \times \frac{n_5 + \tilde n}{T}$, that is, $\textit{samples in }U_5 \times\frac{ \textit{all saddles}} {\textit{all samples}}$, and $\xi = \frac{(\textit{expected} - \textit{observed})^2}{\textit{expected}} = \frac{(E - n_5)^2}{E}.$ The idea: if this is close to 0, the null hypothesis is right; if this is far from zero, the null hypothesis does not model reality.
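For reference, the recipe above spells out only the contribution of the saddle cell in $U_5$; the standard Pearson statistic sums the $(\textit{observed} - \textit{expected})^2/\textit{expected}$ contributions over all four cells of the table. A minimal sketch with illustrative counts (not our actual sample sizes):

```python
def pearson_chi2_2x2(n5, N5, n_tilde, T):
    """Pearson chi-squared statistic for the 2x2 table
    (saddle found / not found) x (parameter in U_5 / in its complement)."""
    observed = [
        [n5, n_tilde],                      # saddle-node found
        [N5 - n5, (T - N5) - n_tilde],      # no saddle-node found
    ]
    col_totals = [N5, T - N5]
    row_totals = [sum(row) for row in observed]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / T
            chi2 += (observed[i][j] - expected) ** 2 / expected
    return chi2

# Illustrative counts: 40/50 saddles inside U_5 versus 10/50 outside gives a
# balanced table with expected count 25 in every cell, so chi2 = 4 * 15**2 / 25 = 36.
assert abs(pearson_chi2_2x2(40, 50, 10, 100) - 36.0) < 1e-12
```

The resulting value is then compared against the critical values of the $\chi^2$ distribution with 1 degree of freedom.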
We compare $\xi$ with the upper-tail critical values of the chi-square distribution table with 1 degree of freedom, as found in the first line of https://www.stat.purdue.edu/$\sim$lfindsen/stat503/Chi-Square.pdf If our value is higher than the value on that page, we reject the null hypothesis at the significance level given at the top of that column. \subsubsection{Sampling fibers} \label{sec:saddle_nodes_on_fibers} \corrc EQ: I don't think we need this section anymore. <<>> \subsubsection{Existence of isolas} \label{sec:isolas} In Section \ref{} we introduced the saddle-node problem, and in Section \ref{} we presented our results on numerically locating such saddle nodes in the toggle switch model. The DSGRN machinery states that, in the limit $\hill \to \infty$, the center region should have three equilibria, while all other regions should only have one. This means that outside the center region we expect new equilibria generated by a saddle node to disappear again when the Hill coefficient grows towards infinity. Consequently, for any parameter outside the center region for which we were able to find one saddle node, we should be able to find two. Furthermore, we expect the behaviour of the equilibria to either be hysteretic, or to define an isola. The graphical difference between the two behaviours can be seen in Figure \ref{fig:isolas}. \begin{figure}[h] \begin{center} \includegraphics[width = 0.7\textwidth, trim= 1.5cm 14cm 4.25cm 1.7cm, clip]{sketch_isolas} \caption{On the right, expected behavior of a saddle-node in the center region $\mathcal{R}$. On the left, an isola, the expected behavior of equilibria when a saddle-node is detected outside the center region. The vertical axis is $\hill$, and the saddle-nodes are indicated by red diamonds.
Unstable equilibria are drawn as dotted lines.} \label{fig:isolas} \end{center} \end{figure} From the literature, only hysteretic behaviour is expected to be found in the toggle switch, but we were able to identify parameters showcasing both behaviours. In Figure \ref{fig:numerical_isolas}, we present a plot of the Hill coefficient against the $x_1$ coordinate of numerically found equilibria for two \red{three?} different parameters outside of the center region. They differ in that, for the first parameter, the equilibria form a continuous curve, while for the second, the Hill coefficient is not a function of $x_1$ and the equilibria form an isola. In both cases, the saddle nodes were first found numerically, and the equilibria were then computed for a range of values of the Hill coefficient. No numerical continuation has been implemented. \begin{figure}[h] \begin{center} \includegraphics[width = 0.45\textwidth]{figure_hysteresis.png} \includegraphics[width = 0.45\textwidth]{figure_isolas.png} \caption{ Varying Hill coefficients determining a change in the $x_1$ coordinate of the equilibria. On the left, hysteretic behaviour found at $p =[0.92436706, 0.05063294, 0.81250005, 0.07798304, 0.81613] $, corresponding to $(x,y) =(1.9, 0.975) $. On the right, an isola, found at $p = [0.64709401, 0.32790599, 0.94458637, 0.53012047, 0.39085124]$, corresponding to $(x,y) = (0.975,0.975)$.} \label{fig:numerical_isolas} \end{center} \end{figure} \subsection{Saddle-node bifurcations} In this section we briefly review the definition of a saddle-node bifurcation and illustrate its characterization as the solution of a root-finding problem. Suppose $f : X \times \Lambda \to X$ denotes a Hill model, with the notation already introduced. Let $\lambda_0$ denote a single scalar component of $\lambda$ which will serve as a distinguished parameter, and let $\lambda' \in \rr^{N + 4M-1}$ denote the vector of remaining parameters.
We let $f_{\lambda'} : X \times \rr \to X$ denote the $1$-parameter family of vector fields obtained by fixing $\lambda'$, i.e.~all parameters except $\lambda_0$ are fixed. For $\lambda_0 \in \rr$, $\hat x(\lambda_0) \in \rr^N$ is an equilibrium if it is a constant solution of Equation \eqref{eq:ODE_model}, or equivalently, if \begin{equation} f_{\lambda'} (\hat x(\lambda_0), \lambda_0) = 0. \end{equation} By the implicit function theorem, for any $\lambda_0$ such that $\hat x(\lambda_0)$ is a regular point of $f_{\lambda'}$, there exists a smooth $1$-dimensional curve of equilibria parameterized by $\lambda_0$. Intuitively, if a Hill model undergoes a saddle-node bifurcation at a parameter value, $\lambda_0^* \in\rr$, then on one side of the bifurcation we see two equilibria (one having an unstable manifold of dimension one higher than the other's) which collide and disappear at the bifurcation. Consequently, the implicit function theorem fails at a saddle-node bifurcation. We recall the following characterization of a saddle-node bifurcation here. A proof can be found in \cite{MR2224508}. \begin{theorem}[saddle-node bifurcation] \label{thm:saddle_node_bifurcation} Let $f_{\lambda'} : X \times \rr \to X$, with $X \subset \rr^N$, be a Hill model with a single free parameter, $\lambda_0$, and recall that $f_{\lambda'}$ depends smoothly on both $x$ and $\lambda_0$. Then, $f_{\lambda'}$ undergoes a {\em saddle-node bifurcation} at a point, $(\hat{x}, \hat{\lambda}_0)$, if the following conditions are satisfied. \begin{enumerate} \item $f_{\lambda'} (\hat{x}, \hat{\lambda}_0) = 0$. \item $\ker D_x f_{\lambda'} (\hat{x}, \hat{\lambda}_0) = \inspan(\setof{v})$ for some nonzero $v \in \rr^N$. \item $w_1 := D_{\lambda_0} f_{\lambda'}(\hat{x}, \hat{\lambda}_0) \neq 0$ and $w_1 \notin \image D_x f_{\lambda'} (\hat{x}, \hat{\lambda}_0)$.
\item $w_2 := D_x \paren*{D_x f_{\lambda'} (\hat{x}, \hat{\lambda}_0)v } \neq 0$ and $w_2 \notin \image D_x f_{\lambda'} (\hat{x}, \hat{\lambda}_0)$. \end{enumerate} \end{theorem} To numerically find saddle-node bifurcations, we want a characterization as solutions of a root-finding problem. This is provided by the following computational version of Theorem \ref{thm:saddle_node_bifurcation}. \begin{corollary} \label{cor:saddle_node_bifurcation} Let $f_{\lambda'}, \hat x$, and $\hat{\lambda}_0$ be as defined in Theorem \ref{thm:saddle_node_bifurcation}. Define $g_{\lambda'} : \rr^{2N+1} \to \rr^{2N+1}$ by the formula \[ g_{\lambda'}(x, v, \lambda_0) := \begin{pmatrix} f_{\lambda'}(x, \lambda_0)\\ D_x f_{\lambda'}(x, \lambda_0) v\\ v^Tv - 1 \end{pmatrix} \qquad x, v \in \rr^N, \lambda_0 \in \rr. \] Let $u = (x, v, \lambda_0) \in \rr^{2N+1}$ and suppose $\hat{u} := (\hat{x}, \hat{v}, \hat{\lambda}_0) \in \rr^{2N+1}$ is a root of $g_{\lambda'}$ satisfying \begin{enumerate} \item $D_u g_{\lambda'} (\hat u)$ is an isomorphism. \item Every nonzero eigenvalue of $D_x f_{\lambda'}(\hat x, \hat{\lambda}_0)$ has nonzero real part. \end{enumerate} Then $f_{\lambda'}$ undergoes a saddle-node bifurcation at $(\hat{x}, \hat{\lambda}_0)$ and $\ker D_x f_{\lambda'}(\hat{x}, \hat{\lambda}_0) = \inspan(\setof{\hat{v}})$. \end{corollary} \corrc SK: Should we prove this? There is a proof in ODE book but its not published yet. How to cite this? \\ EQ: let's reference an unpublished book, hopefully it's on arxiv or somewhere else. <<>> We refer to finding solutions satisfying the conditions in Corollary \ref{cor:saddle_node_bifurcation} as the {\em saddle-node bifurcation problem} and we refer to the function $g_{\lambda'}$ as the {\em saddle-node bifurcation map}.
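To make the root-finding formulation concrete, the following sketch assembles $g_{\lambda'}$ for the toggle switch with illustrative symmetric parameters ($\ell = 1/2$, $\delta = 1$, $\gamma_2 = 1$; not values from our experiments) and checks a root by hand. At this symmetric parameter the root below is in fact a degenerate (pitchfork-like) point at which condition 1 of Corollary \ref{cor:saddle_node_bifurcation} fails, which is precisely why the two extra conditions must still be verified after a root is located.

```python
import numpy as np

ELL, DELTA = 0.5, 1.0          # illustrative symmetric parameters, gamma_2 = 1

def hill(x, n):
    return ELL + DELTA / (1.0 + x**n)

def dhill(x, n):
    return -DELTA * n * x**(n - 1) / (1.0 + x**n)**2

def f(x, n):                   # symmetric toggle switch vector field
    return np.array([hill(x[1], n) - x[0], hill(x[0], n) - x[1]])

def Dxf(x, n):                 # its Jacobian in x
    return np.array([[-1.0, dhill(x[1], n)],
                     [dhill(x[0], n), -1.0]])

def g(u):
    """Saddle-node bifurcation map; u = (x_1, x_2, v_1, v_2, hill)."""
    x, v, n = u[:2], u[2:4], u[4]
    return np.concatenate([f(x, n), Dxf(x, n) @ v, [v @ v - 1.0]])

# At n = 4 the Jacobian at the equilibrium (1, 1) equals [[-1, -1], [-1, -1]],
# which annihilates v = (1, -1)/sqrt(2), so u_hat below is a root of g.
u_hat = np.array([1.0, 1.0, 1.0 / np.sqrt(2), -1.0 / np.sqrt(2), 4.0])
assert np.max(np.abs(g(u_hat))) < 1e-12
```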
\corrc SK: we should check for the conditions 1 and 2 in the code - don't you worry, I got it <<>> Observe that, given a candidate solution of the saddle-node problem which is a root of the saddle-node bifurcation map, it is trivial to check whether or not the two conditions required by Corollary \ref{cor:saddle_node_bifurcation} hold. Consequently, the main difficulty in solving the saddle-node problem amounts to finding roots of $g_{\lambda'}$. This is a nonlinear problem; however, there exists a variety of sophisticated methods for solving nonlinear root-finding problems. In the following sections, we discuss the numerical difficulties in finding a solution to the zero finding problem $g_{\lambda'} (u)= 0$. \subsection{Finding saddle-node bifurcations} In this section we describe our approach for finding saddle-node bifurcations globally for Hill models, where by globally we mean ... \corrq what does ``globally'' mean?<<that we want to use local information on the existence of a bifurcation to understand the dynamics sustained by the system away from the bifurcation. This can be considered as creating a bigger picture from local snapshots of information.>> There are two main difficulties which we must overcome. First, recall that after fixing all but one parameter as $\lambda'$, we must find roots of the corresponding saddle-node map $g_{\lambda'}$ defined in Corollary \ref{cor:saddle_node_bifurcation}. We do this numerically with a Newton-based root finder, which requires an initial guess, i.e.~a sufficiently close approximation to an actual root. This presents the first difficulty: how do we choose the initial guess? We address this question in greater detail in Section \ref{sec:finding_eq}. The second difficulty concerns the question of how we should choose $\lambda'$. Recall that this is a requirement in order to even define the saddle-node map, $g_{\lambda'}$.
However, for a Hill model $\lambda'$ can be considered as a vector in some $N+4M-1$ dimensional subset of $\Lambda$, and for an arbitrary choice, there is no reason to expect $f_{\lambda'}$ to undergo a saddle-node bifurcation with respect to the remaining parameter $\lambda_0$. If it does not, then the associated saddle-node map will not have any roots to find. This difficulty is generally far more serious than the first, since the parameter spaces of interest are high dimensional. It is well known that finding bifurcations in dynamical systems with high-dimensional parameter spaces is an extremely hard problem. The main novelty in this work lies in using DSGRN to find regions of $\Lambda$ in which to search so that, with high probability, we find saddle-node bifurcations. \label{sec:finding_saddle_nodes} \corrq SK: First describe what SN bifurcations look like in DSGRN. Then describe how we numerically search through regions identified by DSGRN. <<>> \subsection{Finding equilibria} \label{sec:finding_eq} In this section we consider the problem of finding equilibria for Hill models. Throughout this section we suppose $\lambda \in \Lambda$ is fixed, $f_\lambda : X \to X$ is a Hill model with $X \subset \rr^N$, and $\hat x \in \rr^N$ is an equilibrium solution, i.e.~$f_\lambda(\hat x) = 0$. We introduce two algorithms for solving this problem. \subsubsection{The general algorithm} We assume implementations of two black box algorithms. The first is a root finding implementation, denoted by {\tt FindRoot}, which takes a function, $f : \rr^N \to \rr^N$, and an initial guess, $x_0 \in \rr^N$, as input and attempts to identify a root of $f$ near $x_0$. When successful, it returns $\hat x \in \rr^N$ satisfying $\norm{f(\hat x)} \approx 0$. For the computations in this work we used a Newton-based root finder, though other options exist.
Additionally, we assume a given method, denoted by {\tt Unique}, for identifying pairs of distinct vectors in $\rr^N$ which approximate the same root of $f$. That is, if $\hat x_1, \hat x_2$ satisfy \[ \norm{f(\hat x_1)} \approx 0 \approx \norm{f(\hat x_2)} \qquad \text{and} \qquad \norm{\hat x_1 - \hat x_2} \approx 0, \] then {\tt Unique}$(\hat x_1, \hat x_2) = \hat x_1$. Otherwise, $\hat x_1, \hat x_2$ are {\em approximately} distinct. If $\hat x$ is an array of vectors in $\rr^N$, then {\tt Unique}$(\hat x)$ returns a similar array in which each pair of vectors is approximately distinct. With these in hand, our general algorithm for finding equilibria for $f$ begins by fixing a rectangular subset of $X$ of the form \[ R := \prod_{i = 1}^{N} [a_i, b_i], \qquad [a_i, b_i] \subset (0, \infty) \quad \text{for } 1 \leq i \leq N. \] Each interval in this product is partitioned into $k$ subintervals bounded by $k+1$ uniformly spaced nodes. The product of these nodes forms a grid of points in $\rr^N$ which covers $R$. Each of the $(k+1)^N$ points in this grid is taken as an initial condition for {\tt FindRoot}, which attempts to return a candidate equilibrium nearby. The algorithm returns an array containing such candidates which are not identified as equivalent by {\tt Unique}. The pseudocode is described in Algorithm \ref{alg:general_equilibria}.
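The same search can be sketched in a few lines of Python, with {\tt FindRoot} played by a plain Newton iteration and {\tt Unique} by a pairwise distance check (an illustration under stated assumptions, not our implementation; the final check uses the symmetric toggle switch with Hill coefficient 2, which is monostable, so exactly one equilibrium should survive deduplication):

```python
import itertools
import numpy as np

def find_root(f, jac, x0, tol=1e-10, max_iter=50):
    """Plain Newton iteration; returns an approximate root or None on failure."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            return x
        try:
            x = x - np.linalg.solve(jac(x), fx)
        except np.linalg.LinAlgError:
            return None
    return None

def unique(points, tol=1e-6):
    """Discard vectors approximating a root that is already in the list."""
    kept = []
    for p in points:
        if all(np.linalg.norm(p - q) > tol for q in kept):
            kept.append(p)
    return kept

def hill_equilibria(f, jac, bounds, k):
    """Seed Newton from a uniform grid of (k+1)^N nodes over the rectangle `bounds`."""
    axes = [np.linspace(a, b, k + 1) for (a, b) in bounds]
    candidates = [r for x0 in itertools.product(*axes)
                  if (r := find_root(f, jac, x0)) is not None]
    return unique(candidates)

# Illustrative check: symmetric toggle switch, ell = 1/2, delta = 1, hill = 2.
H = lambda x: 0.5 + 1.0 / (1.0 + x**2)
dH = lambda x: -2.0 * x / (1.0 + x**2)**2
f = lambda x: np.array([H(x[1]) - x[0], H(x[0]) - x[1]])
jac = lambda x: np.array([[-1.0, dH(x[1])], [dH(x[0]), -1.0]])
eq = hill_equilibria(f, jac, [(0.4, 1.6), (0.4, 1.6)], k=4)
assert len(eq) == 1 and np.allclose(eq[0], [1.0, 1.0])
```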
\begin{algorithm} \caption{General algorithm} \label{alg:general_equilibria} \begin{algorithmic}[1] \Function{{\tt HillEquilibria}}{$f, R, k$} \State $\hat{x} \gets ()$ \Comment{Initialize equilibrium array} \State $\Delta_i \gets \frac{b_i - a_i}{k}$ \State $u_i \gets (a_i, a_i + \Delta_i, \dotsc, a_i + (k-1) \Delta_i, b_i)$ \Comment{Discretize factors} \For{$\kappa \in \setof{1,\dotsc, k+1}^N$} \State $x_0 \gets (u_{1, \kappa_1}, \dotsc, u_{N, \kappa_N})$ \State $r_\kappa \gets {\tt FindRoot}(f, x_0)$ \Comment {Returns a candidate when it converges} \If{$r_\kappa$} $\hat x.{\tt Append}(r_\kappa) $ \Comment {Append candidate equilibrium} \EndIf \EndFor \State \textbf{return} {\tt Unique}$(\hat x) $ \EndFunction \end{algorithmic} \end{algorithm} \subsubsection{The bootstrap algorithm}\label{ss:bootstrap} In this section we define a second algorithm which exploits the structure of Hill models in specific cases to find equilibria more reliably and efficiently than the Newton-based algorithm described above. The main idea is to begin with an initial rectangular subset (product of intervals) of $X$ which is an enclosure for all equilibria. We obtain a tighter enclosure by restricting evaluation of the Hill model to this rectangular subset, and this procedure is iterated. At each step we improve the bound by bootstrapping a previous weaker bound. This can be done efficiently by encoding these bounds into a map of twice the dimension of $f$. We begin with a definition. \begin{definition} \label{def:monotone_factorization} We say that a function, $g : \rr^N \to \rr$, has a {\em monotone factorization} if $g$ is bounded and can be factored as $g = g^+ g^-$ where $g^+$ is strictly increasing with respect to each coordinate and $g^-$ is strictly decreasing with respect to each coordinate.
\end{definition} For the remainder of the section, we let $\cH_i : X \to \rr$ denote the $i^{\rm th}$ coordinate of the nonlinear term of $f_\lambda$, and we assume that for each $1 \leq i \leq N$, $\cH_i$ has a monotone factorization. Specifically, for each $i \in \setof{1,\dotsc, N}$ we assume that $\cH_i : \rr^N \to \rr$ has the form \[ \cH_i (x) = \cH_i^+(x) \cH_i^-(x) \] where $\cH_i^+$ is increasing and $\cH_i^-$ is decreasing. Observe that this assumption induces constraints on our Hill models; however, for many biological applications this is not a concern. In practice, a biologically motivated choice for $p_i$ is to choose all genes which up-regulate gene $i$ to be included in the first summand. A Hill model with this assumption satisfies the monotone factorization constraint. \red{reference?} This is a special case of the more typical choice in which each summand of $p_i$ is composed of Hill functions with the same sign. Without loss of generality, we can reorder the summands so that $\setof{1,\dotsc, q_i'}$ denotes the indices of the summands with positive terms and similarly, $\setof{q'_i+1, \dotsc, q_i}$ denotes the indices of the summands with negative terms. Thus, we can define $\cH_i^+$ and $ \cH_i^-$ by the formulas \[ \cH_i^+(x) = \prod_{j = 1}^{q_i'} \sum_{k \in I_{i,j}} H_{i,k}^+(x_k) \qquad \cH_i^-(x) = \prod_{j = q_i' + 1}^{q_i} \sum_{k \in I_{i,j}} H_{i,k}^-(x_k). \] Since each Hill function is bounded, it follows that $\cH_i^+, \cH_i^-$ are bounded as well and in fact we have the explicit bounds \[ \cH_i^+(0)\cH_i^-(\infty) \leq \cH_i(x) \leq \cH^+_i(\infty) \cH^-_i(0) \qquad \forall x \in \rr^N \] where we define \[ \cH_i^{\pm}(\infty) := \lim\limits_{t \to \infty} \cH_i^{\pm}(t (1, 1, \dotsc, 1)). \] Therefore, from the definition of $p_i$, it follows that $\cH_i = \cH_i^+ \cH_i^-$ is a monotone factorization.
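To illustrate the bootstrap iteration that Theorem \ref{thm:bootstrap_eqbounds} below makes precise, the following sketch iterates the toggle switch bootstrap map with illustrative symmetric parameters ($\ell = 1/2$, $\delta = 1$, $\gamma_2 = 1$, $\hill = 2$, not values from our experiments), for which the model is monostable; the enclosing rectangle then collapses onto the unique equilibrium:

```python
import numpy as np

def H(x):
    """Illustrative negative Hill function: ell = 1/2, delta = 1, theta = 1, hill = 2."""
    return 0.5 + 1.0 / (1.0 + x**2)

def Phi(u):
    """Bootstrap map for the toggle switch with gamma_2 = 1; u = (a1, a2, b1, b2)."""
    a1, a2, b1, b2 = u
    return np.array([H(b2), H(b1), H(a2), H(a1)])

# Initial enclosure (alpha^(0), beta^(0)) = (ell, ell, ell + delta, ell + delta).
u = np.array([0.5, 0.5, 1.5, 1.5])
for _ in range(200):
    u = Phi(u)

# Monostable case: the rectangle degenerates to the unique equilibrium (1, 1).
assert np.allclose(u, 1.0)
```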
\begin{theorem} \label{thm:bootstrap_eqbounds} Define $\Phi : \rr^{2N} \to \rr^{2N}$ by the formulas \begin{align} \Phi_i(\alpha, \beta) = \frac{1}{\gamma_i} \cH_i^+(\alpha) \cH_i^-(\beta) & \qquad \alpha, \beta \in \rr^N \\ \Phi_{N+i}(\alpha, \beta) = \frac{1}{\gamma_i} \cH_i^+(\beta) \cH_i^-(\alpha) & \qquad \text{for } 1 \leq i \leq N. \end{align} Define $\alpha^{(0)}, \beta^{(0)} \in \rr^{N}$ coordinate-wise by the formulas \begin{align} \alpha^{(0)}_i & := \frac{1}{\gamma_i} \cH_i^+(0) \cH_i^-(\infty)\\ \beta^{(0)}_i & := \frac{1}{\gamma_i} \cH_i^+(\infty) \cH_i^-(0) \end{align} for $1 \leq i \leq N$, and we concatenate these when convenient by writing $u^{(0)} =( \alpha^{(0)}, \beta^{(0)} ) \in \rr^{2N}$. The following are true. \begin{enumerate} \item $ \lim\limits_{n \to \infty} \Phi^{(n)}(u^{(0)}) = \hat u := (\hat \alpha, \hat \beta) \in \rr^{2N}$ where $\hat u$ is a fixed point for $\Phi$. \item If $\hat x$ is any equilibrium for $f$, then $\hat x \in \prod_{i = 1}^N [\hat \alpha_i, \hat \beta_i]$, i.e.~we have the bounds \[ \hat \alpha_i \leq \hat x_i \leq \hat \beta_i \qquad \text{for } 1 \leq i \leq N. \] \end{enumerate} \end{theorem} \begin{proof} Observe that for $1 \leq i \leq N$ we have the lower bounds \begin{align*} \Phi_i(u^{(0)}) & = \frac{1}{\gamma_i} \cH_i^+\paren*{\alpha^{(0)}} \cH_i^-\paren*{\beta^{(0)}} \\ & > \frac{1}{\gamma_i} \cH_i^+\paren*{0} \cH_i^-\paren*{\infty} \\ & = \alpha^{(0)}_i.
\end{align*} Similarly, we have the upper bounds \begin{align*} \Phi_{N+i}(u^{(0)}) & = \frac{1}{\gamma_i} \cH_i^+\paren*{\beta^{(0)}} \cH_i^-\paren*{\alpha^{(0)}} \\ & < \frac{1}{\gamma_i} \cH_i^+\paren*{\infty} \cH_i^-\paren*{0} \\ & = \beta^{(0)}_i. \end{align*} If we write $u^{(1)} = (\alpha^{(1)}, \beta^{(1)}) := \Phi(u^{(0)})$, then the above bounds can be written as \[ \alpha^{(1)}_i > \alpha^{(0)}_i \qquad \text{and} \qquad \beta^{(1)}_i < \beta^{(0)}_i \qquad \text{for } 1 \leq i \leq N. \] Let $n \in \nn$, and let $u^{(n)} := (\alpha^{(n)}, \beta^{(n)}) = \Phi^{(n)}(u^{(0)})$ denote the $n^{\rm th}$ term in the orbit of $u^{(0)}$. Inductively, $u^{(n)}$ satisfies \[ \alpha^{(n)}_i > \alpha^{(n-1)}_i \qquad \text{and} \qquad \beta^{(n)}_i < \beta^{(n-1)}_i \qquad \text{for } 1 \leq i \leq N. \] Then, since $\gamma_i > 0$ and $\cH_i^+, \cH_i^-$ are monotone increasing and decreasing respectively for each $1 \leq i \leq N$, we have the bounds \begin{align*} \alpha^{(n+1)}_i & = \Phi_i(u^{(n)}) \\ & = \frac{1}{\gamma_i} \cH_i^+\paren*{\alpha^{(n)}} \cH_i^-\paren*{\beta^{(n)}} \\ & > \frac{1}{\gamma_i} \cH_i^+\paren*{\alpha^{(n-1)}} \cH_i^-\paren*{\beta^{(n-1)}} \\ & = \alpha^{(n)}_i \end{align*} and \begin{align*} \beta_i^{(n+1)} & = \Phi_{N+i}(u^{(n)}) \\ & = \frac{1}{\gamma_i} \cH_i^+\paren*{\beta^{(n)}} \cH_i^-\paren*{\alpha^{(n)}} \\ & < \frac{1}{\gamma_i} \cH_i^+\paren*{\beta^{(n-1)}} \cH_i^-\paren*{\alpha^{(n-1)}} \\ & = \beta^{(n)}_i. \end{align*} It follows that for each $1 \leq i \leq N$, $\setof{\alpha^{(j)}_i}$ is a monotonically increasing sequence which, by the boundedness of $\cH_i^\pm$, is bounded above; therefore this sequence converges. Similarly, each $\setof{\beta^{(j)}_i}$ is a monotonically decreasing sequence which is bounded below and thus also converges.
We will denote these limits by $\hat \alpha, \hat \beta \in \rr^N$ where \[ \hat \alpha_i = \lim\limits_{n\to \infty} \alpha^{(n)}_i \qquad \text{and} \qquad \hat \beta_i = \lim\limits_{n\to \infty} \beta^{(n)}_i. \] Letting $\hat u = (\hat \alpha, \hat \beta)$, it follows by continuity of $\Phi$ that $\Phi(\hat u) = \hat u$, which completes the proof of the first claim. To prove the second claim, we start by defining $F : X \to X$ by the formula \[ F_i(x) = \frac{1}{\gamma_i }\cH_i(x) \qquad 1 \leq i \leq N, \] and for each $n \in \nn$, define \[ \cR^{(n)} := \prod_{i = 1}^{N} [\alpha^{(n)}_i, \beta^{(n)}_i]. \] Observe that if $\hat x$ is an equilibrium solution for $f_\lambda$, then $\hat x$ is a fixed point of $F$. Therefore, it suffices to prove that any fixed point of $F$ is contained in $\cR^{(n)}$ for all $n \in \nn$. We start by observing that from the definitions of $F$ and $u^{(0)}$, we have \[ \image F = \cR^{(0)} \] so it follows that $\hat x \in \cR^{(0)}$. Inductively, suppose that $n \in \nn$ is fixed and $\hat x \in \cR^{(n-1)}$. In particular, we assume that \begin{equation} \label{eq:induction_bound} \alpha_i^{(n-1)} \leq \hat{x}_i \leq \beta_i^{(n-1)} \qquad \text{for} \ 1 \leq i \leq N. \end{equation} For each $1 \leq i \leq N$, $\cH_i^+$ and $\cH_i^-$ are monotone increasing and decreasing respectively, and therefore Equation \eqref{eq:induction_bound} combined with the definition of $\Phi$ implies that \[ \alpha_i^{(n)} = \frac{1}{\gamma_i} \cH_i^+\paren*{\alpha^{(n-1)}} \cH_i^-\paren*{\beta^{(n-1)}} \leq F(\hat x)_i \leq \frac{1}{\gamma_i} \cH_i^+\paren*{\beta^{(n-1)}} \cH_i^-\paren*{\alpha^{(n-1)}} = \beta_i^{(n)} \] holds for all $1 \leq i \leq N$. This proves that if $F(\hat x) = \hat x$ and $\hat x \in \cR^{(n-1)}$, then $\hat x \in \cR^{(n)}$ as well.
\end{proof} In fact, we observe that the sequence of rectangles constructed in the proof of Theorem \ref{thm:bootstrap_eqbounds} actually satisfies $\image F\big|_{\cR^{(n-1)}} = \cR^{(n)}$ for all $n \in \nn$. This motivates the following algorithm for bounding equilibria. \begin{algorithm} \caption{Bootstrap algorithm} \label{alg:bootstrap_equilibria} \begin{algorithmic}[1] \Function{\tt RootEnclosure}{$f$} \State $u \gets \lambda$ \Comment{Initialize orbit as a function of $\lambda$} \State $v \gets \Phi(u)$ \While{$\norm{u - v} > \epsilon$} \State $u \gets v$ \State $v \gets \Phi(u)$ \EndWhile \State $R \gets \prod_{i = 1}^N [v_i, v_{N+i}]$ \State \textbf{return} $R$ \EndFunction \Function{\tt MonotoneHillEquilibria}{$f, k$} \State $R \gets ${\tt RootEnclosure($f$)} \State \textbf{return} {\tt HillEquilibria$(f, R, k)$} \EndFunction \end{algorithmic} \end{algorithm} \corrc EQ: $\lambda$ is not defined on the first line of the algorithm is that all right?<<>> \corrc EQ: is this second algorithm really necessary in pseudo-code? seems a bit overkill<<>> \subsection{Isolating equilibria} \corrc EQ: the Newton's method <<>> \corrl SK: << || Short section on radii polynomials and their use in isolating equilibria. Probably this should be merged with the Newton's method subsection. >> In our search for equilibria, we often found it hard to determine whether two results returned by Newton's method were two numerically different representations of the same analytic solution, or approximations of two distinct analytic solutions. To distinguish between the two numerically, we used a numerical version of the radii polynomial approach. This method has been used in the validated numerics community \cite{}, and, when carefully implemented, it can prove the existence and uniqueness of an analytic solution in an explicitly computed neighborhood of the numerical solution.
In this paper, we present an approximate implementation of this method; thus the results we obtain do not have the status of proofs, as the ones in \cite{} do, but they can be used as a strong indication to merge or distinguish two numerical solutions. The radii polynomial approach is built around the contraction mapping theorem. In an abstract setting, we consider a zero-finding problem $F(x) = 0$. In our case, $F$ is the right hand side of the ODE \eqref{eq:ODE_model}. Let $\hat x$ be an approximation of the solution, such that $F(\hat x)\approx 0$, and let $A \approx DF(\hat x)^{-1}$. Then we define a Newton-Kantorovich operator $$ T(x) = x - AF(x), $$ with the idea that $T$ is likely to be a contraction in a neighborhood of $\hat x$. To estimate the radius of contraction of $T$, we make use of the following theorem. \begin{theorem} Let $T$ be as defined above, and let $$ Y \geq \| T(\hat x) - \hat x \|, \qquad Z(r) \geq \max_{b\in B_1(0)} \| DT(\hat x+ rb)\|. $$ Define the radii polynomial as \red{norms not properly defined, should we?} \begin{equation}\label{eq:radii_pol} p(r) = Y + (Z(r) - 1) r. \end{equation} If there exists an $r^*$ such that $p(r^*)<0$, then there exists a unique $x_{sol}$ such that $F(x_{sol}) = 0$ and $\|\hat x - x_{sol}\|<r^*$. \end{theorem} For the proof we refer to \cite{}. In our case, we notice that $DT(x) = I - ADF(x)$, thus $$ DT(\hat x+ rb)=I - ADF(\hat x+rb) = I - ADF(\hat x) + A\left(DF(\hat x) - DF(\hat x + rb)\right) $$ and we split the computation of $Z(r)$ into the computation of $ Z_0 \geq \|I - ADF(\hat x)\| $ and $ Z_1(r) \geq \max_{b\in B_1(0)} \|A\left(DF(\hat x) - DF(\hat x + rb)\right)\|$. It is worth noting that $$ \max_{b\in B_1(0)} \|A\left(DF(\hat x) - DF(\hat x + rb)\right) \| \leq \max_{z,b\in B_1(0)}\|AD^2F(\hat x + rz)b\|r. $$ We then approximate $ \max_{z,b\in B_1(0)}\|AD^2F(\hat x + rz)b\| \approx \|AD^2F(\hat x)\|$ by assuming that the second derivative of $F$ is almost constant.
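As a toy illustration of this test, consider the hypothetical one-dimensional problem $F(x) = x^2 - 2$, unrelated to the Hill systems above; here $D^2F$ really is constant, so the approximate $Z_1$ bound is exact, and the quantities $Y$, $Z_0$, $Z_1$ and the radii polynomial can be computed in a few lines.

```python
import math

# hypothetical 1D zero-finding problem F(x) = x**2 - 2, exact root sqrt(2)
F = lambda x: x**2 - 2.0
DF = lambda x: 2.0 * x
D2F = 2.0  # second derivative is constant, so the Z_1 estimate is exact

x_hat = 1.4142          # numerical approximation of the root
A = 1.0 / DF(x_hat)     # approximate inverse of DF(x_hat)

Y = abs(A * F(x_hat))            # defect bound: |T(x_hat) - x_hat|
Z0 = abs(1.0 - A * DF(x_hat))    # zero here, since A inverts DF(x_hat) exactly
Z1 = lambda r: abs(A * D2F) * r  # bound on the remaining part of |DT| near x_hat

# radii polynomial p(r) = Y + (Z(r) - 1) r with Z(r) = Z0 + Z1(r)
p = lambda r: Y + (Z0 + Z1(r) - 1.0) * r

r_star = 1e-3
assert p(r_star) < 0                         # contraction on the ball of radius r_star
assert abs(x_hat - math.sqrt(2.0)) < r_star  # the true root lies inside the ball
```

In this setting, any second numerical root within distance $r^*$ of $\hat x$ must approximate the same analytic solution and can be merged.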
While the approximation is crude, we use this method only to weed out numerical duplicates, so we consider it sufficiently precise for this purpose. \corrc EQ: add info on how to remove numerical duplicates<<>> \subsection{Bifurcations and DSGRN} As indicated in Section~\ref{sec:dsgrn}, given a regulatory network: explain the general criterion we use to identify the potential bifurcations. A particular case is the saddle-node bifurcation. \subsection{Saddle-node bifurcations} Here we should state the computational version as an abstract theorem. From this theorem it is clear what data we need. We now cite the github page and technical paper where the reader can find code for Hill functions. \section{Introduction} In applications, it might be interesting to generate data that samples multiple areas of interest. We assume the union of these areas covers $\rr^N$. In particular, we want our sampling to satisfy two conditions: \begin{itemize} \item the points are iid, \item each area is covered by roughly the same number of points. \end{itemize} We will make these requirements more precise with the introduction of some definitions. This problem appears in a variety of situations, but we will concentrate in particular on the output of DSGRN. This program takes as input a dynamical system with parameters in $\rr^N_+$ and returns a tiling of $\rr^N_+$ by semi-algebraic sets such that each semi-algebraic set sustains the same dynamics. It is then of interest to sample the dynamics, hence to create a large number of parameters that covers the given semi-algebraic sets. To be able to sample all dynamics detected by DSGRN, we require the dataset of parameters to cover each parameter region with (almost) equal likelihood. \section{Background} Let $\rr^N_+$ be tiled by semi-algebraic sets $\mathcal{P} = \{p_1,p_2,\dots, p_n\}$. We expect that each semi-algebraic set is non-empty, following the results by [Shane and Lun].
Then, we want to create a distribution $\mathcal{F}$ such that if $x$ is sampled from $\mathcal{F}$, written $x\sim \mathcal{F}$, then $\mathbb{P}(x\in p_i) \approx \frac{1}{n}$ for all $i$. We define $R: \rr^N_+ \rightarrow \{1, \dots, n\}$ as the function that associates with each point $x\in \rr^N_+$ the unique index $i$ such that $x \in p_i$. Since $\mathcal{P}$ tiles $\rr^N_+$, the index $i$ is unique: $p_i \cap p_j = \emptyset$ for any $i\neq j$. \section{Idea} We assume that, given any value $x\in \rr^N_+$, it is possible to compute $R(x)$, and we expect this computation to be efficient. Given a distribution $\mathcal{F}$ and a sample $x_1, \dots, x_k$ of iid points drawn from $\mathcal{F}$, we can count how many of these samples belong to each semi-algebraic set. First we define the boolean function $$ b(x,i) = \begin{cases} 1 \quad & \text{if } R(x) = i,\\ 0 & \text{otherwise,} \end{cases} $$ and then the counting function $$ c_i(x_1, \dots, x_k) = \sum_{j=1}^k b(x_j, i). $$ For ease of writing, we combine all counting functions in an array $C(x_1, \dots, x_k) = ( c_1, c_2, \dots, c_n)$, and we define the \emph{scoring} of the distribution $\mathcal{F}$ as \begin{equation}\label{eq:scoring} S(\mathcal{F} ) = \lim_{k\rightarrow\infty} \frac{\min(C(x_1,\dots, x_k))}{\max(C(x_1,\dots, x_k))}. \end{equation} It is possible to view this scoring in another way. Considering the likelihood that $R(x)$ equals a given $i$, we can view $R$ itself as a random variable, distributed according to an unknown distribution over $\{1, \dots, n\}$. Then, we can consider the discrete probability distribution $\mathcal{F}_n$ of $R$ over $\{1, \dots, n\}$, that is, a list of positive real values $v_1, v_2, \dots, v_n$ summing to 1. The scoring is then the ratio of the smallest probability to the largest. Note that $\mathcal{F}_n$ depends both on the probability distribution $\mathcal{F}$ and on the tiling $\mathcal{P}$.
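The counting functions and the resulting score are straightforward to compute. Here is a minimal sketch on a toy tiling of $\rr^2_+$ into four regions split at the thresholds $x_i = 1$; the tiling, sample sizes, and sampling distributions are illustrative.

```python
import numpy as np

# toy "tiling" of R^2_+ into n = 4 regions separated by the thresholds x_i = 1;
# R(x) returns the index of the region containing x
def R(x):
    return 2 * int(x[0] >= 1.0) + int(x[1] >= 1.0) + 1

def counts(samples, n=4):
    # counting functions c_1, ..., c_n collected in one array C
    c = np.zeros(n, dtype=int)
    for x in samples:
        c[R(x) - 1] += 1
    return c

def score(samples, n=4):
    # empirical score: smallest count over largest count, in [0, 1]
    c = counts(samples, n)
    return c.min() / c.max()

rng = np.random.default_rng(1)
k = 20000
balanced = rng.uniform(0.0, 2.0, size=(k, 2))  # equal mass in each region
skewed = rng.uniform(0.0, 4.0, size=(k, 2))    # the region x_1, x_2 >= 1 dominates

assert score(balanced) > score(skewed)
```

A distribution adapted to the tiling yields a score close to $1$, while a mismatched one is penalized by its least-covered region.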
It is possible to approximate the scoring of the distribution $\mathcal{F}$ by truncating the limit in \eqref{eq:scoring}, thus considering a finite sample $x_1, \dots, x_k$. We call this the \emph{computable score} of size $k$, with the notation $S_k(\mathcal{F})$. We have thus constructed a relationship from a semi-algebraic tiling and a distribution to a computable quantity $S_k \in [0,1]$. Note that the computable score of size $k$ is itself a random variable, whose variance tends to 0 as $k$ tends to infinity. Thus, to have a reliable representation of the scoring of $\mathcal{F}$, one needs to choose $k$ large enough. Now, the problem of finding the ``best'' distribution, as discussed in the introduction, becomes the problem of maximizing the scoring of the distribution: we have at hand a method to rank distributions. Consider now how the distribution $\mathcal{F}$ could itself be determined by an array of coefficients, such as mean and variance for the Gaussian distribution. If that is the case, we can reformulate the problem of finding the best distribution into the problem of finding the best coefficients. Let us formulate this idea in a more rigorous fashion. To span $\rr_+$, let us consider the Fisher distribution, also called the $F$-distribution. The Fisher distribution depends on two coefficients, usually denoted $d_1$ and $d_2$. Then, to sample $\rr^N_+$, we will use $N$ independent Fisher distributions, each with its own coefficients $(d_1^j,d_2^j)$, for $j = 1,\dots, N$. The coefficients $d = \left((d_1^j,d_2^j)_{j = 1,\dots, N}\right)\in \rr^{2N}$ fully determine the probability distribution on $\rr^N_+$. Thus, we associate each $d\in \rr^{2N}$ with the computable score of size $k$ of the Fisher distribution defined by $d$.
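This association is easy to sketch with {\tt numpy}'s built-in Fisher sampler; the toy tiling of $\rr^2_+$ split at the thresholds $x_i = 1$ and all concrete parameter values below are illustrative.

```python
import numpy as np

def fisher_product_sample(d, k, rng):
    # draw k iid points from the product of N Fisher distributions,
    # one F(d1, d2) per coordinate; d has length 2N
    N = len(d) // 2
    cols = [rng.f(d[2 * j], d[2 * j + 1], size=k) for j in range(N)]
    return np.stack(cols, axis=1)

def f_score(d, R, n, k=5000, seed=0):
    # computable score S_k of the Fisher distribution determined by d
    rng = np.random.default_rng(seed)
    samples = fisher_product_sample(d, k, rng)
    c = np.zeros(n, dtype=int)
    for x in samples:
        c[R(x) - 1] += 1
    if c.max() == 0:
        return 0.0
    return c.min() / c.max()

# toy tiling of R^2_+ into n = 4 regions split at the thresholds x_i = 1
R = lambda x: 2 * int(x[0] >= 1.0) + int(x[1] >= 1.0) + 1

s = f_score(np.array([5.0, 5.0, 5.0, 5.0]), R, n=4)
assert 0.0 <= s <= 1.0
```

Feeding $d \mapsto 1 - f(d)$ to a derivative-free optimizer then searches for coefficients adapted to the tiling.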
We have thus created a function \begin{align*} f : \rr^{2N} &\rightarrow [0,1]\\ d&\mapsto \frac{\min(C(x_1,\dots, x_k))}{\max(C(x_1,\dots, x_k))}, \quad x_1, \dots, x_k \sim F(d_1, d_2) \times F(d_3, d_4) \times \dots \times F(d_{2N-1}, d_{2N}) \end{align*} that ranks distributions. The final element of our method consists of applying a minimization algorithm to $1-f$ on $\mathbb{R}^{2N}$ from a random starting point. Note that, for $k$ tending to infinity, $f$ is a smooth function, but for finite $k$ it is a random function and thus has no expected smoothness. Therefore, when considering which minimization algorithm to apply to $1-f$, we must take into account the limitations imposed by the form of $f$ itself, in particular the lack of derivatives of any order. Assuming $k$ is large enough that the variance of $f$ is small, we can use the Nelder--Mead algorithm. This algorithm was developed specifically for $C^0$ functions and does not require knowledge of a derivative to run. In our implementation, we used the Nelder--Mead method built into \texttt{scipy.optimize} to run our computations. Having shifted the problem from finding a random distribution to finding optimal coefficients defining a random distribution, it is then possible to consider a variety of different distributions. In our case, the Fisher distribution is interesting because it is defined over the unbounded interval $[0,\infty)$, which is the space our unknown parameters live in. The main limitation of the Fisher distribution is that it is only a one-dimensional distribution. To include correlations within our generated variables, we need to consider some other distribution. In this paper, we consider the square of a multivariate Gaussian distribution.
This distribution takes into consideration the correlation between the various dimensions and is properly defined over $\mathbb{R}^N_+$, but the unknowns now need to include the correlation matrix, making the number of unknowns $N^2+N$. This has clear computational implications, but it allows us to avoid ``outliers'', that is, generated parameters whose elements have different orders of magnitude. \section{Tests and conclusion} \subsection{DSGRN - again!} Why are we interested in semi-algebraic sets? \subsection{The 9 regions of the Toggle Switch} Why the toggle switch? \subsection{Extensive testing on the Toggle Switch} At least: \begin{itemize} \item comparison with the uniform distribution, \item possible use in determining the relative areas of the regions, \item different starting points, \item discuss lack of convergence. \end{itemize} \subsection{Another example: the Soggle Twitch and a fast overview of the results} \subsection{Extreme result} Consider a situation where finding even just one sample in each region would take thousands of samples: then it is important to choose $k$ wisely. Show off a bit! \subsection{Equilibria from DSGRN} \subsection{A general numerical algorithm} We assume implementations of two black box algorithms. The first is a root finding implementation, denoted by {\tt FindRoot}, which takes a function, $f : \rr^N \to \rr^N$, and an initial guess, $x_0 \in \rr^N$, as input and attempts to identify a root of $f$ near $x_0$. When successful it returns $\hat x \in \rr^N$ satisfying $\norm{f(\hat x)} \approx 0$. For the computations in this work we used a Newton-based root finder, but other options exist. Additionally, we assume a given method, denoted by {\tt Unique}, for identifying pairs of distinct vectors in $\rr^N$ which approximate the same root of $f$. That is, if $\hat x_1, \hat x_2$ satisfy \[ \norm{f(\hat x_1)} \approx 0 \approx \norm{f(\hat x_2)} \qquad \text{and} \qquad \norm{\hat x_1 - \hat x_2} \approx 0, \] then {\tt Unique}$(\hat x_1, \hat x_2) = \hat x_1$.
Otherwise, $\hat x_1, \hat x_2$ are {\em approximately} distinct. If $\hat x$ is an array of vectors in $\rr^N$, then {\tt Unique}$(\hat x)$ returns a similar array in which each pair of vectors is approximately distinct. With these in hand, our general algorithm for finding equilibria of $f$ begins by fixing a rectangular subset of $X$ of the form \[ R := \prod_{i = 1}^{N} [a_i, b_i], \qquad [a_i, b_i] \subset (0, \infty) \quad \text{for all } 1 \leq i \leq N. \] Each interval in this product is partitioned into $k$ subintervals bounded by $k+1$ uniformly spaced nodes. The product of these nodes forms a grid of points in $\rr^N$ which covers $R$. Each of the $(k+1)^N$ points in this grid is taken as an initial condition for {\tt FindRoot}, which attempts to return a candidate equilibrium nearby. The algorithm returns an array containing those candidates which are not identified as equivalent by {\tt Unique}. The pseudocode is described in Algorithm \ref{alg:general_equilibria}. \begin{algorithm} \caption{General algorithm} \label{alg:general_equilibria} \begin{algorithmic}[1] \Function{{\tt HillEquilibria}}{$f, R, k$} \State $\hat{x} \gets ()$ \Comment{Initialize equilibrium array} \State $\Delta_i \gets \frac{b_i - a_i}{k}$ \State $u_i \gets (a_i, a_i + \Delta_i, \dotsc, a_i + (k-1) \Delta_i, b_i)$ \Comment{Discretize factors} \For{$\kappa \in \setof{1,\dotsc, k+1}^N$} \State $x_0 \gets (u_{1, \kappa_1}, \dotsc, u_{N, \kappa_N})$ \State $r_\kappa \gets {\tt FindRoot}(f, x_0)$ \Comment {Returns a candidate when it converges} \If{$r_\kappa$} $\hat x.{\tt Append}(r_\kappa) $ \Comment {Append candidate equilibrium} \EndIf \EndFor \State \textbf{return} {\tt Unique}$(\hat x) $ \EndFunction \end{algorithmic} \end{algorithm} \subsection{The bootstrap algorithm}\label{ss:bootstrap} In this section we define a second algorithm which exploits the structure of Hill models in specific cases to localize equilibria more reliably and efficiently than the Newton-based algorithm described above.
The main idea is to begin with an initial rectangular subset of $(0,\infty)^N$ which is an enclosure for all equilibria, and then to iteratively obtain tighter rectangular enclosures. \begin{definition} \label{def:monotone_factorization} We say that a continuous function $g\colon [0,\infty)^N\to (0,\infty)^N$ has a {\em monotone factorization} if for each $i=1,\ldots, N$, \[ g_i(x) = g_i^+(x) g_i^-(x) \] where $g_i^+\colon [0,\infty)^N\to (0,\infty)$ is a bounded function which is strictly increasing with respect to each coordinate, and $g_i^-\colon [0,\infty)^N\to (0,\infty)$ is a bounded function which is strictly decreasing with respect to each coordinate. \end{definition} Consider a continuous function $f\colon [0,\infty)^N\to\rr^N$ of the form $f(x) = -\Gamma x +g(x)$ where $g\colon [0,\infty)^N\to(0,\infty)^N$ has a monotone factorization and for all $i=1,\ldots, N$, $f_i(x) = -\gamma_i x_i + g_i(x)$ with $\gamma_i >0$. Define $\Phi\colon \rr^{2N}\to \rr^{2N}$ by \[ \Phi_i(\alpha, \beta) = \frac{1}{\gamma_i} g_i^+(\alpha) g_i^-({\beta}) \quad\text{and}\quad \Phi_{N+i}(\alpha, \beta) = \frac{1}{\gamma_i} g_i^+(\beta) g_i^-(\alpha), \] where $\alpha, \beta \in \rr^N$ and $i = 1,\ldots, N$. Finally, define $(\alpha^{(0)},\beta^{(0)})\in \rr^{2N}$ coordinate-wise by \begin{equation} \alpha^{(0)}_i := \frac{1}{\gamma_i} g_i^+(0) \liminf_{\|x\|\to \infty}g_i^-({x}) \quad\text{and}\quad \beta^{(0)}_i := \frac{1}{\gamma_i} \limsup_{\|x\|\to \infty}g_i^+(x) g_i^-(0). \end{equation} \begin{theorem} \label{thm:bootstrap_eqbounds} Consider $f$, $\Phi$, and $(\alpha^{(0)},\beta^{(0)})$ as defined above. Assume that $\liminf_{\|x\|\to \infty}g_i^-({x})>0$ for all $i=1,\ldots,N$. Then, the following are true. \begin{enumerate} \item[(i)] $x\in (0,\infty)^N$ is a zero of $f$ if and only if \begin{equation} \label{eq:Phifixed} \Phi_i(x, x) = x_i = \Phi_{N+i}(x, x) \end{equation} for all $i=1,\ldots, N$.
\item[(ii)] Starting with $n=0$, iteratively define $(\alpha^{(n+1)},\beta^{(n+1)}) = \Phi(\alpha^{(n)},\beta^{(n)})$. Then, $(\hat{\alpha},\hat{\beta}) = \lim_{n\to\infty}(\alpha^{(n)},\beta^{(n)})$ exists. \item[(iii)] If $f(\hat x)=0$, then \[ \hat{\alpha}_i \leq \hat{x}_i\leq \hat{\beta}_i, \quad i=1,\ldots,N. \] \end{enumerate} \end{theorem} \begin{proof} We leave it to the reader to check (i). Claim (ii) follows from the boundedness and strict monotonicity of $g_i^+$ and $g_i^-$. To be more specific, we prove inductively that for each $i=1,\ldots, N$, $\alpha_i^{(n)}$ and $\beta_i^{(n)}$ are monotonically increasing and decreasing sequences, respectively. The base case follows from \[ \alpha_i^{(1)} = \Phi_i\paren*{\alpha^{(0)},\beta^{(0)}} = \frac{1}{\gamma_i} g_i^+\paren*{\alpha^{(0)}} g_i^-\paren*{\beta^{(0)}} > \frac{1}{\gamma_i} g_i^+\paren*{0} \liminf_{\|x\|\to \infty}g_i^-({x}) = \alpha^{(0)}_i \] and \[ \beta_i^{(1)} = \Phi_{N+i}\paren*{\alpha^{(0)},\beta^{(0)}} = \frac{1}{\gamma_i} g_i^+\paren*{\beta^{(0)}} g_i^-\paren*{\alpha^{(0)}} < \frac{1}{\gamma_i} \limsup_{\|x\|\to \infty}g_i^+\paren*{x} g_i^-\paren*{0} = \beta^{(0)}_i. \] Now assume that $\alpha_i^{(n)} > \alpha_i^{(n-1)}$ and $\beta_i^{(n)} < \beta_i^{(n-1)}$. The strict monotonicity of $g_i^+$ and $g_i^-$ implies that \[ \alpha_i^{(n+1)} = \Phi_i\paren*{\alpha^{(n)},\beta^{(n)}} = \frac{1}{\gamma_i} g_i^+\paren*{\alpha^{(n)}} g_i^-\paren*{\beta^{(n)}} > \frac{1}{\gamma_i} g_i^+\paren*{\alpha^{(n-1)}} g_i^-\paren*{\beta^{(n-1)}} = \alpha^{(n)}_i \] and \[ \beta_i^{(n+1)} = \Phi_{N+i}\paren*{\alpha^{(n)},\beta^{(n)}} = \frac{1}{\gamma_i} g_i^+\paren*{\beta^{(n)}} g_i^-\paren*{\alpha^{(n)}} < \frac{1}{\gamma_i} g_i^+\paren*{\beta^{(n-1)}} g_i^-\paren*{\alpha^{(n-1)}} = \beta^{(n)}_i. \] The proof of (iii) is also done inductively. Define \[ \cR^{(n)} := \prod_{i = 1}^{N} [\alpha^{(n)}_i, \beta^{(n)}_i]. \] By the proof of (ii), $\cR^{(n+1)}\subset \cR^{(n)}$.
Define $F : [0,\infty)^N \to [0,\infty)^N$ by the formula \[ F_i(x) = \frac{1}{\gamma_i }g_i(x) \qquad 1 \leq i \leq N. \] Observe that if $f(\hat{x})=0$, then $F(\hat x)=\hat{x}$. Therefore, it suffices to prove that if $F(\hat x)=\hat{x}$, then $\hat{x}\in \cR^{(n)}$ for all $n \in \nn$. Observe that from the definitions of $F$ and $\paren*{\alpha^{(0)},\beta^{(0)}}$, \[ F\paren*{[0,\infty)^N} = \cR^{(0)} \] and therefore $\hat{x} \in \cR^{(0)}$. Inductively, suppose that $n \in \nn$ is fixed and $\hat{x} \in \cR^{(n-1)}$, i.e. \begin{equation} \label{eq:induction_bound2} \alpha_i^{(n-1)} \leq \hat{x}_i \leq \beta_i^{(n-1)} \qquad \text{for} \ 1 \leq i \leq N. \end{equation} The inequalities of \eqref{eq:induction_bound2} combined with the definition of $\Phi$ imply that for all $1 \leq i \leq N$ \[ \alpha_i^{(n)} = \frac{1}{\gamma_i} g_i^+\paren*{\alpha^{(n-1)}} g_i^-\paren*{\beta^{(n-1)}} \leq F_i(\hat x)=\hat{x}_i \leq \frac{1}{\gamma_i} g_i^+\paren*{\beta^{(n-1)}} g_i^-\paren*{\alpha^{(n-1)}} = \beta_i^{(n)} \] where the inequalities are obtained from the fact that $g_i^+$ and $g_i^-$ are strictly monotonically increasing and decreasing, respectively. Therefore, $\hat{x} \in \cR^{(n)}$. \end{proof} Observe that $F\paren*{\cR^{(n-1)}} = \cR^{(n)}$. This motivates Algorithm \ref{alg:bootstrap_equilibria} for bounding equilibria.
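To illustrate the bootstrap iteration of Algorithm \ref{alg:bootstrap_equilibria} concretely, here is a minimal {\tt numpy} sketch for a hypothetical two-dimensional system in which each $g_i(x) = g_i^+(x) g_i^-(x)$ is built from one activating and one repressing Hill factor; all parameter values are illustrative.

```python
import numpy as np

# Hill factors (parameters are illustrative)
ell, delta, theta, n = 1.0, 2.0, 1.5, 4.0
hp = lambda t: ell + delta * t**n / (theta**n + t**n)      # increasing, bounded
hm = lambda t: ell + delta * theta**n / (theta**n + t**n)  # decreasing, bounded

gamma = np.array([1.0, 1.0])

def g_plus(x):   # increasing factor of each coordinate nonlinearity
    return np.array([hp(x[0]), hp(x[1])])

def g_minus(x):  # decreasing factor (mutual repression)
    return np.array([hm(x[1]), hm(x[0])])

def Phi(alpha, beta):
    new_alpha = g_plus(alpha) * g_minus(beta) / gamma
    new_beta = g_plus(beta) * g_minus(alpha) / gamma
    return new_alpha, new_beta

# initial enclosure: alpha_i^(0) = g_i^+(0) lim g_i^- / gamma_i, and similarly beta
alpha = np.array([ell * ell, ell * ell]) / gamma
beta = np.array([(ell + delta) ** 2, (ell + delta) ** 2]) / gamma

for _ in range(200):  # iterate Phi until the rectangle stabilizes
    new_alpha, new_beta = Phi(alpha, beta)
    assert np.all(new_alpha >= alpha) and np.all(new_beta <= beta)
    if np.allclose(new_alpha, alpha) and np.allclose(new_beta, beta):
        break
    alpha, beta = new_alpha, new_beta

assert np.all(alpha <= beta)  # a nonempty rectangle enclosing all equilibria
```

The asserted monotonicity of the $\alpha$ and $\beta$ sequences is exactly the inductive step in the proof of the theorem above.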
\subsection{The zero finding problem} \subsection{The Lagrangian formulation} \blue{This should be turned into a section about the minimization algorithm used in the code and how it handles constraints. While it doesn't need to be precise, it should be here for completeness} When confronted with high-dimensional parameter spaces, we chose the approach of looking for ``special'' parameters in this space. The easiest mathematical definition of ``special'' in this context is the solution to an optimization problem. We therefore present here an approach to numerical optimization. First of all, a disclaimer: this section is purely for review and completeness of the paper and does not present any new results. The problem we want to solve numerically is a generalised minimization problem with equality constraints, of the form \begin{align}\label{e:minimization_problem} \min g(x) : \quad F(x)= 0 \end{align} with $x\in\rr^k$, $g:\rr^k \rightarrow \rr$ and $F : \rr^k \rightarrow \rr^l$ with $1\leq l < k$. Let us remark here that we approach this problem from a numerical perspective. We will therefore consider this problem only locally, and we will not be looking for any global minimum. We might consider doing so many local searches that we could be fairly sure of having converged to the global minimum, but this will always only be a strong hint, never a proof. In most of the cases we will discuss, the function $g$ will be trivial, while the constraints encapsulated in $F$ will be fairly complicated and require most of our work. Analytically, the \textit{Lagrangian formulation} allows us to rewrite problem \eqref{e:minimization_problem} as a minimization problem without constraints. This formulation works as presented in the following theorem. \begin{theorem} With the notation just introduced, let $\mu \in \rr^l$ be a vector of unknowns.
The Lagrangian $$ \mathcal{L}(x,\mu) = g(x)- \mu^T F(x) $$ has a stationary point at every local solution to \eqref{e:minimization_problem}. \end{theorem} \begin{proof} See, for example, A.~de la Fuente, \emph{Mathematical Methods and Models for Economists}, Cambridge University Press, 2000, p.~285, among many other references. \end{proof} We are therefore left to solve the equivalent problem \begin{align*} &D_xg(x) - \mu^T D_xF(x) = 0,\\ &F(x) = 0, \end{align*} which is, incidentally, a problem with $k+l$ unknowns and $k+l$ equations. In the cases considered in this article, $F$ encodes all dynamical constraints, such as being at a saddle-node bifurcation, and $g$ is the quantity to optimize. In the following, multiple examples with this structure will be presented. On the numerical level, this problem is solved following the trust-region constrained algorithm, first presented in [Powell, M.J.D., Yuan, Y. A trust region algorithm for equality constrained optimization. Mathematical Programming 49, 189--211 (1990)] (\red{is this the right reference?}) \red{OLD version below} There are two problems with trying to minimize $\hill$ along the surface of saddle-node bifurcations. The first is that for fixed $\lambda$, $\hill := \hill(x, \lambda)$ implicitly depends on the coordinates of the equilibrium as well as on the other parameters, $\lambda$. The bifurcation point is likewise only known implicitly, as a root of the map $g(x,\lambda, \hill, v)$ defined above. Therefore, it is not easy to get our hands on the derivative of $\hill$ with respect to $x$ and $\lambda$. The second concern is that we want to optimize (minimize or maximize) other scalars along this surface, not just $\hill$. For these reasons we reformulate the saddle-node problem as an appropriate Lagrangian optimization problem. For simplicity, we will continue to separate the parameters of the Hill model as $\hill$ and $\lambda$, but this does not restrict the generality of the discussion.
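To make the stationarity system concrete, here is a minimal Newton solve of the Lagrangian equations for a toy instance, $g(x) = x_1 + x_2$ subject to $F(x) = x_1^2 + x_2^2 - 1 = 0$; the objective and constraint are illustrative and unrelated to the Hill models.

```python
import numpy as np

# stationarity system: D_x g - mu^T D_x F = 0 together with F = 0,
# i.e. k + l = 3 equations in the 3 unknowns (x1, x2, mu)
def residual(z):
    x1, x2, mu = z
    return np.array([
        1.0 - mu * 2.0 * x1,   # first coordinate of D_x g - mu^T D_x F
        1.0 - mu * 2.0 * x2,   # second coordinate
        x1**2 + x2**2 - 1.0,   # the constraint F(x) = 0
    ])

def jacobian(z):
    x1, x2, mu = z
    return np.array([
        [-2.0 * mu, 0.0, -2.0 * x1],
        [0.0, -2.0 * mu, -2.0 * x2],
        [2.0 * x1, 2.0 * x2, 0.0],
    ])

# Newton iteration from a rough initial guess near the constrained minimum
z = np.array([-0.5, -0.5, -1.0])
for _ in range(50):
    z = z - np.linalg.solve(jacobian(z), residual(z))

x1, x2, mu = z  # converges to x = (-sqrt(2)/2, -sqrt(2)/2), mu = -1/sqrt(2)
```

The constrained minimum of $x_1 + x_2$ on the unit circle sits at $(-\sqrt{2}/2, -\sqrt{2}/2)$, and the Newton solve recovers both the minimizer and the multiplier.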
We suppose we have a function $h(x,\lambda, \hill, v)$ which we would like to optimize along the bifurcation surface. Recall that we have the following dimensions: \[ \hill \in \rr, \quad x \in \rr^n, \quad \lambda \in \rr^m, \quad v \in \rr^n \] and thus we can consider $h$ to be of the form $h : \rr^{m + 2n + 1} \to \rr$. We write $M = 2n+1$ and recall that the saddle node zero finding problem is a function of the form $g : \rr^{M} \to \rr^{M}$. We will look for optimizers of $h$ at stationary points of the Lagrangian, \[ \cL(h, x, \lambda, \hill, v, \mu) := h - \mu^T g(x, \lambda, \hill, v) \qquad \mu \in \rr^{M} \] where $\mu$ is a vector of unknown multipliers. A stationary point satisfies $D \cL = 0$, which amounts to a solution of the nonlinear system of equations \begin{align*} Dh(x, \lambda, \hill, v) & = \mu^T Dg(x, \lambda, \hill, v) \\ g(x, \lambda, \hill, v) & = 0. \end{align*} Counting dimensions, we see that $\cL : \rr^{m + 2M} \to \rr$, but $g = 0$ ``uses up'' $M$ of these free variables as constraints. So we are still left with an optimization over $m + M$ free variables, as expected. Now, we compute $D \cL$ by writing $\mu \in \rr^M$ in components, \[ \mu = (\mu_1, \mu_2, \mu_3) \in \rr^n \times \rr^n \times \rr. \] This computation is easiest to work out for particular examples, so we consider the example $h = \hill$. Then, we can write $\cL$ as \[ \cL(\hill, x, \lambda, v, \mu) = \hill - \mu_1^T f(x,\lambda, \hill) - \mu_2^T Df(x, \lambda, \hill) \cdot v - \mu_3 \paren*{\norm{v} - 1}.
\] Then taking the derivative of $\cL$ we have \[ D \cL(\hill, x, \lambda, v, \mu) = \begin{pmatrix} 1 - \mu_1^T D_{\hill}f(x, \lambda, \hill) - \mu_2^T D_{\hill} \paren*{D_xf(x, \lambda, \hill) \cdot v} \\ -\mu_1^T D_x f(x,\lambda, \hill) - \mu_2^T D_x \paren*{D_x f(x, \lambda, \hill) \cdot v} \\ -\mu_1^T D_\lambda f(x, \lambda, \hill) - \mu_2^T D_\lambda \paren*{D_x f(x, \lambda, \hill) \cdot v} \\ -\mu_2^T D_x f(x, \lambda, \hill) - \frac{\mu_3}{\norm{v}} v \\ g(x, \lambda, \hill, v) \end{pmatrix} \] Observe that, with the exception of $D_\lambda f$ and $D_\lambda (Df)$, we have already computed each of these quantities in the original zero finding problem for the saddle-node. \subsection{Hysteresis} Closely related to the problem of finding a saddle node while optimizing some function is what we will call the hysteresis problem. In its abstract formulation, we consider the system $$ \dot x = f(\lambda, \beta, x) $$ that is known to undergo two saddle node bifurcations w.r.t. $\lambda$ for some choices of $\beta$; that is, the system undergoes saddle node bifurcations at $(\lambda_0,\beta_0,x_0)$ and $(\lambda_1,\beta_1,x_1)$ with $\beta_0=\beta_1$. The problem is to choose $\beta$ so as to maximise the parameter distance between the two saddle nodes, that is $| \lambda_0-\lambda_1|$. This problem stems from biological concerns. It models the situation in which one parameter is responsible for a change in expression levels, but these levels depend on the history of the cell. Mathematically, this is modelled by an equilibrium branch that undergoes two saddle node bifurcations as depicted in Figure \ref{f:hysteresis}. \red{references to hysteresis papers} \begin{figure} \begin{center} \includegraphics[width = 0.7\textwidth, trim= 2cm 13cm 8.25cm 1.7cm, clip]{hysteresis} \caption{Hysteresis is associated with a branch of equilibria undergoing two saddle node bifurcations as depicted.
This creates a central region of bistability that ``remembers'' the direction the system is coming from upon changing $\lambda$. The saddle nodes are indicated by a red diamond. Unstable equilibria are drawn as dotted lines.}\label{f:hysteresis} \end{center} \end{figure} The problem we want to solve: find $(\lambda_1, \lambda_2, \beta, x_1, x_2, v_1, v_2)$ such that \begin{align*} f(\lambda_1, \beta, x_1) = 0,\\ f(\lambda_2, \beta, x_2) = 0,\\ Df(\lambda_1, \beta, x_1)v_1 = 0,\\ Df(\lambda_2, \beta, x_2)v_2 = 0,\\ |\lambda_1 - \lambda_2| \text{ is maximal.} \end{align*} The algorithm: \begin{enumerate} \item set $\beta$, find an initial $(\lambda_1, \lambda_2, x_1, x_2, v_1, v_2)$ using a modification of what is now \textit{find\_saddle\_node}, with the understanding that $\lambda_1<\lambda_2$, \item run constrained minimization with SLSQP with objective function $g(\lambda_1, \lambda_2, \beta, x_1, x_2, v_1, v_2) = \lambda_1 - \lambda_2$ and constraints $$ \begin{cases} f(\lambda_1, \beta, x_1) = 0,\\ f(\lambda_2, \beta, x_2) = 0,\\ Df(\lambda_1, \beta, x_1)v_1 = 0,\\ Df(\lambda_2, \beta, x_2)v_2 = 0. \end{cases} $$ \end{enumerate} \begin{remark} In the case of the Toggle Switch, and in more generality, we also want to add the constraint $$ \hill_1 = \hill_2, \text{ i.e. } \hill_1 - \hill_2 =0, $$ and possibly the constraint $$ \| (\lambda_1, \lambda_2, x_1, x_2, v_1, v_2)\|_p = k $$ with a reasonable $k$, possibly $k = 3$ or even $4$, and $p$ either $1$ or $\infty$. \end{remark} \subsection{Saddle-node bifurcations (old Section 5.1)} \corrc (I think this is Konstantin's comment) Here we should state the computational version as an abstract theorem. From this theorem it is clear what data we need. We now cite the github page and technical paper where the reader can find code for Hill functions. <<>> \corrc EQ: we should rephrase this as a path <<>> \corrc EQ: why is $X$ there twice?
<<>> \longcorrl Rephrase in the context of 1D parameters, because in the following we always talk about saddle nodes along a parameter path (1D) << In this section we will briefly review the definition of saddle-node bifurcation and we illustrate its characterization as the solution of a root finding problem. Suppose $f : X \times \Lambda \to X$ denotes a Hill model, with the notation already introduced. Let $\lambda_0$ denote a single scalar component of $\lambda$ which will serve as a distinguished parameter, and let $\lambda' \in \rr^{N + 4M-1}$ denote the vector of remaining parameters. We let $f_{\lambda'} : X \times \rr \to X$ denote the $1$-parameter family of vector fields obtained by fixing $\lambda'$, i.e.~all parameters except $\lambda_0$ are fixed. For $\lambda_0 \in \rr$, $\hat x(\lambda_0) \in \rr^N$ is an equilibrium if it is a constant solution of Equation \eqref{eq:ODE_model}, or equivalently, if \begin{equation} f_{\lambda'} (\hat x(\lambda_0), \lambda_0) = 0. \end{equation} By the implicit function theorem, for any $\lambda_0$ such that $\hat x(\lambda_0)$ is a regular point of $f_{\lambda'}$, there exists a smooth $1$-dimensional curve of equilibria parameterized by $\lambda_0$. Intuitively, if a Hill model undergoes a saddle-node bifurcation at a parameter value, $\lambda_0^* \in\rr$, then on one side of the bifurcation we can see two equilibria (one having an unstable manifold of dimension 1 higher than the other's unstable manifold) colliding and disappearing at the bifurcation. Consequently, the implicit function theorem fails at a saddle-node bifurcation. || In this section we will briefly review the definition of saddle-node bifurcation in great generality and we illustrate its characterization as the solution of a root finding problem.
Suppose $g : X \times \rr \to X$ denotes a vector field depending on a single parameter, that is, we are considering the dynamical system \begin{equation}\label{eq:general_ODE} \dot x = g(x, s), \quad s\in\rr. \end{equation} For $s \in \rr$, $\hat x(s) \in \rr^N$ is an equilibrium if it is a constant solution of Equation \eqref{eq:general_ODE}, or equivalently, if \begin{equation} g(\hat x(s), s) = 0. \end{equation} By the implicit function theorem, for any $s$ such that $\hat x(s)$ is a regular point of $g$, there exists a smooth $1$-dimensional curve of equilibria parameterized by $s$. Intuitively, if a Hill model undergoes a saddle-node bifurcation at a parameter value, $s^* \in\rr$, then on one side of the bifurcation we can see two equilibria (one having an unstable manifold of dimension 1 higher than the other's unstable manifold) colliding and disappearing at the bifurcation. Consequently, the implicit function theorem fails at a saddle-node bifurcation. >> We recall the following characterization of a saddle-node bifurcation here. A proof can be found in \cite{MR2224508}. \longcorrl EQ: Trying to connect it more to the path saddle node we want to use! << \begin{theorem}[saddle-node bifurcation] \label{thm:saddle_node_bifurcation} Let $f_{\lambda'} : X \times \rr \to X$, with $X \subset \rr^N$, be a Hill model with a single free parameter, $\lambda_0$, and recall that $f_{\lambda'}$ depends smoothly on both $x$ and $\lambda_0$. Then, $f_{\lambda'}$ undergoes a {\em saddle-node bifurcation} at a point, $(\hat{x}, \hat{\lambda}_0)$, if the following conditions are satisfied. \begin{enumerate} \item $f_{\lambda'} (\hat{x}, \hat{\lambda}_0) = 0$. \item $\ker D_x f_{\lambda'} (\hat{x}, \hat{\lambda}_0) = \inspan(\setof{v})$ for some nonzero $v \in \rr^N$. \item $w_1 := D_{\lambda_0} f_{\lambda'}(\hat{x}, \hat{\lambda}_0) \neq 0$ and $w_1 \notin \image D_x f_{\lambda'} (\hat{x}, \hat{\lambda}_0)$.
\item $w_2 := D_x \paren*{D_x f_{\lambda'} (\hat{x}, \hat{\lambda}_0)v } \neq 0$ and $w_2 \notin \image D_x f_{\lambda'} (\hat{x}, \hat{\lambda}_0)$. \end{enumerate} \end{theorem} || \begin{theorem}[saddle-node bifurcation] \label{thm:saddle_node_bifurcation} Let $g : X \times \rr \to TX$ be a vector field with a single free parameter, $s$, and recall that $g$ depends smoothly on both $x$ and $s$. Then, $g$ undergoes a {\em saddle-node bifurcation} at a point, $(\hat{x}, \hat{s})$, if the following conditions are satisfied. \begin{enumerate} \item $g (\hat{x}, \hat{s}) = 0$. \item $\ker D_x g (\hat{x}, \hat{s}) = \inspan(\setof{v})$ for some nonzero $v \in \rr^N$. \item $w_1 := D_{s} g(\hat{x}, \hat{s}) \neq 0$ and $w_1 \notin \image D_x g (\hat{x}, \hat{s})$. \item $w_2 := D_x \paren*{D_xg(\hat{x}, \hat{s})v } \neq 0$ and $w_2 \notin \image D_x g (\hat{x}, \hat{s})$. \end{enumerate} \end{theorem} >> \corrc EQ: we still don't check conditions 3 nor 4, nor condition 2 of the corollary <<>> To numerically find saddle-node bifurcations, we want a characterization as solutions of a root-finding problem. This is provided by the following computational version of Theorem \ref{thm:saddle_node_bifurcation}. \longcorrl<< \begin{corollary} \label{cor:saddle_node_bifurcation} Let $f_{\lambda'}, \hat x$, and $\hat{\lambda}_0$ be as defined in Theorem \ref{thm:saddle_node_bifurcation}. Define $g_{\lambda'} : \rr^{2N+1} \to \rr^{2N+1}$ by the formula \begin{equation}\label{eq:num_saddle_node} g_{\lambda'}(x, v, \lambda_0) := \begin{pmatrix} f_{\lambda'}(x, \lambda_0)\\ D_xf_{\lambda'}(x, \lambda_0) v\\ v^Tv - 1 \end{pmatrix} \qquad x, v \in \rr^N, \lambda_0 \in \rr. \end{equation} Let $u = (x, v, \lambda_0) \in \rr^{2N+1}$ and suppose $\hat{u} := (\hat{x}, \hat{v}, \hat{\lambda}_0) \in \rr^{2N+1}$ is a root of $g_{\lambda'}$ satisfying \begin{enumerate} \item $D_u g_{\lambda'} (\hat u)$ is an isomorphism.
\item Every nonzero eigenvalue of $D_x f_{\lambda'}(\hat x, \hat{\lambda}_0)$ has nonzero real part. \end{enumerate} Then $f_{\lambda'}$ undergoes a saddle-node bifurcation at $(\hat{x}, \hat{\lambda}_0)$ and $\ker D_x f_{\lambda'}(\hat{x}, \hat{\lambda}_0) = \inspan{\setof{\hat{v}}}$. \end{corollary} || \begin{theorem} \label{thm:saddle_node_bifurcation} Let $g, \hat x$, and $\hat{s}$ be as defined in Theorem \ref{thm:saddle_node_bifurcation}. Define $G : \rr^{2N+1} \to \rr^{2N+1}$ by the formula \begin{equation}\label{eq:num_saddle_node} G(x, v, s) := \begin{pmatrix} g(x, s)\\ D_xg(x, s) v\\ v^Tv - 1 \end{pmatrix} \qquad x, v \in \rr^N, s \in \rr. \end{equation} Let $u = (x, v, s) \in \rr^{2N+1}$ and suppose $\hat{u} := (\hat{x}, \hat{v}, \hat{s}) \in \rr^{2N+1}$ is a root of $G$ satisfying \begin{enumerate} \item $D_u G (\hat u)$ is an isomorphism. \item Every nonzero eigenvalue of $D_x g(\hat x, \hat{s})$ has nonzero real part. \end{enumerate} Then $g$ undergoes a saddle-node bifurcation at $(\hat{x}, \hat{s})$ and $\ker D_x g(\hat{x}, \hat{s}) = \inspan{\setof{\hat{v}}}$. \end{theorem} >> \corrc SK: Should we prove this? There is a proof in ODE book but its not published yet. How to cite this? \\ EQ: there are references, we'll find them. <<>> We refer to finding solutions satisfying the conditions in Corollary \ref{cor:saddle_node_bifurcation} as the {\em saddle-node bifurcation problem} and we refer to the function $G$ as the {\em saddle-node bifurcation map}. Observe that, given a candidate solution of the saddle-node problem which is a root of the saddle-node bifurcation map, it is trivial to check whether or not the two conditions required by Corollary \ref{cor:saddle_node_bifurcation} hold. Consequently, the main difficulty in solving the saddle node problem amounts to finding roots of $G$. This is a nonlinear problem; however, there exists a variety of sophisticated methods for solving nonlinear root finding problems.
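As an illustration of this root finding step (on a toy vector field, not the Hill models of this paper), consider the scalar normal form $\dot x = s + x^2$, which undergoes a saddle-node bifurcation at $(x, s) = (0, 0)$. The map $G$ can be assembled and solved with a standard quasi-Newton root finder in a few lines of Python:

```python
import numpy as np
from scipy.optimize import root

# Scalar normal form g(x, s) = s + x^2 with a saddle node at (x, s) = (0, 0).
def g(x, s):
    return s + x**2

def Dxg(x, s):
    return 2.0 * x

# Saddle-node bifurcation map G : R^3 -> R^3 for u = (x, v, s).
def G(u):
    x, v, s = u
    return np.array([g(x, s),        # equilibrium condition
                     Dxg(x, s) * v,  # v spans the kernel of D_x g
                     v * v - 1.0])   # normalization v^T v = 1

sol = root(G, x0=np.array([0.1, 0.9, -0.05]))  # start near the bifurcation
x_hat, v_hat, s_hat = sol.x  # converges to x = 0, v = +-1, s = 0
```

At the computed root one still has to verify the two conditions of the corollary, e.g.\ by checking that the numerical Jacobian of $G$ at the root is invertible and well conditioned.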
In the following sections, we will discuss the numerical difficulties in finding a solution to the zero finding problem $G(u) = 0$. In our applications, we want to find saddle nodes along paths in Hill models. To this end, we will define a path $\gamma: \rr \rightarrow \Lambda$, and the vector field of interest when looking for saddle node bifurcations is $$ g(x, s) = f(x, \gamma(s)), $$ where $f$ is a Hill model as presented in Section \ref{sec:hill models}. \corrc EQ: this is a repetition from previously, where do we introduce these paths properly? <<>> \subsection{Finding saddle-node bifurcations (old Section 5.2)} \label{sec:finding saddle-node bifurcations} As defined in Definition \ref{def:combinatorial_saddle} and following the notation introduced in Section \ref{sec:dsgrn}, when looking at a combinatorial saddle node from DSGRN, we are provided with two regions of parameter space $R_1$ and $R_2\subset \Xi$ such that, for the discontinuous system, a stable equilibrium appears when crossing the boundary $ \partial R_1\cap \partial R_2$. Intuitively, this means that, for $d$ tending to infinity, if we choose two parameters $\lambda_1\in R_1$ and $\lambda_2\in R_2$, a path $\gamma$ connecting them and crossing their shared boundary should undergo a saddle node bifurcation at $\partial R_1\cap \partial R_2$. In \cite{??} such intuition is formalised for piecewise linear functions used as approximations of the discontinuous dynamics. By appropriately associating to each parameter a piecewise linear function, it is proven that there exists a branch of steep enough piecewise linear functions parametrized by $\gamma$ such that the branch undergoes a saddle node bifurcation. \corrc The following claim is proven in \cite{duncan:gedeon:kokubu:mischaikow:oka}.
<<>> We then formulate the following claim: \begin{claim} Given any parameter $\lambda\in\Xi$, there exists a large enough $d^*$ such that the number of stable equilibria of the Hill system at $(\lambda, d)$ corresponds to the number of combinatorial equilibria in the DSGRN system at $\lambda$ for any $d>d^*$. \end{claim} From this claim, it follows that the appearance of an equilibrium at a combinatorial saddle would correspond to the appearance of an equilibrium in the smooth system when moving from one parameter region to another. To be precise, our claim is \begin{claim} Given two parameters $(\lambda_1, d^*)$ and $(\lambda_2, d^*)$ such that $\lambda_1 \in R_1$ and $\lambda_2\in R_2$, with $d^*$ sufficiently large, any path $\gamma$ connecting them would {\em usually} undergo an odd number of saddle node bifurcations. \end{claim} Let us point out that passing from the first claim to the second one assumes that all bifurcations affecting the number of equilibria are saddle node bifurcations. While this is likely, due to the saddle node being the most generic bifurcation, it is not always the case. For this reason, this second claim should be interpreted from a statistical perspective. Once this groundwork is established, we are left with choosing a reasonable path and searching for saddle nodes along such a path. For the Toggle Switch toy model, an additional piece of information that we will exploit is that, for any parameter $\lambda$, the Toggle Switch has a unique equilibrium at $d = 1$. Thus, given a parameter $\lambda$, we always consider the straight path between $(\lambda, \infty)$ and $(\lambda,1)$ while we search for saddle nodes. The expectation is that, if $\lambda\in R(5)$, then we will find a saddle node, while we would not find any if $\lambda\notin R(5)$. Let us remark that a pair of additional saddle nodes might appear in any region.
This would result in the existence of parameters in $R(5)$ that undergo three saddle nodes between $d = 1$ and $d = \infty$ and parameters outside of $R(5)$ that undergo two saddle nodes along our straight path. Numerically, finding saddle nodes along a given path requires balancing two opposing numerical instabilities. On one hand, starting a Newton method on the extended system \eqref{eq:num_saddle_node} requires the knowledge of a starting point sufficiently close to the analytical saddle node. Furthermore, the convergence of these Newton iterations is very sensitive to the choice of $v$, the vector in the kernel of $D_xf(x, \gamma(s))$. Thus, before calling Newton we need to build a reasonable approximation of the solution. On the other hand, a reasonable approximation of the solution requires the knowledge of at least one equilibrium that would undergo the saddle node. Unfortunately, this problem is numerically ill-posed when the parameters are close to the saddle node parameters, and two equilibria are close to merging and disappearing. To balance these two numerical problems, the algorithm used consists of three steps: \begin{itemize} \item{sub-division}: The path $\gamma$ is subdivided into smaller sub-paths $\gamma_i, i = 1,\dots, N$. The size of the sub-paths generated here determines the refinement of the algorithm. If a pair of saddle nodes occurs within a single sub-path, the algorithm will not be able to detect them. \item{bisection}: For each sub-path $\gamma_i$, if the number of equilibria changes between the beginning and the end of the sub-path, a bisection algorithm is called, thus refining the size of the interesting sub-path.
The goal of the bisection algorithm is to refine the starting point for the root finding algorithm as much as possible, but we need to consider that, when the parameters are too close to the saddle node parameters, the root finding algorithm will likely not find the equilibria that are on the verge of undergoing a saddle node. At the end of this bisection phase, we have a sub-path $\hat \gamma$ such that the number of numerical equilibria found at $\hat \gamma(0)$ and at $\hat \gamma(1)$ differs by at least one. We then select the equilibrium $\hat x$ that is most likely to be undergoing a saddle node and the parameter $\hat \lambda$ where that equilibrium is found. \item{Find Root}: Once a good approximation of the equilibrium undergoing a saddle node has been found, an approximation $\hat v$ of the kernel of $D_xf$ can be built, and our {\tt FindRoot} algorithm can be called on \eqref{eq:num_saddle_node} with initial approximation $(\hat x, \hat v, \hat \lambda)$. When this algorithm does not converge, we store the initial approximation as a ``bad candidate''. \end{itemize} This combination of search algorithms, combining a coarse search, a bisection, and the {\tt FindRoot} algorithm, has been, in our experience, successful in managing the two numerical instabilities of the equilibrium and saddle node finding problems. \subsection{The Toggle Switch Hill model} \label{sec:the toggle switch hill model} We begin by briefly outlining how the Hill model construction described in Section \ref{sec:hill models} yields a Hill function model associated with the ODEs in Equation \eqref{eq:Toggle_ODEs}. The Toggle Switch has two state variables of interest, denoted by $x_1, x_2$, and thus the appropriate phase space is $X = (0, \infty)^2$. For $i = 1,2$, the state variable $x_i$ is assigned a positive linear decay parameter, $\gamma_i$. In addition, both edges of the network are assigned a negative Hill function since both edges in Figure \ref{fig:TS}(a) are repressing.
These Hill functions contribute the additional 8 parameters to the model and the parameter space for the Toggle Switch Hill model is $\Lambda = (0, \infty)^{10}$. As described in Section \ref{sec:hill models}, we collect the parameters into a vector which we order as follows \[ \lambda := \paren*{\gamma_1, \ell_{1,2}, \delta_{1,2}, \theta_{1,2}, d_{1,2}, \gamma_2, \ell_{2,1}, \delta_{2,1}, \theta_{2,1}, d_{2,1}} \in \Lambda. \] Since each node of the Toggle Switch has only a single incoming edge we assign the interaction function defined by $p(z) = z$ to both coordinates of $\cH$. Therefore, the Toggle Switch Hill model is given by the formula \begin{equation} \label{eq:toggle switch hill model} f(x, \lambda) = -\Gamma x + \cH(x), \qquad x \in X, \quad \lambda \in \Lambda \end{equation} where the linear and nonlinear terms are given by \begin{equation} \Gamma = \begin{pmatrix} \gamma_1 & 0 \\ 0 & \gamma_2 \end{pmatrix} \qquad \cH(x) = \begin{pmatrix} H^-_{1,2}(x_2) \\ H^-_{2,1}(x_1) \end{pmatrix}. \end{equation} As expected, this is simply the vectorized form of the system of ODEs in Equation \eqref{eq:Toggle_ODEs}. Additionally, we note that $f$ satisfies the hypotheses of Theorem \ref{thm:bootstrap_eqbounds} and consequently in the analysis to follow, Algorithm \ref{alg:bootstrap_equilibria} was employed to bound the equilibria for the Toggle Switch. In fact, in Section \ref{sec:numerical_analysisTS} we prove a stronger version of Theorem \ref{thm:bootstrap_eqbounds} which improves the efficiency of our statistical investigation. The relevant parameters for the combinatorial Toggle Switch model analyzed using DSGRN are vectorized and denoted by \[ \xi := \paren*{\gamma_1, \ell_{1,2}, \delta_{1,2}, \theta_{1,2}, \gamma_2, \ell_{2,1}, \delta_{2,1}, \theta_{2,1}} \in \Xi, \] where we remind the reader that $\Xi = (0, \infty)^8$ denotes the combinatorial parameter space which excludes the Hill coefficient parameters.
Despite the fact that $\Xi$ is $8$ dimensional, DSGRN computes the combinatorial dynamics for the Toggle Switch in less than a second on a basic laptop. Recall from Section \ref{sec:introduction} that the combinatorial description of the dynamics consists of the $9$ semi-algebraic sets listed in Table \ref{tab:parameter_regions} and the Morse graph describing the combinatorial dynamics for all parameters in each of these $9$ regions. Finally, we relate the combinatorial and Hill model parameter spaces by defining the projection map $\pi_\Xi : \Lambda \to \Xi$ by the formula \[ \pi_\Xi \paren*{\gamma_1, \ell_{1,2}, \delta_{1,2}, \theta_{1,2}, d_{1,2}, \gamma_2, \ell_{2,1}, \delta_{2,1}, \theta_{2,1}, d_{2,1}} = \paren*{\gamma_1, \ell_{1,2}, \delta_{1,2}, \theta_{1,2}, \gamma_2, \ell_{2,1}, \delta_{2,1}, \theta_{2,1}}. \] In terms of this projection we can now restate the heuristic claims made at the end of Section \ref{sec:introduction}. First, we claim that, with very high probability, $\lambda \in \Lambda$ is a bistable parameter for $f$ if and only if $\pi_\Xi \lambda \in R(5)$. Then, let $\curve: [0, 1] \to \Lambda$ be a path such that $d_{1,2} = d_{2,1} = 1$ at $\curve(0)$, $d_{1,2}, d_{2,1} \gg 1$ at $\curve(1)$, and $\pi_{\Xi} \circ \curve$ is constant. Then we claim that with very high probability, we find a saddle-node bifurcation for $f$ along $\curve$ if and only if $\pi_\Xi \circ \curve \in R(5)$. We will use statistical hypothesis testing combined with our efficient numerical implementation of the Toggle Switch to justify these two claims. \subsection{Reducing the number of parameters} \label{sec:reducing the number} While the full parameter space of the Toggle Switch can be studied by \library, we are interested in making the results more readable and easier to handle from a statistical perspective too. For this, we will make several changes to the model in order to reduce the dimension of the parameter space. 
We will reduce the number of parameters via two mechanisms. First we will equate the Hill coefficients associated to both edges in the Toggle Switch. This amounts to assuming that $d := \hill_{1,2} = \hill_{2,1}$, is the common value of both Hill coefficients which reduces the number of parameters in the model by $1$. Second, we will further reduce the dimension of the parameter space via non-dimensionalization of the parameters. After identifying the Hill coefficients and performing non-dimensionalization whereby we may fix $3$ parameter values to be $\gamma_1 = \theta_{2,1} = \theta_{1,2} = 1$ we obtain the {\em reduced Toggle Switch} Hill model defined by \begin{equation} f^*(x) := \begin{pmatrix} - x_1 + \ell_{1,2}^* + \delta_{1,2}^* \frac{1}{1 + x_2^\hill} \\ - \gamma_2^* x_2 + \ell_{2,1}^* + \delta_{2,1}^* \frac{1}{1 + x_1^\hill} \end{pmatrix}. \end{equation} where the parameters of the reduced model are related to the original parameters by the identities \begin{align*} \ell_{1,2}^* & := \frac{\ell_{1,2}}{\gamma_1 \theta_{2,1}} \\ \delta_{1,2}^* & := \frac{\delta_{1,2}}{\gamma_1 \theta_{2,1}} \\ \ell_{2,1}^* & := \frac{\ell_{2,1}}{\gamma_1 \theta_{1,2}} \\ \delta_{2,1}^* & := \frac{\delta_{2,1}}{\gamma_1 \theta_{1,2}} \\ \gamma_2^* & := \frac{\gamma_2}{\gamma_1} \\ d & := d_{1,2} = d_{2,1}. \end{align*} The parameter space associated to $f^*$ is the subspace, $\Lambda^* \subset \Lambda$, defined by \[ \Lambda^* := \setof*{\lambda \in \Lambda : \gamma_1 = \theta_{2,1} = \theta_{1,2} = 1, \ \hill_{1,2} = \hill_{2,1} = d} \cong (0, \infty)^6, \] and we denote a typical parameter by \[ \lambda^* := \paren*{\ell_{1,2}^*, \delta_{1,2}^*, \gamma_2^*, \ell_{2,1}^*, \delta_{2,1}^*, d} \in \Lambda^*. \] The dynamics generated by $f^*$ are conjugate to $f$ restricted to the subset $\setof*{\lambda \in \Lambda : d_{1,2} = d_{2,1}}$ and therefore we have performed all computations described in the remaining sections using the reduced Toggle Switch model. 
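For concreteness, the reduced vector field $f^*$ is straightforward to evaluate numerically; the snippet below is an illustrative Python sketch and not the actual \library implementation.

```python
import numpy as np

def f_star(x, lam):
    """Reduced Toggle Switch vector field.

    lam = (ell12, delta12, gamma2, ell21, delta21, d) collects the six
    reduced parameters in the order used in the text.
    """
    ell12, delta12, gamma2, ell21, delta21, d = lam
    x1, x2 = x
    return np.array([
        -x1 + ell12 + delta12 / (1.0 + x2**d),
        -gamma2 * x2 + ell21 + delta21 / (1.0 + x1**d),
    ])

# Example evaluation at x = (1, 1) with all parameters 1 and d = 2:
# f_star returns (0.5, 0.5).
value = f_star(np.array([1.0, 1.0]), (1.0, 1.0, 1.0, 1.0, 1.0, 2.0))
```

Evaluating $f^*$ this way is all that the subsequent equilibrium and saddle-node computations require, together with its derivatives.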
We also note that the \library library has been written to allow these sorts of constraints to be implemented just as easily as a general Hill model, and takes advantage of the reduced number of parameters for faster computation. However, it is crucial to point out that none of the algorithms in this paper rely on either of the reductions performed on this example. After imposing these parameter constraints for the DSGRN parameter regions we obtain a {\em reduced} version of the combinatorial parameter space denoted by $\Xi^* := (0, \infty)^5 \subset \rr^5$ where a typical reduced combinatorial parameter has the form $\xi^* = \paren*{\ell_{1,2}^*, \delta_{1,2}^*, \gamma_2^*, \ell_{2,1}^*, \delta_{2,1}^*}$. Observe that the $9$ DSGRN parameter regions are projected onto semi-algebraic subsets of $\Xi^*$ defined by the {\em reduced inequalities} given in the last column of Table \ref{tab:parameter_regions}. Of course, the dynamic phenotypes for each region are unchanged as a consequence of the conjugacy. Analogous to the discussion in Section \ref{sec:the toggle switch hill model}, we define the projection map for the reduced parameter space, $\pi_{\Xi^*} : \Lambda^* \to \Xi^*$, by projection onto the first $5$ coordinates. In the projected parameter space $\Xi^*$, we can readily define all the parameter regions already introduced by DSGRN in Section \ref{sec:the toggle switch hill model}; the reduced inequalities for each projected region $R^*(i), i = 1,\dots, 9$, are presented in the last column of Table \ref{tab:parameter_regions}.
For the case of the Toggle Switch, some additional results are stated in Theorem \ref{thm:toggle_bootstrap_eqbounds}. We begin by observing that $f^*$ satisfies Definition \ref{def:monotone_factorization} for any $\lambda^* \in \Lambda^*$ and therefore the bootstrap algorithm is applicable. Following the construction in Section \ref{sec:the bootstrap algorithm} we obtain the bootstrap map for the reduced Toggle Switch which is given by the formula \[ \Phi(\alpha, \beta) = \begin{pmatrix} \ell_{1,2}^*+ \frac{\delta_{1,2}^*}{1 + \beta_2^\hill} \\ \frac{1}{\gamma_2^*} \paren*{\ell_{2,1}^*+ \frac{\delta_{2,1}^*}{1 + \beta_1^\hill}} \\ \ell_{1,2}^*+ \frac{\delta_{1,2}^*}{1 + \alpha_2^\hill}\\ \frac{1}{\gamma_2^*} \paren*{\ell_{2,1}^* + \frac{ \delta_{2,1}^*}{1 + \alpha_1^\hill}} \end{pmatrix} \qquad (\alpha, \beta) \in \rr^2 \times \rr^2. \] Following Algorithm \ref{alg:bootstrap_equilibria} we start with the initial condition \[ \paren*{\alpha^{(0)}, \beta^{(0)}} := \begin{pmatrix} \ell_{1,2}^*\\ \ell_{2,1}^* \\ \ell_{1,2}^*+ \delta_{1,2}^*\\ \ell_{2,1}^* + \delta_{2,1}^* \end{pmatrix}, \] and define the iterates under $\Phi$ by $\paren*{\alpha^{(n)}, \beta^{(n)}} = \Phi\paren*{\alpha^{(n-1)}, \beta^{(n-1)}}$ for $n \geq 1$. Theorem \ref{thm:bootstrap_eqbounds} ensures that the sequence of iterates converges to a fixed point of $\Phi$ denoted by $(\hat{\alpha}, \hat{\beta}) \in \rr^2 \times \rr^2$ and that all equilibria of $f^*$ are contained in the rectangle $[\hat{\alpha}_1, \hat{\beta}_1] \times [\hat{\alpha}_2, \hat{\beta}_2] \subset X$. However, in the case of the toggle switch we can prove a stronger version of Theorem \ref{thm:bootstrap_eqbounds} which we will exploit in our statistical analysis. \begin{theorem} \label{thm:toggle_bootstrap_eqbounds} Let $f^* : X \to TX$ denote the Toggle Switch Hill model with $\lambda^* \in \Lambda^*$ fixed and let $\Phi : \rr^4 \to \rr^4$ be the associated bootstrap map for $f^*$. 
Suppose the orbit through $u^{(0)} := \paren*{\alpha^{(0)}, \beta^{(0)}}$ converges to $\hat{u} := (\hat{\alpha}, \hat{\beta}) \in \rr^2 \times \rr^2$ and let $ \hat R := [\hat{\alpha}_1, \hat{\beta}_1] \times [\hat{\alpha}_2, \hat{\beta}_2] \subset X$ denote the equilibrium bounds guaranteed by Theorem \ref{thm:bootstrap_eqbounds}. Then exactly one of the following is true. \begin{enumerate} \item $\hat{R}$ is a degenerate rectangle (i.e.~for $i = 1,2$, $\hat{\alpha}_i = \hat{\beta}_i$) and $f^*$ has a unique equilibrium $\hat{x} = (\hat{\alpha}_1, \hat{\alpha}_2)$ which is stable. \item $\hat {R}$ is non-degenerate and $f^*$ has at least two stable equilibria. Specifically, the corners of $\hat{R}$ with coordinates \[ \hat{x}_1 = (\hat{\alpha}_1, \hat{\beta}_2), \qquad \hat{x}_2 = (\hat{\beta}_1, \hat{\alpha}_2) \] are stable equilibria of $f^*$. \end{enumerate} \end{theorem} \begin{proof} Define $\hat{x}_1 := (\hat{\alpha}_1, \hat{\beta}_2), \hat{x}_2 := (\hat{\beta}_1, \hat{\alpha}_2)$, and observe that since $(\hat{\alpha}, \hat{\beta})$ is a fixed point of $\Phi$ we have by direct computation \begin{align*} H_{1,2}(\hat{\beta}_2) & = \hat{\alpha}_1 \\ H_{2,1}(\hat{\beta}_1) & = \gamma_2^* \hat{\alpha}_2 \\ H_{1,2}(\hat{\alpha}_2) & = \hat{\beta}_1 \\ H_{2,1}(\hat{\alpha}_1) & = \gamma_2^* \hat{\beta}_2. \end{align*} It follows that \[ f^*(\hat{x}_1) = f^*(\hat{\alpha}_1, \hat{\beta}_2) = \begin{pmatrix} -\hat{\alpha}_1 + H_{1,2}(\hat{\beta}_2) \\ -\gamma_2^* \hat{\beta}_2 + H_{2,1}(\hat{\alpha}_1) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \] \[ f^*(\hat{x}_2) = f^*(\hat{\beta}_1, \hat{\alpha}_2) = \begin{pmatrix} -\hat{\beta}_1 + H_{1,2}(\hat{\alpha}_2) \\ -\gamma_2^* \hat{\alpha}_2 + H_{2,1}(\hat{\beta}_1) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \] so that $\hat{x}_1, \hat{x}_2$ are equilibria for $f^*$. Evidently, if $\hat{R}$ is degenerate then $\hat{x}_1 = \hat{x}_2$ and by Theorem \ref{thm:bootstrap_eqbounds} it follows that this is the unique equilibrium for $f^*$.
On the other hand, if $\hat{R}$ is nondegenerate then $\hat{x}_1$ and $\hat{x}_2$ are distinct equilibria of $f^*$. Next, we prove that $\hat{x}_1$ is stable, as the argument for stability of $\hat{x}_2$ is similar. Observe that the derivative of $f^*$ at $\hat{x}_1$ is given by the formula \[ Df^*(\hat{x}_1) = \begin{pmatrix} -1 & H'_{1,2}(\hat{\beta}_2) \\ H'_{2,1}(\hat{\alpha}_1) & -\gamma^*_2 \end{pmatrix} \] which has eigenvalues satisfying \begin{equation} \label{eq:corner_eigenvalues} z^2 + (1 + \gamma^*_2)z + \gamma^*_2 - H'_{1,2}(\hat{\beta}_2) H'_{2,1}(\hat{\alpha}_1) = 0. \end{equation} Observe that since $H_{1,2}$ and $H_{2,1}$ are monotonically decreasing, we have $H_{1,2}'(\hat{\beta_2})H_{2,1}'(\hat{\alpha_1}) > 0$, so the discriminant of the polynomial in Equation \eqref{eq:corner_eigenvalues} satisfies \[ (1 + \gamma_2^*)^2 - 4 \paren*{\gamma_2^* - H_{1,2}'(\hat{\beta_2})H_{2,1}'(\hat{\alpha_1})} = (1 - \gamma_2^*)^2 + 4 H_{1,2}'(\hat{\beta_2})H_{2,1}'(\hat{\alpha_1}) > 0. \] Hence, we can deduce that the eigenvalues are real and distinct and thus $\hat{x}_1$ is hyperbolic. Consequently, $Df^*(\hat{x}_1)$ has two linearly independent eigenvectors which we denote by $\mathbf{v_1}, \mathbf{v_2}$. Let $f_1, f_2$ denote the two components of $f^*$. We consider points $x = (x_1, x_2)$ near the equilibrium at $\hat{x}_1$ lying on the lines defined by $x_1 = \hat{\alpha_1}$ and $x_2 = \hat{\beta_2}$. Specifically, we have the four cases: \begin{enumerate} \item If $x_1 < \hat{\alpha}_1$, then $f_1(x_1, \hat{\beta}_2) = -x_1 + H_{1,2}(\hat{\beta}_2) > -\hat{\alpha_1} + H_{1,2} (\hat{\beta}_2) = 0$. \item If $x_1 > \hat{\alpha}_1$, then $f_1(x_1, \hat{\beta}_2) = -x_1 + H_{1,2}(\hat{\beta}_2) < -\hat{\alpha_1} + H_{1,2} (\hat{\beta}_2) = 0$. \item If $x_2 < \hat{\beta}_2$, then $f_2(\hat{\alpha}_1, x_2) = -\gamma_2^*x_2 + H_{2,1}(\hat{\alpha}_1) > -\gamma_2^*\hat{\beta}_2 + H_{2,1}(\hat{\alpha}_1) = 0$.
\item If $x_2 > \hat{\beta}_2$, then $f_2(\hat{\alpha}_1, x_2) = -\gamma_2^*x_2 + H_{2,1}(\hat{\alpha}_1) < -\gamma_2^*\hat{\beta}_2 + H_{2,1}(\hat{\alpha}_1) = 0$. \end{enumerate} Consequently, if $\epsilon > 0$ is sufficiently small, then $f^*$ is transverse to the boundary of the ball \[ B_\epsilon(\hat{x}_1) := \setof*{x \in \rr^2 : \norm{x - \hat{x}_1} < \epsilon } \] and the flow generated by $f^*$ is in-flowing. It follows from continuity of $f^*$ that there exists $\epsilon > 0$ sufficiently small such that the flow restricted to the subspaces $\inspan \setof*{\mathbf{v_1}}$ and $\inspan \setof*{\mathbf{v_2}}$ is also in-flowing. Hence, $\mathbf{v_1}$ and $\mathbf{v_2}$ lie in the stable manifold of $\hat{x}_1$. \end{proof} \subsection{Statistical analysis of saddle nodes} \label{sec:statistical analysis of} Having built numerical methods to search for saddle nodes in the Toggle Switch, we now perform a statistical investigation of their results. Specifically, we investigate the accuracy of the dynamical phenotypes predicted using DSGRN as well as the effectiveness of our algorithm for numerically finding saddle-node bifurcations based on those predictions. We begin with the observation that if $\hill = 1$, then $f^*$ has a unique equilibrium which is stable. From our previous discussion of the Toggle Switch in Section \ref{sec:toggle_switch}, given a parameter $\pi_{\Xi^*} \lambda^* = \paren*{\ell_{1,2}^*, \delta_{1,2}^*, \gamma_2^*, \ell_{2,1}^*, \delta_{2,1}^*} $ and a path $\curve : [1, \infty] \to \Lambda^*$ defined by $\curve(s) = \paren*{\ell_{1,2}^*, \delta_{1,2}^*, \gamma_2^*, \ell_{2,1}^*, \delta_{2,1}^*, s}$, if the DSGRN prediction were perfectly accurate then we would expect the following statements to hold: \begin{enumerate} \item If $\pi_{\Xi^*} \lambda^* \in R^*(5)$, then $f^*$ undergoes a saddle-node bifurcation along the parameterized path $\curve$.
\item If $\pi_{\Xi^*} \lambda^* \notin R^*(5)$, then $f^*$ does not undergo a saddle-node bifurcation along $\curve$. \end{enumerate} Because of the non-linearity introduced by following the path $\curve$, these statements must be interpreted from a statistical perspective, that is: along most paths $\curve$ that connect $R^*(5)\times \{\infty\}$ and $R^*(5) \times \{1\}$ a numerical saddle node is detected. The two previous statements can be summarized by the following statistical statement: there is a correlation between the existence of a saddle node for $\lambda^*$ and $\pi_{\Xi^*}\lambda^*\in R^*(5)$. Using the numerical algorithms described in this paper and Theorem \ref{thm:toggle_bootstrap_eqbounds}, we quantify the strength of this correlation using a $\chi^2$ test with the null hypothesis being that no correlation exists. To describe the categorical random variables of the test we introduce some additional notation. We start by randomly sampling parameters from $\Xi^*$ and denote these samples by $\setof*{\xi^*_1, \dotsc, \xi^*_N}$. For each sample parameter, we check whether or not it lies in $R^*(5)$ and whether or not we detect a saddle-node bifurcation along $\curve$. On this basis we form $4$ mutually exclusive categories. We let \begin{enumerate} \item $p_s^{(5)}$ denote the number of parameters in $R^*(5)$ undergoing a saddle-node bifurcation, \item $p_m^{(5)}$ denote the number of monostable parameters in $R^*(5)$ which do not undergo a saddle-node bifurcation, \item $p^{C}_s$ denote the number of parameters in the complement of $R^*(5)$ undergoing a saddle-node bifurcation, \item $p^{C}_m$ denote the number of monostable parameters in the complement of $R^*(5)$ which do not undergo a saddle-node bifurcation. \end{enumerate} \begin{remark} An important element to discuss is the distribution of the chosen sample. For the Toggle Switch, each sample is taken to be the coordinatewise square of a sample drawn from a Gaussian distribution.
The covariance matrix of that Gaussian is chosen such that the probability of a sample lying in each of the nine DSGRN regions $R(i)$, $i = 1,\dots, 9$, is (roughly) equal. For bigger networks, such a distribution is hard to achieve, but in the Toggle Switch case we are able to find distributions such that the best sampled region has roughly double the number of samples of the worst sampled region. Recall that each region is unbounded, so no uniform sampling can be achieved. This justifies the use of unbounded distributions. The data-set for the Toggle Switch is created in the function {\tt create\_dataset\_ToggleSwitch}. \end{remark} The computation of the values in steps 1-4 yields the contingency table $$ M = \begin{pmatrix} p^{(5)}_s & p^{C}_s\\ p^{(5)}_m &p^{C}_m \end{pmatrix}. $$ Once the matrix $M$ is assembled, we use it to compute the $\chi^2$ test. This test assesses whether there is a correlation between a parameter being in $R^*(5)$ and that parameter undergoing a saddle node. The smaller the $p$-value of the test, the stronger the evidence against the null hypothesis of no correlation. In {\tt toggle\_switch\_chi\_test}, a sample from a random distribution is built, such that there are roughly equal numbers of sample points in $R^*(5)$ and its complement. Then, the matrix $M$ is built. While the exact values might differ between runs due to the sampling, a run with $10^4$ samples produced $$ M = \begin{pmatrix} 1116 & 36\\ 74 & 8685 \end{pmatrix}, $$ together with Figure \ref{fig:heat_map}. While the code computes a data-set of this size in less than 5 minutes on a laptop, choosing larger data-sets would not give us better results, but the clarity of the figures would be impacted. In particular, $93.7$\% of parameters in $R^*(5)$ undergo a saddle node with Hill coefficient under 100, compared to $0.4$\% outside of $R^*(5)$. Then, a $\chi^2$ test is run with $M$ as input, giving a $p$-value that is numerically $0$.
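The $\chi^2$ computation on the table $M$ reported above is easy to reproduce. The following standard-library-only Python sketch (the function name is illustrative; the paper's {\tt toggle\_switch\_chi\_test} may differ in implementation details) computes the Pearson statistic and, using the fact that the survival function of the $\chi^2$ distribution with one degree of freedom is $\operatorname{erfc}(\sqrt{x/2})$, the corresponding $p$-value:

```python
import math

# Contingency table reported in the text:
# rows = {saddle node found, no saddle node},
# columns = {parameter in R*(5), parameter in the complement}.
M = [[1116, 36],
     [74, 8685]]

def chi2_2x2(table):
    """Pearson chi-squared statistic and p-value for a 2x2 table (1 dof)."""
    row_sums = [sum(row) for row in table]
    col_sums = [table[0][j] + table[1][j] for j in range(2)]
    n = sum(row_sums)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_sums[i] * col_sums[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    # Survival function of the chi-squared distribution with 1 dof.
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value

chi2, p = chi2_2x2(M)
# The statistic is in the thousands, so p underflows to 0.0 in double
# precision, matching the "p-value of 0" reported above.
```

The same numbers can be obtained with {\tt scipy.stats.chi2\_contingency}, which by default applies a continuity correction to $2\times 2$ tables and therefore reports a slightly smaller statistic.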
This value provides strong statistical evidence that combinatorial saddles correspond to smooth saddles and that our approach is able to find the majority of saddle nodes. We remark that the values of $M$ do not include the parameters for which we could find a change in the number of equilibria but for which Newton's method did not return a solution for the saddle node problem \eqref{eq:num_saddle_node}. These parameters were few ($89$) in our run and are the product of numerical artifacts; as such, they should be considered outliers in our discussion. \begin{remark} Increasing the maximum Hill coefficient considered increases the likelihood of finding saddle nodes in $R^*(5)$, but also the likelihood of outliers due to numerical instabilities at high Hill coefficient values. \end{remark} \if False \subsection{Heat map section} \label{sec:toggle switch heat map} While Section \ref{sec:statistical analysis of} presents a general overview of the dynamics over the whole of parameter space, we would like to have a more detailed understanding of where the transition between bistability and monostability occurs in the Toggle Switch. This presentation revolves around Figure \ref{fig:heat_map}, but in order to present it we need a projection of the parameter space into 2D. We begin by presenting a method for visualizing parameters in the Toggle Switch. For a fixed $\xi^* \in \Xi^*$, we are interested not only in knowing which DSGRN parameter region it lies in, but also in measuring how close it lies to the boundary of that region. Recall that even after nondimensionalization, the $9$ DSGRN parameter regions associated with the Toggle Switch Hill model represent semi-algebraic subsets of $\Lambda^*$ which is $6$-dimensional. Thus, we will define a projection into a bounded subset of $\rr^2$ which preserves these parameter regions and their relative distances to the boundaries.
To start, we introduce new parameters $\setof*{a_1, b_1, a_2, b_2}$ defined by \[ a_1 := \ell^*_{1,2} \qquad b_1 := \ell_{1,2}^* + \delta_{1,2}^* \qquad a_2 := \frac{\ell_{2,1}^*}{\gamma_2^*} \qquad b_2 := \frac{\ell_{2,1}^* + \delta_{2,1}^*}{\gamma_2^*}. \] Next, we define the nonlinear transformation $\psi : \Xi^* \to \rr^2 \times \rr^2$ by the formula $\psi(\xi^*) = (a, b)$. The motivation for this transformation is as follows. Observe that since $\Xi^* \subseteq (0, \infty)^5$, we have \[ \image \psi \subseteq \setof{(a, b) \in \rr^2 \times \rr^2 : 0 < a_i < b_i, \ i = 1,2}. \] After rewriting the polynomial inequalities in Table \ref{tab:parameter_regions} in these new coordinates, we see that the $9$ DSGRN parameter regions, as well as their boundaries, are defined by linear manifolds. For example, the region associated with parameter node $5$ is defined in these coordinates by \[ R(5) = \setof*{(a, b) \in \rr^2 \times \rr^2 : 0 < a_1 < 1 < b_1 \ \text{and} \ 0 < a_2 < 1 < b_2 } \] and the boundary separating $R(5)$ and $R(6)$ is given by \[ \partial R(5) \cap \partial R(6) = \setof*{(a, b) \in \rr^2 \times \rr^2 : 0 < a_1 < 1 < b_1, \ a_2 = 1, \ \text{and} \ 1 < b_2 }. \] Next, fix positive constants $\overbar{a}, \overbar{b}$ and define $K_{\overbar{a}, \overbar{b}} \subset \image \psi$ by \[ K_{\overbar{a}, \overbar{b}} = \setof*{(a, b) \in \image \psi : \norm{a}_\infty \leq \overbar{a}, \norm{b}_\infty \leq \overbar{b}}.
\] We define another map, $g : K_{\overbar{a}, \overbar{b}} \to [0,3]^2$, by the formula $g(a, b) = (g_1(a,b),g_2(a,b))$ where $g_1,g_2$ are defined by the formulas \[ g_1(a,b) = \begin{cases} b_2 & \text{if} \ b_2 \leq 1 \\ 1+\frac{ 1 - a_2}{b_2 - a_2} & \text{if} \ a_2 < 1 < b_2 \\ 2 + \frac{a_2-1}{\overbar{a} - 1} & \text{if} \ 1 \leq a_2 \end{cases} \qquad g_2(a,b) = \begin{cases} b_1 & \text{if} \ b_1 \leq 1 \\ 1+\frac{ 1 - a_1}{b_1 - a_1} & \text{if} \ a_1 < 1 < b_1 \\ 2 + \frac{a_1-1}{\overbar{a} - 1} & \text{if} \ 1 \leq a_1 \end{cases} \] We use the previous constructions to visualize parameters as follows. Given a fixed collection of parameters $\setof*{\xi^*_1, \dotsc, \xi^*_M} \subset \Xi^*$ we choose $\overbar{a}, \overbar{b}$ sufficiently large so that $\psi(\xi^*_j) \in K_{\overbar{a}, \overbar{b}}$ for $1 \leq j \leq M$. Therefore, the mapping $g \circ \psi$ is a projection of these parameters from $\Xi^*$ into $[0,3]^2$. Moreover, observe that $g \circ \psi$ maps each of the $9$ parameter regions to a distinct unit square in $[0,3]^2$ and each boundary separating these parameter regions is mapped into lines of the form $g_i = j$ with $i \in \setof*{1,2}$ and $j \in \setof*{1,2}$. For instance, parameters in $R(5)$ are mapped into the square $[1,2]^2$ and parameters on the boundary between $R(5)$ and $R(2),R(4),R(6),R(8)$ are mapped into the lines $g_1 = 1, g_2 = 1, g_2 = 2, g_1 = 2$ respectively.
Thus, we can create large samples of parameters knowing that $R(5)$ will be represented in the sample roughly as often as the other parameter regions. Having constructed a large sample of parameters, we can use the techniques presented in Section \ref{ss:visualDSGRN} to project each parameter to the square $[0,3]^2$. For each parameter sample $\xi^* \in \Xi^*$, we are interested in finding any saddle-node bifurcation that happens along the path $\curve(s)$ as presented in Section \ref{sec:the toggle switch saddle}. In practice we restrict ourselves to Hill coefficients satisfying $1 \leq d \leq 100$. The first visual result we want to present is the Hill coefficient at which saddle nodes happen. For this, we find saddle nodes with respect to the Hill coefficient, then project the parameter onto the $[0,3]^2$ square and represent the Hill coefficient as a heat map. In Figure \ref{fig:heat_map}, the $x$ and $y$ axes represent $g_1$ and $g_2$, while the color indicates the lowest Hill coefficient $d$ for which we could find a saddle node at the given parameter value. Two additional pieces of information are given. Parameters where we could find a change in the number of numerical equilibria, but for which the saddle node problem did not give us a numerical solution, are labeled ``bad candidates'' and are plotted in a scatter plot. Then, parameters for which we could find more than a single saddle node are also stored and plotted under the name of ``multiple saddles''. \begin{figure}[h] \begin{center} \includegraphics[width = 0.225\textwidth]{dsgrn_heat_plot.pdf} \includegraphics[width = 0.225\textwidth]{all_results.pdf} \caption{Left: Using the projection presented in Section \ref{ss:visualDSGRN}, a heat map is plotted with the color indicating the smallest Hill coefficient undergoing a saddle node.
Right: Using the same projection, parameters are plotted in blue if they don't undergo any saddle node, in green if they undergo one saddle node, in orange if they undergo multiple saddle nodes and in red if the bisection algorithm found a saddle node that was not numerically confirmed with Equation \eqref{eq:num_saddle_node}.} \label{fig:heat_map} \end{center} \end{figure} This figure gives an intuition of what we would like to prove: choosing parameters in $R^*(5)$ gives the highest likelihood of finding a saddle node at a relatively low Hill coefficient. Looking at this map, we observe that the bottom left of the center region seems to be the best location for a practical bistable switch, since most large Hill coefficients there lie above the saddle-node value. We also notice that some saddle nodes take place outside the center region. To make this clearer, we refer to Figure \ref{fig:heat_map}b, where the parameters that undergo a saddle node are plotted without the associated Hill coefficient. \fi \subsection{Analysis of the Toggle Switch Hill model} Following the Hill model construction defined in Section \ref{sec:hill models} for the Toggle Switch yields a Hill function model associated with the ODEs in Equation \eqref{eq:Toggle_ODEs} as follows. We observe that the Toggle Switch has two state variables of interest, denoted by $x_1, x_2$, and thus the appropriate phase space is $X := (0, \infty)^2$. For $i = 1,2$, the state variable $x_i$ is assigned a linear decay parameter, $\gamma_i$. In addition, each edge of the GRN is assigned a Hill function, both of which are negative since both edges of the Toggle Switch model are repressing. These Hill functions, denoted by $H^-_{1,2}$ and $H^-_{2,1}$, contribute an additional 8 parameters to the model which are denoted by $\setof*{\ell_{1,2}, \delta_{1,2}, \theta_{1,2}, d_{1,2}, \ell_{2,1}, \delta_{2,1}, \theta_{2,1}, d_{2,1}}$.
Thus, we define our parameter space for the Toggle Switch to be $\Lambda := (0, \infty)^{10}$ and, as described in Section \ref{sec:hill models}, we collect the parameters into a vector denoted by \[ \lambda := \paren*{\gamma_1, \ell_{1,2}, \delta_{1,2}, \theta_{1,2}, d_{1,2}, \gamma_2, \ell_{2,1}, \delta_{2,1}, \theta_{2,1}, d_{2,1}} \in \Lambda. \] Observe that since each node of the Toggle Switch has only a single incoming edge, we associate each coordinate of $\cH$ with the same trivial interaction function defined by $p(z) = z$. Therefore, for fixed $\lambda \in \Lambda$, the linear and nonlinear terms defining the Hill model are given by \begin{equation} \cL_{\lambda}(x) = -\begin{pmatrix} \gamma_1 & 0 \\ 0 & \gamma_2 \end{pmatrix} x \qquad \cH_{\lambda}(x) = \begin{pmatrix} H^-_{1,2}(x_2) \\ H^-_{2,1}(x_1) \end{pmatrix} \end{equation} where the Hill functions are given explicitly by the formulas \[ H_{1,2}^-(x_2) = \ell_{1,2} + \delta_{1,2} \frac{\theta_{1,2}^{\hill_{1,2}}}{\theta_{1,2}^{\hill_{1,2}} + x_2^{\hill_{1,2}}}, \qquad H_{2,1}^-(x_1) =\ell_{2,1} + \delta_{2,1} \frac{\theta_{2,1}^{\hill_{2,1}}}{\theta_{2,1}^{\hill_{2,1}} + x_1^{\hill_{2,1}}}. \] Combining these yields the Toggle Switch Hill model defined for $x \in X$ and $\lambda \in \Lambda$ by the formula \begin{equation} \label{eq:toggle switch hill model} f_\lambda(x) = \cL_{\lambda}(x) + \cH_{\lambda}(x), \end{equation} which is the vectorized form of the ODEs in Equation \eqref{eq:Toggle_ODEs}. Observe that despite its simplicity, this model already has a $10$-dimensional parameter space. Consequently, studying the global dynamics is a challenge and we show in Section \ref{sec:toggle_switch_results} that even this model exhibits dynamics which are far from completely understood. Nevertheless, the Toggle Switch is generically expected to exhibit only two dynamical phenotypes.
Either there is a unique stable equilibrium which is globally attracting, or there are three equilibria, two stable and one unstable. We refer to a parameter for the first case as {\em monostable}, while the second is called {\em bistable}. In this example, we demonstrate that our combined combinatorial/numerical approach is capable of reliably finding parameters operating in either regime as well as finding non-generic parameters on the boundary between these regimes. These are parameter values at which a saddle-node bifurcation occurs. \subsection{Analysis of the Toggle Switch in DSGRN} \label{sec:analysis of the} The relevant parameters for the combinatorial Toggle Switch model analyzed using DSGRN are denoted by \[ \xi := \paren*{\gamma_1, \ell_{1,2}, \delta_{1,2}, \theta_{1,2}, \gamma_2, \ell_{2,1}, \delta_{2,1}, \theta_{2,1}} \in \rr^8. \] We define $\Xi := (0, \infty)^8 \subset \rr^8$ to be the combinatorial parameter space. Despite the fact that $\Xi$ is $8$-dimensional, DSGRN computes the combinatorial dynamics for the Toggle Switch in less than a second on a basic laptop. The output of DSGRN is a partition of $\Xi$ into 9 distinct semi-algebraic sets called {\em parameter regions} (i.e.~sets defined by polynomial inequalities in the 8 parameters). Each region corresponds to a single parameter node in the DSGRN parameter graph, denoted $\setof*{v_1,\dotsc, v_9}$. For each parameter node DSGRN also outputs a combinatorial dynamical phenotype. The explicit polynomial inequalities associated with each parameter node and their dynamical phenotypes are presented in Table \ref{tab:parameter_regions}. Every node except $v_5$ is labelled as ``monostable'' implying that DSGRN finds exactly 1 stable combinatorial equilibrium at these nodes.
On the other hand $v_5$ is labelled as ``bistable'' meaning that DSGRN finds exactly 2 stable combinatorial equilibria at this node. Observe that a combinatorial saddle-node bifurcation occurs between node $v_5$ and each of its adjacent nodes which are the nodes $\setof*{v_2, v_4, v_6, v_8}$. Consequently, we refer to $v_5$ as the {\em critical parameter node} which is defined by the associated semi-algebraic set \[ C := \setof{\xi \in \Xi : \ell_{1,2} < \gamma_1 \theta_{2,1} < \ell_{1,2} + \delta_{1,2} \ \text{and} \ \ell_{2,1} < \gamma_2 \theta_{1,2} < \ell_{2,1} + \delta_{2,1}}. \] Following the methodology described in this paper, we are inclined to numerically search for saddle-node bifurcations along parameter paths passing through the boundaries of $C$. In the remainder of this section we will demonstrate statistically that DSGRN has essentially found all of the saddle-node bifurcations. More specifically, define $\pi_\Xi : \Lambda \to \Xi$ by the formula \[ \pi_\Xi \paren*{\gamma_1, \ell_{1,2}, \delta_{1,2}, \theta_{1,2}, d_{1,2}, \gamma_2, \ell_{2,1}, \delta_{2,1}, \theta_{2,1}, d_{2,1}} = \paren*{\gamma_1, \ell_{1,2}, \delta_{1,2}, \theta_{1,2}, \gamma_2, \ell_{2,1}, \delta_{2,1}, \theta_{2,1}} \] (i.e.~projection onto the coordinates which do not correspond to Hill coefficients). We will argue that with very high probability, $\lambda$ is a bistable parameter for $f$ if and only if $\pi_\Xi \lambda \in C$. 
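In code, membership in $C$ amounts to checking the two chains of inequalities above. A minimal sketch (the function name and the sample values are illustrative and not part of the DSGRN codebase), using the coordinate order of $\xi$ defined above:

```python
def in_critical_region(xi):
    """Test membership in the critical (bistable) region C for the
    Toggle Switch, with xi ordered as
    (gamma1, ell12, delta12, theta12, gamma2, ell21, delta21, theta21)."""
    gamma1, ell12, delta12, theta12, gamma2, ell21, delta21, theta21 = xi
    return (ell12 < gamma1 * theta21 < ell12 + delta12
            and ell21 < gamma2 * theta12 < ell21 + delta21)

# Both chains of inequalities hold: 0.5 < 1 < 1.5 twice.
assert in_critical_region((1.0, 0.5, 1.0, 1.0, 1.0, 0.5, 1.0, 1.0))
# gamma1 * theta21 = 2 exceeds ell12 + delta12 = 1.5, so the first chain fails.
assert not in_critical_region((2.0, 0.5, 1.0, 1.0, 1.0, 0.5, 1.0, 1.0))
```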
\newcommand{\ra}[1]{\renewcommand{\arraystretch}{#1}} \begin{table*}\centering \begin{tabular}{@{}clll@{}} \toprule Region & Phenotype & Inequalities & Reduced Inequalities \\ \midrule \multirow{2}{*}{$1$} &\multirow{2}{*}{Monostable} & $\gamma_1 \theta_{2,1} < \ell_{1,2}$ & $1 < \ell_{1,2}$ \\ {} & {} & $\ell_{2,1} + \delta_{2,1} < \gamma_2 \theta_{1,2}$ & $\ell_{2,1} + \delta_{2,1} < \gamma_2$ \\ \midrule \multirow{2}{*}{$2$} &\multirow{2}{*}{Monostable} & $\gamma_1 \theta_{2,1} < \ell_{1,2}$ & $1 < \ell_{1,2}$ \\ {} & {} & $\ell_{2,1} < \gamma_2 \theta_{1,2} < \ell_{2,1} + \delta_{2,1}$ & $\ell_{2,1} < \gamma_2 < \ell_{2,1} + \delta_{2,1}$ \\ \midrule \multirow{2}{*}{$3$} &\multirow{2}{*}{Monostable} & $\gamma_1 \theta_{2,1} < \ell_{1,2}$ & $1 < \ell_{1,2}$ \\ {} & {} & $\gamma_2 \theta_{1,2} < \ell_{2,1}$ & $\gamma_2 < \ell_{2,1}$ \\ \midrule \multirow{2}{*}{$4$} &\multirow{2}{*}{Monostable} & $\ell_{1,2} < \gamma_1 \theta_{2,1} < \ell_{1,2} + \delta_{1,2}$ & $\ell_{1,2} < 1 < \ell_{1,2} + \delta_{1,2}$ \\ {} & {} & $\ell_{2,1} + \delta_{2,1} < \gamma_2 \theta_{1,2}$ & $\ell_{2,1} + \delta_{2,1} < \gamma_2$ \\ \midrule \multirow{2}{*}{$5$} &\multirow{2}{*}{Bistable} & $\ell_{1,2} < \gamma_1 \theta_{2,1} < \ell_{1,2} + \delta_{1,2}$ & $\ell_{1,2} < 1 < \ell_{1,2} + \delta_{1,2}$ \\ {} & {} & $\ell_{2,1} < \gamma_2 \theta_{1,2} < \ell_{2,1} + \delta_{2,1}$ & $\ell_{2,1} < \gamma_2 < \ell_{2,1} + \delta_{2,1}$ \\\midrule \multirow{2}{*}{$6$} &\multirow{2}{*}{Monostable} & $\ell_{1,2} < \gamma_1 \theta_{2,1} < \ell_{1,2} + \delta_{1,2}$ & $\ell_{1,2} < 1 < \ell_{1,2} + \delta_{1,2}$ \\ {} & {} & $\gamma_2 \theta_{1,2} < \ell_{2,1}$ & $\gamma_2 < \ell_{2,1}$ \\ \midrule \multirow{2}{*}{$7$} &\multirow{2}{*}{Monostable} & $\ell_{1,2} + \delta_{1,2} < \gamma_1 \theta_{2,1}$ & $\ell_{1,2} + \delta_{1,2} < 1$ \\ {} & {} & $\ell_{2,1} + \delta_{2,1} < \gamma_2 \theta_{1,2}$ & $\ell_{2,1} + \delta_{2,1} < \gamma_2$ \\ \midrule \multirow{2}{*}{$8$} 
&\multirow{2}{*}{Monostable} & $\ell_{1,2} + \delta_{1,2} < \gamma_1 \theta_{2,1}$ & $\ell_{1,2} + \delta_{1,2} < 1$ \\ {} & {} & $\ell_{2,1} < \gamma_2 \theta_{1,2} < \ell_{2,1} + \delta_{2,1}$ & $\ell_{2,1} < \gamma_2 < \ell_{2,1} + \delta_{2,1}$ \\ \midrule \multirow{2}{*}{$9$} &\multirow{2}{*}{Monostable} & $\ell_{1,2} + \delta_{1,2} < \gamma_1 \theta_{2,1}$ & $\ell_{1,2} + \delta_{1,2} < 1$ \\ {} & {} & $\gamma_2 \theta_{1,2} < \ell_{2,1}$ & $\gamma_2 < \ell_{2,1}$ \\ \bottomrule \end{tabular} \caption{The result of the DSGRN analysis for the Toggle Switch. DSGRN partitions $\Xi$ into $9$ parameter regions on which the combinatorial dynamic phenotype is known and constant. These regions are the semi-algebraic subsets of $\Xi$ satisfying the inequalities in the third column. For the reduced parameter space defined in Section \ref{sec:reducing the number}, the corresponding parameter regions are the semi-algebraic subsets of $\Xi^*$ described by the inequalities in the last column.} \label{tab:parameter_regions} \end{table*} \subsection{Reducing the number of parameters} \label{sec:reducing the number} To simplify the statistical analysis of the Toggle Switch example we will make several changes to the model in order to reduce the dimension of the parameter space. However, none of these changes are required for the methods discussed in this paper nor do they impact the results. We will reduce the number of parameters via two mechanisms. First, we will equate the Hill coefficients associated to both edges in the Toggle Switch. This amounts to assuming that $d := \hill_{1,2} = \hill_{2,1}$ is the common value of both Hill coefficients, which reduces the number of parameters in the model by $1$. Second, we will further reduce the dimension of the parameter space via non-dimensionalization of the parameters as follows. Begin by letting $f_\lambda$ denote the Hill model for the Toggle Switch previously defined in Equation \eqref{eq:toggle switch hill model}.
Suppose $x : (-\epsilon, \epsilon) \to X$ parameterizes a trajectory segment for $f_\lambda$ and a typical point along this trajectory is denoted by $x(t) = (x_1(t), x_2(t))$. We consider the rescaled state and time variables defined by \[ x_1 := k_1 y_1, \quad x_2 := k_2 y_2, \quad t := k_t \tau, \] for some positive constants, $\setof{k_1, k_2, k_t}$. The differential equation satisfied by $x_1$ can be rewritten as \[ \frac{k_1}{k_t} \ddx{y_1}{\tau} = \ddx{x_1}{t} = -\gamma_1 k_1 y_1 + \ell_{1,2} + \delta_{1,2} \frac{\theta_{1,2}^{\hill}}{\theta_{1,2}^{\hill} + \paren{k_2 y_2}^{\hill}}. \] After a similar computation for $x_2$ and multiplying through by $\frac{k_t}{k_1}$ and $\frac{k_t}{k_2}$ respectively, we obtain the system of differential equations in the new variables given by \begin{align*} \ddx{y_1}{\tau} & = -\gamma_1 k_t y_1 + \frac{\ell_{1,2} k_t}{k_1} + \frac{\delta_{1,2} k_t}{k_1} \frac{\paren{\frac{\theta_{1,2}}{k_2}}^{\hill}}{\paren{\frac{\theta_{1,2}}{k_2}}^{\hill} + y_2^{\hill}} \\ \ddx{y_2}{\tau} & = -\gamma_2 k_t y_2 + \frac{\ell_{2,1} k_t}{k_2} + \frac{\delta_{2,1} k_t}{k_2} \frac{\paren{\frac{\theta_{2,1}}{k_1}}^{\hill}}{\paren{\frac{\theta_{2,1}}{k_1}}^{\hill} + y_1^{\hill}}. \end{align*} Observe that for any choice of $k_1,k_2,k_t$, these differential equations still have the form of a Hill model and the coordinate change, $(x_1,x_2,t) \mapsto (y_1,y_2, \tau)$, is a dynamical conjugacy between these Hill models.
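This conjugacy is easy to check numerically. The sketch below uses arbitrary test parameter values together with the particular rescaling $k_1 = \theta_{2,1}$, $k_2 = \theta_{1,2}$, $k_t = 1/\gamma_1$ chosen in this section, and verifies that $\frac{k_t}{k_i} f_{\lambda,i}(x)$ agrees with the reduced vector field evaluated at $y = (x_1/k_1, x_2/k_2)$:

```python
# Arbitrary test values for (gamma1, ell12, delta12, theta12) and
# (gamma2, ell21, delta21, theta21), plus the common Hill coefficient d.
g1, l12, d12, t12 = 2.0, 0.3, 1.1, 3.0
g2, l21, d21, t21 = 0.7, 0.4, 0.9, 4.0
d = 5.0

def f(x1, x2):
    """Toggle Switch Hill model in the original coordinates."""
    return (-g1 * x1 + l12 + d12 * t12**d / (t12**d + x2**d),
            -g2 * x2 + l21 + d21 * t21**d / (t21**d + x1**d))

# Rescaling constants and the reduced (starred) parameters.
k1, k2, kt = t21, t12, 1.0 / g1
l12s, d12s = l12 / (g1 * t21), d12 / (g1 * t21)
l21s, d21s = l21 / (g1 * t12), d21 / (g1 * t12)
g2s = g2 / g1

def f_star(y1, y2):
    """Reduced Toggle Switch Hill model."""
    return (-y1 + l12s + d12s / (1 + y2**d),
            -g2s * y2 + l21s + d21s / (1 + y1**d))

x1, x2 = 1.7, 2.9                 # arbitrary state
y1, y2 = x1 / k1, x2 / k2         # rescaled state
fx1, fx2 = f(x1, x2)
fy1, fy2 = f_star(y1, y2)
# The rescaled original vector field matches the reduced one.
assert abs((kt / k1) * fx1 - fy1) < 1e-12
assert abs((kt / k2) * fx2 - fy2) < 1e-12
```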
In particular, we choose the rescaling parameters: \[ k_1 = \theta_{2,1}, \quad k_2 = \theta_{1,2}, \quad k_t = \frac{1}{\gamma_1}, \] and define the {\em reduced parameters} \begin{align*} \ell_{1,2}^* & := \frac{\ell_{1,2}}{\gamma_1 \theta_{2,1}} \\ \delta_{1,2}^* & := \frac{\delta_{1,2}}{\gamma_1 \theta_{2,1}} \\ \ell_{2,1}^* & := \frac{\ell_{2,1}}{\gamma_1 \theta_{1,2}} \\ \delta_{2,1}^* & := \frac{\delta_{2,1}}{\gamma_1 \theta_{1,2}} \\ \gamma_2^* & := \frac{\gamma_2}{\gamma_1}, \end{align*} so that the non-dimensionalized version of Equation \eqref{eq:toggle switch hill model} with identical Hill coefficients is the Hill model defined by the formula \begin{equation} f^*(y) := \begin{pmatrix} - y_1 + \ell_{1,2}^* + \delta_{1,2}^* \frac{1}{1 + y_2^\hill} \\ - \gamma_2^* y_2 + \ell_{2,1}^* + \delta_{2,1}^* \frac{1}{1 + y_1^\hill} \end{pmatrix}. \end{equation} We refer to $f^*$ as the {\em reduced} Toggle Switch Hill model and observe that $f^*$ has only $6$ variable parameters (one of which is the common value of both Hill coefficients) and 3 fixed parameters (both threshold parameters and the linear decay parameter on $y_1$). Thus, we define the reduced parameter space associated to $f^*$ as the subspace, $\Lambda^* \subset \Lambda$, defined by \[ \Lambda^* := \setof*{\lambda \in \Lambda : \gamma_1 = \theta_{2,1} = \theta_{1,2} = 1, \ \hill_{1,2} = \hill_{2,1} = d} \cong (0, \infty)^6. \] Consequently, we denote a typical reduced parameter in $\Lambda^*$ by \[ \lambda^* := \paren*{\ell_{1,2}^*, \delta_{1,2}^*, \gamma_2^*, \ell_{2,1}^*, \delta_{2,1}^*, d}. \] In addition, observe that the dimensionless parameter, $\gamma_2^*$, is the ratio of the linear decay rates for $x_1$ and $x_2$ and since the model is symmetric with respect to these state variables, we can assume without loss of generality that $\gamma_2^* \geq 1$.
Since the dynamics generated by $f$ and $f^*$ are conjugate when restricted to the subset \\ $\setof*{\lambda \in \Lambda : d_{1,2} = d_{2,1}}$, we have performed all computations described in the remaining sections using the reduced Toggle Switch model, or equivalently, under the assumption that the Hill model in Equation \eqref{eq:toggle switch hill model} has fixed parameters $\theta_{2,1} = \theta_{1,2} = \gamma_1 = 1$ and identified Hill coefficients $d_{1,2} = d_{2,1} = d$. We also note that the HillCont library has been written to allow these sorts of constraints to be implemented just as easily as a general Hill model, and takes advantage of the reduced number of parameters for faster computation. However, it is crucial to point out that none of the algorithms in this paper rely on either of the reductions performed on this example. After imposing these parameter constraints for the DSGRN parameter regions we obtain a {\em reduced} version of the combinatorial parameter space denoted by $\Xi^* := (0, \infty)^5 \subset \rr^5$ where a typical reduced combinatorial parameter has the form $\xi^* = \paren*{\ell_{1,2}^*, \delta_{1,2}^*, \gamma_2^*, \ell_{2,1}^*, \delta_{2,1}^*}$. Observe that the $9$ DSGRN parameter regions are projected onto semi-algebraic subsets of $\Xi^*$ defined by the {\em reduced inequalities} given in the last column of Table \ref{tab:parameter_regions}. Of course, the dynamic phenotypes for each region are unchanged as a consequence of the conjugacy. Analogous to the discussion in Section \ref{sec:analysis of the} we define $\pi_{\Xi^*} : \Lambda^* \to \Xi^*$ to be projection onto the first $5$ coordinates and in this context, our claim from the previous section can be restated as follows. With high probability, $\lambda^* \in \Lambda^*$ is a bistable parameter for $f^*$ if and only if $\pi_{\Xi^*} \lambda^* \in C$.
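As a quick numerical illustration of this claim, one can take an illustrative reduced parameter satisfying the reduced inequalities for region $5$ with a large Hill coefficient, iterate the bootstrap-style fixed-point map of Theorem \ref{thm:toggle_bootstrap_eqbounds} below, and check that two distinct corner equilibria emerge. The sketch below uses arbitrary parameter values and is not the HillCont implementation:

```python
# Illustrative reduced parameters satisfying the region-5 inequalities
# ell12 < 1 < ell12 + delta12 and ell21 < gamma2 < ell21 + delta21.
l12, d12, g2, l21, d21, d = 0.5, 1.0, 1.0, 0.5, 1.0, 10.0

def H12(x):  # negative Hill function with theta = 1
    return l12 + d12 / (1 + x**d)

def H21(x):
    return l21 + d21 / (1 + x**d)

# u = (alpha1, alpha2, beta1, beta2), starting from the coarsest bounds.
u = (l12, l21, l12 + d12, l21 + d21)
for _ in range(200):
    a1, a2, b1, b2 = u
    u = (H12(b2), H21(b1) / g2, H12(a2), H21(a1) / g2)
a1, a2, b1, b2 = u

# The limiting rectangle is non-degenerate, i.e., the parameter is bistable.
assert b1 - a1 > 0.1 and b2 - a2 > 0.1
# The corners (a1, b2) and (b1, a2) are equilibria of the reduced model.
for y1, y2 in [(a1, b2), (b1, a2)]:
    assert abs(-y1 + H12(y2)) < 1e-9
    assert abs(-g2 * y2 + H21(y1)) < 1e-9
```

Repeating the same iteration with a small Hill coefficient (e.g.\ $d = 1$) collapses the rectangle to a point, consistent with the monostable case.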
\subsection{The toggle switch saddle-node bifurcation problem} \label{sec:the toggle switch saddle} In this section we demonstrate how the saddle-node bifurcation problem described in Section \ref{sec:eq_and_SN} is defined and solved for the toggle switch. Recalling our previous discussion, the parameter of interest is the shared Hill coefficient $\hill$ for values $\hill \geq 1$ and we write the remaining parameters as a vector, $\mu := \paren*{\ell_{1,2}, \delta_{1,2}, \gamma_2, \ell_{2,1}, \delta_{2,1}} \in \image \pi_{\Xi^*}$. Observe that if $\mu$ is fixed, then the toggle switch Hill model is a one parameter family of the form, $f_{\mu} : [0, \infty)^2 \times [1, \infty] \to \rr^2$, given explicitly by the formula \begin{equation} f_{\mu}(x, \hill) = - \begin{pmatrix} 1 & 0 \\ 0 & \gamma_2 \end{pmatrix} x + \begin{pmatrix} H_{1,2}^- (x_2, \hill) \\ H_{2,1}^- (x_1, \hill) \end{pmatrix} = \begin{pmatrix} -x_1 + \ell_{1,2} + \delta_{1,2} \frac{1}{1 + x_2^\hill} \\ -\gamma_2 x_2 + \ell_{2,1} + \delta_{2,1} \frac{1}{1 + x_1^\hill} \\ \end{pmatrix} \end{equation} To define the saddle-node bifurcation problem we first consider the algorithm for finding equilibria. Both $\cH_1$ and $\cH_2$ satisfy Definition \ref{def:monotone_factorization} so the bootstrap algorithm can be applied. The bootstrap map is $\Phi : \rr^4 \to \rr^4$ defined for $u \in \rr^4$ by \[ \Phi(u) = \begin{pmatrix} H_{1,2}(u_4) \\ \frac{1}{\gamma_2} H_{2,1}(u_3) \\ H_{1,2}(u_2)\\ \frac{1}{\gamma_2} H_{2,1}(u_1) \end{pmatrix}. \] Following Algorithm \ref{alg:bootstrap_equilibria} we compute the iterates of $\Phi$ defined by \[ u^{(0)} := \begin{pmatrix} \ell_{1,2} \\ \ell_{2,1} \\ \ell_{1,2} + \delta_{1,2} \\ \ell_{2,1} + \delta_{2,1} \\ \end{pmatrix} \qquad u^{(n)} := \Phi(u^{(n-1)}) \quad \forall n \geq 1.
\] Theorem \ref{thm:bootstrap_eqbounds} ensures that $u^{(n)}$ converges to a fixed point of $\Phi$, $\hat{u} = (\hat{\alpha}, \hat{\beta})$, and that all equilibria for $f_\mu$ are contained in the subset $[\hat{\alpha}_1, \hat{\beta}_1] \times [\hat{\alpha}_2, \hat{\beta}_2] \subset X$. However, in the case of the toggle switch we can prove a stronger version of Theorem \ref{thm:bootstrap_eqbounds}. \begin{theorem} \label{thm:toggle_bootstrap_eqbounds} Suppose $d \geq 1$ and $\mu \in \image \pi_{\Xi^*}$ is a parameter such that the nullclines of $f_\mu$ only intersect transversally. Let $\Phi : \rr^4 \to \rr^4$ be the associated bootstrap map and suppose the orbit through $u^{(0)}$ converges to $\hat{u} := (\hat{\alpha}, \hat{\beta}) \in \rr^2 \times \rr^2$ and define $ \hat R := [\hat{\alpha}_1, \hat{\beta}_1] \times [\hat{\alpha}_2, \hat{\beta}_2] \subset X$. Then $\hat{u}$ is asymptotically stable and exactly one of the following is true. \begin{enumerate} \item $\hat{R}$ is a degenerate rectangle (i.e.~for $i = 1,2$, $\hat{\alpha}_i = \hat{\beta}_i$) and $f_\mu$ has a unique equilibrium, $\hat{x} = (\hat{\alpha}_1, \hat{\alpha}_2)$. Moreover, $\hat{x}$ is stable. \item $f_\mu$ has exactly three equilibria. Two equilibria are stable and lie at the corners of $\hat{R}$. Specifically, \[ \hat{x}_1 = (\hat{\alpha}_1, \hat{\beta}_2), \qquad \hat{x}_2 = (\hat{\beta}_1, \hat{\alpha}_2) \] are stable equilibria. The third equilibrium denoted by $\hat{x}_3$ is unstable and lies in the interior of $\hat{R}$. \end{enumerate}
s <<>> \begin{proof} The stability of $\hat{u}$ follows directly from the computation \[ D\Phi(\hat{u}) = \begin{pmatrix} 0 & 0 & 0 & H_1'(\hat{\beta}_2) \\ 0 & 0 & \frac{1}{\gamma_2} H_2'(\hat{\beta}_2) & 0 \\ 0 & H_1'(\hat{\alpha}_2) & 0 & 0 \\ \frac{1}{\gamma_2} H_2'(\hat{\alpha}_1) & 0 & 0 & 0 \end{pmatrix} \] and since $H_1, H_2$ are negative Hill functions, every nonzero entry of this matrix is negative. However, these nonzero entries are precisely the eigenvalues of $D\Phi(\hat{u})$. To prove the second claim, define $\hat{x}_1 := (\hat{\alpha}_1, \hat{\beta}_2)$ and $\hat{x}_2 := (\hat{\alpha}_2, \hat{\beta}_1)$. Observe that since $(\hat{\alpha}, \hat{\beta})$ is a fixed point of $\Phi$ we have by direct computation \begin{align*} H_1(\hat{\beta}_2) & = \hat{\alpha}_1 \\ H_2(\hat{\beta}_1) & = \gamma_2 \hat{\alpha}_2 \\ H_1(\hat{\alpha}_2) & = \hat{\beta}_1 \\ H_2(\hat{\alpha}_1) & = \gamma_2 \hat{\beta}_2. \end{align*} It follows that \[ f_\mu(\hat{x}_1) = f_\mu(\hat{\alpha_1}, \hat{\beta}_2) = \begin{pmatrix} \hat{\alpha}_1 - H_1(\hat{\beta}_2) \\ \gamma_2 \hat{\beta}_2 - H_2(\hat{\alpha}_1) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \] \[ f_\mu(\hat{x}_2) = f_\mu(\hat{\beta}_1, \hat{\alpha}_2) = \begin{pmatrix} \hat{\beta}_1 - H_1(\hat{\alpha}_2) \\ \gamma_2 \hat{\alpha}_2 - H_2(\hat{\beta}_1) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \] so that $\hat{x}_1, \hat{x}_2$ are equilibria for $f_\mu$. Evidently, if $\hat{R}$ is degenerate, then $\hat{x}_1 = \hat{x}_2$ and by Theorem \ref{thm:bootstrap_eqbounds}, it follows that this is the unique equilibrium for $f_\mu$. On the other hand, suppose $\hat{R}$ is nondegenerate so that $\hat{x}_1 \neq \hat{x}_2$ and $f_\mu$ has at least two equilibria. Note that either $\hat{\alpha}_1 \neq \hat{\alpha}_2$ or $\hat{\beta}_1 \neq \hat{\beta}_2$. We first prove that in fact, both of these inequalities must hold and in particular, $\hat{\alpha}_i < \hat{\beta}_i$ for $i = 1,2$. 
Denote the nullclines of $f_\mu$ by
\[ \cN_1 := \setof*{x \in X : H_1(x_2) - x_1 = 0} \qquad \cN_2 := \setof*{x \in X : \frac{1}{\gamma_2} H_2(x_1) - x_2 = 0}. \]
Observe that at a point $x \in \cN_2$ we have $T_x \cN_2 = \inspan \setof*{(1, \frac{1}{\gamma_2} H_2'(x_1))}$, implying that $\cN_2$ is a smooth $1$-dimensional manifold and the mapping $x \mapsto (1, \frac{1}{\gamma_2} H_2'(x_1))$ is a smooth parameterization of the tangent bundle of $\cN_2$. The second coordinate of this parameterization is negative since $H_2$ is a negative Hill function; in particular, $\cN_2$ is the graph of a monotonically decreasing function of $x_1$.
\end{proof}
\end{theorem}
For $\hill \rightarrow \infty$, we know that in the center parameter region $\mathcal{R}$, defined by the inequalities $\ell_i < \gamma_i \theta_i < \ell_i + \delta_i$, the system has two stable fixed points. We also know that for $\hill \rightarrow 0$ the system \eqref{eq:hill_toggle_switch} trivially has a unique fixed point. We therefore expect that for all parameters in the region $\mathcal{R}$ we can find $x^*,\hill^*, v$ such that the toggle switch undergoes a saddle-node bifurcation at those values. Furthermore, we do not expect to find saddle-node bifurcations outside of this parameter region, and if we do, we expect them to be ``isolas'', that is, saddle-nodes generating a set of equilibria that undergo another saddle-node bifurcation for higher values of $\hill$, as sketched in Figure \ref{fig:isolas}.
\subsection{Statistical analysis of saddle-node bifurcations} \label{sec:statistical analysis of}
\subsubsection{Old material from combinatorial toggle switch discussion}
This analysis strongly motivates our choice to identify the Hill coefficients in the toggle switch.
To see why, observe that if $d < 2$, it can be shown that $f_\lambda$ admits exactly $1$ equilibrium, which is stable. However, if $\pi_\Xi \lambda \in C$, then for some $2 < \hat{d} < \infty$, $f$ undergoes a bifurcation in which an additional stable equilibrium appears for $d>\hat{d}$. Thus, if we seek to determine whether or not a given $\lambda \in \Lambda$ is a bistable parameter, it is natural to look for saddle-node bifurcations with respect to the Hill coefficient. Since we have identified the Hill coefficients, we obtain a $1$-dimensional saddle-node bifurcation problem.
\subsubsection{Old section called ``comparing bistable parameters''}
In this section we consider our claim more carefully. First, we describe a method of sampling fibers of $\pi_\Xi$ consisting of two steps. In the first step, we introduce new parameters, $\setof{\alpha_1, \beta_1, \alpha_2, \beta_2}$, defined by
\[ \alpha_1 := \ell_{1,2} \quad \beta_1 := \ell_{1,2} + \delta_{1,2} \quad \alpha_2 := \frac{\ell_{2,1}}{\gamma_2} \quad \beta_2 := \frac{\ell_{2,1} + \delta_{2,1}}{\gamma_2}. \]
It is often convenient to express this as a nonlinear transformation, $\psi : \Xi \to \rr^2 \times \rr^2$, defined by $\psi(\xi) = (\alpha, \beta)$. The motivation for this transformation is as follows. Observe that the restriction $\Xi \subseteq (0, \infty)^5$ implies that
\[ \image \psi \subseteq \setof{(\alpha, \beta) \in \rr^2 \times \rr^2 : \alpha_i < \beta_i, i = 1,2}. \]
Upon rewriting each of the $9$ parameter regions in terms of these parameters, we see that each region is defined by a linear manifold. Moreover, the boundaries between each pair of adjacent regions are also linear manifolds.
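Because each region is cut out by linear inequalities in the transformed parameters, membership can be tested by elementary comparisons. As an illustrative sketch only (the helper names and the $3 \times 3$ indexing convention below are our own, not taken from any code referenced in this paper), a point $(\alpha, \beta)$ can be classified by locating each interval $[\alpha_i, \beta_i]$ relative to the threshold value $1$:

```python
def interval_position(a, b, theta=1.0):
    """Locate the interval [a, b] relative to the threshold theta:
    0 if it lies below, 1 if theta is in the interior, 2 if it lies above."""
    assert a < b, "expected a nondegenerate interval (alpha_i < beta_i)"
    if b <= theta:
        return 0
    if a >= theta:
        return 2
    return 1  # a < theta < b


def region_index(alpha, beta):
    """Classify (alpha, beta) in R^2 x R^2 into one of the 9 linear regions,
    returned as a pair of indices in {0, 1, 2}."""
    return (interval_position(alpha[0], beta[0]),
            interval_position(alpha[1], beta[1]))
```

With this convention the center region, where $0 < \alpha_i < 1 < \beta_i$ for $i = 1,2$, receives the index pair $(1,1)$.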
For example, the region $U_5$ is given by
\[ U_5 = \setof{(\alpha, \beta) \in \rr^2 \times \rr^2 : 0 < \alpha_1 < 1 < \beta_1 \ \text{and} \ 0 < \alpha_2 < 1 < \beta_2 } \]
and the boundary separating $U_5$ and $U_6$ is given by
\[ \partial U_5 \cap \partial U_6 = \setof{(\alpha, \beta) \in \rr^2 \times \rr^2 : 0 < \alpha_1 < 1 < \beta_1, \ \alpha_2 = 1, \ \text{and} \ 1 < \beta_2 }. \]
Next, fix positive constants $\overbar{\alpha}, \overbar{\beta}$ and define $K \subset \image \psi$ by
\[ K = \setof{(\alpha, \beta) \in \image \psi : \norm{\alpha}_\infty \leq \overbar{\alpha}, \norm{\beta}_\infty \leq \overbar{\beta}}. \]
Define a map $K \to [0,3]^2$ by $(\alpha, \beta) \mapsto (u,v)$ where $u,v$ are defined by the formulas
\[ u = \begin{cases} \beta_2 & \text{if} \ \beta_2 \leq 1 \\ 1+\frac{ 1 - \alpha_2}{\beta_2 - \alpha_2} & \text{if} \ \alpha_2 < 1 < \beta_2 \\ 2 + \frac{\alpha_2-1}{\overbar{\alpha} - 1} & \text{if} \ 1 \leq \alpha_2 \end{cases} \qquad v = \begin{cases} \beta_1 & \text{if} \ \beta_1 \leq 1 \\ 1+\frac{ 1 - \alpha_1}{\beta_1 - \alpha_1} & \text{if} \ \alpha_1 < 1 < \beta_1 \\ 2 + \frac{\alpha_1-1}{\overbar{\alpha} - 1} & \text{if} \ 1 \leq \alpha_1 \end{cases} \]
This is, under the hood, a projection from the full parameter space to what we may call the DSGRN coordinates.
\subsubsection{Inversion of this projection}
Consider fixing $(u,v) \in [0,3]\times [0,3]$ in DSGRN coordinates. The corresponding full parameter depends on the region in which $(u,v)$ lies. We start with $u$ and determine the parameter values $(\gamma_2, \ell_2, \delta_2, \theta_2)$. First, $\theta_2 =1$. The remaining parameters must satisfy
\[ \begin{cases} \frac{\ell_2+\delta_2}{\gamma_2} = u \qquad &\text{ if } u<1,\\ \frac{\gamma_2 - \ell_2}{\delta_2}= u - 1 \qquad &\text{ if } u\in [1,2),\\ \frac{\ell_2}{\gamma_2} = ( u - 2) (\overbar{\alpha} -1) + 1 \qquad &\text{ if } u\geq 2. \end{cases} \]
Notice that the third relationship does not involve $\delta_2$, while the second requires $\gamma_2 > \ell_2$. For $v$ the situation is simpler, because $\gamma_1$ has already been set to 1, so the equations read
\[ \begin{cases} \ell_1+\delta_1 = v \qquad &\text{ if } v<1,\\ \frac{1 - \ell_1}{\delta_1}= v - 1 \qquad &\text{ if } v\in [1,2),\\ \ell_1= ( v - 2) (\overbar{\alpha} -1) + 1 \qquad &\text{ if } v\geq 2. \end{cases} \]
\subsection{Results}\label{sec:toggle_switch_results}
\subsubsection{Initial results}
Using the coordinate system we have introduced, we can sample the square $S = [0,3]\times [0,3]$ uniformly, project each point $(x,y) \in S$ into the appropriate fiber according to a randomization of \eqref{}, and, letting the Hill exponent vary, find the value of the Hill exponent at which the system undergoes a saddle-node bifurcation with respect to the exponent. We plot the results in a heat map, Fig. \ref{fig:heat_map}a.
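The piecewise projection above is straightforward to implement directly. The following is our own minimal sketch (the published computation lives in the accompanying code, which we do not reproduce here); it assumes the sampling bound $\overbar{\alpha} > 1$ is passed in as \texttt{a\_bar}:

```python
def dsgrn_coordinate(a, b, a_bar):
    """Project one interval [a, b] (with a < b) to [0, 3] via the piecewise
    formula: below the threshold 1, straddling it, or above it."""
    if b <= 1.0:
        return b
    if a < 1.0:
        return 1.0 + (1.0 - a) / (b - a)        # a < 1 < b
    return 2.0 + (a - 1.0) / (a_bar - 1.0)      # 1 <= a


def project_to_dsgrn(alpha, beta, a_bar):
    """Map (alpha, beta) to DSGRN coordinates (u, v); note that u is built
    from the index-2 pair and v from the index-1 pair, as in the text."""
    u = dsgrn_coordinate(alpha[1], beta[1], a_bar)
    v = dsgrn_coordinate(alpha[0], beta[0], a_bar)
    return u, v
```

For instance, an interval straddling the threshold, such as $[\alpha_2, \beta_2] = [0.5, 1.5]$, is mapped to $u = 1.5$.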
\begin{figure}[h]
\begin{center}
\includegraphics[width = 0.45\textwidth]{HD_heatmap.png}
\includegraphics[width = 0.45\textwidth]{location_good_par.png}
\caption{ On the left, the heat map of the Hill coefficients projected on DSGRN coordinates; on the right, the projection of the parameters themselves.}
\label{fig:heat_map}
\end{center}
\end{figure}
This figure gives an intuition of what we would like to prove: choosing parameters in the ``center region'' has the highest likelihood of giving us a saddle node for some value of the Hill coefficient. Looking at this map, we observe that the bottom left of the center region seems to be the best location for a practical bistable switch, because there the Hill coefficient needed to undergo the saddle node is the lowest. We also notice that there are saddle nodes taking place outside the center region. For this to be clearer, we refer to Figure \ref{fig:heat_map}b. These parameters will be studied further in Section \ref{sec:isolas}. As additional testing, we fix the value $y = 1.5$ and choose the value of $x$ in a uniformly spaced segment between 0.5 and 2.5. We then sample the fiber of each of these points multiple times, and store the value of the Hill coefficient for which each random parameter undergoes a saddle-node bifurcation. This experiment results in Figure \ref{fig:n_wrt_gamma}, where we clearly see how the likelihood of finding a saddle node (and thus bistability) drops as soon as the $x$-value is lower than 1 or higher than 2.
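The fiber sampling used in these experiments amounts to inverting the projection: given a coordinate, one solves the corresponding case equation for the constrained parameters and draws the remaining degrees of freedom at random. A minimal deterministic sketch for the $u$ coordinate (our own illustration; the default values of the free parameters are arbitrary, whereas the actual sampler draws them randomly) is:

```python
def invert_u(u, a_bar, gamma2=1.0, ell2=None, delta2=1.0):
    """Return (gamma2, ell2, delta2) satisfying the case equation for u, with
    theta2 = 1.  Parameters not pinned down by u are free; the defaults here
    are arbitrary illustrative choices, not values from the paper."""
    if u < 1.0:
        if ell2 is None:
            ell2 = 0.5 * u * gamma2             # caller must keep ell2 < u * gamma2
        delta2 = u * gamma2 - ell2              # (ell2 + delta2) / gamma2 = u
    elif u < 2.0:
        if ell2 is None:
            ell2 = 1.0
        gamma2 = ell2 + (u - 1.0) * delta2      # (gamma2 - ell2) / delta2 = u - 1
    else:
        ell2 = gamma2 * ((u - 2.0) * (a_bar - 1.0) + 1.0)  # ell2 / gamma2 fixed
    return gamma2, ell2, delta2
```

Note that the middle branch automatically satisfies the constraint $\gamma_2 > \ell_2$ whenever $u > 1$ and $\delta_2 > 0$.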
\begin{figure}[h]
\begin{center}
\includegraphics[width = 0.30\textwidth]{n_wrt_gamma.png}
\includegraphics[width = 0.30\textwidth]{average_n_wrt_gamma.png}
\includegraphics[width = 0.30\textwidth]{number_n_wrt_gamma.png}
\caption{ Starting from the left: the value of the Hill coefficient for the saddle nodes we found depending on $x$; the average of the Hill coefficient; the number of saddle nodes per value of $x$ out of a sample size of 25.}
\label{fig:n_wrt_gamma}
\end{center}
\end{figure}
With this motivation, we now turn to the statistical analysis.
\subsubsection{A $\chi^2$ test}
For our problem, the appropriate analysis is a $\chi^2$ test in which we write our samples as observations of two random variables, $Z_1, Z_2$, defined by
\[ Z_1 = \frac{\# \text{SN parameters in } U_5}{\# \text{parameters in } U_5} \qquad Z_2 = \frac{\# \text{SN parameters in } U_5^C}{\# \text{parameters in } U_5^C}. \]
\subsection*{The test in detail}
Following the standard references on Pearson's $\chi^2$ test, https://en.wikipedia.org/wiki/Pearson\%27s\_chi-squared\_test\#Testing\_for\_statistical\_independence and https://en.wikipedia.org/wiki/Chi-squared\_test, we collect the counts in the following table.
\begin{center}
\begin{tabular}{l|c|c|c}
regions: & $U_5$ & $U_5^C$ & all\\
\hline
n samples: & $N_5$ & $T - N_5$ & $T$ \\
n saddles : & $n_5$ & $\tilde n$ & $n_5 + \tilde n$\\
no saddle: & $N_5 - n_5$ & $T - N_5 - \tilde n$ & $T - n_5 - \tilde n$
\end{tabular}
\end{center}
The ``expected'' number of saddles in region $U_5$ under the null hypothesis that saddles are equally distributed is $ E = N_5 \times \frac{n_5 + \tilde n}{T}$, that is, $\textit{samples in }U_5 \times\frac{ \textit{all saddles}} {\textit{all samples}}$. The test statistic is $\xi = \frac{(\textit{expected} - \textit{observed})^2}{\textit{expected}} = \frac{(E - n_5)^2}{E}$. If this statistic is close to 0, the null hypothesis is consistent with the data; if it is far from zero, the null hypothesis does not model reality.
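This bookkeeping can be carried out mechanically. The sketch below (our own) computes the expected count $E$ and, for completeness, the full Pearson statistic, which sums $(O - E)^2/E$ over all four cells of the table; the single-cell quantity $\xi$ above is the first term of that sum:

```python
def expected_saddles(N5, T, n5, n_tilde):
    """Expected number of saddle-node parameters in U5 under the null
    hypothesis that saddle-nodes are distributed independently of region."""
    return N5 * (n5 + n_tilde) / T


def pearson_chi_square(N5, T, n5, n_tilde):
    """Pearson chi-squared statistic for the 2x2 table (region x saddle/no-saddle):
    the sum of (observed - expected)^2 / expected over all four cells."""
    s = n5 + n_tilde                          # total saddles
    cells = [
        (n5,               N5,     s),       # saddles in U5
        (n_tilde,          T - N5, s),       # saddles outside U5
        (N5 - n5,          N5,     T - s),   # non-saddles in U5
        (T - N5 - n_tilde, T - N5, T - s),   # non-saddles outside U5
    ]
    return sum((row * col / T - obs) ** 2 / (row * col / T)
               for obs, row, col in cells)
```

The resulting value is compared with the critical values of the $\chi^2$ distribution with one degree of freedom, as described next.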
We compare $\xi$ with the upper-tail critical values of the chi-square distribution with 1 degree of freedom, as found in the first line of https://www.stat.purdue.edu/$\sim$lfindsen/stat503/Chi-Square.pdf. If our value is higher than a critical value in that table, then the null hypothesis is rejected at the significance level given at the top of the corresponding column.
\subsubsection{Existence of isolas} \label{sec:isolas}
In Section \ref{} we introduced the saddle-node problem, and in Section \ref{} we presented our results on finding such saddle-nodes numerically in the toggle switch model. The DSGRN machinery states that, at infinity, the center region should have three equilibria, while all other regions should have only one. This means that outside the center region we expect new equilibria generated by a saddle node to disappear again when the Hill coefficient grows towards infinity. Consequently, for any parameter outside the center region for which we were able to find one saddle node, we should be able to find two. Furthermore, we expect the behaviour of the equilibria either to be hysteretic or to define an isola. The graphical difference between the two behaviours can be seen in Figure \ref{fig:isolas}.
\begin{figure}[h]
\begin{center}
\includegraphics[width = 0.7\textwidth, trim= 1.5cm 14cm 4.25cm 1.7cm, clip]{sketch_isolas}
\caption{On the right, the expected behaviour of a saddle-node in the center region $\mathcal{R}$. On the left, an isola, the expected behaviour of equilibria when a saddle-node is detected outside the center region. The vertical axis in this case is $\hill$, while the saddle-nodes are indicated by red diamonds. Unstable equilibria are drawn as dotted lines.}
\label{fig:isolas}
\end{center}
\end{figure}
From the literature, only hysteretic behaviour is expected to be found in the toggle switch, but we were able to identify parameters showcasing both behaviours.
In Figure \ref{fig:numerical_isolas}, we present a plot of the Hill coefficient with respect to the $x_1$ coordinate of numerically found equilibria for two different parameters outside of the center region. We can see how they differ: for the first parameter the equilibria form the graph of a continuous function, while for the second the Hill coefficient is not a function of $x_1$ and the equilibria form an isola. In both cases, the saddle-nodes have first been found numerically, and the equilibria have then been computed for a range of values of the Hill coefficient; no numerical continuation has been implemented.
\begin{figure}[h]
\begin{center}
\includegraphics[width = 0.45\textwidth]{figure_hysteresis.png}
\includegraphics[width = 0.45\textwidth]{figure_isolas.png}
\caption{ Varying Hill coefficients determining a change in the $x_1$ coordinate of the equilibria. On the left, hysteretic behaviour found at $p =[0.92436706, 0.05063294, 0.81250005, 0.07798304, 0.81613] $, corresponding to $(x,y) =(1.9, 0.975) $. On the right, an isola, found at $p = [0.64709401, 0.32790599, 0.94458637, 0.53012047, 0.39085124]$, corresponding to $(x,y) = (0.975,0.975)$.}
\label{fig:numerical_isolas}
\end{center}
\end{figure}
\subsection{Equilibria}
The equilibria of this system are denoted by $(\hat x, \hat y)$ and must satisfy
\begin{equation}
\gamma_1 \hat x = \ell_1 + \delta_1 \frac{\theta_2^{n_1}}{\theta_2^{n_1} + \hat{y}^{n_1}} \qquad \gamma_2 \hat{y} = \ell_2 + \delta_2 \frac{\theta_1^{n_2}}{\theta_1^{n_2} + \hat{x}^{n_2}} .
\end{equation}
If $(\hat x, \hat y)$ is hyperbolic, then its stability is determined by the eigenvalues of the linearization given by
\begin{equation}
Df(\hat x, \hat y) = \begin{pmatrix} -\gamma_1 & -\frac{n_1 \delta_1 \theta_2^{n_1} \hat{y}^{n_1 -1}}{\paren*{\theta_2^{n_1} + \hat{y}^{n_1}}^2} \\ -\frac{n_2 \delta_2 \theta_1^{n_2} \hat{x}^{n_2 - 1}}{\paren*{\theta_1^{n_2} + \hat{x}^{n_2}}^2} & -\gamma_2 \end{pmatrix}.
\end{equation}
Equivalently, $(\hat x, \hat y)$ is a positive real root of the map $F : \rr^2 \to \rr^2$ defined by
\begin{align}
F(x) & = \begin{pmatrix} \gamma_1 x_1 x_2^{n_1} - \ell_1 x_2^{n_1} + \gamma_1 \theta_2^{n_1} x_1 - \ell_1 \theta_2^{n_1} - \delta_1\theta_2^{n_1} \\ \gamma_2 x_1^{n_2} x_2- \ell_2 x_1^{n_2} + \gamma_2 \theta_1^{n_2} x_2 - \ell_2 \theta_1^{n_2} - \delta_2\theta_1^{n_2} \\ \end{pmatrix} \\
& = \begin{pmatrix} a_3 x_1 x_2^n - a_2 x_2^n + a_1 x_1 - a_0 \\ b_3 x_1^m x_2 - b_2 x_1^m + b_1 x_2 - b_0 \end{pmatrix}
\end{align}
where the coefficients are determined uniquely by the parameters. The derivative of $F$ at a point is given by
\begin{equation}
DF(x) = \begin{pmatrix} a_3x_2^n + a_1 & nx_2^{n-1} \paren*{a_3x_1 - a_2} \\ mx_1^{m-1}\paren*{b_3x_2-b_2} & b_3x_1^m + b_1 \end{pmatrix}.
\end{equation}
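The polynomial formulation is convenient numerically because $F$ and $DF$ are cheap to evaluate. As a sanity check (our own sketch; the coefficient values in the example are arbitrary, not parameters from the paper), the closed-form Jacobian can be compared against a central finite-difference approximation:

```python
def F(x, a, b, n, m):
    """Polynomial form of the equilibrium equations; a = (a0, a1, a2, a3)
    and b = (b0, b1, b2, b3) are the coefficient vectors from the text."""
    x1, x2 = x
    return (a[3] * x1 * x2**n - a[2] * x2**n + a[1] * x1 - a[0],
            b[3] * x1**m * x2 - b[2] * x1**m + b[1] * x2 - b[0])


def DF(x, a, b, n, m):
    """Closed-form Jacobian of F, matching the derivative formula above."""
    x1, x2 = x
    return ((a[3] * x2**n + a[1], n * x2**(n - 1) * (a[3] * x1 - a[2])),
            (m * x1**(m - 1) * (b[3] * x2 - b[2]), b[3] * x1**m + b[1]))


def DF_fd(x, a, b, n, m, h=1e-6):
    """Central finite-difference approximation of the Jacobian, for checking."""
    x1, x2 = x
    cols = []
    for i in range(2):
        xp = (x1 + h, x2) if i == 0 else (x1, x2 + h)
        xm = (x1 - h, x2) if i == 0 else (x1, x2 - h)
        Fp, Fm = F(xp, a, b, n, m), F(xm, a, b, n, m)
        cols.append(((Fp[0] - Fm[0]) / (2 * h), (Fp[1] - Fm[1]) / (2 * h)))
    # transpose columns of partial derivatives into Jacobian rows
    return ((cols[0][0], cols[1][0]), (cols[0][1], cols[1][1]))
```

Agreement of the two Jacobians to finite-difference accuracy confirms that the closed-form entries are consistent with $F$.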
2,877,628,089,342
arxiv
\section{Introduction} A hypercomplex structure $\mathcal Q$ on a manifold $M$ is a set of integrable complex structures on $M$ of the form $\mathcal Q = \{ aI + bJ + cK;\ a^2 + b^2 + c^2 = 1\}$, where $I$, $J$, $K=IJ$ are complex structures satisfying $IJK = -1$. A Riemannian metric $g$ on $M$ is called hyper-Hermitian if it is Hermitian w.r.t. every complex structure in $\mathcal Q$; it is easy to see that hyper-Hermitian metrics always exist. A hyper-Hermitian manifold ($M,g,\mathcal Q$) admits a HKT-structure, where HKT means hyper-K\"ahler with torsion, when there exists a metric connection $\nabla$ that leaves every complex structure in $\mathcal Q$ parallel and whose torsion tensor $T$ is totally skew. When a HKT-structure exists, the connection $\nabla$ is unique and it is called the HKT-connection; actually it coincides with the Bismut connection of every complex structure in $\mathcal Q$ (see e.g. \cite{Gau}). We refer the reader to \cite{GP,V1,V2} for equivalent definitions and basic properties of HKT-structures, which have also played an important role in theoretical physics (see e.g. \cite{HP1, HP2}).\par Hyperk\"ahler structures are a special case of HKT-structures, namely when the HKT-connection coincides with the Levi-Civita connection of $g$, i.e. it is torsionfree. Actually, the hyperk\"ahler condition is extremely stringent and examples are rare; for instance homogeneity forces the manifold to be flat (see e.g. \cite{Besse}). On the contrary, there are plenty of examples of HKT-structures, even when $M$ is supposed to be compact and homogeneous. In \cite{SSTV} the authors described and classified all the left invariant hypercomplex structures on compact Lie groups, for which there exists a biinvariant, hyper-Hermitian Riemannian metric. 
Joyce \cite{Joyce} then described a way to construct hypercomplex structures on homogeneous spaces of compact Lie groups; this construction, which we recall in section \ref{Jcon}, has been then used and revisited by several authors, see e.g. \cite{OP,PP}. Our first main result states that, if $G$ is a compact Lie group, then every $G$-invariant hypercomplex structure on a homogeneous $G$-space is obtained via the Joyce construction, provided that there exists a naturally reductive $G$-invariant, hyper-Hermitian metric; this metric automatically endows the homogeneous space with an invariant HKT-structure. As a corollary of the proof of this first statement, we also get the fact that the semisimplicity of $G$ forces the group to be of a special kind, namely with every simple factor of type $A_n$. These results are summarized in the following \begin{theorem}\label{MainThm} Let $G$ be a compact connected Lie group acting transitively and almost effectively on some manifold $M$ preserving a hypercomplex structure $\mathcal Q$. Suppose that there exists a naturally reductive $G$-invariant metric $g$ on $M$ which is hyper-Hermitian w.r.t. $\mathcal Q$. Then \par \begin{enumerate} \item there exists a Cartan subalgebra of the complex reductive algebra $\g^\C$ and a corresponding root system for the semisimple part of $\g^\C$ such that the hypercomplex structure $\mathcal Q$ coincides with the one given by the Joyce's construction; \item if $G$ is semisimple, then every simple factor of $\g$ is of type $A_n$. \end{enumerate} \end{theorem} The existence of a naturally reductive metric which is hyper-Hermitian is supposed and extensively used in \cite{SSTV} as well as in our arguments, while we are unaware of any result proving it. We refer the reader also to \cite[Theorem 4]{OP}, where the existence of a family of invariant HKT structures on compact homogeneous spaces is discussed. 
Our last result reduces the existence of a hyper-Hermitian naturally reductive metric on a homogeneous space to the case of a Lie group. \begin{proposition}\label{restr} Let $G$ be a compact Lie group and $M = G/L$ a $G$-homogeneous space endowed with an invariant hypercomplex structure $\mathcal Q$. Suppose $L$ is not trivial and connected. Then \begin{enumerate} \item The connected component $Y$ of the fixed point set $M^L$ of $L$ containing the origin $[eL]$ is a positive dimensional hypercomplex submanifold. In particular $\chi(M) = 0$; \item If $g$ is an invariant naturally reductive metric, it is hyper-Hermitian if and only if its restriction to $Y$ is.\end{enumerate} \end{proposition} {\sc Notation}. Lie groups and their Lie algebras will be indicated by capital and gothic letters respectively. We will denote the Cartan-Killing form by $\mathcal{B}$. \section{Preliminaries} \subsection{Invariant complex structures} \label{ics} In order to establish the notation we briefly recall the structure theory of compact homogeneous complex manifolds; the reader is referred e.g. to \cite{Ak} for a more detailed exposition.\par Let $M$ be a compact complex manifold and let $G$ be a compact connected Lie group acting almost effectively, transitively and holomorphically on $M$. We will write $M = G/L$ for some compact subgroup $L$. Up to a finite covering we can assume that $L$ is connected. We will also denote by $I$ the $G$-invariant complex structure on $M$. \par The complexified group $G^\C$ acts holomorphically on $G/L$, so that $M=G^\C/Q$ for some connected complex subgroup $Q \subset G^\C$. It is well known that the {\em Tits fibration} $\phi$ provides a holomorphic fibering of the homogeneous space $M$ over a compact rational homogeneous space $G^\C/P$, where the parabolic subgroup $P$ is in general defined to be the normalizer $N_{G^\C}(Q^o)$ of $Q^o$ (see \cite{Ak}); since $Q$ is connected the fibres of $\phi$ are complex tori. 
The flag manifold $G^\C/P$ can be written as $G/C$ endowed with a $G$-invariant complex structure $\mathcal{I}$, where $C$ is the centralizer of some torus in $G$. Accordingly the Lie algebra $\g$ can be decomposed as \begin{equation}\label{dec} \g=\l\oplus\t\oplus\n\,,\end{equation} where $\c=\l\oplus\t$ and $\n$ is an $\Ad(C)$-invariant complement of $\c$ in $\g$. Since $\t$ identifies with the tangent space to the fibre we have $[\t,\t]=0$. Moreover the algebra $\c$ is contained in the normalizer of $\l$ in $\g$ by construction, hence $[\l,\t]\subset\l \cap \t = 0$. We can choose a Cartan subalgebra $\h$ of the form $\h=\t_\l^\C\oplus \t^\C$, where $\t_\l$ is a maximal abelian subalgebra of $\l$. Denote by $R$ the corresponding root system of $\g^\C$, by $R_\l$ the subsystem relative to $\l$ so that $\l^\C = \t_\l \oplus \bigoplus_{\alpha\in R_\l}\g_\alpha$, and by $R_\n$ the symmetric subset of $R$ such that $\n^\C=\bigoplus_{\alpha \in R_\n}\g_\alpha$. The $G$-invariant complex structure $\mathcal I$ induces an endomorphism of $\n^\C$ that is $\Ad(C)$-invariant and therefore the corresponding subspace $\n^{1,0}$ is a sum of root spaces. The integrability of $\mathcal I$ is equivalent to the condition $$[\n^{1,0},\n^{1,0}]_{\n^\C} \subseteq \n^{1,0}$$ and one can prove (see e.g. \cite{BFR}) that there is a suitable ordering of $R_\n=R_\n^+ \cup R_\n^-$ such that $$\n^{1,0} = \bigoplus_{\alpha\in R_\n^+} \g_\alpha, \quad \n^{0,1} = \bigoplus_{\alpha\in R_\n^-} \g_\alpha.$$ The $G$-invariant complex structure $I$ on $G/L$ induces an $\Ad(L)$-invariant endomorphism, still denoted by $I$, of $\m^\C$, where $\m := \t + \n$. It leaves both $\t$ and $\n$ invariant with $I|_\n = \mathcal I$ and the integrability of $I$ is equivalent to the vanishing of the Nijenhuis tensor $N_I$, namely for $X,Y\in \m$ \begin{equation}\label{NI} [IX,IY]_\m - [X,Y]_\m - I[IX,Y]_\m - I[X,IY]_\m = 0. 
\end{equation} Equation \eqref{NI} is trivial for $X,Y\in \t$ and with $X\in \t$ and $Y\in \n$ it reduces to the $\ad(\t)$-invariance of $I$. When $X,Y\in \n$, then \eqref{NI} is the integrability of $\mathcal I$ because $[\n^{1,0},\n^{1,0}]\subseteq \n^{1,0}$. \par Viceversa, we start with a decomposition as in \eqref{dec}, where $\l+\t = \c$ and $\c$ is the centralizer of an abelian subalgebra. If we fix an $\ad(\c)$-invariant integrable complex structure $\mathcal I$ on $\n$ and we extend it by choosing an arbitrary complex structure $I_\t$ on $\t$, then $I_\t + \mathcal I$ will provide an integrable $L$-invariant complex structure on the homogeneous space $G/L$. \subsection{Hypercomplex and HKT structures} A {\em hypercomplex} structure on a manifold $M$ is determined by a pair of anticommuting complex structures. Whenever such a pair $(I,J)$ is given, one has a 2-sphere of complex structures on $M$ given by $\{aI + bJ + c IJ \colon a,b,c \in \R \quad {\rm and} \quad a^2+b^2+c^2=1\}$. \\ A metric $g$ on a hypercomplex manifold $M$ is called {\em hyper-Hermitian} if it is Hermitian with respect to both the complex structures. A metric connection $\nabla$ on a hyper-Hermitian manifold $(M,g,I,J)$ is called {\em hyper-K\"ahler with torsion} (HKT) if $\nabla I =\nabla J =0$ and the torsion $T^\nabla$ is totally skew-symmetric, i.e. the tensor $\tau(X,Y,Z)=g(T^\nabla(X,Y),Z)$ is a 3-form.\\ Note that if such a connection exists, then it is unique since it is the {\em Bismut connection} of each complex structure (see \cite{Gau}). \\ An important technical tool we use in section \ref{Main} is the following well-known fact (see \cite{YM}). \begin{proposition}\label{PNmisto} Let $(M,I)$ be a complex manifold. 
An almost complex structure $J$ on $M$ anticommuting with $I$ is integrable if and only if the tensor $N_{IJ}$ defined for $X,Y \in \Gamma(TM)$ as \begin{equation} \label{Nmisto} N_{IJ}(X,Y) = [IX,JY]+[JX,IY]-I[JX,Y]-J[IX,Y]-I[X,JY]-J[X,IY] \end{equation} vanishes identically on $M$. \end{proposition} Using the notation of subsection \ref{ics} we consider a homogeneous space $M=G/L$ where the compact group $G$ preserves a hypercomplex structure generated by $I, J$ and $K=IJ$. \\ Given a $G$-invariant decomposition $\g = \l + \m$ and the corresponding {\em canonical connection} $D$, it is known that any $G$-invariant tensor on the homogeneous space $G/L$ is $D$-parallel (see e.g. \cite{KoNo}). Since the torsion of $D$ is given by $T^D(X,Y)= - [X,Y]_\m$, we see that $D$ becomes the HKT-connection if there exists a $G$-invariant naturally reductive metric on $M$ that is Hermitian with respect to $I$ and $J$. \subsection{The Joyce construction}\label{Jcon} In \cite{Joyce} Joyce explains how to construct invariant hypercomplex structures on suitable compact homogeneous spaces. His construction can be outlined as follows. Given a compact Lie algebra $\g$ we fix a Cartan subalgebra $\h$ of $\g^\C$ and denote by $R$ the set of corresponding roots. We can find a sequence $\theta_1,\ldots,\theta_k$ of roots such that if $\mathfrak{s}_i^\C \cong \sl_2(\C)$ is the subalgebra generated by the root spaces $\g_{\theta_i}, \g_{-\theta_i}$ and \[ \mathfrak{f}_i:=\g \cap\bigoplus_{{\mathcal B}(\alpha,\theta_i)\neq0}\!\g_{\alpha}\;\,\,, \quad \mathfrak{b}_i:= \g \cap \bigcap_{j=1}^i \,\z_{\g^\C}(\mathfrak{s}_j^\C)\;\,, \] where $\z_{\g^\C}(\s_j^\C)$ denotes the centralizer of $\s_j^\C$ in $\g^\C$, then one has the decomposition \begin{equation} \label{Jd} \g=\mathfrak{b}_k \oplus \bigoplus_{i=1}^k \s_i \oplus \bigoplus_{i=1}^k \f_i\,. 
\end{equation} The Lie algebra of the isotropy $\l\subset\mathfrak{b}_k$ is chosen as follows: the semisimple part of $\l$ coincides with the semisimple part of $\mathfrak{b}_k$ and the center $\z_\l$ of $\l$ is a subset of the center $\z'$ of $\mathfrak{b}_k$ such that $\dim \z' -\dim \z_\l -k \equiv_4 0$. We denote by $\m$ the $\mathcal B$-orthogonal complement of $\l$ in $\g$.\par The invariant hypercomplex structure on $G/L$ is obtained by the following $\Ad(L)$-invariant hypercomplex structure $\mathcal Q$ on $\m$. The structure $\mathcal Q_{|\f_i}$ coincides with $\ad(\s_i)$. We select $\mathcal B$-orthogonal vectors $u_1,\ldots,u_k$ in $\z'\cap \m$ and use the fact that $\mathfrak{q}_i=\s_i\oplus \R u_i\cong \H$ to define $\mathcal Q_{|\mathfrak{q}_i}$. The complement of $\z_\l\oplus \sum_i \R u_i$ in $\z'$ can be endowed with an arbitrary linear hypercomplex structure. \section{Proof of the main results}\label{Main} \subsection{Proof of Theorem \ref{MainThm}, part (1)} We write $M = G/L$ for some closed subgroup $L\subset G$. We will also suppose that $L$ is connected, otherwise we pass to a finite covering. We will suppose that $G/L$ admits a naturally reductive metric $g$ with respect to the reductive decomposition $\g = \l + \m$, which is Hermitian w.r.t. every complex structure in $\mathcal Q$. We recall (see e.g. \cite{KoNo}) that the metric $g$ induces a scalar product on $\m$ such that, for every $X,Y,Z\in \m$ \begin{equation} \label{red} g([X,Y]_\m,Z) + g(Y,[X,Z]_\m) = 0. \end{equation} Using \eqref{red} and the $\Ad(L)$-invariance of $g$, it is immediate to see that $g(\t,\n)=0$. \par We fix one complex structure $I\in \mathcal Q$ and apply the structure theory explained in section 2.1, keeping the same notation. 
If now $J\in \mathcal Q$ is an integrable complex structure anticommuting with $I$, we think of $J$ as an $\Ad(L)$-invariant endomorphism of $\m$ and we may formulate Proposition \ref{PNmisto} as follows: for every $X,Y\in \m$ \begin{equation}\label{misto} [IX,JY]_\m+[JX,IY]_\m-I[JX,Y]_\m-J[IX,Y]_\m-I[X,JY]_\m-J[X,IY]_\m = 0. \end{equation} If we now extend $J$ to the complexification $\m^\C$, we see that $J(\m^{1,0}) = \m^{0,1}$ and $J(\m^{0,1}) = \m^{1,0}$. If $X\in \m^{1,0}$ and $Y\in \m^{0,1}$, equation (\ref{misto}) reduces to $$i[X,JY]_{\m^\C} -i[JX,Y]_{\m^\C} - I[JX,Y]_{\m^\C} - I[X,JY]_{\m^\C} = 0, $$ which is automatically satisfied since $[\m^{1,0},\m^{1,0}]_{\m^\C}\subseteq \m^{1,0}$.\\ If $X,Y\in \m^{1,0}$ we see that $N_{IJ}(X,Y)\in \m^{0,1}$ and therefore equation (\ref{misto}) is equivalent to $g(N_{IJ}(X,Y),Z) = 0$ for every $Z\in \m^{1,0}$. Using the fact that $g$ is naturally reductive and Hermitian w.r.t. $I$ and $J$, we have that equation (\ref{misto}) is equivalent to the following condition: for every $X,Y,Z\in \m^{1,0}$ the cyclic sum \begin{equation} \label{cyclic} \mathfrak{S}_{{\small (X,Y,Z)}} \,\, g(JX,[Y,Z]_{\m^\C}) = 0. \end{equation} We now consider the root system $R$ associated with the choice of the maximal abelian subalgebra $(\t_\l+\t)^\C$ of $\g^\C$ as described in section 2.1. The root subsystem $R_\n$ where $\m = \t \oplus \n$ has an ordering $R_\n = R_\n^+ \cup R_\n^-$ induced by the complex structure $I$ and we can select a root $\theta\in R_\n^+$ which is maximal w.r.t. this ordering, namely for every $\alpha\in R_\n^+$ $$\theta + \alpha \not\in R_\n^+.$$ Throughout the following we will denote by $\{H_\alpha,E_\alpha\}_{\alpha \in R}$ the standard Chevalley's basis of the semisimple part of $\g^\C$.\\ We here remark that, since the metric $g$ is naturally reductive, for $\alpha,\beta\in R_\n^+$ we have $g(E_\alpha,E_\beta)=0$ whenever $\alpha\neq -\beta$ and $g(E_\alpha,E_{-\alpha}) \neq 0$. 
Moreover if $iH_\alpha \in \t$ one can see that $g(E_\alpha,E_{-\alpha})= -\frac{\|\alpha\|^2}{|\alpha|^2}$, where $\|\alpha\|^2=g(iH_\alpha,iH_\alpha)$ . \begin{lemma} \label{Et}We have $JE_{\theta}\in \t^\C$. In particular $E_\theta$ is centralized by $\l$. \label{lemma0} \end{lemma} \begin{proof} Since $g(\t,\n) = 0$, we need to show that $g(JE_{\theta},E_\alpha)=0$ whenever $\alpha \in R_\n^+$. To do this we can first take $X=E_{\theta}$, $Y=E_\alpha$ and $Z= H \in \t^\C \cap \m^{(1,0)}$ in formula \eqref{cyclic} and obtain \[ (\alpha+\theta)(H)\ g(JE_{\theta},E_\alpha)=0\,. \] Now, if $\alpha + \theta$ does not vanish on $\t^\C \cap\m^{(1,0)}$, then the claim follows. Otherwise $\alpha + \theta$ vanishes on the whole $\t^\C$ since $\alpha + \theta \in i\t^*$; in this case we can take $H' \in \t_\l$ such that $(\alpha + \theta)(H') \neq 0$. For such a $H'$ we have $[H',JE_{\theta}]=\theta(H')JE_{\theta}$ and, contracting with $E_\alpha$ and using the fact that $g$ is $J$-Hermitian, once again we get \[ (\alpha+\theta)(H')\ g(JE_{\theta},E_\alpha)=0\,, \] obtaining our first claim. The second assertion follows from the fact that $[\l,\t]=0$ and the $\ad(\l)$-invariance of $J$. \end{proof} Now we want to compute the $\t^\C$-component of $JE_\alpha$ for $\alpha\in R_\n^+$. \begin{lemma}\label{t-component} Given $\alpha\in R_\n^+$ the following statements hold: \begin{itemize} \item[(i)] If $\alpha|_{\t_\l} \equiv 0$ then $JE_\alpha = k_\alpha(H_\alpha+iIH_\alpha)\, {\rm{mod}} \, \n^\C$ for some $k_\alpha \in \C$. In particular $JE_\theta = k_\theta(H_\theta+iIH_\theta)$, where $|k_\theta|^2 = \frac{1}{2|\theta|^2}$. \item[(ii)] If $\alpha|_{\t_\l} \not\equiv 0$ then $JE_\alpha \in \n^\C.$ \end{itemize} \end{lemma} \begin{proof} In order to prove (i) we first note that $iH_\alpha$ lies in $\t$. We now apply \eqref{cyclic} taking $X=E_\alpha$, $Y=H_1$ and $Z=H_2$ where $H_1,H_2\in \t^\C\cap \m^{1,0}$. Thus we obtain \[g(JE_\alpha,\alpha(H_2)H_1-\alpha(H_1)H_2)=0. 
\] The linear space $\span_\C\{\alpha(H_2)H_1-\alpha(H_1)H_2:\:H_1,H_2\:\in \t^\C\cap \m^{1,0}\}$ coincides with $\{v-iIv:\:v,Iv\:\in (\Ker\ \alpha)\cap \t\}.$ This means that the $\t^\C$-component of $JE_\alpha$ is of the form $\gamma (w+iIw)$ with $\gamma \in \C$ and $w \in \t$ is $g$-orthogonal to $\Ker\, \alpha$. Since $g(iH_\alpha, \Ker\,\alpha)=0$ we can choose $w=iH_\alpha$ and the claim follows for a suitable $k_\alpha \in\C$. The last assertion follows from the following computation \begin{eqnarray*} g(E_\theta,E_{-\theta}) &=& |k_\theta|^2 g(H_\theta - iIH_\theta, H_\theta + i IH_\theta) = 2|k_\theta|^2 g(H_\theta,H_\theta) = \\ &=& 2|k_\theta|^2 g([E_\theta,E_{-\theta}],H_\theta) = 2|k_\theta|^2 |\theta|^2 g(E_\theta,E_{-\theta}).\end{eqnarray*} As for (ii), we select $H\in \t_\l$ with $\alpha(H)\neq 0$ and use the $\ad(\l)$-invariance of $J$ to compute $$\alpha(H)\ g(JE_\alpha,\t) = g([H,JE_\alpha],\t) = g(JE_\alpha,[H,\t]) = 0.$$ \end{proof} We note that $k_\theta$ is determined up to multiplication by a complex number of unit norm, since $J$ can be chosen in the circle of complex structures in $\mathcal Q$ which are orthogonal to $I$. \begin{lemma} \label{lista} \begin{itemize} \item[(i)] If $\alpha, \beta \in R_\n^+$ and $\alpha +\beta \not \in R$, then $g(JE_\alpha,E_\beta)=0$. \item[(ii)] If $\alpha, \beta \in R_\n^+$ and $\alpha +\beta = \gamma \in R^+$ with $\gamma_{|{\t_\l}} \not\equiv 0$, then $g(JE_\alpha,E_\beta)=0$. \item[(iii)] If $\alpha, \beta \in R_\n^+$ and $\alpha +\beta = \gamma \in R^+$ with $\gamma_{|\t_\l}\equiv 0$, then $g(JE_\alpha,E_\beta)= 2 k_{\gamma} \frac{\|\gamma\|^2}{|\gamma|^2}N_{\alpha,\beta}$. \end{itemize} \end{lemma} \begin{proof} The first assertion can be easily proved with the same argument used in Lemma \ref{lemma0}. 
In order to prove (ii), let $H\in \t_\l$ with $\gamma(H)\neq 0$ and use the $\ad(\l)$-invariance of $J$ to compute \begin{eqnarray*}\alpha(H)g(JE_\alpha,E_\beta) & = & g(J[H,E_\alpha],E_\beta)= g([H,JE_\alpha],E_\beta)=-g(JE_\alpha,[H,E_\beta])\\ & =& -g(JE_\alpha,\beta(H)E_\beta)= -\beta(H)g(JE_\alpha,E_\beta) \end{eqnarray*} so that the claim follows.\\ As for (iii), we select $H\in \t$ with $\gamma(H)\neq 0$ and set $H' = H - iIH$. Using \eqref{cyclic} we have \[ \gamma(H')g(JE_\alpha,E_\beta)=g(JH',[E_\alpha,E_\beta])=N_{\alpha,\beta}\: g(JH',E_\gamma) =-N_{\alpha,\beta}\: g(JE_\gamma,H'). \] Applying part (i) of the previous Lemma we get \begin{eqnarray*} \gamma(H')g(JE_\alpha,E_\beta) & = & -N_{\alpha,\beta}\: k_\gamma(g(H_\gamma,H')+ig(IH_\gamma,H')) = -2N_{\alpha,\beta}k_\gamma\, g(H_\gamma,H') \\ & = & 2\,\gamma(H')\,N_{\alpha,\beta}\,k_\gamma\, \frac{\|\gamma\|^2}{|\gamma|^2}\,. \end{eqnarray*} and the claim follows. \end{proof} We now consider the highest root $\theta$ and define $R(\theta) = \{\alpha\in R_\n^+;\ \theta-\alpha\in R\}$. Note that $\alpha\in R_\n^+$ lies in $R(\theta)$ if and only if $\alpha\neq \theta$ and $\mathcal B(\alpha,\theta) \neq 0$. Moreover if $\alpha \in R(\theta)$, then $\theta-\alpha\in R_\n$: indeed, if $\theta-\alpha = \beta\in R_\l$, we have $\theta-\beta = \alpha\in R$, hence $[E_\theta,E_{-\beta}]\neq 0$, contradicting the fact that $[\l,E_\theta] = 0$ (see Lemma \ref{Et}). \bl \label{JE} If $\alpha\in R(\theta)$, then $JE_\alpha \in \n^\C$. \el \bp Suppose $JE_\alpha$ has a component along $\t^\C$. Using Lemma \ref{t-component}, we compute $$0 = g(JE_\alpha, JE_{-\theta}) = k_\alpha k_\theta\ g(H_\alpha + iI H_\alpha, H_\theta + i IH_\theta) = 2 k_\alpha k_\theta\ g(H_\alpha,H_\theta) = $$ $$ = 2 k_\alpha k_\theta\ g([E_\alpha,E_{-\alpha}],H_\theta) = -2 k_\alpha k_\theta\ \alpha(H_\theta) g(E_\alpha, E_{-\alpha}).$$ Since $\alpha\in R(\theta)$ we have that $\alpha(H_\theta)\neq 0$. 
Therefore $k_\alpha = 0$ and the claim follows. \ep \bl If $\alpha\in R(\theta)$, then $g(JE_\alpha,E_\beta) = 0$ for every $\beta\in R_\n^+$ unless $\alpha + \beta = \theta$. \el \bp By Lemma \ref{lista} (i), it is enough to take $\beta\in R_\n^+$ so that $\alpha+\beta = \gamma\in R$. Moreover by Lemma \ref{lista} (iii), we may suppose that $\gamma|_{\t_\l}\equiv 0$, hence $\gamma|_{\t}\not\equiv 0$. Choose $H\in \t^{1,0}$ with $\gamma(H) \neq 0$. Now, if $\gamma\neq \theta$, by Lemma \ref{JE} we have that $g(J[E_\alpha,E_\beta],\t) = 0$. Equation (\ref{cyclic}) with $X = E_\alpha$, $Y = E_\beta$ and $Z= H$ implies $\gamma(H)\ g(JE_\alpha,E_\beta) = 0$ and the claim follows. \ep The previous Lemma says that for every $\alpha \in R(\theta)$ one has $JE_\alpha=\lambda_\alpha E_{\alpha-\theta}$ for some $\lambda_\alpha \in \C\setminus\{0\}$. Using Lemma \ref{lista} (iii) we have \begin{equation} \label{lambda} \lambda_\alpha g(E_{\alpha-\theta},E_{\theta-\alpha}) = g(JE_\alpha,E_{\theta-\alpha}) = 2 k_\theta \frac{\|\theta\|^2}{|\theta|^2} N_{\alpha,\theta-\alpha}\,. \end{equation} Using the fact that $g$ is naturally reductive we have \[ g(E_{\alpha-\theta},E_{\theta-\alpha})= -\frac{N_{\alpha,\theta-\alpha}}{N_{\alpha,-\theta}}g(E_{-\theta},E_\theta) = \frac{N_{\alpha,\theta-\alpha}}{N_{\alpha,-\theta}}\frac{\|\theta\|^2}{|\theta|^2}\,, \] which, combined with \eqref{lambda}, gives $$JE_\alpha = 2 k_\theta N_{\alpha,-\theta}E_{\alpha-\theta}\,.$$ Let $\mathfrak{s}(\theta)^\C$ be the subalgebra of $\g^\C$ generated by $E_\theta$ and $E_{-\theta}$, and define $\mathfrak{s}(\theta)=\mathfrak{s}(\theta)^\C\cap \g$. Obviously $\mathfrak{s}(\theta)\cong\sp(1)$. Set also $$Z_\theta = I(i H_\theta)\in \t,\quad \mathfrak{u}(\theta) = \mathfrak{s}(\theta) \oplus \R\ Z_\theta.$$ Then $\mathcal Q$ leaves $\mathfrak{u}(\theta)$ invariant and $\mathcal{Q}_{|\u(\theta)}$ is determined by the formula $JE_\theta = k_\theta (H_\theta +iIH_\theta)$.
We also define $\mathfrak{f}_\theta=\g \cap \bigoplus_{\alpha\in R(\theta)} (\g_\alpha \oplus \g_{-\alpha})$ and $\mathfrak{c}_\theta = \g \cap \bigoplus_{\alpha\in C(\theta)} (\g_\alpha \oplus \g_{-\alpha})$, where $C(\theta) = \{\alpha\in R_\n^+;\ (\theta,\alpha) = 0\} = R_\n^+\setminus (R(\theta)\cup \{\theta\})$, so that $$\n \oplus \span_\R\{iH_\theta, Z_\theta\} = \mathfrak{u}(\theta) \oplus \mathfrak{f}_\theta \oplus \mathfrak{c}_\theta.$$ \begin{proposition} \label{ad} The hypercomplex structure $\mathcal{Q}$ leaves $\mathfrak{f}_\theta$ invariant and $\mathcal{Q}_{|\mathfrak{f}_\theta} = \ad(\mathfrak{s}(\theta))_{|\mathfrak{f}_\theta}$. \end{proposition} \bp We will show that there exist $\sigma_\theta, \tau_\theta \in \s(\theta)$ such that for every $X \in \mathfrak{f}_\theta$ we have $JX= [\sigma_\theta,X]$ and $IX=[\tau_\theta,X]$. Let $\sigma_\theta = 2(\overline{k}_\theta E_\theta - k_\theta E_{-\theta})$ and $\tau_\theta = \frac{2iH_\theta}{|\theta|^2}$. The claim is a consequence of the following direct computations \begin{eqnarray*} [\sigma_\theta,E_\alpha] & = & -2k_\theta[E_{-\theta},E_\alpha] = -2 k_\theta N_{-\theta,\alpha}E_{\alpha-\theta} = JE_\alpha \\ {[\tau_\theta,E_\alpha]} & = & \frac{2i}{|\theta|^2}[H_\theta,E_\alpha] = 2\frac{{\mathcal B}(\alpha,\theta)}{|\theta|^2}iE_\alpha = iE_\alpha \, \\ \end{eqnarray*} where in the last equation we have used the fact that $2\frac{{\mathcal B}(\alpha,\theta)}{(\theta,\theta)}=1$ since the $\theta$-string of $\alpha$ is formed only by $\alpha-\theta$ and $\alpha$ (see e.g. \cite{Helgason}). \ep We now set $\theta_1:=\theta, \, k_1:=k_\theta$ and define inductively the roots $\theta_j$ as follows. \begin{itemize} \item[1)] $\theta_{j+1}$ is maximal in $C(\theta_j)$, i. e. 
$\theta_{j+1}+\alpha \not \in R$ for every $\alpha \in C(\theta_j)$; \item[2)] $C(\theta_{j+1}):=\{\alpha \in C(\theta_j)\colon \theta_{j+1}-\alpha \not \in R \}$ \end{itemize} We then set $R(\theta_{j+1}) = \{\alpha\in C(\theta_j);\ \theta_{j+1} - \alpha \in R\}$ and $\f_{j+1} = \g \cap \bigoplus_{\alpha\in R(\theta_{j+1})} (\g_\alpha \oplus \g_{-\alpha})$. Moreover we define $\s_{j+1} \cong \sp(1)$ as the real subalgebra generated by $E_{\theta_{j+1}}, E_{-\theta_{j+1}}$ (note that $\s_1 = \s(\theta)$) and $\u_{j+1} = \s_{j+1} \oplus \R Z_{j+1}$ where $Z_{j+1} = iIH_{\theta_{j+1}} \in \t$. \\ Now we have \begin{proposition}\label{Joyce} There exists a set of roots $\theta_1,\ldots,\theta_\ell$ such that for $j=1,\ldots,\ell$ we have: \begin{itemize} \item[(i)] the subset $C(\theta_\ell)$ is empty; \item[(ii)] the hypercomplex structure $\mathcal{Q}$ leaves $\f_j$ and $\u_j$ invariant. In particular $\mathcal{Q}_{|\f_j}=\ad(\s_j)_{|\f_j}$ and we have $JE_{\theta_j}=k_j (H_{\theta_j} + iIH_{\theta_j}) $ for a suitable $k_j \in \C$ (hence $\l$ centralizes $\s_j$); \item[(iii)] there is a $g$-orthogonal decomposition $\g = \l \oplus \tilde\t \oplus \bigoplus_{j=1}^{\ell} \u_j \oplus \bigoplus_{j=1}^{\ell} \f_j$, where $\tilde\t$ lies in $\t$ and is $\mathcal Q$-invariant. Moreover $[\l,\u_j] = 0$, $[\u_j,\u_k]=0$ for $j\neq k$ and $[\u_j,\f_j]\subseteq \f_j$; \item[(iv)] the root $\theta_1$ can be chosen as the highest root $\tilde{\theta}$ of the whole root system $R$ of $\g$ with respect to an ordering such that $R^+ \supseteq R^+_\n$. \end{itemize} \end{proposition} \begin{proof} The first three statements can be proved by induction using exactly the same arguments as in the previous Lemmas and in Proposition \ref{ad}. The only new statement to prove is (iv). To do this it is enough to show that the highest root space $\g_\ttt$ does not belong to $\l^\C$. Suppose now by contradiction that $E_\ttt \in \l^\C$. 
Given $\alpha\in R_\n^-$, we have $JE_{\alpha} = H + \sum_{\beta\in R_\n^+}c_\beta E_{\beta}$ for some $H\in \t^\C$ and $c_\beta\in \C$ and therefore $[E_\ttt,E_\alpha]= -J[E_\ttt, JE_\alpha] = 0$ because $\ttt+R_\n^+\not\subset R$ and $[\l,\t]=0$. Hence $[E_\ttt,\n]= 0$ and therefore $[E_\ttt,\m^\C]= 0$. But this cannot happen, since otherwise $\exp_G(E_\ttt-E_{-\ttt})$ would act trivially on $M$, contradicting the (almost) effectiveness of the $G$-action.\end{proof} Note that the decomposition obtained above matches decomposition \eqref{Jd} if we take $\mathfrak{b}_k = \l \oplus \tilde\t \oplus \z$, where $\z$ is the center of $\bigoplus_{j=1}^{\ell} \u_j$. We also note that we have the following necessary condition: if $\z_\ell$ is the center of the centralizer in $\g$ of $\{s(\theta_1),\ldots,s(\theta_\ell)\}$, then \begin{equation} \label{cnec}\dim \z_\ell \geq \ell.\end{equation} \subsection{Proof of Theorem \ref{MainThm}, part (2)} We first prove the claim in the case in which $G$ is simple, using Proposition \ref{Joyce} and condition \eqref{cnec}. \par Since the root $\theta_1$ can be chosen as the highest root $\tilde{\theta}$ of the whole root system, we can start from the ``Wolf decomposition'' of $\g$ with respect to $\theta_1=\ttt$: \[ \g = \s(\theta_1) \oplus \z_\g(\s(\theta_1)) \oplus \m_1 \] where $\m_1$ is identified with the tangent space of the corresponding Wolf space.\\ By a case-by-case inspection for simple groups it is not difficult to see that for every set of strongly orthogonal roots $\theta_1=\ttt,\ldots,\theta_\ell$ of $\g$, we have $\dim\z_\ell < \ell$ unless $\g$ is of type $A_n$. If $\g=\su(n)$ we have indeed $\dim \z_\ell = \ell$ for every choice of $\theta_1=\ttt,\ldots,\theta_\ell$ (see also \cite[Proposition 1]{PP}). \\ Suppose now that $\g=\g_1\oplus\ldots\oplus\g_r$ where the $\g_j$'s are simple Lie algebras.
The set of roots $\Theta = \{\theta_1,\ldots,\theta_\ell\}$ is the disjoint union of the subsets $\Theta_j$ of all roots in $\Theta$ belonging to $\g_j$. Now $\z_\ell$ splits as a direct sum of the centers $\z_j$ of the centralizers in $\g_j$ of the subalgebras generated by the roots in $\Theta_j$. Then $\dim \z_\ell = \sum_{j=1}^r \dim \z_j < \sum_{j=1}^r \sharp\Theta_j = \ell$ if at least one factor of $\g$ is not of type $A_n$ by the previous discussion. \subsection{Proof of Proposition \ref{restr}} (1) Suppose that $Y$ is reduced to a point. For any $I\in \mathcal Q$ the corresponding Tits fibration $\pi$ has a typical fiber that is pointwise fixed by the isotropy $L$, hence trivial. This means that $M$ is a flag manifold with an invariant hypercomplex structure. If we decompose $\g = \l + \m$ with $\m$ an $\ad(\l)$-invariant subspace, it is known that the $\ad(\l)$-irreducible submodules $\m_j$ ($j=1,\ldots,k$) of $\m$ are mutually inequivalent (see e.g. \cite{S}) and therefore $\mathcal Q$-invariant. Now $\l$ has a non-trivial center $\c$ and there is a submodule, say $\m_1$, such that $\ad(\c)|_{\m_1}$ is not trivial. Then using the irreducibility of $\m_1$, we see that $\mathcal Q|_{\m_1}$ is contained in $\ad(\c)|_{\m_1}$, contradicting the fact that $\mathcal Q|_{\m_1}$ contains anti-commuting elements. Therefore $Y$ has positive dimension and is $\mathcal Q$-invariant. Since $L$ is not trivial, we see that $Y$ is also a proper submanifold. \par (2) Suppose now that the restriction of $g$ to $Y$ is hyper-Hermitian and consider the decomposition $\g = \l + \t + \n$ as in section \ref{ics}, relative to some $I\in \mathcal Q$. Note that $[\l,\t]=0$ means that $\t$ projects to a subspace of $T_{[eL]}Y$ and therefore $g|_{\t\times\t}$ is $I$-Hermitian. Now $\n^\C$ is a sum of root spaces w.r.t.
the Cartan subalgebra $(\t_\l + \t)^\C$ and a simple computation using the natural reductiveness and the $\ad(\l)$-invariance of $g$ shows that $g(E_\alpha,E_\beta) = 0$ for all roots $\alpha,\beta$ with $\alpha+\beta\neq 0$. Our claim now follows from the fact that $g(IE_\alpha,IE_{-\alpha}) = g(E_\alpha,E_{-\alpha})$ for every root $\alpha$.
\section{Planar graphs and nodal 3-connectivity} \label{criterion} \input{plane} \section{Conditions for a convex combination map to be an embedding} \label{convex combination embedding section} \input{convcomb} \section{Ambient isotopy} \label{ambient isotopy section} \input{ambient} \section*{Acknowledgement} The author is grateful to Colum Watt for useful information about isotopies. \input{references} \end{document}
\section{Motivations} Ultimately, we would like to study the finite temperature phase transition in QCD and the properties of the plasma phase (QGP), with physical values of the quark masses and in the continuum limit. The present project attempts to set the stage for such a complete study. In this study we consider QCD with two degenerate flavors. A cross-check of the lattice results obtained with the staggered and Wilson fermion actions will help to control discretization artifacts, while a twisted mass term is expected to facilitate the continuum and chiral limits. Consider the phase diagram of two-flavor QCD in the temperature-mass plane: the first order deconfinement transition stemming from the infinite mass (or quenched) theory weakens with lower quark masses, until it turns into a smooth crossover for intermediate quark masses. In the chiral limit there has to be a true phase transition again, but its nature is still under investigation~\cite{Philipsen:2005mj}. As a first step of our program, we wish to find the location of the phase boundary between hadronic and plasma phase, {\it i.e.} the pseudo-critical temperature and mass combinations, while taking advantage of the properties of twisted mass QCD. This means that we will have to explore a three-dimensional space of temperature T, bare quark mass m, and twisted mass parameter $\mu$. \section{Why Twisted Mass QCD Thermodynamics?} Wilson fermions have several advantages over staggered fermions, but they also have a more subtle chiral behavior and a complicated phase structure, both at $T=0$~\cite{Ilgenfritz:2003gw,Farchioni:2004us} and at finite temperature~\cite{AliKhan:2000iz,AliKhan:2001ek,Creutz:1996bg,Ilgenfritz:2005ba}.
The twisted mass approach improves over the standard Wilson behavior in two ways: first, it prevents the occurrence of exceptional configurations and should make it relatively easy to reach mass values of the light pseudoscalar mesons close to the physical pion mass; second, once the Wilson hopping parameter $\kappa$ is set to its critical value, the twisted mass term behaves as a conventional quark mass, and, at the same time, an $O(a)$ improvement is automatically guaranteed. For recent results see Refs.~\cite{Farchioni:2005ec,Jansen:2006rf} and for a review Ref.~\cite{Shindler:2005vj}. In this first report we search for the transitions between the hadronic and plasma regimes by varying the Wilson hopping parameter $\kappa$, related to the bare quark mass by $\kappa=1/(2m+8)$, at fixed $\beta$ and fixed twisted mass parameter $\mu$. \section{Strategy and simulations} The simulations were performed on a $16^3 \times 8$ lattice with an improved version of the HMC algorithm as detailed in Ref.~\cite{Urbach:2005ji} and with a Symanzik tree-level improved gauge action. They used approximately three months$\times$crate of apeNEXT~\cite{Belletti:2006nw}. We choose to work at $\beta = 3.75$ and $\beta = 3.9$ in order to take advantage of the $T = 0$ results~\cite{Farchioni:2005ec,Jansen:2006rf}. In principle, we can then cross the (pseudo-)critical line at a fixed temperature by tuning the quark mass, by varying $\kappa$ and/or $\mu$. For this strategy to be successful, the simulation parameters $N_t$ and $\beta$ need to satisfy: \begin{equation} T_c^{chiral} < T^{simulation} = 1/(N_t a(\beta)) < T_c^{quenched} \; . \end{equation} For $T^{simulation} > T_c^{quenched}$ the hadronic phase cannot exist, while for $T^{simulation} < T_c^{chiral}$ the QGP cannot exist, irrespective of the mass value.
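The two relations used here, the temperature $T = 1/(N_t\, a(\beta))$ and the hopping-parameter/bare-mass map $\kappa = 1/(2m+8)$, can be checked numerically. The following is a minimal sketch (function names are ours; the conversion from inverse fermi to MeV assumes the standard value $\hbar c \simeq 197.327$ MeV\,fm):

```python
# Conversion constant: hbar*c ~ 197.327 MeV*fm (assumed standard value)
HBARC_MEV_FM = 197.327

def temperature_mev(n_t, a_fm):
    """Temperature T = 1/(N_t * a(beta)), returned in MeV for a in fm."""
    return HBARC_MEV_FM / (n_t * a_fm)

def bare_quark_mass(kappa):
    """Invert kappa = 1/(2m + 8) for the bare quark mass m (lattice units)."""
    return 1.0 / (2.0 * kappa) - 4.0

# With N_t = 8: a(3.75) ~ 0.12 fm and a(3.9) ~ 0.095 fm give
# T ~ 205 MeV and T ~ 260 MeV, inside the window (170 MeV, 270 MeV).
```

These values reproduce the temperatures quoted in the Summary, confirming that the chosen $(N_t,\beta)$ pairs satisfy the inequality above.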
We fixed $N_t = 8$ by taking into account the lattice spacing from the $T=0$ studies, namely $a(3.75) \simeq 0.12$ fm, $a(3.9) \simeq 0.095$ fm, as well as the known critical temperatures, $T^c_{chiral} = 170$ MeV and $T^c_{quenched} = 270$ MeV. In the first set of runs reported here we fixed $\mu = 0.005$ and varied the hopping parameter $\kappa$. \section{Results at $\beta = 3.75$} \begin{figure}[t] \includegraphics[width=7.0 truecm,height=5.5 truecm]{sl5_1.eps} \hspace*{0.7cm} \includegraphics[width=7.0 truecm,height=5.5 truecm]{sl5_2.eps} \caption{\label{plaq} $\langle P \rangle$ as a function of $\kappa$ (left diagram); Conjugate Gradient (CG) Iterations superimposed with $\frac{\partial \langle P \rangle}{\partial \kappa}$ (magnified) as a function of $\kappa$ at fixed $\beta = 3.75$ and $\mu = 0.005$ (right diagram). The results indicate a thermal transition or crossover at $\kappa_t = 0.165(1)$.} \end{figure} At $\beta = 3.75$, the $T = 0$ results show that the minimum pion mass which can be reached with our twisted mass parameter $\mu = 0.005$ is $m_\pi \simeq 400 {\rm MeV}$ extrapolating existing results at $\mu = 0.005$~\cite{Farchioni:2005ec}. Our first goal here is merely to check whether a thermal phase transition or a crossover can be found in the required range $\kappa_t < \kappa_c$. Figure \ref{plaq} (left) shows a scan of the average plaquette as a function of $\kappa$. The steepest slope of the plaquette as well as the increase of the number of CG iterations needed for the inversion shown in Figure \ref{plaq} (right), both suggest a crossover or phase transition at~\cite{Farchioni:2005ec} \begin{equation} \kappa_t(\beta = 3.75, \mu = 0.005) = 0.165(1) \; . \end{equation} Hence $\kappa_t< \kappa_c(T = 0) = 0.1669$, as required. \section{Results at $\beta$ = 3.9} After the exploratory study at $\beta = 3.75$, the choice $\beta = 3.9$ brings us closer to the continuum limit. 
Results for $T = 0$ at this $\beta$ are available at a number of values for the twisted mass parameter $\mu$, see Ref.~\cite{Jansen:2006rf}. The minimum pion mass at $T = 0$ for our $\mu = 0.005$, inferred from these results, is about 350 MeV. As a direct comparison between the results at the two $\beta$ values we show in Figure 2 the number of Conjugate Gradient iterations required for the inversion -- which is an indicator of criticality -- as a function of $\kappa$. \begin{figure}[t] \begin{center} \includegraphics[width=8.0 truecm,height=4.5 truecm]{slnew_1.eps} \caption{The number of Conjugate Gradient iterations as a function of $\kappa$ for the two $\beta$-values. The solid lines are a smooth interpolation to guide the eye.} \vspace*{-0.5cm} \end{center} \end{figure} Figure 3 (left) shows a collection of results for the expectation value of the plaquette $\langle P \rangle$. The errors are smaller than the symbols; the solid line is a B\'ezier interpolation to guide the eye. We performed local fits to a straight line $\langle P \rangle = a + b \kappa $ within subsequent intervals of width $\Delta \kappa = 0.002$, and we show in the same diagram the tangent in the inflection point. The parameters $b$ are used as estimators of the derivative of the plaquette w.r.t.\ $\kappa$ and are shown in Figure 3 (right). These results indicate a phase transition or crossover around $\kappa = 0.1597$, located at the maximal slope $b_{max}$.
\begin{figure}[htb] \includegraphics[width=10.1 truecm,height=5.5 truecm]{sl6_1.eps} \hskip -2.0 truecm \includegraphics[width=10.1 truecm,height=5.5 truecm]{sl6_2.eps} \caption{$\langle P \rangle$ as a function of $\kappa$ (left diagram); $\frac{\partial \langle P \rangle}{\partial \kappa}$ as a function of $\kappa$ at fixed $\beta = 3.9$ and $\mu = 0.005$ (right diagram)} \end{figure} To make this prediction more quantitative, our statistics was enhanced to $O(10000)$ HMC trajectories on a selected sample of points: $\kappa = 0.1586, 0.1591, 0.1593, 0.1597$ in the candidate critical region at $\beta = 3.9$. The results (Figure 4) suggest a long autocorrelation time in the critical region. Given these autocorrelations, our results are very preliminary. Most probably the errors are underestimated, but still the plots may serve as indicators for the location of a crossover or transition. \begin{figure}[t] \includegraphics[width=7.0 truecm,height=4.5 truecm]{sl7_1.eps} \hspace*{0.5cm} \includegraphics[width=7.0 truecm,height=4.5 truecm]{sl7_2.eps} \caption{HMC evolution and error analysis for the high statistics runs at $\beta = 3.9$: binned averages (left) as a function of the HMC trajectories, and results and errors as a function of the discarded HMC trajectories (right).} \end{figure} Figure 5 shows the results for the Polyakov loop: the steepening is clearly seen, mostly thanks to the latest, high statistics results. The Polyakov loop increases in the same $\kappa$ interval as the one observed for the plaquette. The Polyakov loop histograms of the HMC results after thermalization are narrow in the two pure phases, and broaden around $\kappa=0.1597$, indicating an increase of the fluctuations and a critical behavior. We see no two-state signal (two peaks in the histograms), which seems to exclude a first order transition, but only a finite size analysis can assess with confidence the nature of the critical behavior.
\begin{figure}[b] \includegraphics[width=7.0 truecm,height=5.5 truecm]{sl8_1.eps} \hspace*{0.5cm} \includegraphics[width=7.0 truecm,height=5.5 truecm]{sl8_2.eps} \caption{The real part of the Polyakov loop as a function of $\kappa$ (left diagram), and the Polyakov loop histograms (right diagram) at $\beta = 3.9$ and $\mu = 0.005$. The data set is the same as in Figure 2, with the inclusion of some high statistics results. Both diagrams are consistent with a critical point or crossover at $\kappa_t = 0.1597(5)$.} \end{figure} The steepest slope of the plaquette and of the Polyakov loop, as well as the broadening of the probability distributions, suggest a crossover or phase transition at $\kappa_t$: \begin{equation} \kappa_t(\beta = 3.9, \mu = 0.005) = 0.1597(5) \end{equation} and, as was also observed at $\beta = 3.75$, \begin{equation} \kappa_t < \kappa_c(T = 0) = 0.16085 \; . \end{equation} Although we postpone a detailed analysis of the spectrum and related observables to our ongoing high statistics study~\cite{Progress}, we have collected a few results for the pion propagator. The (zero momentum) pion propagator $G(t)$ is measured at a selected sample of couplings, and is fitted to a standard hyperbolic cosine form \begin{equation} G(t) = A~\cosh\left\{ M \left(t - \frac{N_t}{2} \right) \right\} \end{equation} in the time interval [2:6]. \begin{figure}[htb] \includegraphics[width=7.0 truecm,height=5.5 truecm]{slnew_2.eps} \hspace*{0.5cm} \includegraphics[width=7.0 truecm,height=5.5 truecm]{slnew_3.eps} \caption{The pion propagator $G(t)$ for selected $\kappa$ values. The solid lines are the simple fits described in the text. The right figure shows a subset of the same data points as in the left figure on a different scale.} \end{figure} We show the quality of the results, with the fits themselves superimposed, in Fig. 6 (left and right; note the different scale between the two). The resulting fit parameters, A(mplitude) and M(ass), are collected in Fig. 7.
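A cosh fit of this kind can be reproduced in a few lines. The sketch below is our own illustration, not the analysis code of this study: it assumes NumPy/SciPy are available, and all variable names and the initial guess are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

N_T = 8  # temporal lattice extent used in this study

def cosh_model(t, A, M):
    # G(t) = A * cosh(M * (t - N_t/2)), the ansatz quoted in the text
    return A * np.cosh(M * (t - N_T / 2.0))

def fit_pion(t, G, p0=(1.0, 1.0)):
    # least-squares fit of the amplitude A and the effective mass M;
    # the text performs the fit on the time interval [2:6]
    popt, pcov = curve_fit(cosh_model, t, G, p0=p0)
    return popt, np.sqrt(np.diag(pcov))
```

In practice one would pass the measured correlator on the time slices $t=2,\ldots,6$ together with its statistical errors (via the `sigma` argument of `curve_fit`) to obtain the amplitude and effective-mass estimates shown in Fig. 7.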
They indicate that the effective pion mass decreases as the thermal transition is approached from below, while the amplitude of the propagator increases, following the trend of the average plaquette. \begin{figure} \includegraphics[width=10.1 truecm,height=4.5 truecm]{slnew_4.eps} \hskip -1.0 truecm \includegraphics[width=10.1 truecm,height=4.5 truecm]{slnew_5.eps} \caption{Amplitude of the propagator and effective pion mass from the hyperbolic cosine fits as a function of $\kappa$.} \end{figure} \section{Summary and Outlook} We have studied QCD with two flavors of dynamical Wilson fermions including a twisted mass term on a $16^3\times 8$ lattice at two values of the temperature: $\beta = 3.75$ corresponding to $T \simeq 205$ MeV and $\beta = 3.9$ corresponding to $T \simeq 259$ MeV. In both cases we have simulated $O(10)$ values of bare quark masses by varying the hopping parameter $\kappa$ at constant $\mu = 0.005$. We have observed a behavior consistent with a crossover (and not excluding a real transition) at a critical value of $\kappa$, denoted $\kappa_t$, which is smaller than the critical $\kappa$ at $T = 0$. This behavior is similar to that observed with ordinary Wilson fermions~\cite{AliKhan:2000iz,AliKhan:2001ek}. In our current study~\cite{Progress} on the apeNEXT machines at DESY and INFN, the next step is to investigate the $\mu$ dependence of our results. To this end, we are repeating a $\kappa$ scan at a larger $\mu$ value. At the same time we want to take full advantage of the twisted mass approach by working at full twist with $\kappa = \kappa_c( \beta, T=0 )$. In this case the phase transition will be crossed by tuning the twisted mass $\mu$. Model studies along the lines of Ref.~\cite{Creutz:1996bg} will be most useful to guide our simulations. \vskip .2 truecm \noindent {\bf Acknowledgments}: It is a pleasure to thank Mike Creutz, Roberto Frezzotti, Agnes Mocsy, GianCarlo Rossi and Carsten Urbach for interesting and helpful discussions.
We wish also to thank the apeNEXT Collaboration, and in particular Alessandro Lonardo, Davide Rossetti, Hubert Simma, Raffaele Tripiccione and Piero Vicini, for their crucial help and support, as well as Giampietro Tecchiolli for granting access to the Amaro apeNEXT prototype. \vskip .2 truecm
\section{Introduction} \label{intro} In his fundamental 1939 paper \cite{weibull}, W. Weibull summarized the experimental situation found in measurements of the ultimate strength of brittle materials: \emph{``The classical theory is obviously incompatible with numerous results of experimental research. This discrepancy may be bridged over by considering as an essential element of the problem the dispersion obtained in experimental measuring of the ultimate tensile strength (UTS). Viewed from this standpoint, the UTS of a material can not be expressed by a single numerical value, as has been done heretofore, and a statistical distribution will be required for this purpose''}. Since then, the Weibull distribution has become an indispensable tool in the classification of ceramics. In particular, the Weibull module $m$, which controls the dispersion of the UTS, has become a standard material property, see e.g. \cite{munz}. In mechanical design, the ultimate tensile strength of a component has to be well above the loads applied. But even in situations when the loads can be foreseen, Weibull's insight forces engineers to work with levels of reliability rather than with ultimate safety. When choosing shape and dimensions of an engineered part, the quest is to control a probability distribution rather than to keep the maximal stress below a given threshold. This logic has been successfully applied in technological areas ranging from space shuttle heat shields \cite{nasa} and gas turbine combustion chambers \cite{riesch} to dental prostheses \cite{dental}. This is particularly simple in the case of the Weibull model, since the failure distribution of the engineered component over the applied loads is modeled by scale and shape parameters, where the shape parameter is Weibull's module $m$ and only the scale quantity depends on the component's design.
Maximizing the scale therefore corresponds to maximizing the probabilistic endurance of the component over the entire range of loads. The maximization of the Weibull scale of a component as a functional of the component's shape puts the control of failure for ceramic components in the framework of shape optimization \cite{allaire,bucur,chenais,haslinger,sokolowski}. In fact, as has been observed recently, the shape optimization problem so obtained has the favorable property that the objective functional is differentiable \cite{bolten,gottschalk,schmitz}. This is in sharp contrast to the peak stress criterion commonly used in optimization, which is non-differentiable as an objective functional, since the stress is maximized over all locations on the component. Gradient-based shape optimization techniques, in particular in conjunction with the highly efficient calculation of shape derivatives via adjoint equations, have proven their potential in many engineering applications. In particular this applies to aero design, where the objective functionals have always been differentiable, see e.g. \cite{mohammadi,Frey2009}. It is therefore natural to use the smoothing nature of probabilistic models to extend gradient-based optimization to the design objective of (probabilistic) mechanical integrity as well. In the mathematical literature, shape and topology optimization for the linear elasticity PDE have been predominantly applied to the compliance functional, see e.g. \cite{allaire,bendsoe,conti}. This is however not directly related to mechanical integrity. In the present article we minimize failure probabilities numerically for the first time, using a first discretize, then adjoin strategy, and apply it to a simple 2D design problem. We demonstrate that the shape derivatives via the adjoint method can be calculated numerically with a high level of precision.
The resulting geometry flows, under volume constraints, are stable over rather significant changes of the geometry and converge to a (numerically) optimal solution. Remarkably, this is true without any artificial smoothing of the shape gradients. The paper is organized as follows: In Section \ref{FailProbs} we derive failure probabilities from fundamental assumptions of elastic fracture mechanics and an initial flaw distribution, using the point process model introduced in \cite{bolten}. In this way one obtains Weibull's classical model with slight modifications \cite{batendorf, weibull}. For a derivation of the same model via extreme value statistics see e.g. \cite{riesch} and references therein. We discuss the numerical approximation of the objective functional via the discretization of the PDE of linear elasticity with finite elements in Section \ref{FinEl}. Section \ref{lagrange} briefly recalls how to use the adjoint equation in the calculation of shape gradients following a first discretize, then adjoin approach. In Section \ref{comput} we numerically calculate shape derivatives for a bent rod on the finite element mesh and we validate the calculations with the method of finite differences. We show for the given example that the geometric flows constructed from the shape derivatives, using also a suitable mesh morphing and a volume constraint, result in the optimal configuration given by the straight rod. In a second case, we study a joint connecting two levels of height, for which no optimal solution is easily guessed. We again obtain stable geometry flows and a considerable reduction of the failure probability for the numerically converged solution. In the final Section \ref{out} we give our conclusions and an outlook on future research.
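As a minimal numerical illustration of the Weibull survival model used throughout the paper, the survival probability $p_s(F)=\exp(-(F/\eta)^m)$ can be sketched as follows; the scale and modulus values are purely hypothetical, not material data:

```python
import math

def survival_probability(F, eta, m):
    """Weibull survival probability p_s(F) = exp(-(F/eta)^m).

    F   : load scale (e.g. in Newton, with the reference load normalized)
    eta : Weibull scale parameter of the component
    m   : Weibull modulus (shape parameter)
    """
    return math.exp(-(F / eta) ** m)

# Hypothetical component: scale eta = 500 N, modulus m = 10.
eta, m = 500.0, 10.0
print(survival_probability(250.0, eta, m))  # load well below the scale
print(survival_probability(eta, eta, m))    # at F = eta: exp(-1)
```

The sharp drop of $p_s$ near $F=\eta$ for large $m$ reflects the small dispersion of the UTS for a well-controlled sintering process.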
\section{Failure probabilities for ceramic structures} \label{FailProbs} \subsection{The elasticity PDE} \label{elasticityPDE} Let $\Omega\subseteq\mathbb{R}^d$, $d=2,3$, be a domain with Lipschitz boundary $\partial\Omega$. It is assumed to be filled with ceramic material. $\Omega$ represents the ceramic component in its initial, force free state. Furthermore, we assume that the boundary $\partial\Omega$ can be divided into three different parts \begin{align} \partial\Omega=\overline{\partial\Omega}_D\cup \overline{\partial\Omega}_{N_{fixed}}\cup \overline{\partial\Omega}_{N_{free}}. \end{align} Here, $\partial\Omega_D$ is the part of the boundary where Dirichlet boundary conditions hold. It is supposed to be fixed. $\partial\Omega_{N_{fixed}}$ is the part where surface forces may act. And finally, $\partial\Omega_{N_{free}}$ is the part of the boundary which can be modified in order to optimally comply with the design objective 'reliability', as explained below. For technical reasons following from the application, the free boundary is assumed to be force-free.\\ Forces may act on the object with the shape given by $\Omega$. The volume force is represented by a function $f\in L^2(\Omega, \mathbb{R}^d)$, the surface force by a function $g\in L^2(\partial\Omega_N, \mathbb{R}^d)$, where $\partial\Omega_N:=\partial\Omega_{N_{fixed}}\cup\partial\Omega_{N_{free}}$ and $g$ vanishes on the free part. In our example, the volume force $f$ represents the force of gravity, while the surface force $g$ represents the tensile load.\\ Furthermore, $u\in H^{1}(\Omega, \mathbb{R}^d)$ describes the displacement caused by the acting forces. Here, $H^{1}(\Omega, \mathbb{R}^d)$ stands for the Sobolev space on $\Omega$ of once weakly differentiable $L^2(\Omega,\mathbb{R}^d)$ functions with weak derivatives in $L^2(\Omega,\mathbb{R}^{d,d})$.
The linear strain tensor is given by $\varepsilon (u):=\frac{1}{2}\left(\nabla u+\nabla u^T\right)\in L^2(\Omega, \mathbb{R}^{d,d})$ and hence the stress tensor $\sigma(u)=\lambda\text{tr } (\varepsilon(u))I+2\mu \varepsilon(u)\in L^2(\Omega, \mathbb{R}^{d,d})$, where $\lambda,\mu>0$ are the Lam\'e constants derived from Young's modulus $\texttt{E}$ and Poisson's ratio $\nu$ by $\lambda=\frac{\nu\texttt{E}}{(1+\nu)(1-2\nu)}$ and $\mu=\frac{\texttt{E}}{2(1+\nu)}$.\\ Let $H^{1}_D(\Omega,\mathbb{R}^d)$ denote the Sobolev space above with zero boundary conditions on the Dirichlet part of the boundary. Under the given conditions, following from Korn's inequality on $H^{1}(\Omega,\mathbb{R}^d)$ \cite{duran} and the Lax-Milgram theorem, the linear elasticity PDE \begin{align} \label{eqa:absPDE} B(u,v)=L(v)\text{, }\forall v\in H^{1}_{D}(\Omega,\mathbb{R}^d), \end{align} possesses a unique weak solution $u\in H^{1}(\Omega,\mathbb{R}^d)$. The bilinear form $B(u,v)$ and the linear functional $L(v)$ are given by \begin{align} \label{eqa:BandL} \begin{split} B(u,v)&=\intOm[\sigma (u) : \varepsilon (v)]=\lambda\intOm[\nabla\cdot u\nabla\cdot v]+2\mu\intOm[\varepsilon (u) :\varepsilon (v)]\\ L(v) &=\int\limits_{\Omega}f\cdot v\,dx+\int\limits_{\partial\Omega_N}g\cdot v\,dA \end{split} \end{align} \subsection{Survival probabilities from linear fracture mechanics} \label{ProbFun} The full deduction of the functional describing the survival probability of a ceramic component is given in \cite{bolten}. Here we only give a brief derivation for a basic understanding.\\ To derive the objective functional we first need to study the problem in more detail. We want to optimize the reliability of a ceramic body $\Omega$ by minimizing its probability of failure under one given tensile load $\sigma_n$. Failure means that a fracture occurs under the tensile load. The question is therefore what this probability depends on. Ceramic is produced in a process called sintering.
From this process, small flaws arise in the material, which in the first place have no influence on the quality of the material. But under load, these flaws may become the initial point of a rupture. To understand the behavior of this rupture, it is necessary to understand its structure and the basic principles of its growth.\\ There are three different crack opening modes. These modes are visualized in Figure \ref{fig:modes}. Considering the visualization of the modes, it is obvious that in the plane only mode I and mode II exist. They relate to different loadings, where the first opening mode relates to tensile and compressive load. \begin{figure}[!htb] \begin{center} \subfloat[Different modes of the loading\label{fig:modes}]{\resizebox{0.4\textwidth}{!}{\input{Modus_I_II_III.pspdftex}}} \hspace { 1cm } \subfloat[$r$-$\varphi$ coordinate system at the tip of the crack\label{fig:process} ] {\resizebox{0.35\textwidth}{!}{\input{Fracture_mechanics_r_phi_coords_1.pspdftex}}} \end{center} \caption{Visualizations from \cite{bolten}} \end{figure} In linear fracture mechanics, the stress field in the vicinity of the front of a planar crack is of the form \begin{align} \sigma = \frac{1}{\sqrt{2\pi r}}\{K_I\tilde{\sigma}^I(\phi )+K_{II}\tilde{\sigma}^{II}(\phi )+K_{III}\tilde{\sigma}^{III}(\phi )\}+\text{regular terms,} \end{align} where $r$ is the distance to the crack front and $\phi$ the angle which the shortest connection of the point considered to the crack front makes with the crack plane (see Figure \ref{fig:process}). Experimental evidence shows that $K_I$ is most relevant for the failure of ceramic structures \cite{bruecknerfoit} under tensile load, so we will concentrate on this mode. Note that in the two-dimensional case the formula for $\sigma$ above only consists of the first two parts, as mode III load does not exist in the plane.\\ To derive the desired functional we first need a model for the cracks, as the functional depends on them.
We assume them to be ``penny shaped'' \cite{gross}. As a consequence, a crack can be fully described by the three properties location, orientation and radius. As there is no indication that one of these properties is determined by the sintering process, we assume them to be arbitrary. Hence, a crack is identified by its configuration \begin{align} (x,a,\mu )\in \left(\bar{\Omega}\times (0,\infty )\times S^{d-1}\right)=:\mathpzc{C}, \end{align} with $S^{d-1}$ the unit sphere in $\mathbb{R}^d$, where $x$ and $\mu$ are uniformly distributed on $\bar{\Omega}$ and $S^{d-1}$, respectively. The distribution of $a$ will be discussed later on. We call $\mathpzc{C}$ the crack configuration space.\\ With this and considering the tensile load $\sigma_n$ in the direction normal to the crack plane, one obtains \begin{align} K_I:=\frac{2}{\pi}\sigma_n\sqrt{\pi a}. \end{align} A crack becomes critical, i.e. a fracture occurs, if $K_I$ exceeds a critical value $K_{I_c}$. Obviously we can neglect compressive load, that is, negative values of $\sigma_n$. With this and following \cite{bolten}, we set \begin{align} \sigma_n:=(n\cdot\sigma (Du)n)^+=\max \{n\cdot\sigma (Du)n,0\}. \end{align} Now that we have determined $K_I$, we can define the set of critical configurations that lead to failure by $A_c:=A_c(\Omega,Du)=\{(x,a,\mu)\in\mathpzc{C}:K_I(a,\sigma_n(x))>K_{I_c}\}$. Hence we want to minimize the probability of $A_c$ containing at least one flaw. If we assume that the distribution of cracks in different parts of the component is statistically independent, we can conclude with \cite{watanabe} and \cite[Corollary 7.4]{kallenberg} that the random number of cracks $N(A)$ in some measurable subset of the configuration space $A\subseteq \mathpzc{C}$ is Poisson distributed, i.e. $N$ is a Poisson point process (PPP). It holds that $\mathbb{P}(N(A)=n)=e^{-\nu(A)}\frac{\nu(A)^n}{n!}$, i.e. $N(A)\sim Po(\nu(A))$, with intensity measure $\nu$ on $\mathpzc{C}$.
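As an illustrative sanity check of this Poisson point process model (the intensity value below is hypothetical), one can sample crack counts and compare the empirical probability of a flaw-free configuration with $e^{-\nu(A)}$:

```python
import math
import random

random.seed(0)

def empirical_no_flaw_probability(nu, trials=200_000):
    """Estimate P(N = 0) for N ~ Poisson(nu) by Monte Carlo sampling.

    The Poisson variate is drawn by inversion of the CDF, which is
    adequate for small intensities nu.
    """
    zero = 0
    for _ in range(trials):
        u, k = random.random(), 0
        p = math.exp(-nu)   # P(N = 0)
        cdf = p
        while u > cdf:      # walk up the CDF until it covers u
            k += 1
            p *= nu / k
            cdf += p
        if k == 0:
            zero += 1
    return zero / trials

nu = 0.7  # hypothetical intensity nu(A_c) of the critical configuration set
print(empirical_no_flaw_probability(nu), math.exp(-nu))
```

The two printed values agree up to Monte Carlo error, mirroring the survival probability formula derived next.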
As mentioned before, the component fails if $N(A_c)\geq 1$. With this, we can give the survival probability of the component $\Omega$, given the displacement field $u\in H^1(\Omega,\mathbb{R}^d)$, as \begin{align} \label{eqa:PSurvival} p_s(\Omega |Du)=P(N(A_c(\Omega ,Du))=0)=\exp\{-\nu (A_c(\Omega, Du))\}. \end{align} Under the assumption that cracks are statistically homogeneously distributed throughout the material and that the crack orientation is isotropic, it follows that \begin{align} \nu=dx\otimes\nu_a\otimes \frac{dn}{2\pi^{\frac{d}{2}}/\Gamma\left(\frac{d}{2}\right)}, \end{align} with $dx$ the Lebesgue measure on $\mathbb{R}^d$, $dn$ the surface measure on $S^{d-1}$ and a certain positive Radon measure $\nu_a$ on $(\mathbb{R}_+,\mathcal{B}(\mathbb{R}_+))$ modelling the occurrence of cracks of radius $a$. As mentioned before, there is a critical value $K_{I_c}$ which should not be exceeded in order to avoid failure. Therefore, $A_c$ only contains configurations with a radius $a$ such that $K_I(a)> K_{I_c}$. This is the case for all $a> a_c:=\frac{\pi}{4}\left(\frac{K_{I_c}}{\sigma_n}\right)^2$. Due to these considerations, the intensity measure can be evaluated on the critical set as follows \begin{align} \nu (A_c(\Omega, Du))=\frac{\Gamma(\frac{d}{2})}{2\pi^{\frac{d}{2}}}\int\limits_{\Omega}\int\limits_{S^{d-1}}\int\limits_{a_c}^{\infty}d\nu_a(a)dndx. \end{align} Assuming that $d\nu_a(a)= c\cdot a^{-\tilde{m}}da$, with a certain constant $c>0$ and $\tilde{m}>1$, we calculate the inner integral, with $\tilde{c}:=\frac{c}{\tilde{m}-1}$, as follows \begin{align} \int\limits_{a_c}^{\infty}d\nu_a(a)=\tilde{c}\left(\frac{\pi}{4}\left(\frac{K_{I_c}}{\sigma_n}\right)^2\right)^{-\tilde{m}+1}.
\end{align} Setting $m:=2(\tilde{m}-1)$, assuming that $\tilde{m}\geq \frac{3}{2}$ holds, and assembling all constant values into the positive constant $\frac{1}{\sigma_0^m}$, we find our objective functional \begin{align} \label{eqa:ObFun} J(\Omega ,Du):=\nu (A_c(\Omega,Du))=\frac{\Gamma(\frac{d}{2})}{2\pi^{\frac{d}{2}}}\int\limits_{\Omega}\int\limits_{S^{d-1}}\left(\frac{\sigma_n}{\sigma_0}\right)^m dndx. \end{align} Let us now consider the situation where, in \eqref{eqa:BandL}, we can neglect the volume force $f$ and rescale the surface force $g$ with a constant factor $F>0$. As \eqref{eqa:absPDE} is linear, $u$, $Du$, $\sigma$ and $\sigma_n$ are all scaled by the same factor $F$. Inserting this into \eqref{eqa:ObFun} we see that the probability of survival \eqref{eqa:PSurvival} as a function of the load parameter $F$ follows a Weibull distribution with Weibull modulus $m$ and scale parameter $\eta(\Omega,g)=J(\Omega,Du)^{-\frac{1}{m}}$, where $u$ corresponds to the load scale $F=1$, \begin{align} p_s(F,\Omega|g)=\exp\{- J(\Omega,Du) F^m\}=e^{-\left(\frac{F}{\eta(\Omega,g)}\right)^m}. \end{align} This corresponds to the statistical strength of brittle materials as described in \cite{weibull}. Note that if we suitably normalize $g$ such that a force of one Newton is exerted on the structure, $F$ can actually be interpreted as the acting force in Newton. The values for the parameter $\sigma_0$ are taken from \cite{baeker}. However, we note that these values have been calibrated with a simplified model and some rescaling is needed to fit them to our model. \section{Discretization via finite elements} \label{FinEl} To calculate the shape gradient we first need to discretize our problem with finite elements. \subsection{Discretization of the linear elasticity equation} \label{FinEl-PDE} Recall the linear elasticity PDE \eqref{eqa:absPDE} and \eqref{eqa:BandL} from Section \ref{elasticityPDE}.
We discretize the PDE with standard Lagrange finite elements (cf. \cite[Ch. 2.5]{braess}). For the calculation of the integrals, numerical quadrature is used. In a first step, $\Omega$ is partitioned by a finite mesh $\mathpzc{T}_h$ represented by the $N$ grid points $X=\{X_1,...,X_N\}$. This mesh also defines $N_{el}$ (Lagrange) finite elements $\{K,\Pi (K),\Sigma (K)\}$ with $n_{sh}$ local shape functions $\theta_{K,k}\in\Pi (K)$ which are defined by the nodes $X_1^K,...,X_{n_{sh}}^K\in K$ with $X=\bigcup_{K\in\mathpzc{T}_h}\{X_1^K,...,X_{n_{sh}}^K\}$ and the corresponding Lagrange interpolation conditions \begin{align} \begin{array}{lr} \varphi_j(\theta_{K,i})=\theta_{K,i}(X_j)=\delta_{ij}, & \text{for }i,j\in\{1,...,n_{sh}\}. \end{array} \end{align} We assume that there exists a reference element $\{\hat{K},\hat{\Pi},\hat{\Sigma}\}$ and, for each element $K\in\mathpzc{T}_h$, a bijective transformation $T_K:\hat{K}\rightarrow K$ such that $\hat{\Pi}=\Pi\circ T_K$, $\hat{\theta}_j=\theta_j\circ T_K$, $j\in\{1,...,n_{sh}\}$ and \begin{align} T_K=T_K(\hat{\xi},X)=\sum\limits_{j=1}^{n_{sh}}\hat{\theta}_j(\hat{\xi})X_j^K,\text{ }\hat{\xi}\in\hat{K}. \end{align} To numerically calculate the integrals, for each $K\in\mathpzc{T}_h$ we choose $q_l^K$ quadrature points $\hat{\xi}_l^K$ on the reference element $\hat{K}$ with weights $\hat{\omega}_l^K$.
We then have \begin{equation} \label{eqa:Bdisc} \begin{split} B(u,v) &=\lambda\sum\limits_{K\in\mathpzc{T}_h}\int\limits_K\nabla\cdot u\,\nabla\cdot vdx+2\mu\sum\limits_{K\in\mathpzc{T}_h}\int\limits_K\varepsilon (u):\varepsilon (v)\,dx\\ &=\lambda\sum\limits_{K\in\mathpzc{T}_h}\int\limits_{\hat{K}}\nabla\cdot u(T_K(\hat{\xi}))\nabla\cdot v(T_K(\hat{\xi}))\text{det }(\hat{\nabla}T_K(\hat{\xi}))\, d\hat{\xi}\\ &+2\mu\sum\limits_{K\in\mathpzc{T}_h}\int\limits_{\hat{K}}\varepsilon (u(T_K(\hat{\xi}))):\varepsilon (v(T_K(\hat{\xi})))\text{det }(\hat{\nabla}T_K(\hat{\xi}))\, d\hat{\xi}\\ &\approx\lambda\sum\limits_{K\in\mathpzc{T}_h}\sum\limits_{l=1}^{q_l^K}\hat{\omega}_l^K\text{det }(\hat{\nabla}T_K(\hat{\xi}_l))\,\nabla\cdot u(T_K(\hat{\xi}_l))\,\nabla\cdot v(T_K(\hat{\xi}_l))\\ &+2\mu\sum\limits_{K\in\mathpzc{T}_h}\sum\limits_{l=1}^{q_l^K}\hat{\omega}_l^K\text{det }(\hat{\nabla}T_K(\hat{\xi}_l))\varepsilon (u(T_K(\hat{\xi}_l))):\varepsilon (v(T_K(\hat{\xi}_l))). \end{split} \end{equation} For each element $K\in\mathpzc{T}_h$ and $\xi\in K$ we can write $u(\xi )$ in terms of the local shape functions on the reference element: $u(\xi )=\sum_{m=1}^{n_{sh}}u_m\hat{\theta}_m\circ T_K^{-1}(\xi)$ and hence, with $\hat{\xi}=T_K^{-1}(\xi)$, \begin{align} \nabla u(\xi )=\sum\limits_{m=1}^{n_{sh}}u_{m}\otimes (\hat{\nabla}T_K(\hat{\xi})^T)^{-1}\hat{\nabla}\hat{\theta}_m(\hat{\xi}). \label{eq:derivU} \end{align} With this, we instantly get \begin{align} \nabla\cdot u(\xi )=\sum\limits_{m=1}^{n_{sh}}\text{tr }\left( u_{m}\otimes (\hat{\nabla}T_K(\hat{\xi})^T)^{-1}\hat{\nabla}\hat{\theta}_m(\hat{\xi})\right). \end{align} Similar to the discretization of the bilinear form, the volume force can be discretized in the following way \begin{align} \label{eqa:fdic} \int_{\Omega}f\cdot v\,dx\approx\sum_{K\in\mathpzc{T}_h}\sum_{l=1}^{q_l^K}\hat{\omega}_l^K\det\left(\hat{\nabla}T_K(\hat{\xi}_l)\right)f(T_K(\hat{\xi}_l) )\cdot v(T_K(\hat{\xi}_l) ). \end{align} The surface integral over $g$ has to be treated differently.
We only consider the faces $F$ of the elements that lie on $\partial\Omega_N$. Let $\mathpzc{N}_h$ be the collection of these specific faces. For each $F\in\mathpzc{N}_h$, the respective element is identified by $K=K(F)\in\mathpzc{T}_h$. One can assume that there also exists a reference face $\hat{F}$ on $\hat{K}$ such that $T_{K(F)}:\hat{F}\rightarrow F$. Additionally, for the quadrature, surface quadrature points $\hat{\xi}_l^F$ and weights $\hat{\omega}_l^F$ have to be chosen, as the face possesses one dimension less than the elements. And finally, for the transformation, the square root of the Gram determinant $\sqrt{\det g_F(\hat{\xi}_l^F)}$ is required instead of the determinant of the derivative of $T_K$. It is \begin{align} g_F(\hat{\xi})=\hat{\nabla}^F(T_K\big |_{\hat{F}})(\hat{\xi})\left(\hat{\nabla}^F(T_K\big |_{\hat{F}})\right)^T(\hat{\xi}), \end{align} and thus \begin{align} \label{eqa:gdic} \int_{\partial\Omega_N}g\cdot v\,dA\approx\sum_{F\in\mathpzc{N}_h}\sum_{l=1}^{q_l^F}\hat{\omega}_l^F\sqrt{\det g_F(\hat{\xi}_l^F)}\,g(T_{K(F)}(\hat{\xi}_l^F))\cdot v(T_{K(F)}(\hat{\xi}_l^F)). \end{align} The discretized equation can be rewritten in a shorter form, in terms of the global degrees of freedom $U=(u_j)_{j\in\{1,...,N\}},u_j\in\mathbb{R}^d$ and the node coordinates $X$, where it is understood that $u_j=0$ if $X_j\in\partial\Omega_D$. Then we have \begin{align} \begin{array}{rcl} B(X)U &=& F(X),\\ B(X)_{(j,r),(k,s)} &=& B(e_r\theta_j,e_s\theta_k),\\ F_{(j,r)} &=& \int\limits_{\Omega}f\cdot e_r\theta_jdx+\int\limits_{\partial\Omega_N}g\cdot e_r\theta_jdA; \end{array} \label{eq:bl} \end{align} with $e_r$, $r=1,\ldots,d$, the standard basis of $\mathbb{R}^d$.\\ \subsection{Discretization of the objective functional} \label{FinEl-ObFun} For a minimal example, we first consider the problem in $\mathbb{R}^2$. Therefore we consider the two dimensional objective functional (dropping the constant prefactor and the scale $\sigma_0$, which are irrelevant for the optimization) \begin{align} J(\Omega,u)=\int_{\Omega}\int_{S^1}\left(\left(n\cdot\sigma\left(Du\right)n\right)^+\right)^mdn\,dx.
\end{align} As $S^1$ is the unit circle in $\mathbb{R}^2$, the inner integral is \begin{align} I(f)=\int_{0}^{2\pi}\left(\left(\cos^2(\varphi)\sigma_{11}+2\cos(\varphi)\sin(\varphi)\sigma_{12}+\sin^2(\varphi)\sigma_{22}\right)^+\right)^md\varphi. \end{align} As a periodic function is integrated over a full period, the trapezoidal rule is the method of choice, since it shows exponential convergence for smooth integrands in this case (cf. \cite{weideman}). For the present integral it yields \begin{align} \begin{split} T^{(n)}(f)&=\frac{2\pi}{n}\left(\left(\sigma_{11}^+\right)^m+\sum_{i=1}^{n-1}\left(\left(\cos^2\left(\frac{i2\pi}{n}\right)\sigma_{11}\right.\right.\right.\\ &\left.\left.\left.+2\cos\left(\frac{i2\pi}{n}\right)\sin\left(\frac{i2\pi}{n}\right)\sigma_{12}+\sin^2\left(\frac{i2\pi}{n}\right)\sigma_{22}\right)^+\right)^m\right). \end{split} \end{align} By replacing $I(f)$ with $T^{(n)}(f)$, we can discretize the objective functional in the following way \begin{align} \label{eqa:Jdisc} \begin{split} J(\Omega ,u) &\approx \sum\limits_{K\in\mathpzc{T}_h}\int\limits_K T^{(n)}(f)\left(\sigma \left(x\right)\right)dx\\ &= \sum\limits_{K\in\mathpzc{T}_h}\int\limits_{\hat{K}}T^{(n)}(f)\left(\sigma \left(T_K\left(\hat{x}\right)\right)\right)\text{det }\left(\hat{\nabla}T_K\left(\hat{x}\right)\right)d\hat{x}\\ &\approx \sum\limits_{K\in\mathpzc{T}_h}\sum\limits_{l=1}^{q_l^K}\hat{\omega}_l^KT^{\left(n\right)}\left(f\right)\left(\sigma \left(T_K\left(\hat{\xi}_l^K\right)\right)\right)\text{det }\left(\hat{\nabla}T_K\left(\hat{\xi}_l^K\right)\right) \end{split} \end{align} \begin{align*} &=\sum\limits_{K\in\mathpzc{T}_h}\sum\limits_{l=1}^{q_l^K}\hat{\omega}_l^K\frac{2\pi}{n}\left(\left(\sigma\left(T_K\left(\hat{\xi}_l^K\right)\right)_{11}^+\right)^m+\sum_{i=1}^{n-1}\left(\left(\cos^2\left(\frac{i2\pi}{n}\right)\sigma\left(T_K\left(\hat{\xi}_l^K\right)\right)_{11}\right.\right.\right.\\
&\left.\left.\left.+2\cos\left(\frac{i2\pi}{n}\right)\sin\left(\frac{i2\pi}{n}\right)\sigma\left(T_K\left(\hat{\xi}_l^K\right)\right)_{12}+\sin^2\left(\frac{i2\pi}{n}\right)\sigma\left(T_K\left(\hat{\xi}_l^K\right)\right)_{22}\right)^+\right)^m\right)\\ &\cdot\text{det }\left(\hat{\nabla}T_K\left(\hat{\xi}_l^K\right)\right). \end{align*} In the following, we will use the finite element node set $X$ as the representative of the geometric shape $\Omega$. Likewise, we use the set of global degrees of freedom $U$ to encode the (approximate) displacement field $u$. We thus write $J(X,U)$ for the discretization of $J(\Omega,u)$. \section{Discretized shape gradients} \label{lagrange} After discretizing the objective functional, we want to calculate the shape gradient. This is \begin{align} \frac{dJ(X,U(X))}{dX}=\parableit{J(X,U(X))}{X}+\parableit{J(X,U(X))}{U}\parableit{U(X)}{X}. \end{align} As the calculation of $\parableit{U(X)}{X}$ is very costly, we consider the corresponding Lagrange function instead. This is given by \begin{align} \mathcal{L}(X,U,\Lambda):=J(X,U)-\Lambda^T(B(X)U-F(X)), \end{align} where $\Lambda$ is the adjoint state.\\ Calculating the derivatives of the Lagrange function with respect to all three variables yields \begin{align} \begin{array}{lcr} 0\overset{!}{=}\frac{\partial \mathcal{L}(X,U,\Lambda )}{\partial \Lambda}& \Leftrightarrow & B(X)U(X)=F(X), \end{array} \end{align} which gives the state equation, \begin{align} \begin{array}{lcr} 0\overset{!}{=}\frac{\partial \mathcal{L}(X,U,\Lambda )}{\partial U}=\frac{\partial J(X,U)}{\partial U}-\Lambda^TB(X) &\Leftrightarrow & B^T(X)\Lambda =\frac{\partial J(X,U)}{\partial U}, \end{array} \end{align} which is the adjoint equation.
Hence, the following set of equations \begin{align} \begin{split} \frac{dJ(X,U(X))}{dX}&=\frac{\partial J(X,U)}{\partial X}+\Lambda^T\left[\frac{\partial F(X)}{\partial X}-\frac{\partial B(X)}{\partial X}U\right] \\ B^T(X)\Lambda &= \frac{\partial J(X,U)}{\partial U}\\ B(X)U(X) &= F(X) \label{eqa:adjointEq} \end{split} \end{align} gives the discretized shape gradient. \section{Computation and validation of shape gradients and shape flows} \label{comput} \subsection{Implementation} The calculation of the shape gradient using the adjoint formalism \eqref{eqa:adjointEq} requires the numerical calculation of the state $U$, of $\frac{\partial J(X,U)}{\partial U}$, of the adjoint state $\Lambda$, and of $\frac{\partial J(X,U)}{\partial X}$, $\frac{\partial B(X)}{\partial X}$ and $\frac{\partial F(X)}{\partial X}$. This can be done by somewhat lengthy but straightforward calculations based on \eqref{eqa:Jdisc}, \eqref{eqa:Bdisc}, \eqref{eqa:fdic} and \eqref{eqa:gdic}. For the implementation, all these partial derivatives are calculated locally for each local node set of the finite elements and are assembled into global objects thereafter. Note that the contractions with the adjoint state $\Lambda$ and the state $U$ have to be performed during the local calculations, prior to the assembly, in order to keep the memory requirements for the storage of $\frac{\partial F(X)}{\partial X}$ and especially $\frac{\partial B(X)}{\partial X}$ reasonably low. \subsection{Validation with finite differences} We consider a simple example in $d=2$. For this purpose, we generate a simple two-dimensional test object whose behavior during the optimization process is well understood. To work with reasonable values we set the parameters $\texttt{E}$ and $\nu$ to those of aluminum oxide ($\mathrm{Al_2O_3}$) ceramics. The elastic material properties can be found in \cite{aluminum}. The Weibull modulus $m$ is measured in tensile tests \cite{baeker,munz}.
As it generally depends on the technical details of the sintering process, $m$ is not a materials property that is determined by the chemistry alone. Technical ceramics usually come with a Weibull modulus between 5, for low quality, and 20, for a very controlled process. Here we choose $m=10$, which is a reasonable value and still leads to tractable numerics.\\ As a test object we use a rod of length 0.6 m and height 0.1 m. The test object is fixed on the left boundary, that is where Dirichlet boundary conditions hold. The part of the surface where surface forces may act is the right boundary, that is, in our model, the nodes on the right edge. In our example, we suppose this boundary part to be fixed as well. The rod is represented by a $9\times 61$ grid, which is divided into triangles. The rod is deformed in the middle part, see Figure \ref{fig:ob_fun_val}. As element type we choose linear Lagrange triangle elements with three local nodes located at the vertices of the triangles. For the interpolation of the volume force and the surface force we use two-dimensional 7-point Gauss quadrature and one-dimensional 3-point Gauss quadrature, respectively. To construct the bilinear form we use 1-point Gauss quadrature. To fit our purpose, we use a direct finite element solver written in \texttt{R} for the underlying elasticity equation. With the results of the solver, one can calculate the values of the objective functional on the test object \eqref{eqa:Jdisc}. One can see in the visualization in Figure \ref{fig:ob_fun_val} that the local intensity for the occurrence of critical cracks, i.e. the density with respect to $dx$ of $\nu\left(dx\times [a_c,\infty)\times S^1\right)$, takes the highest values in the critical area in the inner bow of the deformation, as practitioners would of course expect.
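For reference, the trapezoidal quadrature $T^{(n)}(f)$ of the inner $S^1$ integral from Section \ref{FinEl-ObFun} can be checked in isolation; the following sketch uses a hypothetical plane stress state, not the values of the rod computation:

```python
import math

def inner_integral_trapezoid(s11, s12, s22, m, n):
    """Trapezoidal rule T^(n) for the inner integral
    I(f) = int_0^{2pi} ((sigma_n(phi))^+)^m dphi with
    sigma_n(phi) = cos^2(phi) s11 + 2 cos(phi) sin(phi) s12 + sin^2(phi) s22."""
    total = 0.0
    for i in range(n):
        phi = 2.0 * math.pi * i / n
        sn = (math.cos(phi) ** 2 * s11
              + 2.0 * math.cos(phi) * math.sin(phi) * s12
              + math.sin(phi) ** 2 * s22)
        total += max(sn, 0.0) ** m  # positive part, as in the functional
    return 2.0 * math.pi / n * total

# Hypothetical plane stress state and Weibull modulus m = 10.
s11, s12, s22, m = 2.0, 0.5, 1.0, 10
coarse = inner_integral_trapezoid(s11, s12, s22, m, 32)
fine = inner_integral_trapezoid(s11, s12, s22, m, 4096)
print(coarse, fine)  # for this smooth positive integrand both agree closely
```

For this stress state the normal stress stays positive, so the integrand is a smooth trigonometric polynomial and already a moderate $n$ resolves the integral, in line with the convergence behavior cited from \cite{weideman}.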
\begin{figure} \begin{center} {\includegraphics[width=0.5\textwidth]{Visualization_Objective_Functional.pdf}} \end{center} \caption{Visualization of the objective functional} \label{fig:ob_fun_val} \end{figure} \subsection{Calculation and validation of the local and the global derivatives} \label{vallocal} \begin{figure} \begin{center} {\includegraphics[width=0.5\textwidth]{Convergence_local_dx.pdf}} \end{center} \caption{Convergence test for the local partial derivative $\parableit{J}{X}$} \label{fig:conv_dx} \end{figure} In Figure \ref{fig:conv_dx} the comparison of the difference quotient with the derivative $\parableit{J}{X}$ is shown for the local derivatives on one element $K$ for five random directions. Convergence for small differences is excellent. For the other partial derivatives, the same findings apply. As the node set of the test object is comparatively small, we can approximate the global shape derivative of the objective functional $\frac{d J}{d X}$ by a difference quotient for validation purposes. We calculate $\frac{J(X+\varepsilon V,U(X+\varepsilon V))-J(X,U(X))}{\varepsilon}$, where $V$ is a random direction, with decreasing $\varepsilon$, and test the directional derivative $\frac{dJ}{dX}V$ of our resulting gradient against it. The results are visualized in Figure \ref{fig:conv_DJDX}. The actual gradient is visualized in Figure \ref{fig:finit_Diff}. \begin{figure} \begin{center} {\includegraphics[width=0.5\textwidth]{Visualization_Gradient.pdf}} \end{center} \caption{Visualization of the gradient $-\frac{dJ}{dX}$} \label{fig:finit_Diff} \end{figure} These results give evidence that the shape gradients of the objective functional are computed correctly. For the mechanical interpretation, it is visible that the negative gradient tends to increase the volume of the body, which is natural: as the total force is constant, a larger volume diminishes the local stresses; here we have set the gravitational force $f$ to zero.
The problem that one cannot expect an optimal solution for the given problem if the gravitational force $f$ is absent will be handled via a volume constraint in the optimization process.\\ Apart from the visualization, we also compare the Euclidean norm of the finite difference approximation, for decreasing $\varepsilon$, with the norm of the gradient computed by our implementation. This is given in Figure \ref{fig:conv_DJDX}. One can see that very good convergence is obtained.\\ \begin{figure} \centering \begin{minipage}{.5\linewidth} \centering {\includegraphics[width=\linewidth]{Convergence_Fine_DJDX.pdf}} \par{\vspace{0pt}} \end{minipage}\hfill \begin{minipage}{.4\linewidth} \centering \vspace{0pt} \begin{tabular}{lc} \toprule $\varepsilon$ & Ratio \\ \midrule $10^{-3}$ & 0.0603255\\ $10^{-4}$ & 0.7180954\\ $10^{-5}$ & 0.9672460\\ $10^{-6}$ & 0.9966801\\ $10^{-7}$ & 0.9996691\\ $10^{-8}$ & 0.9999882\\ \bottomrule \end{tabular} \par{\vspace{0pt}} \end{minipage} \caption{Convergence of the ratio $\frac{dJ(X,U)}{dX}V\,\big/\,\frac{J(X+\varepsilon V,U(X+\varepsilon V))-J(X,U)}{\varepsilon}$ for the two-dimensional objective functional} \label{fig:conv_DJDX} \end{figure} \subsection{Shape flows towards higher reliability} \label{optim} To obtain a first optimization process, we use geometric mesh morphing and small step sizes. That is, we apply a mapping $\theta\to X(\theta)$ under which the interior nodes of the node set $X$ follow the movement of the surface nodes, so that a reasonable mesh quality is preserved. In our case, the design parameter set $\theta$ consists of the $y$-coordinates of all nodes between the two faces of the rod, in $x$-direction, on which boundary conditions are applied. Thus, the shape is (up to discretization) not restricted by this parametrization. The internal nodes with a fixed $x$ coordinate are distributed with equal distances between the respective surface nodes. The shape gradient w.r.t.
the $y$-position of the surface nodes, $\frac{dJ}{d\theta}=\frac{dJ}{dX}\frac{dX}{d\theta}$, can be easily calculated as $\frac{dX}{d\theta}$ is available analytically. The optimization problem in the design parameters $\theta$ thus is 118-dimensional (there are 61 rows with two surface nodes each and two of these rows are fixed). To keep the volume of the body constant, we calculate the volume gradient and project it out of the shape gradient. To determine the new mesh in each iteration, we calculate the new boundary nodes by $\theta_{new}=\theta_{old}-\alpha \left(\frac{dJ}{d\theta}-\frac{\left\langle \frac{dJ}{d\theta},\frac{\partial \text{Vol}}{\partial \theta}\right\rangle}{\left\|\frac{\partial \text{Vol}}{\partial \theta}\right\|^2}\,\frac{\partial \text{Vol}}{\partial \theta}\right)$.\\ The outcome is a procedure converging to the straight rod, which is visualized in Figure \ref{fig:optim}. We can also investigate, for each iteration, the (discretized) failure probability $1-p_s(t,\Omega|g)$ of the component as a function of the load parameter $t$, cf. \eqref{eqa:PSurvival}, where the reference load $g$ for $t=1$ is chosen such that the resulting force is one Newton. In Figure \ref{fig:prob} we see the distributions for some iteration steps. It is obvious that we actually decrease the probability of failure with the present procedure, which finally converges to the evident optimum.
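A schematic sketch of one volume-preserving update step as described above; the vectors are hypothetical toy data, not the 118-dimensional design of the rod:

```python
def projected_step(theta, dJ, dVol, alpha):
    """One volume-preserving gradient step: remove the component of the
    shape gradient dJ along the volume gradient dVol, then descend.

    theta : design parameters (here: y-coordinates of surface nodes)
    dJ    : shape gradient dJ/dtheta
    dVol  : volume gradient dVol/dtheta
    alpha : step size
    """
    dot = sum(g * v for g, v in zip(dJ, dVol))
    nrm2 = sum(v * v for v in dVol)
    # orthogonal projection of dJ onto the tangent space of the
    # volume constraint (first order)
    step = [g - dot / nrm2 * v for g, v in zip(dJ, dVol)]
    return [t - alpha * s for t, s in zip(theta, step)]

# Hypothetical 4-dimensional toy design.
theta = [0.0, 0.1, 0.2, 0.1]
dJ = [0.3, -0.2, 0.5, 0.1]
dVol = [1.0, 1.0, 1.0, 1.0]  # toy case: volume grows uniformly in each y
new_theta = projected_step(theta, dJ, dVol, alpha=0.1)
print(new_theta)
```

By construction, the update direction is orthogonal to the volume gradient, so the volume is conserved to first order in the step size.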
\begin{figure}[!htb] \begin{center} \subfloat[\label{fig6:Bild1} ]{\includegraphics[width=0.3\textwidth]{Rod_One.pdf}} \hspace { 1cm } \subfloat[\label{fig6:Bild2} ]{\includegraphics[width=0.3\textwidth]{Rod_Two.pdf}} \hspace { 1cm } \subfloat[\label{fig6:Bild3} ]{\includegraphics[width=0.3\textwidth]{Rod_Three.pdf}} \hspace { 1cm } \subfloat[\label{fig6:Bild4} ]{\includegraphics[width=0.3\textwidth]{Rod_Four.pdf}} \hspace { 1cm } \subfloat[\label{fig6:Bild5} ]{\includegraphics[width=0.3\textwidth]{Rod_Five.pdf}} \hspace { 1cm } \subfloat[\label{fig6:Bild6} ]{\includegraphics[width=0.3\textwidth]{Rod_Six.pdf}} \end{center} \caption{The optimization procedure} \label{fig:optim} \end{figure} \begin{figure} \begin{center} {\includegraphics[width=0.5\textwidth]{Visualize_Prob_Rod.pdf}} \end{center} \caption{Distribution of the bent rod's UTS during the optimization process} \label{fig:prob} \end{figure} We also test the procedure to improve the reliability of an S-shaped joint, visualized in Figure \ref{fig:Ob_Fun_S} together with the visualization of the gradient. It is represented by a $61\times 17$ mesh. For this joint it is not obvious in which direction the shape should evolve to reduce the failure probability. \begin{figure} \begin{center} {\includegraphics[width=0.5\textwidth]{Visualize_Gradient_S.pdf}} \end{center} \caption{Visualization of the objective functional and the gradient of the S-shaped joint} \label{fig:Ob_Fun_S} \end{figure} Here as well, we apply the optimization procedure from the first example. In Figure \ref{fig:prob_S} it is visible that, even though it is not a strict descent procedure, we are able to reduce the failure probability of the component significantly.
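The first discretize, then adjoin logic of Section \ref{lagrange} that drives both examples can be condensed into a toy computation; the matrix, load and objective below are hypothetical stand-ins for $B(X)$, $F$ and $J$, with one scalar design variable $x$, and the adjoint gradient is checked against a central finite difference:

```python
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def B(x):  # hypothetical symmetric "stiffness matrix" depending on the design x
    return [[2.0 + x, 0.5], [0.5, 1.0 + x * x]]

def dB(x):  # its derivative with respect to x
    return [[1.0, 0.0], [0.0, 2.0 * x]]

F = [1.0, 0.3]  # load vector, taken independent of x in this toy

def J(x, U):  # hypothetical objective, quadratic in the state
    return x * U[0] ** 2 + U[1] ** 2

def adjoint_gradient(x):
    U = solve2(B(x), F)                 # state equation B(x) U = F
    dJdU = [2.0 * x * U[0], 2.0 * U[1]]
    Lam = solve2(B(x), dJdU)            # adjoint equation (B is symmetric)
    dBU = [dB(x)[0][0] * U[0] + dB(x)[0][1] * U[1],
           dB(x)[1][0] * U[0] + dB(x)[1][1] * U[1]]
    # dJ/dx = dJ/dx|_U + Lam^T (dF/dx - dB/dx U), with dF/dx = 0 here
    return U[0] ** 2 - (Lam[0] * dBU[0] + Lam[1] * dBU[1])

x, eps = 0.8, 1e-6
fd = (J(x + eps, solve2(B(x + eps), F))
      - J(x - eps, solve2(B(x - eps), F))) / (2 * eps)
print(adjoint_gradient(x), fd)
```

The adjoint value and the central difference agree to high accuracy, which is the same consistency check performed on the finite element mesh in Section \ref{vallocal}, at the cost of only two linear solves regardless of the number of design variables.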
\begin{figure} \centering \begin{minipage}{.4\linewidth} \centering {\includegraphics[width=\linewidth]{S-Shaped_Two.pdf}} \par{\vspace{0pt}} \caption{Iteration 2} \end{minipage}\hfill \begin{minipage}{.4\linewidth} \centering {\includegraphics[width=\linewidth]{S-Shaped_Three.pdf}} \par{\vspace{0pt}} \caption{Iteration 18} \end{minipage} \caption{Results of the optimization of the joint} \label{fig:op_S} \end{figure} \begin{figure} \begin{center} {\includegraphics[width=0.5\textwidth]{Visualize_Prob_S-Shape.pdf}} \end{center} \caption{ Distribution of the UTS of the joint for different iterations in the optimization process} \label{fig:prob_S} \end{figure} \section{Conclusion and Outlook} \label{out} This paper for the first time provides a numerical shape optimization algorithm that is directly connected to the design criterion of (probabilistic) mechanical integrity. Based on a model for the probabilistic UTS \cite{batendorf,bruecknerfoit,weibull}, we computed the discretized shape gradient for the probability of failure of a ceramic component and embedded it in a gradient-based optimization procedure. We validated the implementation and showed that gradient flows with volume constraint are stable over major changes of the geometry. In a situation where the optimal shape is intuitively clear, we observed convergence of the algorithm. While the algorithm gives quite satisfactory solutions from an intuitive engineering standpoint, there are several rather obvious research necessities from a more theoretical perspective: The first question is in what sense the solution of the discretized problem is close to the solution of the shape optimization problem over not necessarily polygonal shapes. While the general strategy for such a proof is clear \cite{haslinger}, in the given context there are some subtle points to keep track of: The discretized optimal shapes in our case are polygons.
Given the problem of stress divergence $\sim r^{\lambda(\gamma)}$ as a function of the opening angle $\gamma$ of a reentrant corner \cite{nazarov}, the non-discretized solutions on the discrete domains $\Omega_h$ will have diverging objective functionals $J(\Omega_h,u)$ if $m\lambda(\gamma)\geq d$. Given the high values of technical Weibull modules $m\approx 10\ldots 20$, this imposes a kind of discrete curvature restriction on the domains $\Omega_h$ for which an error estimate is feasible. While one would expect this problem to go away by itself once the mesh size $h$ is sufficiently fine, as the optimal configuration will only pick shapes $\Omega_h$ which stay away from the critical bound, it is somewhat tricky to cast this into a mathematical proof. We intend to revisit this problem in the future. The second obvious challenge is a first-adjoint-then-discretize approach to the given problem. This strategy however requires the shape differentiation of an objective functional which is not even continuous in $H^1(\Omega,\mathbb{R}^d)$. This can be achieved using smoother shapes and elliptic regularity theory \cite{agmon,agmon2,ciarlet,gottschalk}. An adequate discretization of the continuous shape derivatives thus also requires a careful handling of surface smoothness. \vspace{.3cm} \noindent\textbf{Acknowledgements:} \\ Hanno Gottschalk thanks Sebastian Schmitz from Siemens Energy, Gas Turbine Engineering Department, for interesting discussions. {\small \lineskip=.1cm \bibliographystyle{plain}
\section{Motivations: Why Study {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} Systems?} One of the central motivations for studying intervening quasar absorption lines is that they provide insights into galactic evolution from the perspective of the chemical, ionization, and kinematic conditions of interstellar, halo, and intragroup gas. In this contribution, a ``new'' taxonomy of absorption line systems is presented, one in which equal, simultaneous consideration is given to the {\hbox{{\rm H}\kern 0.1em{\sc i}}}, {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}}, {\hbox{{\rm Fe}\kern 0.1em{\sc ii}}}, and {\hbox{{\rm C}\kern 0.1em{\sc iv}}} absorption strengths and to the gas kinematics. Details of the work presented here can be found elsewhere (Churchill 1997; Churchill et~al.\ 1999a,b,c). Here, we investigate an extreme, rapidly evolving class of {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} system and discuss the possibility that its further study may provide insights into the evolution of clustering on the scale of galaxy groups.
Arguably, the {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}}--selected systems at $z\leq 1$ are best suited for a taxonomic study of absorption systems because: (1) their statistical (Lanzetta, Turnshek, \& Wolfe 1987; Steidel \& Sargent 1992) and kinematic (Petitjean \& Bergeron 1990; Churchill 1997; Charlton \& Churchill 1998) properties are thoroughly documented, (2) they arise in structures possessing a wide range of {\hbox{{\rm H}\kern 0.1em{\sc i}}} column densities, including sub--Lyman limit (Churchill et~al.\ 1999, 1999a), Lyman limit (e.g.\ Steidel \& Sargent 1992), and damped {\hbox{{\rm Ly}\kern 0.1em$\alpha$}} (e.g.\ Rao \& Turnshek 1998; Boiss\`{e} et~al.\ 1998) systems, (3) they give rise to a range of {\hbox{{\rm C}\kern 0.1em{\sc iv}}} absorption strengths (Bergeron et~al.\ 1994; Churchill et~al.\ 1999b,c), and (4) those with rest--frame equivalent widths, $W_{r}({\hbox{{\rm Mg}\kern 0.1em{\sc ii}}})$, greater than $0.3$~{\AA} are associated with normal, bright galaxies (Bergeron \& Boiss\'{e} 1991; Steidel, Dickinson, \& Persson 1994; Churchill, Steidel, \& Vogt 1996; Steidel 1998). \section{Ionization, Kinematics, and Absorber Taxonomy} The {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} kinematics, and the {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}}, {\hbox{{\rm Fe}\kern 0.1em{\sc ii}}}, {\hbox{{\rm C}\kern 0.1em{\sc iv}}}, and {\hbox{{\rm Ly}\kern 0.1em$\alpha$}} absorption strengths, were studied for 45 {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} absorption--selected systems with redshifts 0.4 to 1.4. The kinematics of the {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} and {\hbox{{\rm Fe}\kern 0.1em{\sc ii}}} absorption was resolved at $\simeq 6$~{\hbox{km~s$^{-1}$}} resolution with the HIRES instrument (Vogt et~al.\ 1994) on Keck~I. The {\hbox{{\rm Ly}\kern 0.1em$\alpha$}} and {\hbox{{\rm C}\kern 0.1em{\sc iv}}} absorption was obtained from the {\it HST\/} archive of FOS spectra. 
These UV spectra have resolution $\simeq 230$~{\hbox{km~s$^{-1}$}}, so that the detailed kinematics of the neutral and high ionization gas are not available for study. See Figure~\ref{cwcfig:examples} for an example of the data. For any given $W_{r}({\hbox{{\rm Mg}\kern 0.1em{\sc ii}}})$, there is a large, $\sim 1$~dex, variation in the ratio $W_{r}({\hbox{{\rm C}\kern 0.1em{\sc iv}}})/W_{r}({\hbox{{\rm Mg}\kern 0.1em{\sc ii}}})$ (Churchill et~al.\ 1999b,c). This indicates a large spread in the global ionization conditions in {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} absorbers, and, by implication, in the ISM and halos of the host galaxies, and possibly in the intragroup media when small groups are intercepted by the line of sight. It was also found that $W_{r}({\hbox{{\rm C}\kern 0.1em{\sc iv}}})$ is strongly correlated with the {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} kinematics (Churchill et~al.\ 1999a,c), where the kinematics is quantified using the second velocity moment of the {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} $\lambda 2796$ optical depth. As such, there is a strong connection between the kinematic distribution of the low ionization gas and the presence of a strong, high ionization phase. For the majority of the systems, the gas must be multiphase in that a substantial fraction of the high ionization gas arises in a physically distinct phase from the lower ionization gas (Churchill et~al.\ 1999c; also see Churchill \& Charlton 1999). A clustering analysis (tree and $K$--means) was used to examine multivariate trends between the {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} kinematics and the {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}}, {\hbox{{\rm Fe}\kern 0.1em{\sc ii}}}, {\hbox{{\rm Ly}\kern 0.1em$\alpha$}}, and {\hbox{{\rm C}\kern 0.1em{\sc iv}}} absorption strengths.
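The $K$--means part of such a clustering analysis can be illustrated with a short, self-contained implementation operating on per-system feature vectors (e.g.\ columns for $W_{r}({\hbox{{\rm Mg}\kern 0.1em{\sc ii}}})$, the ratio $W_{r}({\hbox{{\rm C}\kern 0.1em{\sc iv}}})/W_{r}({\hbox{{\rm Mg}\kern 0.1em{\sc ii}}})$, and the second velocity moment). The demo data below are synthetic, not the measured sample, and the actual analysis additionally used tree clustering.

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Plain K-means on X of shape (n_systems, n_features)."""
    # deterministic farthest-point initialisation of the k centers
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(n_iter):
        # assign each system to its nearest center
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # recompute centers; keep the old center if a cluster empties
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# synthetic demo: two well-separated families of 3-dimensional feature vectors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 3)), rng.normal(3.0, 0.1, (20, 3))])
labels, centers = kmeans(X, k=2)
```

With well-separated families the assignment converges in a single pass; for real absorber samples the number of classes would be chosen by the significance of the resulting partition.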
To a high level of significance (greater than 99.99\% confidence), it was found that the properties of {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} systems can be organized into five classes, which we have called ``DLA/{\hbox{{\rm H}\kern 0.1em{\sc i}}}--Rich'', ``Double'', ``Classic'', ``{\hbox{{\rm C}\kern 0.1em{\sc iv}}}--deficient'', and ``Single/Weak''. An example system for each of the five classes is shown in Figure~\ref{cwcfig:examples}. Ticks above the {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} and {\hbox{{\rm Fe}\kern 0.1em{\sc ii}}} profiles (HIRES/Keck) give the velocities of the multiple Voigt profile components (Churchill 1997) for the singly ionized gas and ticks above the {\hbox{{\rm Ly}\kern 0.1em$\alpha$}} profile and both members of the {\hbox{{\rm C}\kern 0.1em{\sc iv}}} doublet (FOS/{\it HST}) show the expected location of these components for the neutral and higher ionization gas. \begin{figure}[th] \plotfiddle{cchurchill.fig1.eps}{3.2in}{0}{54}{54}{-215}{-55} \protect\caption{Examples of the five taxonomic classes of $z\sim 1$ {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} absorbers based upon {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}}, {\hbox{{\rm Ly}\kern 0.1em$\alpha$}}, {\hbox{{\rm C}\kern 0.1em{\sc iv}}}, and {\hbox{{\rm Fe}\kern 0.1em{\sc ii}}} absorption (left to right). The {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} and {\hbox{{\rm Fe}\kern 0.1em{\sc ii}}} profiles, shown over a velocity window of 460~{\hbox{km~s$^{-1}$}}, are measured at $\simeq 6$~{\hbox{km~s$^{-1}$}} resolution (HIRES/Keck). The {\hbox{{\rm Ly}\kern 0.1em$\alpha$}} and {\hbox{{\rm C}\kern 0.1em{\sc iv}}} profiles, shown over a velocity window of 1300~{\hbox{km~s$^{-1}$}}, are observed in the UV (FOS/{\it HST}) with resolution $\simeq 230$~{\hbox{km~s$^{-1}$}}. The five classes (top to bottom) are DLA, Double, Classic, {\hbox{{\rm C}\kern 0.1em{\sc iv}}}--deficient, and Weak. 
See text for further details.\label{cwcfig:examples}} \end{figure} \section{The Double Systems} In view of the topic of the meeting, we focus here on the Double systems, since they may provide clues to the clustering of material at higher redshifts. We present the HIRES/Keck {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} $\lambda 2796$ profiles of Double systems, including a few at $z>1.4$, in Figure~\ref{cwcfig:doubles}. Though Churchill et~al.\ (1999) suggested that Double systems may be associated with later--type galaxies undergoing concurrent star formation (i.e.\ the multiphase gas arises in superbubbles and from outflows, or chimneys, similar to the gaseous components of the Galaxy), there are at least two other obvious explanations for Double systems. The first scenario is that they might be two Classic systems nearly aligned on the sky and clustered within a $\sim 500$~{\hbox{km~s$^{-1}$}} velocity separation (i.e.\ galaxy pairs). An example of this scenario, at $z\simeq0$, is observed in the spectrum of SN 1993J (Bowen, Blades, \& Pettini 1995). The SN 1993J line of sight probes half the disk and halo of M81, half the disk and halo of the Galaxy, and the ``intergalactic'' material apparently from the strong dwarf--galaxy interactions taking place with both galaxies. The M81/Galaxy {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} $\lambda 2796$ absorption profile has a kinematic spread, saturation, and complexity virtually identical to that of the $z=1.79$ absorber toward Q~$1225+317$ (Figure~\ref{cwcfig:doubles}). Double systems constitute $\simeq 7$\% of our sample. Interestingly, at $z\sim 0.3$, roughly 7\% of all galaxies are observed to be in ``close physical pairs'' (Patton et~al.\ 1997), where a pair has a projected separation less than $20~h^{-1}$~kpc. Even accounting for the evidence that this fraction increases with redshift (e.g.\ Neuschaefer et~al.\ 1997), the fraction of Double systems in our sample is consistent with that of galaxy pairs at intermediate redshifts.
The second scenario is that Double systems may consist of a primary and a satellite galaxy (e.g.\ York et~al.\ 1986), possibly in a group environment. Using the Local Group as a model and applying the simple cross--sectional dependence for $W_{r}({\hbox{{\rm Mg}\kern 0.1em{\sc ii}}})$ with galaxy luminosity (Steidel 1995), the probability of intercepting a ``double'' absorber for a random line of sight passing through a ``Milky Way'' galaxy in a ``Local Group'' was estimated (see Charlton \& Churchill 1996). Though the results are fairly sensitive to the assumed gas cross sections of small mass galaxies, we find a $\sim 25$\% chance of intercepting both the LMC and the Milky Way, and a $\sim 5$\% chance of intercepting both the SMC and the Milky Way. All other galaxies in the Local Group have negligible probabilities of being intercepted for a line of sight passing within 50 kpc of the Milky Way. If, at $z\sim1$, roughly 30\% of all galaxies typically have one LMC--like satellite galaxy within 50~kpc (see Zaritsky et~al.\ 1997), this could explain the observed fraction of ``Double'' systems found in our sample. \begin{figure}[tb] \plotfiddle{cchurchill.fig3.eps}{2.0in}{0}{60}{60}{-243}{-123} \vglue -0.1667in \protect\caption{The $\lambda 2796$ transitions (HIRES/Keck) of several higher redshift systems with $W_{r}({\hbox{{\rm Mg}\kern 0.1em{\sc ii}}}) \geq 1.0$~{\AA}. These systems exhibit the characteristics expected from close pairs of galaxies (see Bowen et~al.\ 1995). \label{cwcfig:doubles}} \end{figure} \section{Galaxy Group Evolution} If most Double systems arise in the environments associated with galaxy pairs, then the redshift evolution observed in the number of galaxy pairs would necessarily need to be in step with the evolution in the class of ``Double'' {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} absorbers themselves.
Over the redshift interval $1\leq z \leq 2$, the galaxy pair fraction evolves in proportion to $(1+z)^{p}$, with $2 \leq p \leq 4$ (Neuschaefer et~al.\ 1997). This compares well with $p = 2.2\pm0.7$ for very strong {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} absorbers with $W_{r} > 1.0$~{\AA} (Steidel \& Sargent 1992). As such, galaxy pair evolution remains a plausible scenario for explaining the observed evolution in the class of the largest equivalent width {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} absorbers (illustrated in Figure~\ref{cwcfig:doubles}). None of these arguments is conclusive, nor absolutely compelling in the face of several attractive scenarios (e.g.\ intergalactic infall, star-forming events, etc.) that are equally consistent with the available data. Even so, the hypothesis that the strongest, most kinematically complex {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} absorbers arise in galaxy groups or pairs is directly testable, and is thus useful for future investigations that probe galactic evolution from the point of view of absorption line systems. Deep imaging and redshift confirmation of the galaxies associated with Double systems and searches for high ionization intragroup gas, such as {\hbox{{\rm N}\kern 0.1em{\sc v}}} and {\hbox{{\rm O}\kern 0.1em{\sc vi}}} (Mulchaey et~al.\ 1996), may confirm this hypothesis. \acknowledgments I would like to thank my collaborators, Richard Mellon, Jane Charlton, and Buell Jannuzi, for their excellent contributions to work from which this contribution is based.
\section{Introduction} Research on coupled oscillators in the past decade has been marked by the discovery of many intriguing patterns in the collective behavior of networks~\cite{arenas2008synchronization,rodrigues2016kuramoto}. Notable examples of such patterns are chimeras~\cite{panaggio2015chimera}, states in which populations of synchronous and asynchronous oscillators coexist; explosive synchronization transitions~\cite{gomez2011explosive,rodrigues2016kuramoto}, which appear as a consequence of constraints in the natural frequency assignment; and asymmetry-induced synchronization~\cite{nishikawa2016symmetric,*zhang2017asymmetry}, a state in which synchrony is counter-intuitively favored by oscillator heterogeneity. In all these cases, phase oscillator models had to be specially designed so that those non-trivial states could be scrutinized. Very recently, however, Nicolaou et al.~\cite{nicolaou2019multifaceted} defined an oscillator model coined as Janus oscillators; the name is inspired by the homonymous two-faced god of Roman mythology and reflects the two-dimensional character of an isolated oscillator -- each ``face'' of a Janus unit consists of a Kuramoto oscillator whose natural frequency has the same absolute value but opposite sign to the frequency of its counter-face. When coupled on one-dimensional regular graphs, Janus oscillators have been found to exhibit a strikingly rich dynamical behavior that encompasses the co-occurrence of several dynamical patterns, in spite of the simplicity of the topology and of the oscillator model itself~\cite{nicolaou2019multifaceted}. The Janus model was introduced as a potential model for biological systems such as the Chlamydomonas cells with counterrotating flagella \cite{friedrich2012flagellar,wan2016coordinated}. It is thus important to understand the dynamics of a Janus system on related (realistic) topologies.
Here we pose the question of whether the rich collective dynamics observed in 1D persists on more complex networks of Janus oscillators. To address this issue, we employ the Ott-Antonsen (OA) ansatz~\cite{ott2008low} and obtain a reduced set of equations describing the system's evolution. From this reduced representation we find that, indeed, peculiar patterns of synchrony persist when Janus oscillators are placed on random regular, Erd\H{o}s-R\'enyi (ER) and scale-free (SF) random networks. We provide analytical and numerical evidence that the multitude of states in Janus dynamics is a consequence of the coexistence of infinitely many neutrally stable limit-cycle trajectories, which we denominate ``breathing standing-waves''. Co-occurrence of classical partially synchronized states and standing waves is also reported. We further show that for high average degrees the collective states of ER networks are accurately described by the reduced system obtained for random regular ones. Interestingly, we demonstrate that the coupling range in which global oscillations are possible vanishes in the thermodynamic limit of SF networks. We begin by defining the dynamics of $N$ Janus oscillators~\cite{nicolaou2019multifaceted} on heterogeneous networks as \begin{equation} \dot{\theta}_i = \omega_i + \sum_{j=1}^{2N} W_{ij}\sin(\theta_j - \theta_i),\; (i=1,...,2N) \label{eq:JanusOnNetworks_Compact} \end{equation} where the $2N \times 2N$ matrix $\mathbf{W}$ is defined as \begin{equation} \mathbf{W}=\left[\begin{array}{cc} 0 & \beta\mathbf{I}+\sigma\mathbf{A}\\ \beta\mathbf{I}+\sigma\mathbf{A}^{{\rm T}} & 0 \end{array}\right]. \label{eq:matrix_W} \end{equation} \begin{comment} \begin{equation} \begin{aligned} \dot{\theta}_{i}^{(1)}&=&\omega_{1}+\beta\sin(\theta_{i}^{2}-\theta_{i}^{1})+\sigma\sum_{j=1}^{N}A_{ij} \sin(\theta_{j}^{2}-\theta_{i}^{1}),\\ \dot{\theta}_{i}^{(2)}&=&\omega_{2}+\beta\sin(\theta_{i}^{1}-\theta_{i}^{2})+\sigma\sum_{j=1}^{N}A_{ji}\sin(\theta_{j}^{1}-\theta_{i}^{2}).
\end{aligned} \label{eq:JanusOnNetworks} \end{equation} \end{comment} Natural frequencies are assigned as $\omega_i = \omega_0 + \Delta/2$, for $i=1,...,N$; and $\omega_i = \omega_0 - \Delta/2$, for $i=N+1,...,2N$, where $\Delta$ is the frequency mismatch and $\omega_0$ is the average frequency, which we set to zero ($\omega_0=0$). System (\ref{eq:JanusOnNetworks_Compact}) is analogous to a bipartite network or a multilayer network in which oscillators belonging to the same group do not interact with one another, while connections between groups are encoded in matrix $\mathbf{A}$. Notice also that interactions between oscillators are weighted by the coupling strength $\sigma$, except for oscillators with indexes $(i,i+N)$, $i \in [1,N]$ -- these pairs of nodes interact with coupling strength $\beta$. By defining the local order parameters \begin{equation} R_i = \sum_{j=1}^{2N} W_{ij}e^{{\rm i}\theta_j}, \label{eq:R_i} \end{equation} Eqs.~\ref{eq:JanusOnNetworks_Compact} are then decoupled as \begin{equation} \dot{\theta}_i = \omega_i + \textrm{Im}[e^{-{\rm i} \theta_i}R_i]. \label{eq:decoupled_eqs} \end{equation} Following~\cite{barlev2011dynamics}, we consider an ensemble of systems defined by Eq.~\ref{eq:JanusOnNetworks_Compact} with fixed coupling matrix $\mathbf{W}$. In this formulation, we describe the system (\ref{eq:JanusOnNetworks_Compact}) at a given time $t$ by the joint probability density $\rho_{2N}(\bm{\theta},\bm{\omega},t)$, where $\bm{\theta}=(\theta_1,...,\theta_{2N})$ is the vector containing the phases at time $t$, and $\bm{\omega}=(\omega_1,...,\omega_{2N})$ is the time-independent vector with the natural frequencies of the individual oscillators. The evolution of the joint probability $\rho_{2N}$ is then dictated by $\partial_t \rho_{2N} + \sum_{i=1}^{2N} \partial_{\theta_i}(\rho_{2N} \dot{\theta}_i)=0$, where $\dot{\theta}_i$ is given by Eq.~\ref{eq:decoupled_eqs}.
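Eq.~\ref{eq:JanusOnNetworks_Compact} can also be integrated directly, which is useful for checking the reduced description. The sketch below uses a simple Euler scheme with an all-to-all inter-face coupling (the simplest $k$-regular case, $k=N-1$); the parameter values are illustrative, chosen deep in the partially synchronized regime $|\beta+k\sigma|\gg\Delta/2$, and are not those of the paper's figures.

```python
import numpy as np

def build_W(A, beta, sigma):
    """Assemble the 2N x 2N matrix of Eq. (2) from the N x N inter-face adjacency A."""
    N = A.shape[0]
    B = beta * np.eye(N) + sigma * A
    W = np.zeros((2 * N, 2 * N))
    W[:N, N:] = B
    W[N:, :N] = B.T
    return W

def janus_step(theta, W, omega, dt):
    """One Euler step of theta_i' = omega_i + sum_j W_ij sin(theta_j - theta_i)."""
    coupling = (W * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    return theta + dt * (omega + coupling)

rng = np.random.default_rng(1)
N, Delta, beta, sigma, dt = 100, 1.0, 0.25, 0.05, 0.02
A = np.ones((N, N)) - np.eye(N)            # dense (N-1)-regular inter-face graph
W = build_W(A, beta, sigma)
omega = np.concatenate([np.full(N, Delta / 2), np.full(N, -Delta / 2)])
theta = rng.uniform(0, 2 * np.pi, 2 * N)
for _ in range(1500):
    theta = janus_step(theta, W, omega, dt)
r1 = abs(np.exp(1j * theta[:N]).mean())    # per-face Kuramoto order parameters
r2 = abs(np.exp(1j * theta[N:]).mean())
```

For these parameters both faces lock internally, $r_{1,2}\to 1$, while a constant phase-lag between the faces keeps the global order parameter below one.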
Let us suppose that frequencies $\omega_j$ are distributed according to a generic function $g(\omega_j)$. Multiplying the continuity equation of $\rho_{2N}$ by $\Pi_{j \neq i} d\omega_j d\theta_j$ and integrating, we obtain the evolution equation for the marginal oscillator density $\rho_i(\theta_i,\omega_i,t)= \int \int \rho_{2N} \Pi_{j\neq i} d\omega_j d\theta_j$; that is \begin{equation} \frac{\partial \rho_i}{\partial t} + \frac{\partial }{\partial \theta_i}(\rho_i \dot{\theta}_i)=0. \label{eq:one_oscillator_prob} \end{equation} By expanding $\rho_i$ in Fourier series and applying the OA ansatz to its coefficients we have~\cite{ott2008low,barlev2011dynamics} \begin{equation} \rho_{i}(\theta_{i},\omega_{i},t)=\frac{g(\omega_{i})}{2\pi}\left[1+\sum_{n=1}^{\infty}\hat{\alpha}_{i}^{n}(\omega_{i},t)e^{ \textrm{i} n \theta_{i}}+\rm{c.c.}\right]. \label{eq:OA_ansatz} \end{equation} Inserting the previous equation into Eq.~\ref{eq:one_oscillator_prob}, we obtain the evolution for coefficients $\hat{\alpha}_i$: \begin{equation} \frac{d\hat{\alpha}_{i}}{dt}+{\rm i}\hat{\alpha}_{i}\omega_{i}+\frac{1}{2}\left[\hat{\alpha}_{i}^{2}R_{i}-R_{i}^{*}\right]=0\; (i=1,\ldots,2N), \label{eq:alphas} \end{equation} where, in this ensemble approach, the coefficients $R_i$ are calculated as \begin{equation} R_i = \sum_{j=1}^{2N} W_{ij} \int_{-\infty}^{\infty} \int_0^{2\pi} \rho_j(\theta_j,\omega_j,t) e^{\textrm{i} \theta_j} d\theta_j d \omega_j. \label{eq:Ri_ensemble} \end{equation} Inserting Eq.~\ref{eq:OA_ansatz} in the previous equation yields \begin{equation} R_i = \sum_{j=1}^{2N} W_{ij}\int_{-\infty}^{\infty} \hat{\alpha}_j^{*}(\omega_j,t)g(\omega_j) d\omega_j. \label{eq:Ri_alpha} \end{equation} \begin{figure}[t!]
\centering \includegraphics[width=0.75\columnwidth]{Fig1.pdf} \caption{(Color online) Temporal evolution of (a) order parameters $r_{1,2}$ measuring the level of synchronization within subpopulations, and total order parameter $R = \frac{1}{2} \sqrt{r_1^2 + r_2^2 +2 r_1 r_2 \cos\delta}$; and (b) evolution of phase-lag $\delta$. Initial conditions: $r_1(0)=0.61$, $r_2(0)=0.83$, $\delta(0) = 2\pi/3$. Remaining parameters: $\beta = 0.0015$, $\Delta = 1$. Each subpopulation has $N = 2500$ oscillators.}% \label{fig:temporal}% \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1.0\columnwidth]{Fig2.pdf} \caption{(Color online) (a) Stability diagram of system (\ref{eq:reduced_system_regular_networks}) for $k = 40$ and $\Delta = 1$. ``Partial sync'' refers to the state in which $r_{1,2} = 1$ and $\delta \neq 0$. SW denotes parameter regions in which $r_{1,2}=1$ and the phase-lag $\delta$ rotates with a nonzero frequency. The regions where a multitude of solutions with $r_{1,2}<1$ and $\dot{\delta} \neq 0$ is found are labeled as ``breathing SW''. Order parameter curves for (b) $\beta = 0.25$ and (c) $\beta=-0.3$. Solid and dashed lines of $R$ for $\sigma \in (-\infty, \sigma_{c_1}] \cup [\sigma_{c_2},\infty)$ are obtained with Eq.~\ref{eq:partial_synchronization_state_R}. The line for $r_1=r_2$ denotes the symmetric solution of Eq.~\ref{eq:reduced_system_regular_networks}. Dots are obtained by numerically evolving the original system (\ref{eq:JanusOnNetworks_Compact}) with $N=2500$ oscillators; each dot is an average over $t \in [250,500]$, with time step $dt = 0.05$. Gray lines in the region $\sigma_{c_3}< \sigma < \sigma_{c_4}$ are generated numerically by evolving the reduced system in Eq.~\ref{eq:reduced_system_regular_networks} with random initial conditions. Gray dots depict the corresponding result yielded by the original system (Eq.~\ref{eq:JanusOnNetworks_Compact}).
For the sake of clarity, we only show a small sample of the possible states attainable in the gray area.}% \label{fig:doubleRegular}% \end{figure} Let us now consider the case in which each subpopulation of the Janus coupling scheme is a random regular network with degree $k$. More precisely, each oscillator of the subpopulation that rotates with frequency $\omega_1 = \Delta/2$ is randomly connected to $k$ oscillators of subpopulation 2 ($\omega_2 = -\Delta/2$), and vice-versa. In this scenario, since oscillators within each group are identical -- i.e. $g(\omega_i)=\delta(\omega_i - \Delta/2)$, for $i=1,...,N$; and $g(\omega_i) = \delta(\omega_i + \Delta/2)$, for $i=N+1,...,2N$--, we assume the following solution for coefficients $\hat{\alpha}_i$: \begin{equation} \begin{aligned} \hat{\alpha}_{1}=\hat{\alpha}_{2}=\cdots=\hat{\alpha}_{N}\equiv\alpha_{1}\\ \hat{\alpha}_{N+1}=\cdots=\hat{\alpha}_{2N}\equiv\alpha_{2}. \end{aligned} \label{eq:alpha_solution_regular} \end{equation} Hence, the local order parameters $R_i$ are reduced to: \begin{equation} R_{i}=\begin{cases} \alpha_{2}^{*}(\beta+k\sigma) \equiv R_1 & \textrm{ if }i=1,...,N;\\ \alpha_{1}^{*}(\beta+k\sigma) \equiv R_2 & \textrm{ if }i=N+1,...,2N. 
\end{cases} \label{eq:R_double_regular} \end{equation} Inserting solutions (\ref{eq:alpha_solution_regular}) and (\ref{eq:R_double_regular}) in Eq.~\ref{eq:alphas}, we obtain the reduced set of equations \begin{equation} \dot{\alpha}_{1,2}+{\rm i}\omega\alpha_{1,2}+\frac{1}{2}\left[\alpha_{1,2}^{2}R_{1,2}-R_{1,2}^{*}\right]=0, \label{eq:equation_alphas} \end{equation} which in terms of the coordinates $\alpha_{1,2} = r_{1,2} e^{i\psi_{1,2}}$ are written as \begin{equation} \begin{aligned} \dot{r}_{1} & = \frac{1}{2}(\beta + k\sigma)r_{2}(1 - r_{1}^{2})\cos\delta\\ \dot{r}_{2} & = \frac{1}{2}(\beta + k\sigma)r_{1}(1 - r_{2}^{2})\cos\delta\\ \dot{\delta} & = -\Delta-\frac{1}{2}(\beta+k\sigma)\sin\delta\left[2r_{1}r_{2}+\frac{r_{1}}{r_{2}}+\frac{r_{2}}{r_{1}}\right], \end{aligned} \label{eq:reduced_system_regular_networks} \end{equation} where $\delta = \psi_1 - \psi_2$ is the phase-lag between subpopulations. Variables $r_1$ and $r_2$ turn out to be the order parameters measuring the level of synchronization within each subpopulation in the Janus system; the traditional Kuramoto order parameter evaluating the global synchrony is obtained through $R(t) = \frac{1}{2}|r_1 e^{i\psi_1(t)} + r_2 e^{i\psi_2(t)} |$. States that emerge from system (\ref{eq:reduced_system_regular_networks}) are summarized as: (1) a partially synchronized state in which $r_{1,2}=1$, while the subpopulations remain separated by a constant phase-lag $\delta$ (hence, $R<1$); (2) a standing-wave (SW) state, where the bulks of the two fully synchronized populations ($r_{1,2}=1$) rotate in opposite directions, yielding an incessantly rotating $\delta$; (3) a distinct form of SW emerges when $0< r_{1,2}< 1$: along with the increment or decrease in phase-lag $\delta$, the order parameters $r_{1,2}$ exhibit a breathing behavior, as depicted in the simulation shown in Fig.~\ref{fig:temporal}. Henceforth we refer to this state as ``breathing SW''.
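A minimal RK4 integration of Eq.~\ref{eq:reduced_system_regular_networks} reproduces the breathing SW. Here $\gamma \equiv \beta + k\sigma$ is set to an illustrative value inside the window $|\gamma| < \Delta$ (not a parameter combination from the figures), and the initial conditions are those of Fig.~\ref{fig:temporal}.

```python
import numpy as np

def rhs(y, gamma, Delta):
    """Right-hand side of the reduced (r1, r2, delta) system, gamma = beta + k*sigma."""
    r1, r2, d = y
    dr1 = 0.5 * gamma * r2 * (1 - r1**2) * np.cos(d)
    dr2 = 0.5 * gamma * r1 * (1 - r2**2) * np.cos(d)
    dd = -Delta - 0.5 * gamma * np.sin(d) * (2*r1*r2 + r1/r2 + r2/r1)
    return np.array([dr1, dr2, dd])

def rk4(y, dt, gamma, Delta):
    k1 = rhs(y, gamma, Delta)
    k2 = rhs(y + 0.5*dt*k1, gamma, Delta)
    k3 = rhs(y + 0.5*dt*k2, gamma, Delta)
    k4 = rhs(y + dt*k3, gamma, Delta)
    return y + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

gamma, Delta, dt = 0.5, 1.0, 0.01          # |gamma| < Delta: breathing SW window
y = np.array([0.61, 0.83, 2*np.pi/3])      # initial conditions of Fig. 1
traj = [y]
for _ in range(5000):
    y = rk4(y, dt, gamma, Delta)
    traj.append(y)
traj = np.array(traj)
r1, r2, delta = traj.T
R = 0.5*np.sqrt(r1**2 + r2**2 + 2*r1*r2*np.cos(delta))   # total order parameter
```

Along the trajectory $r_{1,2}$ breathe while $\delta$ winds monotonically, and the time average of $\cos\delta$ stays close to zero.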
As we shall see, the classical incoherent state remains unstable for all coupling values. In order to uncover the conditions for the existence of the partially synchronized state, we set $r_1(t) = r_2(t) \equiv r (t)$ in Eqs.~\ref{eq:reduced_system_regular_networks}, leading to $\dot{r} = \frac{\gamma r}{2} (1 - r^2) \cos \delta$ and $\dot{\delta} = - \Delta - \gamma (1+r^2)\sin\delta$, where $\gamma \equiv \beta + k\sigma$. Imposing $\dot{r} = 0$, we notice that $r=1$ is always a fixed point. Inserting the latter solution in the equation for $\dot{\delta}=0$, we find that $\sin \delta = -\Delta/(2\gamma)$. Thus, the partially synchronized regime exists when $\Delta /(2|\gamma|) \leq 1$ is satisfied. In terms of coupling $\sigma$, we then write these critical conditions as \begin{equation} \sigma_{c_1} = -\frac{\Delta/2 + \beta}{k}\textrm{ and }\sigma_{c_2} = \frac{\Delta/2 - \beta}{k}. \label{eq:c_partial_synchronized_state} \end{equation} Couplings $\sigma_{c_1}$ and $\sigma_{c_2}$ determine the coupling range where the partially synchronized state exists. The total order parameter $R$ is then given by \begin{equation} R = \frac{1}{\sqrt{2}} \sqrt{1 \pm \sqrt{1 - \frac{\Delta^2}{4(\beta + k \sigma)^2}}}, \label{eq:partial_synchronization_state_R} \end{equation} where the ``-'' branch is stable for $\sigma \leq \sigma_{c_1}$, whereas the ``+'' branch is stable in the region $\sigma \geq \sigma_{c_2}$. For $\sigma_{c_1}<\sigma < \sigma_{c_2}$, the limit cycle solution of $\dot{\delta}$ holds and SW states emerge with perfectly synchronized subpopulations ($r_{1,2}=1$). A linear stability analysis of the incoherent state ($\alpha_{1,2}=0$) in Eqs.~\ref{eq:equation_alphas} reveals that the eigenvalues of the Jacobian matrix become purely imaginary at $|\beta + k\sigma | = \Delta$. Therefore, limit cycle solutions arise for \begin{equation} \sigma_{c_{3}}\equiv-\frac{(\Delta+\beta)}{k}<\sigma<\frac{\Delta-\beta}{k}\equiv\sigma_{c_{4}}.
\label{eq:conditions_limit_cycles} \end{equation} Figure~\ref{fig:doubleRegular}(a) outlines the critical conditions given by Eqs.~\ref{eq:c_partial_synchronized_state} and ~\ref{eq:conditions_limit_cycles} in the plane spanned by couplings $\beta$ and $\sigma$. As can be seen, the partially synchronized state is favored by extreme values of both couplings, whereas states with oscillating synchrony appear for intermediate values in the parameter space. In Figs.~\ref{fig:doubleRegular} (b) and (c) we visualize the evolution of the local and global order parameters over two vertical sections of the diagram in (a), namely for $\beta = 0.25$ and $-0.3$, respectively. Supposing we initiate the system with a negative $\sigma$ in the ``Partial sync'' region ($\sigma < \sigma_{c_1}$), the total order parameter $R$ collapses onto the solution given by Eq.~\ref{eq:partial_synchronization_state_R}. As $\sigma$ is further increased, at $\sigma = \sigma_{c_1}$ the unstable and stable branches of $R$ merge via a saddle-node infinite-period bifurcation, whereby the limit cycle solution of the SW arises (see Figs.~\ref{fig:doubleRegular} (b) and (c)). Upon further continuation of $\sigma$, a saddle-node appears again at $\sigma = \sigma_{c_2}$ and the system is brought back to the partially synchronized state. Besides the branch of $R$ obtained under the symmetry condition $r_1 = r_2 = 1$, the numerical evolution of the reduced system predicts the existence of several other curves, which are upper bounded by $R$ of the $r_{1,2}=1$ solution for $\sigma \in [\sigma_{c_3}, \sigma_{c_4}]$. Insights about the nature of such states can be gained by investigating the stability of the SW state under perturbations transversal to the symmetric manifold $r_1 = r_2$~\citep{martens2009exact}.
By defining the transversal and longitudinal coordinates $r_{\perp} = (r_1 - r_2)/2$ and $r_{\parallel} = (r_1 + r_2)/2$, we have $\dot{r}_{\perp}=\frac{\gamma}{2}(r_{\perp}^{2}-r_{\parallel}^{2}-1)r_{\perp}\cos\delta$, which in terms of variables $b_{\parallel} = r_{\parallel}^2$ and $b_{\perp} = r_{\perp}^2$ reads \begin{equation} \dot{b}_{\perp}=\gamma b_{\perp}(b_{\perp}-b_{\parallel}-1)\cos\delta. \label{eq:b} \end{equation} Linearization at a point $(b_0,\delta_0)$ lying on a limit cycle solution of the manifold $r_1 = r_2$ yields the variational equation $\dot{\delta b}_{\perp} = \lambda_{\perp} \delta b_{\perp}$, where \begin{equation} \lambda_{\perp} = -(\beta + k\sigma) (1 + b_0) \cos \delta_0. \label{eq:variational_equation} \end{equation} By averaging the previous equation over a period of oscillation and using the periodicity of the limit cycle ($\langle (d/dt) \ln b_0\rangle =0$) we find $\langle\lambda_{\perp}\rangle = -2(\beta + k\sigma)\langle\cos\delta_0\rangle$. Numerical calculations with the original system (\ref{eq:JanusOnNetworks_Compact}) for extensive parameter combinations show that $\langle \cos \delta_0 \rangle \approx 0$ (and, consequently, $\langle \lambda_{\perp} \rangle \approx 0$) in the region $\sigma \in [\sigma_{c_1},\sigma_{c_2}]$, suggesting that the limit cycle solution of the SW is neutrally stable. Although our numerical estimate does not give us an exact proof of the stability of the limit cycle solution, it sheds light on the existence of the multitude of curves observed in Fig.~\ref{fig:doubleRegular}(b) and (c). Essentially, any perturbation of the SW state leads to a new limit cycle with $r_{1,2}<1$, since nearby trajectories are neither attracted nor repelled, explaining the origin of the numerous solutions encountered in the coupling region encompassed by $\sigma_{c_3}$ and $\sigma_{c_4}$ in Fig.~\ref{fig:doubleRegular}.
Notice also that the lower branches in the region $\sigma \in [\sigma_{c_4}, \sigma_{c_3}]$ do not correspond precisely to the classical incoherent state, but rather represent limit-cycle solutions with small amplitudes $r_{1,2}$. It is also worth mentioning that, although we have considered negative and positive couplings, we have not observed in the populations of Janus oscillators states akin to traveling waves and $\pi$-states, which are collective phenomena characteristic of the interplay between attractive and repulsive interactions~\cite{hong2011kuramoto,sonnenschein2015collective}. The theory developed for random regular networks can also provide insights into the dynamics of networks with mildly heterogeneous degree distributions. In Fig.~\ref{fig3}(a) we superimpose numerical results for ER networks with the branches for the partially synchronized states (Eq.~\ref{eq:partial_synchronization_state_R}) and the critical conditions given by Eqs.~\ref{eq:c_partial_synchronized_state} and~\ref{eq:conditions_limit_cycles}. Interestingly, we see that the dependence of the order parameters is reproduced with good precision by the expressions derived for simpler networks. Boundaries enclosing the breathing SW states in the random regular network also delineate the region with global oscillations for the ER network. Notice also that $\sigma_{c_{1,2}}$ again mark the limits of the partially synchronized branch of $R$; however, no state analogous to the perfectly symmetric SW ($r_{1,2}=1$) is observed in $\sigma_{c_2} < \sigma < \sigma_{c_1}$ for ER networks. \begin{figure}[t!] \centering \includegraphics[width=1.0\columnwidth]{Fig3.pdf} \caption{(Color online) (a) Evolution of order parameters $R$ and $r_{1,2}$ for ER networks with average degree $\langle k \rangle = 40$. Dots are numerical experiments with $N = 2500$ oscillators.
In order to highlight the dynamical similarity between random regular and dense ER networks, critical conditions and branches in this panel are obtained as in Fig.~\ref{fig:doubleRegular}. (b) Synchronization curves of an uncorrelated SF network with $p_k \sim k^{-\gamma}$, where $\gamma = 2.25$, $N = 10^{4}$ oscillators and minimum degree $k_{\min}=30$. Conditions $\sigma^{(\rm{net})}_{1,2}$ are given by Eq.~\ref{eq:conditions_limit_cycles_HMF}. (c) Evolution of the mean-field frequency $\dot{\Theta}$ associated with the total order parameter $R(t)e^{i\Theta(t)} = (r_1 e^{i\psi_1} + r_2 e^{i\psi_2})/2$ for the same SF network of panel (b). Vertical lines in (c) are given by $\sigma^{(\rm{net})}_{1,2}$.}% \label{fig3}% \end{figure} Let us take a step further in the analysis of heterogeneous structures and consider general uncorrelated networks with degree distribution $p_k$. In this case, we assume that nodes with the same degree $k$ admit the same solution, i.e., $\alpha_i = \alpha_k$ if $k_i=k$. Thus, Eqs.~\ref{eq:alphas} are reduced to \begin{eqnarray*} \dot{\alpha}_{k,1}+{\rm i}\omega\alpha_{k,1}+\frac{1}{2}\left[\alpha_{k,1}^{2}\left(\beta\alpha_{k,2}^{*}+\sigma\frac{k}{\langle k\rangle}\sum_{k^{\prime}}k^{\prime}p_{k^{\prime}}\alpha_{k^{\prime},2}^{*}\right)\right.\\ -\left.\left(\beta\alpha_{k,2}+\sigma\frac{k}{\langle k\rangle}\sum_{k^{\prime}}k^{\prime}p_{k^{\prime}}\alpha_{k^{\prime},2}\right)\right] & = & 0, \label{eq:Janus_heterogeneous} \end{eqnarray*} where $\alpha_{k,1}$ describes the dynamics of oscillators with degree $k$ and frequency $\omega_i = \Delta/2$. Equations for the coefficients $\alpha_{k,2}$ standing for the second face of Janus oscillators ($\omega_i = -\Delta/2$) are obtained accordingly.
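Although the linear stability analysis below does not require it, the reduced degree-block system can also be integrated directly. The Python sketch below gives a schematic right-hand side under our reading of the equations above (with the face labels and the sign of $\omega$ swapped for the second face); all variable names and parameter values are ours, introduced purely for illustration.

```python
import numpy as np

def janus_block_rhs(a1, a2, w, beta, sigma, k, pk, kmean):
    """Schematic RHS of the degree-block OA equations: a1[j], a2[j]
    hold alpha_{k,1}, alpha_{k,2} for degree class k[j]; pk is the
    (normalized) degree distribution and kmean = <k>."""
    # degree-weighted mean fields sigma*(k/<k>) * sum_k' k' p_k' alpha_{k',2(1)}
    mf2 = sigma * k / kmean * np.sum(k * pk * a2)
    mf1 = sigma * k / kmean * np.sum(k * pk * a1)
    H1 = beta * a2 + mf2          # drive felt by the first face
    H2 = beta * a1 + mf1          # drive felt by the second face
    da1 = -1j * w * a1 - 0.5 * (a1**2 * np.conj(H1) - H1)
    da2 = +1j * w * a2 - 0.5 * (a2**2 * np.conj(H2) - H2)
    return da1, da2
```

From here a standard integrator (e.g. fourth-order Runge-Kutta) advances $\alpha_{k,1}$, $\alpha_{k,2}$, and the order parameters follow as degree-weighted sums of the coefficients.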
Linearizing the system around $\alpha_{k,1} = \delta \alpha_{k,1} \ll 1$ yields the following variational system \begin{equation} \begin{aligned} \dot{\delta\alpha}=-{\rm i}\omega\delta\bar{\alpha}+\frac{1}{2}\left[\beta+\sigma\frac{\langle k^{2}\rangle}{\langle k\rangle}\right]\delta\alpha\\ \dot{\delta\bar{\alpha}}=-{\rm i}\omega\delta\alpha-\frac{1}{2}\left[\beta+\sigma\frac{\langle k^{2}\rangle}{\langle k\rangle}\right]\delta\bar{\alpha}, \end{aligned} \label{eq:perturbation_HMF} \end{equation} where $\delta \alpha$ is a small perturbation of the complex order parameter $\alpha = \frac{1}{2\langle k \rangle} \sum_k k p_k (\alpha_{k,1} +\alpha_{k,2})$, and $\delta \bar{\alpha}$ is the analogous quantity for the parameter measuring the difference in the internal synchrony of Janus oscillators, i.e., $\bar{\alpha} = \frac{1}{2\langle k \rangle} \sum_k k p_k (\alpha_{k,1} - \alpha_{k,2})$. The eigenvalues of the Jacobian matrix of system (\ref{eq:perturbation_HMF}) become purely imaginary at $\Delta = |\beta + \sigma \langle k^2\rangle /\langle k \rangle|$. Therefore, we predict the appearance of states with global oscillations in the range \begin{equation} \sigma_{c_1}^{(\rm{net})}\equiv-(\Delta+\beta)\frac{\langle k \rangle}{\langle k^2 \rangle}<\sigma<\frac{\langle k \rangle}{\langle k^2 \rangle}(\Delta-\beta)\equiv \sigma_{c_2}^{(\rm{net})}. \label{eq:conditions_limit_cycles_HMF} \end{equation} We check the predictions of the equation above in Fig.~\ref{fig3}(b) for SF networks with power-law exponent $\gamma = 2.25$. At first sight, it seems that the condition $\sigma_{c_1}^{(\rm{net})}$ provides an inaccurate estimate of the region where the order parameters $R$ and $r_{1,2}$ are expected to exhibit erratic behavior, suggesting perhaps that finite-size effects could be behind the deviation between $\sigma_{c_1}^{(\rm net)}$ and the point $\sigma \simeq 0.17$ at which the branch of $R$ collapses to values $R \approx 0$.
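As a side note, the window predicted by Eq.~\ref{eq:conditions_limit_cycles_HMF} follows directly from the first two moments of the degree distribution. The sketch below evaluates it for a truncated power law; the values of $\Delta$ and $\beta$ and the degree cutoff are illustrative, not taken from our simulations.

```python
import numpy as np

def sigma_window(Delta, beta, degrees, pk):
    """sigma_c1^(net), sigma_c2^(net) = (-(Delta+beta), Delta-beta) * <k>/<k^2>."""
    pk = pk / np.sum(pk)                                      # normalize p_k
    ratio = np.sum(degrees * pk) / np.sum(degrees**2 * pk)    # <k>/<k^2>
    return -(Delta + beta) * ratio, (Delta - beta) * ratio

# Truncated power law p_k ~ k^{-2.25} with k_min = 30 (cutoff 10^3 is arbitrary):
k = np.arange(30, 1001, dtype=float)
s1, s2 = sigma_window(Delta=1.0, beta=0.25, degrees=k, pk=k**-2.25)
print(s1, s2)
```

Raising the cutoff (i.e., letting $N \to \infty$ for $2 < \gamma \leq 3$) inflates $\langle k^2 \rangle$ and shrinks the window toward zero, in line with the thermodynamic-limit argument discussed in the text.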
However, Eq.~\ref{eq:conditions_limit_cycles_HMF} refers to the coupling range in which multiple oscillating states are expected to emerge. Visualizing in Fig.~\ref{fig3}(c) the evolution of the mean-field frequency $\dot{\Theta}$, we observe that $\sigma_{c_{1,2}}^{(\rm{net})}$ actually define very accurately the boundaries of the states with multiple oscillating solutions ($\sigma_{c_1}^{\rm{(net)}} < \sigma < \sigma_{c_2}^{\rm{(net)}}$). Given the dependence on $\langle k \rangle /\langle k^2\rangle$, one further envisions from Eq.~\ref{eq:conditions_limit_cycles_HMF} the absence of such oscillating states in the thermodynamic limit for SF networks with $2 < \gamma \leq 3$, since $\sigma_{c_{1,2}}^{\rm{(net)}}$ are expected to vanish as $N \rightarrow \infty$, similarly to the classical coupling for the onset of synchronization in such structures~\cite{peron2019onset,rodrigues2016kuramoto}. In conclusion, we have explored the collective dynamics of Janus oscillators on large homogeneous and heterogeneous random networks. By employing the scheme provided by the OA ansatz, we obtained, for random regular networks, a reduced set of equations whereby the critical points of the dynamics were revealed. We found that several collective behaviors coexist for intermediate coupling values, elucidating the findings in~\cite{nicolaou2019multifaceted}. Although initially obtained for homogeneous networks, the solutions of the reduced system were verified to accurately fit numerical experiments for dense ER networks. By analyzing the stability of the incoherent state for general uncorrelated networks, we further verified that the coupling range for which global oscillations are possible shrinks in the thermodynamic limit of SF networks. It is pertinent to mention that the accuracy of the OA ansatz in predicting the transition points deteriorates for $\sigma$ and $\beta$ values beyond the region depicted in Fig.~\ref{fig:doubleRegular}.
Deviations of the $r_{1,2}$ solutions from the temporal signature yielded by the reduced system were also observed in simulations for some couplings $(\beta,\sigma)$ in the breathing SW area. All in all, we have provided the first theoretical and numerical analysis of ensembles of Janus oscillators on homogeneous and heterogeneous random networks. As such, our work raises further interesting questions about the study initiated by Nicolaou et al.~\cite{nicolaou2019multifaceted}. For instance, future investigations should target the dynamics on sparse and correlated networks -- situations in which the ensemble approach in~\cite{barlev2011dynamics} and mean-field techniques are inaccurate in predicting tipping points of the system -- as well as limitations of the OA manifold in capturing the Janus dynamics. TP acknowledges FAPESP (Grants No. 2016/23827-6 and 2018/15589-3). DE acknowledges a Kadir Has University internal Scientific Research Grant (BAF). FAR acknowledges CNPq (Grant No. 305940/2010-4) and FAPESP (Grants No. 2016/25682-5 and 2013/07375-0) for the financial support given to this research. YM acknowledges partial support from the Government of Aragon, Spain, through grant E36-17R (FENOL), by MINECO and FEDER funds (FIS2017-87519-P) and by Intesa Sanpaolo Innovation Center. This research was carried out using the computational resources of the Center for Mathematical Sciences Applied to Industry (CeMEAI) funded by FAPESP (grant 2013/07375-0). The funders had no role in study design, data collection and analysis, or preparation of the manuscript.
\section*{Contents} \vspace{0.018cm} \hspace*{-0.39cm}I.\,\,\hspace{0.01cm}Introduction \hfill 3 \\ \vspace*{-0.33cm}\\ \hspace{0.15cm}II.\,\,\hspace{0.01cm}Formalism \hfill 7 \\ \hspace*{0.70cm}A.\,\,\hspace{0.01cm}The operator product expansion \hfill 7 \\ \hspace*{0.70cm}B.\,\,\hspace{0.01cm}The OPE of the single-particle Green's function for general values of $a$ \hfill 8 \\ \hspace*{0.70cm}C.\,\,\hspace{0.01cm}Three-body scattering amplitude \hfill 9 \\ \hspace*{0.70cm}D.\,\,\hspace{0.01cm}The OPE of the single-particle Green's function in the unitary limit \hfill 10 \\ \hspace*{0.70cm}E.\,\,\,\hspace{0.01cm}Derivation of the sum rules \hfill 12 \\ \hspace*{0.70cm}F.\,\,\,\,\hspace{0.01cm}Choice of the kernel $\mathcal{K}(\omega)$ \hfill 14 \\ \vspace*{-0.33cm}\\ \hspace{0.00cm}III.\,\,\hspace{0.01cm}MEM analysis for the spectral density \hfill 18 \\ \hspace*{0.70cm}A.\,\,\hspace{0.01cm}The Borel window and the default model \hfill 18 \\ \hspace*{0.70cm}B.\,\,\hspace{0.01cm}The single-particle spectral density \hfill 19 \\ \vspace*{-0.33cm}\\ \hspace{0.03cm}IV.\,\,\hspace{0.01cm}Summary and conclusion \hfill 23 \\ \vspace*{-0.33cm}\\ \hspace{0.00cm}Appendix A.\,\,\hspace{0.01cm}Numerical solution of $T^{\mathrm{reg}}_{\up}(k,0;k,0)$ in the unitary limit \hfill 25 \\ \hspace{0.00cm}Appendix B.\,\,\hspace{0.01cm}Derivation of the sum rules for a generic kernel \hfill 27 \\ \hspace{0.00cm}Appendix C.\,\,\hspace{0.01cm}Finite energy sum rules for the unitary Fermi gas \hfill 34 \\ \hspace{0.00cm}Appendix D.\,\,\hspace{0.01cm}The maximum entropy method \hfill 40 \\ \vspace*{-0.33cm}\\ \hspace{0.00cm}References \hfill 42 \clearpage \section{\label{Intro} Introduction} The unitary Fermi gas, consisting of non-relativistic fermionic particles of two species with equal mass, has been studied intensively during the last decade \cite{Bloch,Giorgini,Zwerger}. 
The growing interest in this system was prompted especially by the ability to tune the interaction between different fermionic species in ultracold atomic gases through a Feshbach resonance by varying an external magnetic field. This technique allows one to bring the two-body scattering length of the two species to infinity and therefore makes it possible to study the unitary Fermi gas experimentally. Using photoemission spectroscopy, the measurement of the elementary excitations of ultracold atomic gases has in recent years become a realistic possibility \cite{Dao,Stewart}. Understanding these elementary excitations from a theoretical point of view is hence important and a number of studies devoted to this topic have already been carried out \cite{Haussmann,Magierski,Carlson2}. We will in this work propose a new and independent method for computing the single-particle spectral density of the unitary Fermi gas, which makes use of the operator product expansion (OPE). The OPE, which was originally proposed in the late sixties independently by Wilson, Kadanoff and Polyakov \cite{Wilson,Kadanoff,Polyakov}, has proven to be a powerful tool for analyzing processes related to QCD (Quantum Chromodynamics), for which simple perturbation theory fails in most cases. The reason for this is the ability of the OPE to incorporate non-perturbative effects into the analysis as expectation values of a series of operators, which are ordered according to their scaling dimensions. Perturbative effects can on the other hand be treated as coefficients of these operators (the ``Wilson-coefficients''). The OPE has specifically been used to study deep inelastic scattering processes \cite{Muta} and has especially played a key role in the formulation of the so-called QCD sum rules \cite{Shifman1,Shifman2}.
In recent years, it was noted that the OPE can also be applied to strongly coupled non-relativistic systems such as the unitary Fermi gas \cite{Braaten1,Braaten2,Braaten3,Braaten4,Braaten5,Son,Barth,Hofmann,Goldberger,Goldberger2,Nishida,Golkar,Goldberger3}. Initially, the OPE was used to rederive some of the Tan-relations \cite{Tan1,Tan2,Tan3} in a natural way \cite{Braaten1} and, for instance, to study the dynamic structure factor of unitary fermions in the large energy and momentum limit \cite{Son,Goldberger}. Furthermore, the OPE for the single-particle Green's function of the unitary Fermi gas was computed by one of the present authors \cite{Nishida} up to operators of momentum dimension 5, from which the single-particle dispersion relation was extracted. As the OPE is an expansion at small distances and times (or large momenta and energies), the result of such an analysis can be expected to give the correct behavior in the large momentum limit and is bound to become invalid at small momenta. The analysis of \cite{Nishida} confirmed this, but in addition somewhat surprisingly showed that the OPE is valid for momenta as small as the Fermi momentum $k_{\mathrm{F}}$, where the OPE still shows good agreement with the results obtained from quantum Monte-Carlo simulations \cite{Magierski}. The purpose of this paper is to extend this analysis to smaller momenta, by making use of the techniques of QCD sum rules, which have traditionally been employed to study hadronic spectra from the OPE applied to Green's functions in QCD. Our general strategy goes as follows: \begin{itemize} \item Step 1: Construct OPE At first, we need to obtain the OPE for the single-particle Green's function $\G_{\up}(k_0,\k)$ in the unitary limit, which can be rewritten as an expansion of the single-particle self-energy $\Sigma_{\up}(k_0,\k)$. The subscript $\up$ here represents the spin-up fermions. The main work of this step has already been carried out in \cite{Nishida}. 
$\Sigma_{\up}(k_0,\k)$ can be considered to be an analytic function on the complex plane of the energy variable $k_0$, with the exception of possible cuts and poles on the real axis. Considering the OPE at $T=0$, with equal densities for both fermionic species ($n_{\up} = n_{\down}$) and taking into account operators up to momentum dimension 5, the only parameters appearing in the OPE are the Bertsch parameter and the contact density, which are by now well known from both experimental measurements \cite{Ku,Zurn,Hoinka} and theoretical quantum Monte-Carlo calculations \cite{Carlson,Gandolfi}. \item Step 2: Derive sum rules From the fact that the OPE is valid at large $|k_0|$ and the analytic properties of the self-energy, a general class of sum rules for $\mathrm{Im} \Sigma_{\up}(\omega,\bm{k})$ can be derived. In contrast to the complex $k_0$, $\omega$ here is a real parameter. These sum rules are relations between certain weighted integrals of $\mathrm{Im} \Sigma_{\up}(\omega,\bm{k})$ and corresponding analytical expressions that can be obtained from the OPE result (for details see Section \ref{Formalism}): \begin{equation} D^{\mathrm{OPE}}_{\up}(M,\bm{k}) = \int^{\infty}_{-\infty}d\omega \mathcal{K}(\omega, M) \mathrm{Im} \Sigma_{\up}(\omega,\bm{k}). \label{eq:intro.sr} \end{equation} The kernel $\mathcal{K}(\omega, M)$ here must be an analytic function that is real on the real axis of $\omega$ and falls off to zero quickly enough at $\omega \to +\infty$, while $M$ is some general parameter that characterizes the form of the kernel. In the practical calculations of this paper, we will use the so-called Borel kernels of the form $\mathcal{K}_n(\omega, M) = (\omega/M)^n e^{-\omega^2/M^2}$. 
\item Step 3: Extract $\mathrm{Im} \Sigma_{\up}(\omega,\bm{k})$ via MEM and obtain $\mathrm{Re}\Sigma_{\up}(\omega,\bm{k})$ from the Kramers-Kronig relation As a next step, we use the maximum entropy method (MEM) to extract the most probable form of $\mathrm{Im} \Sigma_{\up}(\omega,\bm{k})$ from the sum rules, following an approach proposed in \cite{Gubler} for the QCD sum rule case. It should be mentioned here that this method is somewhat different from the analysis procedure most commonly employed in QCD sum rule studies, where the spectral function (which corresponds to $\mathrm{Im} \Sigma_{\up}$ here) is parametrized using a simple functional ansatz with a small number of parameters which are then fitted to the sum rules. This method has traditionally worked well if some sort of prior knowledge on the spectral function is available and assumptions on its form can thus be justified. On the other hand, in cases where one does not really know what specific form the spectral function can be expected to have, sum rule analyses based on (potentially incorrect) assumptions on the spectral shape always involve the danger of giving ambiguous and even misleading results. MEM is therefore our method of choice, as it allows us to analyze the sum rules without making any strong assumption on the functional form of the spectral function and hence makes it possible to pick the most probable spectral shape among an infinitely large number of choices. Once $\mathrm{Im} \Sigma_{\up}(\omega,\bm{k})$ is obtained from the MEM analysis of the sum rules, it is a simple matter to compute $\mathrm{Re}\Sigma_{\up}(\omega,\bm{k})$ by the Kramers-Kronig relation, \begin{equation} \mathrm{Re}\Sigma_{\up}(\omega,\bm{k}) = -\frac{1}{\pi} \mathrm{P} \int_{-\infty}^{\infty} d\omega' \frac{\mathrm{Im} \Sigma_{\up}(\omega',\bm{k})}{\omega - \omega'}.
\label{eq:Kram.Kroe} \end{equation} \item Step 4: Compute single-particle spectral density From the real and imaginary parts of the self-energy, the single-particle spectral density can then be obtained as, \begin{equation} A_{\up}(\omega,\bm{k}) = -\frac{1}{\pi} \mathrm{Im} \frac{1}{\omega + i0^{+}-\epsilon_{\bm{k}}-\Sigma_{\up}(\omega+i0^{+},\bm{k})}, \label{eq:spe} \end{equation} where $\epsilon_{\bm{k}}$ is defined as $\epsilon_{\bm{k}} = \bm{k}^2/(2m)$, with $m$ being the fermion mass. \end{itemize} The above steps are shown once more in pictorial form in Fig. \ref{fig:steps}. \begin{figure} \centering \vspace*{1cm} \input{step123456.eps_tex} \caption{\label{fig:steps} Steps for computing the single-particle spectral density from the OPE of the single-particle Green's function of a fermionic operator.} \end{figure} As a result of the above procedure, we find a two-peak structure in the imaginary part of the self-energy, with the two peaks moving away from the origin ($\omega=0$) in the positive and negative energy directions with increasing momentum $|\bm{k}|$. Translated to the single-particle spectral density, this leads to a typical superfluid BCS-Bogoliubov-like dispersion relation with both hole and particle branches and a nonzero gap value. The paper is organized as follows. In Section \ref{Formalism}, we discuss the OPE of the single-particle Green's function and explain how it can be rewritten as an expansion of the single-particle self-energy. Next, we outline the derivation of the sum rules from the OPE. In Section \ref{Analysis}, the MEM analysis results of the sum rules are shown and the resulting final form of the single-particle spectral density and the dispersion relation are presented. The spectral density is visualized in Fig. \ref{fig:density.plot} as a density plot and the detailed numerical properties of the dispersion relation are described in Table \ref{tab:disp.para}.
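The numerical content of Steps 2 and 4, together with the Kramers-Kronig part of Step 3, can be sketched in a few lines. The Python fragment below exercises them on a toy single-pole self-energy, which is purely illustrative and not the paper's result; the MEM inversion itself is far more involved and is not shown.

```python
import numpy as np

w = np.arange(-40.0, 40.0, 0.02)       # frequency grid
dw = w[1] - w[0]
eta = 0.5
im_sigma = -eta / (w**2 + eta**2)      # toy Im Sigma (single pole at w = 0)

def borel_moment(im_s, M, n):
    """Step 2: D(M) = int dw (w/M)^n exp(-w^2/M^2) Im Sigma(w)."""
    return np.sum((w / M)**n * np.exp(-(w / M)**2) * im_s) * dw

def kramers_kronig(im_s):
    """Step 3 (final part): Re Sigma(w) = -(1/pi) P int dw' Im Sigma(w')/(w - w'),
    handling the principal value by skipping the singular grid point."""
    re_s = np.empty_like(im_s)
    for i in range(len(w)):
        mask = np.abs(w - w[i]) > 0.5 * dw
        re_s[i] = -dw / np.pi * np.sum(im_s[mask] / (w[i] - w[mask]))
    return re_s

def spectral_density(eps_k, re_s, im_s):
    """Step 4: A(w,k) = -(1/pi) Im 1/(w - eps_k - Re Sigma - i Im Sigma)."""
    return -im_s / np.pi / ((w - eps_k - re_s)**2 + im_s**2)

re_sigma = kramers_kronig(im_sigma)
A = spectral_density(1.0, re_sigma, im_sigma)   # eps_k = 1 in toy units
print(np.sum(A) * dw)                           # total spectral weight, close to 1
```

For the toy choice $\Sigma = 1/(\omega + i\eta)$ the transform can be checked against the exact real part $\omega/(\omega^2+\eta^2)$, and the recovered spectral weight is close to one up to discretization and truncation errors.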
Section \ref{Summary} is devoted to the summary and conclusions of the paper. For the interested reader, we provide in the appendices detailed accounts of the relevant calculations, which were needed for this work. \section{\label{Formalism} Formalism} \subsection{\label{OPE} The operator product expansion} The operator product expansion (OPE) is based on the observation that a general product of non-local operators can be expanded as a series of local operators. This can be expressed as \begin{equation} \mathcal{O}_i(x + \tfrac{1}{2}y) \mathcal{O}_j(x - \tfrac{1}{2} y) = \sum_{k} W_{\mathcal{O}_k}(y) \mathcal{O}_k(x). \label{eq:OPE.exp} \end{equation} Here, we have used the abbreviations $(x)=(x_0,\,\bm{x})$ and $(y)=(y_0,\,\bm{y})$ for the four-dimensional vectors. $W_{\mathcal{O}_k}(y)$ are the Wilson-coefficients, which only depend on the relative time and distance $y$ of the two original operators. The operators on the right-hand side of Eq.(\ref{eq:OPE.exp}) are ordered according to their scaling dimensions $\Delta_k$, in ascending order. This expansion works for small time differences (or small distances), as the Wilson coefficients behave as $(\sqrt{|y_0|})^{\Delta_k - \Delta_i - \Delta_j}$ ($|\bm{y}|^{\Delta_k - \Delta_i - \Delta_j}$), so that operators with larger scaling dimensions are suppressed by higher powers of $\sqrt{|y_0|}$ ($|\bm{y}|$). Fourier transforming Eq.(\ref{eq:OPE.exp}), the above statement is translated into energy-momentum space, where the OPE is a good approximation in the large energy or momentum limit, as operators with larger scaling dimensions are suppressed by higher powers of $1/\sqrt{|k_0|}$ ($1/|\bm{k}|$). For the above expansion to work in the context of a non-relativistic atomic gas, certain conditions have to be satisfied.
Firstly, it is important that the potential range $r_0$ of the atomic interaction is much smaller than all other length scales of the system, so that the detailed form of the interaction becomes irrelevant. Furthermore, the energy or momentum scale at which the system is probed needs to be much larger than the corresponding typical scales of the system. Hence, for the OPE to be a useful expansion, the following separation of scales, satisfied by either $1/\sqrt{|k_0|}$ or $1/|\k|$, must hold: \begin{equation} r_0 \ll 1/\sqrt{|k_0|},\,\,1/|\k| \ll |a|,\,n^{-1/3}_{\sigma},\,\lambda_T. \label{eq:conditionOPE} \end{equation} Here, $a$ is the s-wave scattering length between spin-up and -down fermions, $n^{-1/3}_{\sigma}$ the mean interparticle distance of both fermionic species, and $\lambda_T \sim 1/\sqrt{mT}$ the thermal de Broglie wave length. In other words, $\sqrt{|k_0|}$ or $|\k|$ have to be large enough so that, for example, an expansion in $1/(a\sqrt{|k_0|})$, $n^{1/3}_{\sigma}/\sqrt{|k_0|}$ and $1/(\lambda_T \sqrt{|k_0|})$ is valid, while they should still be small enough not to probe the actual structure of the individual atoms. In practice, we will in this work take the zero-range limit $r_0 \to 0$, study the system at vanishing temperature $T=0$ and will in the course of the derivation of the sum rules take the unitary limit $a \to \infty$. Furthermore, for studying the detailed momentum dependence of the spectral density, we will in the following discussion make use of an expansion in $1/\sqrt{|k_0|}$, but not in $1/|\bm{k}|$. $|\bm{k}|$ will instead always be kept at the order of the Fermi momentum of the studied system. \subsection{\label{OPEforgenerala} The OPE of the single-particle Green's function for general values of $a$} In this paper, we will employ the OPE of the single-particle Green's function, which was computed in \cite{Nishida}.
Let us here briefly recapitulate this result and discuss its form rewritten as an expansion of the self-energy $\Sigma_{\up}(k_0,\bm{k})$. The starting point of the calculation is \begin{equation} i\G_{\up}(k) \equiv \int \! dy\,e^{iky} \langle T[\psi_\up(x+\tfrac{y}2)\psi_\up^\+(x-\tfrac{y}2)] \rangle = \frac{i}{k_0-\ek-\Sigma_{\up}(k)}, \label{eq:Greensfunction} \end{equation} where $k$ should be understood as $(k)=(k_0,\k)$. The OPE for $\G_{\up}(k)$ can then be carried out, as discussed in detail in \cite{Nishida}. If translational and rotational invariance holds, all sorts of currents vanish and the OPE expression (taking into account terms up to momentum dimension 5) can be simplified as follows: \begin{equation} \begin{split} \G_{\up}^{\mathrm{OPE}}(k) =&\: G(k) - G^2(k) A(k) n_{\down} -\frac{\mathcal{C}}{4 \pi m a} G^2(k) \frac{\partial}{\partial k_0} A(k) - \frac{\mathcal{C}}{m^2} G^2(k) T^{\mathrm{reg}}_{\up}(k,0;k,0) \\ & - G^2(k) \Big[\frac{\partial}{\partial k_0} A(k) + \frac{m}{3} \sum_{i=1}^{3} \frac{\partial^2}{\partial k^2_i} A(k) \Big] \int \frac{d \bm{q}}{(2 \pi)^3} \frac{\bm{q}^2}{2m} \Big[\rho_{\down}(\bm{q}) - \frac{\mathcal{C}}{\bm{q}^4} \Big]. \end{split} \label{eq:GreensfunctionOPE} \end{equation} Here, $G(k)$ is the free fermion propagator, \begin{equation} G(k) = \frac{1}{k_0 - \epsilon_{\bm{k}}}, \label{eq:ferm.prop} \end{equation} $A(k)$ represents the two-body scattering amplitude between spin-up and -down fermions, \begin{equation} A(k) = \frac{4 \pi}{m} \frac{1}{\sqrt{\tfrac{\bm{k}^2}4 - m k_0 } -1/a}, \label{eq:scatt.amp} \end{equation} and $T^{\mathrm{reg}}_{\up}(k,p;k',p')$ stands for the regularized three-body scattering amplitude of a spin-up fermion with initial (final) momentum $k$ ($k'$) and a dimer with initial (final) momentum $p$ ($p'$). 
Here, ``regularized'' means that infrared divergences originally appearing in the three-body scattering amplitude have been subtracted (see Sections III C and III F of \cite{Nishida}): \begin{equation} T^{\mathrm{reg}}_{\up}(k,0;k,0) \equiv T_{\up}(k,0;k,0) - A(k) \int \frac{d \bm{q}}{(2 \pi)^3} \frac{m^2}{\bm{q}^4}. \label{eq:def.reg.three.amp} \end{equation} Furthermore, $\rho_{\sigma}(\q)$ is the momentum distribution function of spin-$\sigma$ fermions, $n_{\down}$ the density of spin-down fermions and $\mathcal{C}$ the so-called contact density \cite{Tan1,Tan2,Tan3}. Comparing Eq.(\ref{eq:GreensfunctionOPE}) with the definition of the self-energy of Eq.(\ref{eq:Greensfunction}), one can easily find an expression for $\Sigma _{\up}(k)$, which (again up to terms with momentum dimension 5) is consistent with the OPE of the single-particle Green's function: \begin{equation} \begin{split} \Sigma_{\up}^{\mathrm{OPE}}(k) =& - A(k) n_{\down} - \frac{\mathcal{C}}{4 \pi m a} \frac{\partial}{\partial k_0} A(k) - \frac{\mathcal{C}}{m^2} T^{\mathrm{reg}}_{\up}(k,0;k,0) \\ & - \Big[ \frac{\partial}{\partial k_0} A(k) + \frac{m}{3} \sum_{i=1}^{3} \frac{\partial^2}{\partial k^2_i} A(k) \Big] \int \frac{d \bm{q}}{(2 \pi)^3} \frac{\bm{q}^2}{2m} \Big[\rho_{\down}(\bm{q}) - \frac{\mathcal{C}}{\bm{q}^4} \Big]. \end{split} \label{eq:SelfenergyOPE} \end{equation} Assuming the considered system to be spin symmetric [$\rho_{\up}(\q) = \rho_{\down}(\q)$], the integral of the momentum distribution function appearing in the above equation can be evaluated by one of the Tan-relations \cite{Tan1,Tan2,Tan3}, \begin{equation} \sum_{\sigma=\up,\down} \int \frac{d \bm{q}}{(2 \pi)^3} \frac{\bm{q}^2}{2m} \Big[\rho_{\sigma}(\bm{q}) - \frac{\mathcal{C}}{\bm{q}^4} \Big] = \mathcal{E} + \frac{\mathcal{C}}{4 \pi m a}, \label{eq:Tan.rel.OPE} \end{equation} where $\mathcal{E}$ is the energy density of the system.
We hence get, \begin{equation} \begin{split} \Sigma_{\up}^{\mathrm{OPE}}(k) =& - A(k) n_{\down} - \frac{\mathcal{C}}{4 \pi m a} \frac{\partial}{\partial k_0} A(k) - \frac{\mathcal{C}}{m^2} T^{\mathrm{reg}}_{\up}(k,0;k,0) \\ & - \frac{1}{2} \Big[ \frac{\partial}{\partial k_0} A(k) + \frac{m}{3} \sum_{i=1}^{3} \frac{\partial^2}{\partial k^2_i} A(k) \Big] \Big( \mathcal{E} + \frac{\mathcal{C}}{4 \pi m a} \Big). \end{split} \label{eq:SelfenergyOPE2} \end{equation} Among the various terms appearing in Eq.(\ref{eq:SelfenergyOPE2}), the most involved piece to evaluate is the three-body scattering amplitude $T^{\mathrm{reg}}_{\up}(k,0;k,0)$, which will be studied next in a separate subsection. \subsection{Three-body scattering amplitude} The difficulty in obtaining $T^{\mathrm{reg}}_{\up}(k,0;k,0)$ stems from the fact that this scattering amplitude by itself does not solve a closed integral equation and therefore can not be computed directly. We thus have to use $T_{\up}(k,0;p,k-p)$ with a more general momentum dependence, which will, for simplicity of notation, from now on be denoted as $T_{\up}(k;p)$. $T_{\up}(k;p)$ satisfies the following integral equation (note that we for the moment work with the non-regularized version of the amplitude): \begin{equation} \begin{split} T_\up(k;p) =&\: G(-p) + i\int\!\frac{d q_0 d\q}{(2\pi)^4}\, T_\up(k;q)G(q)A(k-q)G(k-p-q) \\ =& -\frac1{p_0+\ep} \\ &- \int\!\frac{d\q}{(2\pi)^3} \frac{4\pi}{\frac12\sqrt{3\q^2-2\q\cdot\k+\k^2-4mik_0}-\frac1a} \frac{T_\up(k;\eq,\q)}{\frac{(\p+\q-\k)^2}2+m p_0+\frac{\q^2}2-m k_0}. \end{split} \label{eq:scatt.amp.1} \end{equation} In going to the second and third lines, the integral over $q_0$ is performed and thus $q_0$ is fixed to $\eq$. 
Next, setting $p_0 = \ep$ provides a closed equation, \begin{align} T_\up(k;\ep,\p) = -\frac{m}{\p^2} - \int\!\frac{d\q}{(2\pi)^3} \frac{4\pi}{\frac12\sqrt{3\q^2-2\q\cdot\k+\k^2-4m k_0}-\frac1a} \frac{T_\up(k;\eq,\q)}{\frac{(\p+\q-\k)^2}2+\frac{\p^2+\q^2}2-m k_0}, \label{eq:scatt.amp.2} \end{align} which needs to be solved numerically. The technical details of this step are presented in Appendix \ref{ScattAmp}. Once the above equation is solved and $T_\up(k;\ep,\p)$ has hence been obtained, one can extract the desired amplitude $T_{\up}(k;k)$ from Eq.(\ref{eq:scatt.amp.1}) by setting $p=k$: \begin{equation} \begin{split} T_\up(k;k) &= -\frac1{k_0+\ek} - \int\!\frac{d\q}{(2\pi)^3} \frac{4\pi}{\frac12\sqrt{3\q^2-2\q\cdot\k+\k^2-4m k_0}-\frac1a} \frac{T_\up(k;\eq,\q)}{\q^2} \\ &= -\frac1{k_0+\ek} + \int\!\frac{d\q}{(2\pi)^3} \frac{4\pi}{\frac12\sqrt{3\q^2-2\q\cdot\k+\k^2-4m k_0}-\frac1a} \frac{m}{\q^4} \\ &\quad - \int\!\frac{d\q}{(2\pi)^3} \frac{4\pi}{\frac12\sqrt{3\q^2-2\q\cdot\k+\k^2-4m k_0}-\frac1a} \frac{T_\up(k;\eq,\q)+\frac{m}{\q^2}}{\q^2}. \end{split} \label{eq:scatt.amp.3} \end{equation} Finally, returning to the regularized scattering amplitude $T^{\mathrm{reg}}_{\up}(k,0;k,0) = T^{\mathrm{reg}}_{\up}(k;k)$ [defined in Eq.(\ref{eq:def.reg.three.amp})], we get, \begin{equation} \begin{split} &\: T_\up^\reg(k;k) \\ =&\:T_\up(k;k) - A(k)\int\!\frac{d\q}{(2\pi)^3}\left(\frac{m}{\q^2}\right)^2 \\ =& -\frac1{k_0+\ek} + \int\!\frac{d\q}{(2\pi)^3} \left[\frac{4\pi}{\frac12\sqrt{3\q^2-2\q\cdot\k+\k^2-4m k_0}-\frac1a} - \frac{4\pi}{\frac12\sqrt{\k^2-4m k_0}-\frac1a}\right] \frac{m}{\q^4} \\ & - \int\!\frac{d\q}{(2\pi)^3} \frac{4\pi}{\frac12\sqrt{3\q^2-2\q\cdot\k+\k^2-4m k_0}-\frac1a} \frac{T_\up(k;\eq,\q)+\frac{m}{\q^2}}{\q^2}. 
\end{split} \label{eq:scatt.amp.4} \end{equation} \subsection{\label{OPE.unitary.limit} The OPE of the single-particle Green's function in the unitary limit} So far, we have studied the OPE for arbitrary values of the s-wave scattering length $a$ between the two spin degrees of freedom (which should however be kept large enough for the conditions of a valid OPE to apply). One could in principle continue with these general expressions, derive sum rules for nonzero $a^{-1}$ values and analyze them according to our strategy outlined in the introduction. In order to provide a clear account of the proposed method, we will however not do this here but concentrate on the unitary limit ($a \to \infty$), which considerably simplifies many of the equations needed to derive the sum rules, but already exhibits all non-trivial technical difficulties that will arise in an analogous, but more involved manner when generalizing the calculations to nonzero $a^{-1}$. Firstly, looking at the unitary limit of the OPE result of Eq.(\ref{eq:SelfenergyOPE2}), the terms proportional to $a^{-1}$ vanish and the factor containing derivatives of $A(k)$ can be obtained in a simple form: \begin{equation} \frac{\partial}{\partial k_0} A(k) + \frac{m}{3} \sum_{i=1}^{3} \frac{\partial^2}{\partial k^2_i} A(k) = \frac{2^{5/2} \pi}{m^{3/2}} \frac{\ek - k_0}{(\ek - 2 k_0)^{5/2}}. \label{eq:unitarylimit1} \end{equation} As for the calculation of the three-body scattering amplitude $T^{\mathrm{reg}}_{\up}(k,0;k,0)$, the integral equation of Eq.(\ref{eq:scatt.amp.2}) is made slightly more manageable because of a vanishing $a^{-1}$ term in the first denominator of the integrand on the right-hand side. 
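The closed equation for $T_\up(k;\ep,\p)$ is a Fredholm integral equation of the second kind, and the standard numerical route is a Nystrom-type discretization: collocate on a quadrature grid and solve the resulting linear system. The Python sketch below illustrates the scheme on a generic toy kernel with a known constant solution, not the actual three-body kernel treated in Appendix \ref{ScattAmp}.

```python
import numpy as np

def nystrom_solve(f, kernel, grid, weights):
    """Solve T(p) = f(p) + int dq K(p, q) T(q) by collocation:
    replace the integral with a quadrature sum and solve
    the linear system (I - K W) T = f."""
    K = kernel(grid[:, None], grid[None, :]) * weights[None, :]
    return np.linalg.solve(np.eye(len(grid)) - K, f(grid))

# Toy check: T(p) = 1 + (1/2) int_0^1 T(q) dq has the constant solution T = 2.
n = 200
grid = (np.arange(n) + 0.5) / n          # midpoint rule on [0, 1]
weights = np.full(n, 1.0 / n)
T = nystrom_solve(lambda p: np.ones_like(p),
                  lambda p, q: 0.5 * np.ones_like(p * q),
                  grid, weights)
print(np.allclose(T, 2.0))   # True
```

For the actual amplitude one would substitute the kernel and inhomogeneous term of the closed equation above, with a grid and quadrature adapted to the slowly decaying integrand.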
The regularized scattering amplitude itself, given in Eq.(\ref{eq:scatt.amp.4}), also simplifies as the integral appearing in its second term [see Eq.(\ref{eq:scatt.amp.4})] can now be performed analytically: \begin{equation} \begin{split} \int\!\frac{d\q}{(2\pi)^3} \left[\frac{4\pi}{\frac12\sqrt{3\q^2-2\q\cdot\k+\k^2-4m k_0}} - \frac{4\pi}{\frac12\sqrt{\k^2-4m k_0}}\right] \frac{m}{\q^4}\\ =\frac{1}{\pi} \Bigg[ \frac{\sqrt{3}}{2k_0 - \ek} + \frac{3k_0 - \ek}{\sqrt{\ek}(\ek - 2k_0)^{3/2}} \log \Bigg(\frac{1 + \sqrt{3} \sqrt{1 - 2k_0 /\ek}}{-1 + \sqrt{3} \sqrt{1 - 2k_0 / \ek}} \Bigg) \Bigg]. \end{split} \label{eq:integral1} \end{equation} For a spin-symmetric system, making use of the equations of motion and Tan-relations, it is possible to express the expectation values of the local operators appearing in the OPE in terms of particle density $n_{\down}$, energy density $\E$, and contact density $\C$ [see Eq.(\ref{eq:SelfenergyOPE2})]. In the unitary limit, these quantities only depend on one single scale, which determines the properties of the system. Here, we define the Fermi momentum and the Fermi energy by $n_\up=n_\down\equiv\kF^3/(6\pi^2)$ and $\eF\equiv\kF^2/(2m)$. At infinite scattering length $a\to\infty$ (and zero temperature $T=0$), $\E$ and $\C$ are then given as \begin{align} \E = \xi\,\frac{\kF^5}{10\pi^2m}, \qquad\qquad \C = \zeta\,\frac{\kF^4}{3\pi^2}. \end{align} These values have by now been extracted from both theoretical quantum Monte-Carlo simulations and experimental measurements, which give consistent results, as shown in Table \ref{tab:input.para}. \begin{table} \begin{center} \caption{Numerical values of the Bertsch parameter $\xi$ and the dimensionless contact density $\zeta$ in the unitary limit at zero temperature. 
The column ``simulation'' gives numbers extracted from quantum Monte-Carlo simulations, while the column ``experiment'' contains values from ultracold-atom experiments.} \vspace{0.2cm} \label{tab:input.para} \begin{tabular}{lll} \hline & \hspace{1cm} simulation & experiment \\ \hline $\xi$ & \hspace{1cm} 0.372(5) \,\,\cite{Carlson} \hspace{1.0cm}& 0.370(5)(8) \,\, \cite{Ku,Zurn} \\ $\zeta$ & \hspace{1cm} 3.40(1) \,\,\cite{Gandolfi} \hspace{1.0cm} & 3.33(7) \,\, \cite{Hoinka} \\ \hline \end{tabular} \end{center} \end{table} In the specific analyses presented in this paper, we will use the values obtained from quantum Monte-Carlo studies (denoted as ``simulation'' in Table \ref{tab:input.para}). Assembling all the results and definitions of the last two subsections, we reach the following final form for the OPE in the unitary limit, \begin{equation} \begin{split} \Sigma^{\mathrm{OPE}}_{\up}(k_0, \bm{k}) = &-\frac{8}{3\pi} \eF^{3/2} \frac{1}{\sqrt{\ek - 2 k_0}} +\frac{4}{3\pi^2} \zeta \eF^2 \Biggl[ \frac{1}{k_0 + \ek } - \frac{\sqrt{3}}{\pi} \frac{1}{2 k_0 - \ek } \\ &- \frac{1}{\pi} \frac{3k_0 - \ek}{\sqrt{\ek}(\ek - 2k_0)^{3/2}} \log \Bigg(\frac{1 + \sqrt{3} \sqrt{1 - 2k_0 /\ek}}{-1 + \sqrt{3} \sqrt{1 - 2k_0 / \ek}} \Bigg) +\frac{1}{\ek} L\bigl(\tfrac{k_0}{\ek}\bigr) \Biggr] \\ &- \frac{8}{5\pi} \xi \eF^{5/2} \frac{\ek - k_0}{(\ek - 2 k_0)^{5/2}} +O(k_0^{-2}), \end{split} \label{eq:OPE1} \end{equation} where we, for simplicity of notation, have introduced the function $L(x)$, which is defined as: \begin{equation} L\bigl(\tfrac{k_0}{\ek}\bigr) = \ek \int\!\frac{d\q}{(2\pi)^3} \frac{4\pi}{\frac12\sqrt{3\q^2-2\q\cdot\k+\k^2-4m k_0}} \frac{T_\up(k;\eq,\q)+\frac{m}{\q^2}}{\q^2}. \label{eq:definition1} \end{equation} Note that we here have made use of the fact that $L(x)$ is dimensionless and hence can only depend on the ratio $k_0/\ek$. As mentioned earlier, $L(x)$ can be obtained by solving Eq.(\ref{eq:scatt.amp.2}) and substituting the result into the above definition.
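Numerically, integral equations like Eq.(\ref{eq:scatt.amp.2}) are solved by discretizing the integral on a quadrature grid and solving the resulting linear system (the Nystr\"om method). The sketch below is hedged to a toy Fredholm equation of the second kind with a separable kernel $k(p,q)=pq$ on $[0,1]$ and known exact solution $T(p)=1+3p/4$; the actual kernel, grid and contour treatment used for Eq.(\ref{eq:scatt.amp.2}) are those of Appendix \ref{ScattAmp}:

```python
import numpy as np

# Nystrom discretization of the toy equation  T(p) = 1 + int_0^1 p q T(q) dq.
n = 20
x, w = np.polynomial.legendre.leggauss(n)
q = 0.5*(x + 1.0)                     # Gauss-Legendre nodes mapped to [0, 1]
w = 0.5*w                             # and the correspondingly scaled weights

K = np.outer(q, q)                    # kernel matrix k(q_i, q_j) = q_i * q_j
# (1 - K W) T = g with g(p) = 1; K*w multiplies column j by the weight w_j
T = np.linalg.solve(np.eye(n) - K*w, np.ones(n))

# Nystrom interpolation back to an arbitrary point p = 1/2:
T_half = 1.0 + 0.5*np.sum(w*q*T)      # exact solution gives T(1/2) = 1.375
```

Once the grid solution is in hand, the same interpolation formula evaluates $T$ at any off-grid argument, which is how $T_\up(k;\eq,\q)$ enters the subsequent integrals.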
The detailed steps of this procedure are given in Appendix \ref{ScattAmp}. Here, we simply note that the imaginary part of $L(x)$ (which is the only part of it that will play a role in the sum rules to be derived later) is a finite but sharply peaked function, which is non-zero only in the interval $1/3 < x < 1$ (see Fig. \ref{fig:inteq.res}). \subsection{Derivation of the sum rules} We now derive the sum rules from the OPE of Eq.(\ref{eq:OPE1}). To this end, we consider $k_0$ to be a complex variable and study the contour integral, \begin{equation} \int_{C_1+C_2} d k_0 \Big[ \Sigma_{\up}(k_0, \bm{k}) - \Sigma^{\mathrm{OPE}}_{\up}(k_0, \bm{k})\Big]\mathcal{K}(k_0) = 0. \label{eq:integral} \end{equation} Here, $\Sigma_{\up}(k_0, \bm{k})$ is the exact (and at this moment unknown) self-energy, while $\Sigma^{\mathrm{OPE}}_{\up}(k_0, \bm{k})$ is its approximate OPE expression of Eq.(\ref{eq:OPE1}). $\mathcal{K}(k_0)$ is assumed to be an analytic function on the upper and lower half of the complex plane of $k_0$ and to be real on the real axis, but is otherwise completely arbitrary. The contours $C_1$ and $C_2$ are shown in Fig. \ref{fig:contour}, in which the wavy line depicts possible non-analytic poles or cuts of $\Sigma_{\up}(k_0, \bm{k})$ and $\Sigma^{\mathrm{OPE}}_{\up}(k_0, \bm{k})$, whose actual locations depend on the chosen value of $|\bm{k}|$. \begin{figure} \begin{center} \includegraphics[width=10.0cm]{contour2.eps} \vspace{-0.5cm} \caption{\label{fig:contour} The contours $C_1$ and $C_2$ on the complex plane of $k_0$, used for deriving the sum rules. The wavy line on the real axis represents possible locations of non-analytic poles or cuts of $\Sigma_{\up}(k_0, \bm{k})$ and $\Sigma^{\mathrm{OPE}}_{\up}(k_0, \bm{k})$.} \end{center} \end{figure} The above integral vanishes because the exact self-energy $\Sigma_{\up}(k_0, \bm{k})$ and its OPE counterpart are analytic in the upper and lower half of the complex plane.
Furthermore, we know that the OPE is valid at large $|k_0|$, from which it follows that the integrand on the left-hand side of Eq.(\ref{eq:integral}) vanishes (to the order we are considering) along the large half-circles in $C_1$ and $C_2$. Since we have assumed $\mathcal{K}(k_0)$ to be real on the real axis, the contour sections running along the real axis pick up only the imaginary parts of the self-energies, as the contributions of the real parts cancel between the paths above and below the axis. Thus, we can now write down the sum rules as \begin{equation} \int_{-\infty}^{\infty} d \omega \mathrm{Im}\Sigma_{\up}(\omega+i0^{+}, \bm{k}) \mathcal{K}(\omega) = \int_{-\infty}^{\infty} d \omega \mathrm{Im}\Sigma^{\mathrm{OPE}}_{\up}(\omega+i0^{+}, \bm{k}) \mathcal{K}(\omega), \label{eq:sum.rule} \end{equation} where here and in the rest of the paper $\omega$ is understood to be a real variable. The right-hand side of this equation can be calculated from Eq.(\ref{eq:OPE1}), once the kernel $\mathcal{K}(\omega)$ is specified. This last step, however, needs some care, as some terms of Eq.(\ref{eq:OPE1}) at first sight lead to divergences on the right-hand side of Eq.(\ref{eq:sum.rule}). This is for instance the case for the last term in Eq.(\ref{eq:OPE1}), which has an imaginary part for $k_0=\omega>\ek/2$ and diverges as $(\omega - \ek/2)^{-5/2}$ when $\omega$ approaches $\ek/2$ from above. This superficial divergence originates in the somewhat careless treatment of the cuts in the above derivation and can be cured by taking into account all parts of the contours $C_1$ and $C_2$ which run along the cuts and their thresholds. The details of this procedure are given in Appendix \ref{OPE.detail}, where it is explicitly shown that all superficial divergences cancel and that the right-hand side of Eq.(\ref{eq:sum.rule}) is hence indeed finite.
All this then leads us to the following form of the sum rules: \begin{equation} \begin{split} &\:\int^{\infty}_{-\infty}d\omega \mathcal{K}(\omega) \mathrm{Im} \Sigma_{\uparrow}(\omega + i0^{+}, \k) \\ =&\:\frac{8}{3\pi} \eF^{3/2} \int^{\infty}_{\ek/2} d\omega \sqrt{2\omega - \ek} \mathcal{K}'(\omega) +\frac{4}{3\pi} \zeta \eF^2 \Bigl[ \frac{\sqrt{3}}{\pi}\mathcal{K}\bigl(\tfrac{\ek}{3}\bigr) - \mathcal{K}(-\ek) \Bigr] \\ &+\frac{4}{3\pi^2} \zeta \frac{\eF^2}{\sqrt{\ek}} \int_{\ek/3}^{\ek/2} d\omega \sqrt{\ek - 2\omega} \Bigl[ 6\mathcal{K}'(\omega) - (\ek -3\omega) \mathcal{K}''(\omega)\Bigr] \\ &+\frac{4}{3\pi^2} \zeta \frac{\eF^2}{\ek} \int^{\ek}_{\ek/3} d\omega \mathcal{K}(\omega) \mathrm{Im} \Bigl[ L\bigl( \tfrac{\omega}{\ek} \bigr) \Bigr] \\ &-\frac{8}{15\pi} \xi \eF^{5/2} \int_{\ek/2}^{\infty} d\omega \sqrt{2\omega - \ek} \Bigl[ 3\mathcal{K}''(\omega) + (\omega - \ek) \mathcal{K}'''(\omega)\Bigr]. \end{split} \label{eq:sum.rule2} \end{equation} In deriving this expression, we have assumed, in addition to the conditions mentioned earlier, that $\mathcal{K}(\omega)$ vanishes at $\omega \to \infty$ faster than $1/\sqrt{\omega}$. If one wishes to use kernels which behave differently (as for instance in the so-called finite energy sum rules in QCD \cite{Krasnikov}, see also Appendix \ref{finite.energy}), one should go back to the OPE of Eq.(\ref{eq:OPE1}) and rederive the corresponding sum rules. Our statement above on the cancellation of superficial divergences, however, still holds in this case. Furthermore, in the limit $k_0=\omega \gg \ek$, Eq.(\ref{eq:OPE1}) takes a considerably simpler form, making it possible to derive the resultant sum rule with much less effort. Moreover, if one introduces certain assumptions about the functional form of the self-energy, one can even analytically extract some of its properties from the sum rules. How this can be done by making use of the finite energy sum rules is demonstrated in Appendix \ref{finite.energy}.
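The origin of the first term on the right-hand side of Eq.(\ref{eq:sum.rule2}) illustrates the kind of manipulation involved: the LO discontinuity behaves as $1/\sqrt{2\omega-\ek}$ above the threshold $\omega=\ek/2$, and a single integration by parts (with a vanishing boundary term for decaying kernels) trades it for $\sqrt{2\omega-\ek}\,\mathcal{K}'(\omega)$. A quick numerical confirmation with a Gaussian test kernel, in arbitrary units:

```python
import numpy as np
from scipy.integrate import quad

ek, M = 0.8, 2.0
K  = lambda w: np.exp(-w**2/M**2)                 # Gaussian test kernel
dK = lambda w: -2.0*w/M**2 * np.exp(-w**2/M**2)   # its derivative

# int_{ek/2}^inf K(w)/sqrt(2w - ek) dw  (integrable endpoint singularity)
lhs = (quad(lambda w: K(w)/np.sqrt(2*w - ek), ek/2, 10)[0]
       + quad(lambda w: K(w)/np.sqrt(2*w - ek), 10, np.inf)[0])

# after integration by parts: - int_{ek/2}^inf sqrt(2w - ek) K'(w) dw
rhs = -quad(lambda w: np.sqrt(2*w - ek)*dK(w), ek/2, np.inf)[0]
```

The same bookkeeping, applied term by term together with the threshold contributions of Appendix \ref{OPE.detail}, produces the remaining lines of Eq.(\ref{eq:sum.rule2}).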
While providing a simple and qualitatively correct picture, this finite energy sum rule approach however has the drawback of relying rather heavily on mean-field theory for fixing the form of the self-energy and is therefore inferior to the MEM analysis to be presented in the following sections, which does not need any input besides the sum rules themselves. \subsection{Choice of the kernel $\mathcal{K}(\omega)$} As a next step, we have to fix the concrete form of the kernel $\mathcal{K}(\omega)$. As discussed in the previous sections, this kernel must be analytic on the complex plane of $\omega$ and real on the real axis. Furthermore, $\mathcal{K}(\omega)$ should vanish faster than $1/\sqrt{\omega}$ at $\omega \to \infty$ on the real axis. Obviously, these restrictions still leave room for an infinite number of choices. From the experience of QCD sum rule analyses, it is however known that a simple Gaussian centered at the origin works well for extracting the lowest poles of the spectral function. We will in this paper follow a similar strategy and use \begin{equation} \mathcal{K}_n(\omega, M) = \Bigl( \frac{\omega}{M}\Bigr)^n e^{-\omega^2/M^2}, \hspace{1cm} n=0,1 \label{eq:Borel1} \end{equation} as our kernel. In the QCD sum rule literature, $M$ is usually referred to as the Borel mass, a convention which we will follow in this work, while in \cite{Goldberger,Goldberger2} the symbol $\omega_0$ was used for this variable. $M$ can in principle be freely chosen as long as the OPE converges. As will however be shown in Fig. \ref{fig:OPE}, the OPE convergence worsens for decreasing values of $M$, which means that there exists some lower boundary of $M$, below which the OPE is not a valid expansion.
As the imaginary part of the self-energy on the left-hand side of Eq.(\ref{eq:sum.rule2}) extends to both positive and negative values of $\omega$ and is in general not an even function, using only the simplest kernel with $n=0$ does not suffice to determine $\mathrm{Im} \Sigma_{\up}(\omega + i0^{+}, \bm{k})$, as for this kernel all odd-function contributions automatically drop out of the sum rules. We hence need to introduce one more kernel, which should be an odd function in $\omega$; the $n=1$ case in Eq.(\ref{eq:Borel1}) seems to be the most natural choice. Let us mention here that in the QCD sum rule literature, other kernel choices have been proposed, such as a Gaussian with a variable center \cite{Bertlmann,Orlandini,Ohtani} or with complex Borel masses, which leads to an oscillating kernel \cite{Ioffe,Araki}. For this first study, we however prefer Eq.(\ref{eq:Borel1}) because of its simple analytic form. Substituting the above kernels into Eq.(\ref{eq:sum.rule2}) then gives the final form of the sum rules, \begin{align} &\int^{\infty}_{-\infty}d\omega \mathcal{K}_0(\omega, M) \mathrm{Im} \Sigma_{\up}(\omega, \bm{k}) = D^{\mathrm{OPE}}_{\up,\,0}(M,\bm{k}) = \nonumber \\ & -\frac{2\sqrt{2}}{3\pi} \eF^{3/2} \sqrt{\ek} e^{-\frac{\ek^2}{8M^2}} K_{\frac{1}{4}}\Big(\frac{\ek^2}{8M^2}\Big) \nonumber \\ &+\frac{4}{3\pi} \zeta \eF^2 \Bigl( \frac{\sqrt{3}}{\pi}e^{-\frac{\ek^2}{9M^2}} - e^{-\frac{\ek^2}{4M^2}} \Bigr) \nonumber \\ &-\frac{8}{3\pi^2} \zeta \eF^2 \Bigl(\frac{M}{\sqrt{\ek}}\Bigr)^{1/2} G^1_0\Big(\frac{\ek}{M}\Big) + \frac{4}{3\pi^2} \zeta \eF^2 \frac{M}{\ek} G^2_0 \Big(\frac{\ek}{M}\Big) \nonumber \\ &-\frac{1}{30} \xi \eF^{5/2} \frac{1}{\sqrt{\ek}} e^{-\frac{\ek^2}{8M^2}} \Biggl\{ \Big(12 + 3\frac{\ek^2}{M^2} - \frac{\ek^4}{M^4}\Big) I_{\frac{1}{4}}\Big(\frac{\ek^2}{8M^2} \Big) +\frac{\ek^2}{M^2} \Big(1 + \frac{\ek^2}{M^2}\Big) I_{-\frac{1}{4}}\Big(\frac{\ek^2}{8M^2} \Big) \nonumber \\ &- \frac{\ek^2}{M^2}\Big(3 +
\frac{\ek^2}{M^2}\Big) \Biggl[ I_{\frac{3}{4}}\Big(\frac{\ek^2}{8M^2} \Big) - I_{\frac{5}{4}}\Big(\frac{\ek^2}{8M^2} \Big) \Biggr] \Biggr\} \label{eq:sum.rule3.1} \end{align} and \begin{align} &\int^{\infty}_{-\infty}d\omega \mathcal{K}_1(\omega, M) \mathrm{Im} \Sigma_{\up}(\omega, \bm{k}) = D^{\mathrm{OPE}}_{\up,\,1}(M,\bm{k}) = \nonumber \\ & -\frac{1}{6} \eF^{3/2} \frac{M}{\sqrt{\ek}} e^{-\frac{\ek^2}{8M^2}} \Biggl\{ \Big(4 - \frac{\ek^2}{M^2} \Big) I_{\frac{1}{4}}\Big(\frac{\ek^2}{8M^2} \Big) +\frac{\ek^2}{M^2} \Biggl[ I_{-\frac{1}{4}}\Big(\frac{\ek^2}{8M^2} \Big) -I_{\frac{3}{4}}\Big(\frac{\ek^2}{8M^2} \Big) + I_{\frac{5}{4}}\Big(\frac{\ek^2}{8M^2} \Big) \Biggr] \Biggr\} \nonumber \\ &+\frac{4}{3\pi} \zeta \eF^2 \frac{\ek}{M} \Bigl( \frac{\sqrt{3}}{3\pi}e^{-\frac{\ek^2}{9M^2}} + e^{-\frac{\ek^2}{4M^2}} \Bigr) \nonumber \\ &+\frac{4}{3\pi^2} \zeta \eF^2 \frac{\sqrt{M}}{\sqrt{\ek}} G^1_1\Big(\frac{\ek}{M}\Big) + \frac{4}{3\pi^2} \zeta \eF^2 \frac{M}{\ek} G^2_1\Big(\frac{\ek}{M}\Big) \nonumber \\ &+\frac{1}{60} \xi \eF^{5/2} \frac{\sqrt{\ek}}{M} e^{-\frac{\ek^2}{8M^2}} \Biggl\{\Big(6 + 2\frac{\ek^2}{M^2} - \frac{\ek^4}{M^4}\Big) I_{-\frac{1}{4}}\Big(\frac{\ek^2}{8M^2} \Big) \nonumber \\ &-\Big(6 + 6\frac{\ek^2}{M^2} -\frac{\ek^4}{M^4}\Big) I_{\frac{1}{4}}\Big(\frac{\ek^2}{8M^2} \Big) + \frac{\ek^4}{M^4} \Biggl[ I_{\frac{3}{4}}\Big(\frac{\ek^2}{8M^2} \Big) - I_{\frac{5}{4}}\Big(\frac{\ek^2}{8M^2} \Big) \Biggr] \Biggr\}, \label{eq:sum.rule3.2} \end{align} where $I_{\nu}(y)$ and $K_{\nu}(y)$ are the modified Bessel functions of the first and second kind, respectively.
Furthermore, the functions $G^i_n(y)$ have been defined as follows: \begin{equation} \begin{split} G^1_0(y) =& \int_{y/3}^{y/2} dx \sqrt{y - 2x} \Bigl[ 6x - (y - 3x) (1 - 2x^2) \Bigr] e^{-x^2}, \\ G^1_1(y) =& \int_{y/3}^{y/2} dx \sqrt{y - 2x} \Bigl[ 6(1 -2x^2) + 2x(y - 3x)(3 - 2x^2) \Bigr] e^{-x^2}, \\ G^2_0(y) =& \int^{y}_{y/3} dx \mathrm{Im} \Bigl[ L\bigl( \tfrac{x}{y} \bigr) \Bigr] e^{-x^2}, \\ G^2_1(y) =& \int^{y}_{y/3} dx x \mathrm{Im} \Bigl[ L\bigl( \tfrac{x}{y} \bigr) \Bigr] e^{-x^2}. \end{split} \label{eq:defs} \end{equation} The ratios of the right-hand sides of Eqs.(\ref{eq:sum.rule3.1}-\ref{eq:sum.rule3.2}) to their respective leading order terms are shown in Fig. \ref{fig:OPE} as functions of the Borel mass $M$ for three typical values of the momentum $|\bm{k}|$. \begin{figure} \begin{center} \includegraphics[width=7.40cm]{OPE.y0.0.nr0.forpaper.new.eps} \includegraphics[width=7.40cm]{OPE.y0.0.nr1.forpaper.new.eps} \includegraphics[width=7.40cm]{OPE.y0.6.nr0.forpaper.new.eps} \includegraphics[width=7.40cm]{OPE.y0.6.nr1.forpaper.new.eps} \includegraphics[width=7.40cm]{OPE.y1.2.nr0.forpaper.new.eps} \includegraphics[width=7.40cm]{OPE.y1.2.nr1.forpaper.new.eps} \vspace{-0.3cm} \caption{\label{fig:OPE} The right-hand sides of Eqs.(\ref{eq:sum.rule3.1}) and (\ref{eq:sum.rule3.2}), divided by their LO terms, as a function of the Borel mass $M$. The left and right plots show the cases of $n=0$ and $n=1$, respectively. Starting from the top, each row shows the OPE for momenta $|\bm{k}|/k_{\mathrm{F}}=0$, $0.6$ and $1.2$. Here, LO corresponds to the first line on the right-hand side of Eqs.(\ref{eq:sum.rule3.1}) and (\ref{eq:sum.rule3.2}), NLO to the second and third lines and NNLO to the fourth and fifth lines.
The vertical arrows at the bottom of each plot indicate the lower and upper boundaries of the region of $M$ which will be used in the MEM analysis of Section \ref{Analysis}.} \end{center} \end{figure} The sum rules of Eqs.(\ref{eq:sum.rule3.1}) and (\ref{eq:sum.rule3.2}) look quite cumbersome, but their analytic structure becomes clearer if one takes the small momentum limit ($\ek \to 0$). Using the kernel of Eq.(\ref{eq:Borel1}) with general values of $n$, one can show that in this limit the LO term behaves as $M^{1/2+n}$ and the NNLO term as $M^{-1/2+n}$. The NLO term on the other hand can be shown to be proportional to $M^0=1$ for $n=0$, while it vanishes for all other positive $n$ values. The results for $n=0$ and $n=1$ are given by \begin{align} &\int^{\infty}_{-\infty}d\omega \mathcal{K}_0(\omega, M) \mathrm{Im} \Sigma_{\up}(\omega, \bm{k}) \nonumber \\ =& -\frac{2 \sqrt{2}}{3 \pi} \Gamma(1/4) \eF^{3/2} M^{1/2} - \frac{0.207498}{3 \pi} \eF^2 \zeta -\frac{4}{5} \frac{1}{\Gamma(1/4)} \eF^{5/2} \Bigl( \xi - \frac{5}{3} \frac{\ek}{\eF} \Bigr) \frac{1}{M^{1/2}} \label{eq:sum.rule4.1} \end{align} and \begin{align} &\int^{\infty}_{-\infty}d\omega \mathcal{K}_1(\omega, M) \mathrm{Im} \Sigma_{\up} (\omega, \bm{k}) \nonumber \\ =& -\frac{4}{3} \frac{1}{\Gamma(1/4)} \eF^{3/2} M^{3/2} + \frac{\sqrt{2}}{10 \pi} \Gamma(1/4) \eF^{5/2} \Bigl( \xi - \frac{5}{3} \frac{\ek}{\eF} \Bigr) M^{1/2}. \label{eq:sum.rule4.2} \end{align} Here, the term proportional to $\ek$ in the last term of each equation comes from Taylor expanding the leading order terms of the first lines of Eqs.(\ref{eq:sum.rule3.1}) and (\ref{eq:sum.rule3.2}) in $\ek/M$. The above equations should give the reader an idea of the behavior of the OPE at least for small $|\bm{k}|$. In the actual analysis of the next section, we will however use the full result of Eqs.(\ref{eq:sum.rule3.1}) and (\ref{eq:sum.rule3.2}).
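As a consistency check of these expressions, the first (LO) term of Eq.(\ref{eq:sum.rule3.1}) can be reproduced by a direct Borel integral of the LO discontinuity $\mathrm{Im}\Sigma^{\mathrm{LO}}_{\up} = -(8/3\pi)\,\eF^{3/2}/\sqrt{2\omega-\ek}$ for $\omega>\ek/2$, and its $\ek\to 0$ limit reduces to the first term of Eq.(\ref{eq:sum.rule4.1}). A sketch in arbitrary units, for the $n=0$ kernel only:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, kv

eF, M = 1.0, 2.0

def lo_borel(ek):
    # direct Borel integral of the LO imaginary part, n = 0 kernel
    f = lambda w: np.exp(-w**2/M**2)/np.sqrt(2*w - ek)
    return -(8/(3*np.pi))*eF**1.5*(quad(f, ek/2, 10)[0] + quad(f, 10, np.inf)[0])

def lo_bessel(ek):
    # closed form: first term of the n = 0 sum rule (modified Bessel K_{1/4})
    x = ek**2/(8*M**2)
    return -(2*np.sqrt(2)/(3*np.pi))*eF**1.5*np.sqrt(ek)*np.exp(-x)*kv(0.25, x)

direct, closed = lo_borel(0.8), lo_bessel(0.8)

# small-momentum limit: first term of the n = 0 small-k expansion
lo_smallk = -(2*np.sqrt(2)/(3*np.pi))*gamma(0.25)*eF**1.5*np.sqrt(M)
```

The numerical integral agrees with the Bessel form at finite $\ek$, and evaluating the latter at a tiny $\ek$ recovers the $M^{1/2}$ term quoted above.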
\section{\label{Analysis} MEM analysis for the spectral density} Next, we discuss the imaginary parts of the self-energies, which we have extracted numerically from the sum rules by using the maximum entropy method (MEM). A similar approach for analyzing sum rules was recently applied to QCD \cite{Gubler} and has during the last few years been used to study hadrons in various environments \cite{Ohtani,Gubler3,Gubler2,Suzuki,Ohtani2,Gubler4}. For the technical details of this analysis, we refer the reader to Appendix \ref{MEM} and the references cited therein. \subsection{The Borel window and the default model} Before discussing our results, let us first briefly explain how the lower and upper boundaries of the Borel mass $M$ used in the analysis are determined. For fixing the lower boundary $M_{\mathrm{min}}$, we demand that the highest order (NNLO) OPE term, which is proportional to $\xi$, should be smaller than 10\% of the leading order term. Note that this condition generally leads to a value of $M_{\mathrm{min}}$ that depends on the momentum $|\bm{k}|$. We will here first fix $M_{\mathrm{min}}$ at $|\bm{k}|=0$ and take this momentum dependence into account only if it leads to an increasing value of $M_{\mathrm{min}}$. This keeps the momentum dependence of $M_{\mathrm{min}}$ to a minimum and at the same time ensures that for any value of $|\bm{k}|$, only Borel mass ranges with a satisfactory OPE convergence are used as input for the MEM analysis. For fixing the upper boundary $M_{\mathrm{max}}$, we do not have such a clear-cut criterion and can therefore in principle choose it freely as long as it lies above $M_{\mathrm{min}}$. For the analysis presented in this paper, we will set it as $M_{\mathrm{max}} = M_{\mathrm{min}} + x$, with $x = 5\,\epsilon_F$. We have checked that our results do not depend much on this choice, and the exact value of $x$ hence does not play any important role in the present analysis.
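At $|\bm{k}|=0$ the 10\% criterion can be evaluated in closed form from the small-momentum limit of the $n=0$ sum rule, Eq.(\ref{eq:sum.rule4.1}), where the LO term scales as $M^{1/2}$ and the NNLO term as $\xi M^{-1/2}$. A sketch in units of $\eF$, using the Bertsch parameter of Table \ref{tab:input.para} (the number obtained this way is only indicative):

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

xi, eF = 0.372, 1.0   # Bertsch parameter, simulation value

# magnitudes of the LO and NNLO terms of the n = 0 sum rule at k = 0
lo   = lambda M: (2*np.sqrt(2)/(3*np.pi))*gamma(0.25)*eF**1.5*np.sqrt(M)
nnlo = lambda M: (4/5)/gamma(0.25)*xi*eF**2.5/np.sqrt(M)

# lower Borel-window boundary: NNLO term equal to 10% of the LO term
M_min = brentq(lambda M: nnlo(M)/lo(M) - 0.10, 1e-3, 1e3)
```

Since the ratio falls off as $1/M$, the criterion fixes a unique $M_{\mathrm{min}}$ of order $\eF$.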
The specific values of $M_{\mathrm{min}}$ and $M_{\mathrm{max}}$ for some typical momentum values are indicated in Fig. \ref{fig:OPE} as vertical arrows at the bottom of each plot. As for the default model $m(\omega)$, which is an input of the MEM algorithm (see Appendix \ref{MEM} for details), we will use \begin{equation} m(\omega) = -\frac{4\sqrt{2}}{3\pi} \eF^{3/2} \frac{1}{(\omega^2 + y)^{1/4}}, \label{eq:def.model} \end{equation} with $y=\epsilon_{\mathrm{F}}^2$. As can be understood from Eq.(\ref{eq:OPE1}), the above default model approaches the correct asymptotic limit $\mathrm{Im} \Sigma_{\up}(\omega,\bm{k}) \simeq -(4\sqrt{2} \eF^{3/2})/(3\pi \sqrt{\omega})$ for $\omega \gg \ek$ and is therefore a suitable choice for the present analysis. To avoid a singularity at $\omega=0$, we have introduced the parameter $y$, which smooths out the function around the origin. We have tested different choices for $y$ and found that they affect our analysis results only very weakly. \subsection{The single-particle spectral density} After these preparations, we can now finally proceed to our analysis results. First, we show the imaginary part of the self-energy for three representative momenta in the left column of Fig.~\ref{fig:spec.func}.
\begin{figure} \begin{center} \includegraphics[width=5.2cm]{real.y0.0.forpaper.new.eps} \includegraphics[width=5.2cm]{realpart.y0.0.forpaper.new.eps} \includegraphics[width=5.2cm]{spec.y0.0.forpaper.new.eps} \includegraphics[width=5.2cm]{real.y0.6.forpaper.new.eps} \includegraphics[width=5.2cm]{realpart.y0.6.forpaper.new.eps} \includegraphics[width=5.2cm]{spec.y0.6.forpaper.new.eps} \includegraphics[width=5.2cm]{real.y1.2.forpaper.new.eps} \includegraphics[width=5.2cm]{realpart.y1.2.forpaper.new.eps} \includegraphics[width=5.2cm]{spec.y1.2.forpaper.new.eps} \vspace{-0.5cm} \caption{\label{fig:spec.func} Left column: Results of the MEM analysis of Eqs.(\ref{eq:sum.rule3.1}) and (\ref{eq:sum.rule3.2}) are shown as red lines, while the default model used in the analysis [see Eq.(\ref{eq:def.model})] is indicated in blue. Middle column: The real parts of the self-energies obtained from Eq.(\ref{eq:disp}) and $\mathrm{Im} \Sigma_{\up}(\omega,\k)$ are plotted as red lines, and the function $\omega - \epsilon_{\bm{k}}$ is given in blue. Right column: The spectral density $A_{\up}(\omega,\bm{k})$, as computed from the results of the two columns on the left and Eq.(\ref{eq:spe}). As in Fig. \ref{fig:OPE}, each row from top to bottom corresponds to momenta $|\bm{k}|/k_{\mathrm{F}}=0.0$, $0.6$ and $1.2$, respectively.} \end{center} \end{figure} For illustration, these plots also show the default model of Eq.(\ref{eq:def.model}). It is seen that for zero momentum, the extracted imaginary part is composed of a single peak around $\omega=0$ and a continuum behaving as $\sim 1/\sqrt{\omega}$ in the positive energy region. As the momentum increases, the initial peak separates into two distinct peaks, which start to move in opposite directions. The continuum also recedes into the positive $\omega$ region with increasing momentum, leaving a growing region around the origin without any strength at all.
With the extracted $\mathrm{Im} \Sigma_{\up}(\omega,\bm{k})$, we next compute the real part of the self-energy by using the Kramers-Kronig relation \begin{equation} \mathrm{Re}\Sigma_{\up}(\omega,\bm{k}) = - \frac{1}{\pi} \mathrm{P} \int_{-\infty}^{\infty} d\omega' \frac{\mathrm{Im} \Sigma_{\up}(\omega',\bm{k})}{\omega - \omega'}, \label{eq:disp} \end{equation} evaluating the principal value integral numerically. The result of this evaluation is given in the middle column of Fig. \ref{fig:spec.func}, where we also show the curve $\omega - \epsilon_{\bm{k}}$, which appears in the denominator of the right-hand side of Eq.(\ref{eq:spe}). It is clear from this equation that if the imaginary part of the self-energy happens to be small, the single-particle spectral density will have a narrow peak wherever $\mathrm{Re}\Sigma_{\up}(\omega,\bm{k})$ coincides with $\omega - \epsilon_{\bm{k}}$. As a last step, we simply plug the real and imaginary parts of the self-energy into \begin{equation} A_{\up}(\omega,\bm{k}) = -\frac{1}{\pi} \mathrm{Im} \frac{1}{\omega + i0^{+}-\epsilon_{\bm{k}}-\Sigma_{\up}(\omega+i0^{+},\bm{k})}, \label{eq:spe.second.time} \end{equation} to obtain the single-particle spectral density $A_{\up}(\omega,\bm{k})$. The resulting functions are given in the right column of Fig. \ref{fig:spec.func}. It can be seen there that for small momenta $|\bm{k}|$, the spectral density is dominated by the narrow hole branch in the negative energy region, while the particle branch consists of only a relatively broad bump. This changes at around $|\bm{k}| \sim 0.5\,k_{\mathrm{F}}$, where the main strength of the spectral density switches over to the particle branch, which, as the momentum is further increased, moves toward higher positive energies. On the other hand, the hole branch bends back into the negative energy region, while gradually losing its strength.
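The two steps of Eqs.(\ref{eq:disp}) and (\ref{eq:spe.second.time}) can be sketched on a model self-energy whose Kramers-Kronig pair is known exactly; the single-pole $\Sigma$ below is a hypothetical stand-in for the MEM output, and scipy's Cauchy-weight quadrature supplies the principal value:

```python
import numpy as np
from scipy.integrate import quad

g2, w0, eta, ek = 0.49, -1.0, 0.3, 1.0     # model parameters, arbitrary units
im_sig   = lambda w: -g2*eta/((w - w0)**2 + eta**2)        # Im Sigma(w)
re_exact = lambda w: g2*(w - w0)/((w - w0)**2 + eta**2)    # exact KK partner

def re_sig(w, cutoff=500.0):
    # Re Sigma(w) = -(1/pi) P int dw' Im Sigma(w')/(w - w');
    # quad's 'cauchy' weight computes P int f(w')/(w' - w) dw', hence the sign flip
    return quad(im_sig, -cutoff, cutoff, weight='cauchy', wvar=w)[0]/np.pi

def spectral(w):
    # spectral density built from the real and imaginary parts of Sigma
    sig = re_exact(w) + 1j*im_sig(w)
    return -np.imag(1.0/(w - ek - sig))/np.pi

# sum-rule check: the spectral density integrates to one
norm = quad(spectral, -50, 50, points=[w0, ek], limit=200)[0]
```

The unit normalization is a useful sanity check on any numerically reconstructed $A_{\up}(\omega,\bm{k})$, since it follows from $G \to 1/\omega$ at large $|\omega|$ alone.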
To give the reader a better visual grasp of the spectral density as a whole, and especially of the behavior of the particle and hole branches, we show $A_{\up}(\omega,\bm{k})$ in a density plot as a function of both energy $\omega$ and momentum $|\bm{k}|$ in Fig. \ref{fig:density.plot}. To improve the visibility of this plot without changing its essential features, we have artificially increased the imaginary part of $\Sigma_{\up}(\omega,\bm{k})$ in Eq.(\ref{eq:spe.second.time}) by an amount of $0.2\,\eF$. \begin{center} \begin{figure} \vspace{-1.0cm} \hspace*{-0.5cm} \includegraphics[width=15cm,bb=0 0 360 252]{spec.density.plot.adim0.2.3.eps} \vspace{-1.0cm} \caption{\label{fig:density.plot} Density plot of the spectral density $A_{\up}(\omega,\bm{k})$ shown as a function of energy $\omega$ and momentum $|\bm{k}|$. The green dashed lines indicate the results of a fit of the particle and hole peak-maxima to Eq.(\ref{eq:fit.func}).} \end{figure} \end{center} In this figure, the typical BCS-like dispersion of the particle and hole branches clearly manifests itself. Qualitatively, this result agrees with the spectral densities extracted from both quantum Monte-Carlo calculations \cite{Magierski} and a Luttinger-Ward approach \cite{Haussmann}. In order to make a quantitative comparison with other methods, we fit the peak maxima to a dispersion relation parametrized as \begin{equation} E_{\bm{k}}^{\pm} = \mu \pm \sqrt{\left( \frac{m}{m^{\pm}} \epsilon_{\bm{k}} + U^{\pm} - \mu \right)^2 + \Delta^2}, \label{eq:fit.func} \end{equation} which we have adopted from \cite{Haussmann}. The resultant curves are shown in Fig. \ref{fig:density.plot} as green dashed lines, while the corresponding values of $\mu$, $\Delta$, $m^{\pm}$ and $U^{\pm}$ are given in Table \ref{tab:disp.para}. \begin{table} \begin{center} \caption{Fit results of the particle and hole branches shown in Fig.
\ref{fig:density.plot} to a dispersion relation parametrized as in Eq.(\ref{eq:fit.func}).} \label{tab:disp.para} \begin{tabular}{ccccccc} \hline & & &\multicolumn{2}{c}{Particle} & \multicolumn{2}{c}{Hole} \\ \cline{4-5} \cline{6-7} & $\mu/\epsilon_{\mathrm{F}}$ & $\Delta /\epsilon_{\mathrm{F}}$ & $m^{+}/m$ & $U^{+}/\epsilon_{\mathrm{F}}$ & $m^{-}/m$ & $U^{-}/\epsilon_{\mathrm{F}}$ \\ \hline this work & \hspace*{0.5cm}-0.18\hspace*{0.5cm} & \hspace*{0.5cm}0.57\hspace*{0.5cm} & \hspace*{0.5cm}1.02\hspace*{0.5cm} &\hspace*{0.5cm} -0.37 \hspace*{0.5cm} &\hspace*{0.5cm} 1.09\hspace*{0.5cm} & \hspace*{0.5cm}-0.12\hspace*{0.5cm} \\ \cite{Haussmann} & \hspace*{0.5cm}0.36\hspace*{0.5cm} & \hspace*{0.5cm}0.46\hspace*{0.5cm} & \hspace*{0.5cm}1.00\hspace*{0.5cm} &\hspace*{0.5cm} -0.50 \hspace*{0.5cm} &\hspace*{0.5cm} 1.19\hspace*{0.5cm} & \hspace*{0.5cm}-0.35\hspace*{0.5cm} \\ \hline \end{tabular} \end{center} \end{table} It is seen in Fig. \ref{fig:density.plot} that the fit is able to reproduce our dispersion relation fairly well, with the exception of the low momentum region of the particle branch, whose curvature cannot be fully captured by the simple formula of Eq.(\ref{eq:fit.func}). Note that this leads to a slight overestimation of the gap $\Delta$. If we simply read it off from the point at which the particle and hole branches are closest, we get a value of $\Delta/\eF=0.54$ instead of the one given in Table \ref{tab:disp.para}. Comparing the values of this work with those of \cite{Haussmann}, it is seen that the two approaches give comparable results for the gap parameter $\Delta$, effective masses $m^{\pm}$ and Hartree shifts $U^{\pm}$ for both the particle and hole branches. On the other hand, the chemical potential $\mu$ deviates significantly from \cite{Haussmann}, even having a different sign. This discrepancy apparently originates in the low sensitivity of the sum rules to the absolute position of the $\omega$ axis.
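The fit to Eq.(\ref{eq:fit.func}) is an ordinary nonlinear least-squares problem. A minimal sketch for the particle branch, with synthetic noiseless data generated from the reference parameter values of \cite{Haussmann} in units of $\eF$ (the actual fit is of course performed on the extracted peak maxima):

```python
import numpy as np
from scipy.optimize import curve_fit

def e_plus(ek, mu, delta, m_ratio, U):
    # particle branch (upper sign) of the BCS-like parametrization; m_ratio = m/m^+
    return mu + np.sqrt((m_ratio*ek + U - mu)**2 + delta**2)

ek = np.linspace(0.0, 3.0, 60)                   # epsilon_k grid in units of eF
data = e_plus(ek, 0.36, 0.46, 1.0, -0.50)        # reference values of [Haussmann]

popt, _ = curve_fit(e_plus, ek, data, p0=[0.2, 0.6, 1.1, -0.3])
```

Note that the minimum of the particle branch sits at $\mu + \Delta$, which is how a gap can also be read off directly from the dispersion without a global fit.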
This low sensitivity can be understood by inspecting the OPE of Eq.(\ref{eq:OPE1}). After setting $k_0 = \omega$, making a change of variables $\omega \to \omega'$ as $\omega = \omega' + \omega_0$, with $\omega_0$ of the order of $\epsilon_{\mathrm{F}}$, and expanding the resulting expression in $\omega_0/\omega'$, one notes that only the NNLO term of the OPE will be modified, which must in any case be kept small due to the convergence condition of the OPE. Therefore, we can expect that such a change of variables introduces no qualitative modification of the OPE, while the spectral density experiences a parallel shift of $\omega_0$. It is in principle possible to choose $\omega_0$ such that the fitted value of $\mu$ approaches the correct value of around $0.36\,\eF$. Due to the convergence criterion of the OPE, such a choice however leads to a significantly larger value of $M_{\mathrm{min}}$ and therefore to a rather poor resolution of the MEM extraction of $\mathrm{Im}\Sigma_{\up}(\omega,\bm{k})$. We have thus not explored this possibility any further and simply note that at the present stage, the absolute positions of the structures appearing in the spectral density should not be taken too seriously. As a final point, we study the density of states $\rho_{\up}(\omega)$, a function of the single argument $\omega$, which is obtained by integrating the spectral density over the momentum $|\bm{k}|$: \begin{equation} \rho_{\up}(\omega) = \int \frac{d^3k}{(2\pi)^3} A_{\up}(\omega,\bm{k}). \label{eq:int.k} \end{equation} This function is shown in Fig. \ref{fig:int.k.plot}, from which one can immediately read off the approximate gap value, which can be regarded as half the width of the region where $\rho_{\up}(\omega)$ loses almost all of its strength. To draw Fig.
\ref{fig:int.k.plot}, we have added a constant amount of $0.002\,\eF$ to the imaginary part of $\Sigma_{\up}(\omega,\bm{k})$, which reduces artificial effects caused by evaluating the integral numerically from a discrete number of data points, but does not change the gap structure of this plot. \begin{center} \begin{figure} \includegraphics[width=12cm]{int.k.5.0.nlarge100.adim0.002.eps} \vspace{-0.7cm} \caption{\label{fig:int.k.plot} The density of states, $\rho_{\up}(\omega)$, obtained by integrating the spectral density $A_{\up}(\omega,\bm{k})$ over the momentum $\bm{k}$ as shown in Eq.(\ref{eq:int.k}).} \end{figure} \end{center} \section{\label{Summary} Summary and conclusion} The work presented in this paper was carried out with two essential goals in mind. As the introduced techniques are new and have not been applied to cold atom systems so far, we first needed to test to what extent the sum rules and MEM are able to extract the single-particle spectral density from the result of the OPE. This is by no means a trivial test, because the OPE considered in momentum space does not converge for momenta below the Fermi momentum \cite{Nishida}, as we have already discussed in the introduction. It was therefore not clear at the outset to what degree the sum rules can extend the applicability of the OPE to lower momenta or energies. As it turns out, however, even at zero momentum $|\bm{k}|$ and small $\omega$, the sum rules of Eqs.(\ref{eq:sum.rule3.1}) and (\ref{eq:sum.rule3.2}) lead to a fairly reasonable behavior of the spectral density, which suggests that our approach is indeed useful for extracting the spectral density at any momentum and energy. With the proposed method shown to work well, our second goal was to provide an independent framework for evaluating the superfluid pairing gap $\Delta$ of the unitary Fermi gas. Our obtained value is given in Table \ref{tab:disp.para} and can be inferred from Fig. \ref{fig:density.plot}.
We wish to emphasize here that even though we have only taken into account the first few terms of the OPE, in which the Bertsch parameter and the contact density are the only input values, our numerical result shows reasonable agreement with other theoretical approaches \cite{Haussmann,Carlson2}. Specifically, we obtain $\Delta/\eF=0.54$ when extracting the gap from the point of smallest distance between the particle and hole branches and $\Delta/\eF=0.57$ from an overall fit of our dispersion relation to Eq.(\ref{eq:fit.func}), while \cite{Haussmann} and \cite{Carlson2} get $\Delta/\eF=0.46$ and $\Delta/\eF=0.50(3)$, respectively. To confirm these results in the future, it will be necessary to consider still higher order terms in the OPE, evaluate the size of their contributions and examine their impact on the spectral density. Using the method proposed in this work, we have so far only studied the fermionic single-particle channel at zero temperature. As long as the conditions for its applicability (that is $r_0 \ll 1/\sqrt{|k_0|} \ll |a|,\,n^{-1/3},\,\lambda_T$) are satisfied, the OPE technique is fairly general and can in principle be applied to any kind of bosonic or fermionic system with one or more constituents. One can therefore envisage various future applications of this approach. For instance, in \cite{Goldberger} the OPE for the retarded correlator of the density operator has already been worked out, and one in principle just needs to apply MEM or some other sort of fitting method to extract information on the dynamic structure factor from the OPE expression. Another interesting direction of research could be the generalization of this approach to finite temperature. To do this, however, one needs information on the finite temperature behavior of the operator expectation values which appear in the OPE of the channel of interest.
For the system considered in this paper, this would correspond to the finite temperature values of the Bertsch parameter and the contact density, which are calculable using quantum Monte-Carlo simulations \cite{Drut}. \section*{Acknowledgments} This work was supported by RIKEN Foreign Postdoctoral Researcher Program, the RIKEN iTHES Project and JSPS KAKENHI Grant Numbers 20192700, 25887020, 26887032 and \break 30130876.
\section{Background} \label{sec:background} Our work focuses on WoS and Scopus, which are the two most influential and most researched scholarly indexing services. Both index various types of source titles such as journals, conference proceedings and books. Starting with WoS, it has been traditionally considered a more reliable source for bibliometric analysis and extensive research has been conducted focusing on this index. Research analysing its journal subject classification system focused on the mapping of science and on clustering based on its journal subject categories \citep{leydesdorff2013global, zhang2010subject}. Other studies identified some of the problems associated with this classification system. \citet{leydesdorff2016operationalization} showed that WoS subject categories are insufficient for performing bibliometric normalization due to \say{indexer effects}. They focused on two fields -- \say{Library and Information Science} (LIS), which has a WoS subject category, and \say{Science and Technology Studies} (STS), which does not -- and performed a mapping of citation behavior for journals in these fields. Their results showed that normalization using these categories might seriously harm the quality of the evaluation. \citet{haustein2012multidimensional} identified that the WoS subject classification is controversial and problematic, especially with regard to interdisciplinary fields, due to the pigeonholing process involved in the classification. They claim that an alternative to the WoS subject classification is needed. In line with this recommendation, additional research by \citet{perianes2017comparison, shu2019comparing} compared the WoS journal-level classification system with publication-level classification systems. They concluded that publication-level classification systems constitute a credible alternative to the WoS classification system.
Following these studies, \citet{milojevic2020practical} presented a method for reclassification of WoS-indexed articles into existing WoS categories as well as into 14 broad areas, based on the article references. Turning to Scopus, which is a more recent indexing system, little research has been done on its journal subject classification system. An early study by \citet{de2007coverage} focused on Scopus journal subject distribution, geographical distribution and language of publication, among other measures. Their analysis shows that Scopus has quite homogeneous global representation in nearly all areas except Arts and Humanities. This study was conducted only 3 years after Scopus started indexing journals. A recent longitudinal analysis by \citet{bordignon2019tracking} observed the changes in the number of categories per journal and the number of journals per category. They showed an increase on average in both aspects and concluded that newly added sources have been assigned to more fields and sub-fields on average than those indexed before the time period examined. Their findings corroborate those of \citet{wang2016large}, who observed that Scopus journals are assigned to a large number of categories. Their analysis further identified some issues related to category naming, including near-identical names and categories labelled as \say{miscellaneous}. In \citet{lazic2017reliability} the authors compared the Scopus subject classification with the official classification of social sciences in Croatia and found a significant difference between the classifications. Their results showed that Scopus mis-classifies journals to social sciences subject categories despite these journals publishing almost exclusively works related to natural sciences or biomedicine. Many studies have compared the two systems along with other indexing databases. The main focus of these studies was the accuracy of these databases.
This accuracy was measured using the rankings the two systems induced by ordering the retrieved publications in decreasing order of the number of citations \citep{bar2007some}, or using the completeness and accuracy of citation links \citep{visser2021large, franceschini2016empirical}. These studies identified that both systems suffer from incompleteness and inaccuracy of citation links and incorrect transcription of author names and/or titles. The work by \citet{meho2009assessing} compared Scopus and WoS using citation behavior, focusing on Information Science researchers. Their findings show that when the analysis was based on small entities, such as journals and institutions, the scholarly impact measures produced by the two systems vary significantly, while analysis based on larger entities such as countries and research domains produced similar scholarly impact measures. They claimed that the need to use one or both indexing services will vary among research domains when used for assessing research impact. Several studies analysed author-related metrics generated from citations in these systems \citep{bar2008h, harzing2016google}. In \citet{bar2008h} WoS, Scopus and Google Scholar (GS) were compared in terms of the h-index for a specific set of researchers. Their findings show that, except for a few cases, the differences in the h-indices between WoS and Scopus are not significant, but the differences between GS and the two other systems are much more considerable. \citet{harzing2016google} performed a longitudinal cross-disciplinary comparison of WoS, Scopus and GS. Their results show that the h-index computed with WoS as a data source was, in Life Science and Sciences, on average nearly eight times higher than in Humanities. Other studies compared these systems with respect to the coverage and distribution of journals and publications.
These studies show that the difference in journal coverage between Scopus and WoS has grown over time and that differences in coverage result in variations in research output volumes, rank and global share of different countries \citep{singh2021journal, jacso2005we, mongeon2016journal}. In \citet{mongeon2016journal} the authors observed that there is an over-representation of certain countries and languages in both WoS and Scopus journal coverage. In addition, they show that WoS and Scopus journal coverage differ the most in Natural Science and Engineering and in Arts and Humanities fields. \citet{bartol2014assessment} showed that Scopus provides more records and more citations per record and, when focusing on disciplines, Scopus showed better coverage than WoS in Agriculture, Medical, and Natural Sciences and most noticeably in Engineering \& Technology. Turning to comparisons of the journal subject category classification methodologies used by both systems, \citet{wang2016large} performed a detailed comparison of the classification systems of WoS and Scopus based on citation relations, where they measured the \say{connectedness} of a journal with respect to its assigned category and to other categories based on the citation percentage. They observed that, on average, journals have significantly more category assignments in Scopus than in WoS. Furthermore, in Scopus journals are assigned to categories with which they are only weakly connected much more frequently than in WoS. They conclude that WoS and especially Scopus tend to be too lenient in assigning journals to categories. The analyses in the above studies were based upon citation analysis, thus exploring one aspect of the classification systems. In this work, we complement these findings by considering two additional aspects: First, we examine the association between the number of categories and the ranking of any journal across its assigned categories. We do this by leveraging known distance measures.
Second, we investigate the relationship between categories in each system and across the two systems with respect to the journals contained in them. In order to perform this analysis we leverage logical set theory. To the best of our knowledge, this approach is unique in the analysis of journal classification and has yet to be applied in this realm. \subsection*{Mathematical Definitions} \label{subsec:math_def} In this work, we use the following mathematical definitions and notations. A journal subject classification system assigns each journal $j_i\in J$ possibly more than a single category from the set $C$. We denote the set of categories associated with journal $j_i$ as $C_i=\{c_k\}$. For each of the assigned categories $c_k$, $j_i$ may be ranked differently based on its impact measure -- Journal Impact Factor (JIF, in WoS) or SCImago Journal Rank (SJR, in Scopus). We denote this ranking as $j_i^{c_k}$, reading as journal $j_i$'s ranking in category $c_k$. For our analysis, we consider the ranking as the journal's percentile ranking in each of its classified categories. Thus, for each journal, we have a set of percentiles corresponding to its set of classified categories. In order to evaluate possible differences in rankings across categories, we adopt two standard distance measures: Min-Max (MM) and Variance (VAR). The Min-Max of journal $j_i$ is defined as follows: \[ MM(j_i) = \max_{c_k\in C_i}(j_i^{c_k}) - \min_{c_k\in C_i}(j_i^{c_k}) \] In words, $MM(j_i)$ is the difference between the highest and lowest percentile journal $j_i$ is ranked at in any of its assigned categories. This measure captures the \say{range} of rankings associated with a specific journal. The Variance is defined as follows: \[ VAR(j_i) = \frac{1}{|C_i|}\displaystyle\sum_{c_k\in C_i}(j_i^{c_k} - \mu_i)^2 \] where $\mu_i$ is the mean of the percentiles of $j_i$ over $C_i$. In words, $VAR(j_i)$ is the sum of squared distances from the mean of all percentiles journal $j_i$ is ranked at in all of its assigned categories, divided by the number of categories it is assigned to.
This measure captures the variance in rankings associated with a specific journal -- that is, how \say{noisy} the rankings of a journal are across its assigned categories. Our study leverages different aspects of logical set theory \citep{cantor1874ueber}. Set theory is a branch of mathematical logic that studies the characteristics of, and relations between, collections of objects. In this work, we consider sets of journals and, separately, sets of categories as needed. Let $A$ and $B$ be sets. We use the following operations \citep{kolmogorov1975introductory}: \begin{itemize} \item Union - the set of all members of $A$, $B$, or both, denoted $A \cup B$. \item Subset and Superset - set $A$ is a subset of set $B$ if all members of $A$ are also members of $B$, denoted $A \subseteq B$. If $A \subseteq B$ then $B$ is a superset of $A$, denoted $B \supseteq A$. \item Equivalence - if $A \subseteq B$ and $B \subseteq A$ then $A$ is equivalent to $B$, denoted $A=B$. \item Intersection - the set of all members of $A$ which are also members of $B$, denoted $A \cap B$. \item Cover - a collection of sets $(B,C,\cdots)$ excluding $A$ is said to cover $A$ if all members of $A$ are members of $B\cup C\cup\cdots$. A \say{minimal cover} of $A$ is the smallest number of sets needed to cover $A$. \end{itemize} We are not the first to adopt set theory-based analysis in bibliometrics. \citet{rodriguez2014evolutionary} performed a graph-based analysis where categories are vertices and edges represent shared journals. In \citet{subochev2018ranking} the authors proposed ranking journals using methods from social choice and set theories to define aggregation methods over existing metrics. Fuzzy set theory was applied to bibliometric analysis in order to display the \say{fuzzy} nature of field delineation \citep{bensman2001bradford, egghe2002proposal}.
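The two distance measures MM and VAR are straightforward to compute from a journal's set of percentile rankings. A minimal sketch (the percentile values in the example are hypothetical):

```python
def min_max(percentiles):
    """MM(j_i): spread between the highest and lowest percentile
    ranking of a journal across its assigned categories."""
    return max(percentiles) - min(percentiles)


def variance(percentiles):
    """VAR(j_i): population variance of the percentile rankings,
    i.e. squared deviations from the mean, divided by |C_i|."""
    mu = sum(percentiles) / len(percentiles)
    return sum((p - mu) ** 2 for p in percentiles) / len(percentiles)
```

For a journal ranked at the 90th, 70th and 50th percentiles in its three assigned categories, MM is 40 and VAR is $800/3 \approx 266.7$.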
In a study related to ours, \citet{rons2012partition} proposed a normalization method based on partitioning WoS categories according to their intersecting journals; however, their focus was on the adaptation of standard field normalization to an observed publication record. To the best of our knowledge, our study is the first to apply core set theory to the study of journal subject category classification systems. \section{Data Collection} \label{sec:data_collection} Our study focused on the WoS and Scopus indexing systems. From each of these systems we downloaded the complete set of indexed journals, their associated categories and metrics as of the end of 2020. Overall, 21,424 journals were extracted from WoS and 40,804 journals were extracted from Scopus. All associated metrics reflect the 2019 scores. We excluded journals which were listed as \say{discontinued} or \say{Inactive} in Scopus (WoS does not contain this data). The number of journals indexed in Scopus, which remained after this cleanup, was 25,751. Since the two systems do not cover the same set of journals and one of the aims of our study is to perform a comparative analysis between the two, we focus only on journals which are indexed in both systems. To identify these journals, journals from both systems were matched primarily based on their ISSN. In cases where the ISSN did not provide a match, we used the journals' e-ISSN as a secondary matching criterion\footnote{In order to perform these matches, cleanup was performed by removing the dash symbol which appeared in some of the identifiers as well as any leading zeros.}. Finally, for the very few cases not matched by ISSN or e-ISSN, we used the name as a matching criterion. The name matching was done as case-insensitive exact matching. In addition, journals which did not have a scientometric score in either of the systems were further excluded from the analysis.
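The matching procedure described above (ISSN first, then e-ISSN, then case-insensitive exact name matching, with dashes and leading zeros stripped from the identifiers) can be sketched as follows. The record fields (\texttt{issn}, \texttt{eissn}, \texttt{name}) are hypothetical and do not reflect the actual export formats of WoS or Scopus.

```python
def normalize_id(identifier):
    """Strip the dash symbol and any leading zeros from an ISSN/e-ISSN."""
    if not identifier:
        return None
    s = identifier.replace("-", "").lstrip("0")
    return s or None


def match_journals(wos, scopus):
    """Match journal records primarily by ISSN, then by e-ISSN,
    and finally by case-insensitive exact name matching."""
    by_issn = {normalize_id(r.get("issn")): r
               for r in scopus if normalize_id(r.get("issn"))}
    by_eissn = {normalize_id(r.get("eissn")): r
                for r in scopus if normalize_id(r.get("eissn"))}
    by_name = {r["name"].lower(): r for r in scopus if r.get("name")}
    matches = []
    for r in wos:
        hit = (by_issn.get(normalize_id(r.get("issn")))
               or by_eissn.get(normalize_id(r.get("eissn")))
               or by_name.get((r.get("name") or "").lower()))
        if hit is not None:
            matches.append((r, hit))
    return matches
```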
The final set of journals for our analysis comprised 13,247 journals with 254 categories in WoS and 327 categories in Scopus\footnote{The category \say{Reviews and References (medical)} was excluded as it did not include any journals from our analysis}. 8,177 journals from WoS and 12,504 journals from Scopus which were not matched by these identifiers were removed from further consideration. Table \ref{tbl:general_stats} summarizes the general statistics for each of these systems and of the data set used in the analysis. The descriptive statistics of mean, SD and median in Table \ref{tbl:general_stats} were calculated on the sets of journals and categories under analysis. \begin{table} \centering \begin{tabular}{c|c|c} Descriptive Statistics & WoS & Scopus \\ \toprule Number of categories & 254 & 327 \\ Total number of journals & 21,424 & 40,804 \\ Number of journals analysed & 13,247 & 13,247 \\ \midrule Mean number of journals per category & 83.67 & 94.28 \\ SD of number of journals per category & 68.7 & 107.7 \\ Median number of journals per category & 63 & 67 \\ \midrule Mean number of subject categories assigned to each journal & 1.58 & 2.34 \\ SD of number of subject categories assigned to each journal & 0.78 & 1.28 \\ Median number of subject categories assigned to each journal & 1 & 2 \\ \end{tabular} \caption{General statistics} \label{tbl:general_stats} \end{table} All data and code are available in GitHub under \textcolor{blue}{\href{https://github.com/shirAviv/journals_categories} {Journals Subject Classification}}. \section{Introduction} Journal classification into subject categories is an important aspect of journal indexing systems. From a theoretical perspective, this classification is an external expression of the internal structure of science and thus, it can foster research on the inherent relationships between scientific fields, institutes and researchers, as well as many other scientometric phenomena.
From a practical perspective, this classification is often used by researchers in order to find information related to their field of work and allows one to avoid sifting through possibly many irrelevant journals. Universities and other institutions also rely on these subject categories for their evaluation and, in many cases, request their researchers to publish their works in journals in a specific set of classifications or in journals which are highly ranked in their classified subject categories. In many cases, researchers are mainly evaluated based on the articles they published in high-ranking journals (e.g., top 25\% of the journals in a subject category) \citep{dennis2006research, mckiernan2019meta, rice2020academic}. Unfortunately, classifying journals into subject categories is an ill-defined problem since the delineation of a scientific field of research is, itself, unclear and journals' boundaries need not necessarily align with those of any given field of study. Several key challenges include interdisciplinarity, multidisciplinarity and the dynamic nature of scientific enquiry \citep{zitt2019bibliometric}. While various frameworks for the delineation have been proposed in the past (e.g., \citet{hammarfelt2017scientific}), these differ in the structure of the classification, its granularity, semantic aspects and additional parameters (see \citet{waltman2019field, archambault2011towards}). As such, major indexing services such as Web of Science (WoS), Scopus and others developed their own unique journal subject classification systems. Journal classification systems use a variety of (partially) overlapping and non-exhaustive subject categories. This results in many journals being classified into more than a single category. As a result, various discrepancies are likely to be encountered \textit{within} any given system and \textit{between} different systems.
The journal subject classification systems of WoS and Scopus are widely used in practice and thus play a central role in the academic community. As such, in this work, we focus on these two journal subject classification systems and examine the \textit{intra-system} (within each system) and \textit{inter-system} (between the two systems) discrepancies. We speculate that these discrepancies are not anecdotal, but in fact are systematic and encompass various scientometric phenomena. Specifically, we set the following research questions: \begin{itemize} \item \textit{How is the number of categories a journal is classified to associated with that journal's impact metrics and rankings?} We hypothesize that an inverse association exists between the number of categories and the ranking metrics of a specific journal, where a higher number of categories a journal is classified to is indicative of a lower ranking of the journal. \item \textit{To what extent do subject categories intersect within each system?} We lay out several hypotheses related to this question. \begin{enumerate} \item We hypothesize that nearly all subject categories will exhibit an overlap in their classified journals with at least one other category (i.e., intersecting subject categories), and that the number of categories with which each category overlaps is high. \item Focusing on the size of the overlap, we hypothesize that only a small number of subject categories will exhibit a large overlap in their classified journals (i.e., intersecting subject categories with an overlap of $>85\%$). \item Following the two hypotheses detailed above, we further hypothesize that each category's journals can be minimally \say{covered} by a small number of categories (relative to the number of categories in the system).
\end{enumerate} \item \textit{To what extent do subject categories intersect between the two examined systems?} In this respect, we hypothesize that all subject categories in one system will exhibit a large overlap in their classified journals with at least one category from the second system, and that each category in one system can be minimally \say{covered} by a very small number of categories from the second system (i.e., categories across the two systems display strong similarity). \end{itemize} It is important to note that various works have examined different aspects of the WoS and Scopus indexing services (most recently \citet{martin2021google, singh2021journal, visser2021large}), separately and/or combined. However, the possible discrepancies within and between the two associated \textit{journal subject classification systems} have received little attention. Existing literature in this realm has predominantly relied on citation analysis techniques to examine these discrepancies. As such, our work complements the existing knowledge and can be instrumental in understanding the impact these discrepancies can have on the academic evaluation of researchers and their work. In addition, it can promote further scientometric enquiry, especially in field delineation and impact normalization challenges. \subsection{Inter-System analysis} In this part of the work we compare one system against the other. We begin by assuming that the journal subject classification given by one system is the \say{true} (or \say{correct}) one and analyse the second system against that classification. Then, we switch the roles between the two systems. \paragraph{Categories Distribution.} Following the analysis we performed in Section \ref{para:intra_cats_dist}, we matched the number of categories assigned to each journal in WoS and Scopus.
As the number of categories per journal in each system is not normally distributed, we use the Wilcoxon signed rank test \citep{wilcoxon1992individual} to compare the two systems. The results show that the number of categories assigned to a journal in Scopus was statistically significantly higher than in WoS (t=3979962.5, p<0.001). The descriptive statistics are displayed in Table \ref{tbl:general_stats}. \paragraph{Relations between categories.} In a similar fashion to the analysis done in \ref{para:intra_relationship}, we examine subset, superset, intersecting or equivalent categories with respect to the journals shared between the two systems. Recall that, in this work, we consider only the journals indexed by both systems. Thus, all categories in one system had at least one intersection with at least one category in the other system. Since no equivalent categories could be found between the two systems, we start by examining subset categories. Starting with WoS being considered as the \say{true} categorization, we identified 13 categories in WoS which have subsets in Scopus. When performing the same analysis starting with Scopus, 9 such subsets were found. Figures \ref{fig:wos_scopus_subsets} and \ref{fig:scopus_wos_subsets} display the subsets with more than a single journal in the subset. Again, the names of the categories are very indicative of their similarities. \begin{figure} \centering \includegraphics[width=0.95\textwidth]{wos_to_scopus_venn_subsets.png} \caption{Scopus categories as subsets of WoS categories.
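For completeness, the statistic of the Wilcoxon signed rank test used above can be illustrated with a small hand-rolled sketch. In practice one would use a statistics package; the function below computes only the statistic $W^{+}$ (the sum of the ranks of the positive paired differences, with average ranks for ties and zero differences dropped) and is meant purely as an illustration, not as the exact implementation behind the reported value.

```python
def wilcoxon_signed_rank(x, y):
    """W+ statistic of the Wilcoxon signed-rank test for paired samples:
    rank the absolute nonzero differences (average ranks for ties) and
    sum the ranks belonging to positive differences."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # extend the tie group of equal absolute differences
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j + 2) / 2.0  # ranks are 1-based
        for t in range(i, j + 1):
            ranks[order[t]] = avg_rank
        i = j + 1
    return sum(r for d, r in zip(diffs, ranks) if d > 0)
```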
Dark purple - WoS categories, lilac and gray - Scopus categories} \label{fig:wos_scopus_subsets} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{scopus_to_wos_venn_subsets.png} \caption{WoS categories as subsets of Scopus categories. Dark purple - Scopus categories, lilac and gray - WoS categories} \label{fig:scopus_wos_subsets} \end{figure} Following our subset analysis, we now look at the intersections of categories across the two systems. Starting with WoS, we found 6 highly intersecting categories with Scopus (see Figure \ref{fig:wos_scopus_intersect_95}). That is, 6 of WoS's categories have an intersection greater than 95\% with a category from Scopus. Turning to Scopus, we found only a single such category (see Figure \ref{fig:scopus_wos_intersect_95}). As can be observed from the names of the categories, these tend to display very similar meanings. The number of journals in the intersection of the large intersecting categories ranges from $\sim$60 to $\sim$140. \begin{figure} \centering \includegraphics[width=0.95\textwidth]{wos_to_scopus_venn_intersect_0.95.png} \caption{WoS to Scopus intersection, above 95\%. Purple - WoS categories, Cyan - Scopus categories} \label{fig:wos_scopus_intersect_95} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{scopus_to_wos_venn_intersect_0.95.png} \caption{Scopus to WoS intersection, above 95\%. Purple - Scopus category, Cyan - WoS category} \label{fig:scopus_wos_intersect_95} \end{figure} \paragraph{Similar Categories.} We now turn to identifying \say{similar} categories in both systems. By similar categories we refer to categories which share a large percentage of journals. For each category in one system, we identified all categories in the other system which had any shared journals (i.e., intersecting categories).
Starting with a threshold of 5\%, we measured how many categories in one system had an intersection larger than that threshold with another category in the other system. The threshold was increased gradually in steps of 5\% and the process was repeated until the maximum threshold of 100\% (i.e., equivalent categories, which we already established do not exist). Then, we repeated the same process starting with the second system. The results are plotted in Figure \ref{fig:threshold_match} and show that for low thresholds, as can be expected, all categories in one system have \say{enough} shared journals with a category in the second system to be considered similar. It is evident from the figure that the blue line, representing Scopus categories which are similar to WoS categories, is above the green line at all points in the plot. This means that starting with WoS categories and identifying similar Scopus categories, there will be more such categories at any threshold than when starting with Scopus and identifying similar WoS categories. For example, when the threshold is set at 55\%, $\sim$75\% of WoS categories share journals at that threshold with Scopus categories, but less than 60\% of Scopus categories share journals at that threshold with WoS categories. Our results lend themselves to the question of minimal cover - could any category in one system be covered by two (or more) categories in the other system? This is examined next. \begin{figure} \centering \includegraphics[width=0.65\textwidth]{cats_threshold_match.png} \caption{Match between WoS categories and Scopus categories based on percentage of shared journals in respect to total journals in the category} \label{fig:threshold_match} \end{figure} \paragraph{Minimal Cover.} In this part of our analysis we looked at the minimal number of categories in one system that are needed in order to cover a single category in the second system.
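The threshold scan just described amounts to the following computation, sketched here with hypothetical category-to-journal mappings:

```python
def similar_at_threshold(system_a, system_b, threshold):
    """Fraction of categories in system_a whose journal set overlaps,
    by at least `threshold` of its own size, with some single category
    in system_b."""
    count = 0
    for journals in system_a.values():
        js = set(journals)
        if any(len(js & set(other)) / len(js) >= threshold
               for other in system_b.values()):
            count += 1
    return count / len(system_a)
```

Sweeping `threshold` from 0.05 to 1.0 in steps of 0.05, once for each direction, reproduces the shape of the two curves in the threshold-match plot.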
This analysis complements the analysis done in \ref{para:intra_minimal_cover}. We varied the threshold required for minimum cover starting with full coverage (100\%) and gradually decreased it in steps of 5\%, thus relaxing the requirement for full coverage. As a first step, we considered the entire set of journals under analysis in this study (13,247) as being categorized to a single \say{meta-category} in each system. That is, we examine how many categories are needed from one system in order to cover all journals in the other. Starting with WoS, the minimal cover set of categories from Scopus required to fully cover WoS is 279 (85\% of the total Scopus categories). The minimal cover set with a 95\% threshold is 177 (54\%) and with a 90\% threshold is 130 (40\%). Turning to Scopus, the minimal cover set of categories from WoS required to fully cover Scopus is 248 (98\% of WoS categories). The minimal cover set with a 95\% threshold is 182 (72\%) and with a 90\% threshold is 140 (55\%). Taken jointly, these results indicate that the number of categories required to cover all journals in one system by the other is similar for both systems. In Scopus, it seems that a large number of categories (almost 50) is \say{redundant} in the sense that those are not needed in order to fully cover the set of journals in WoS. In fact, 279 categories are needed to fully cover WoS, which is quite close to the number of categories in WoS (254). However, the reverse is not true. Almost all WoS categories were needed in order to cover Scopus journals. Similarly to the analysis we have done in Section \ref{para:intra_minimal_cover} for each system separately, we set out to examine the minimal cover required for \textit{every category} in one system by the second system. For each category in one system, we found the minimal set of categories from the second system required in order to cover all journals in that category.
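Finding a true minimal cover is the classic (NP-hard) set-cover problem. The greedy heuristic below is one common way to approximate it and is shown purely as an illustration with hypothetical category data; we do not claim this is the exact procedure behind the reported numbers.

```python
import math


def greedy_cover(target_journals, candidate_categories, threshold=1.0):
    """Greedy approximation of the minimal cover: repeatedly pick the
    candidate category that covers the most still-uncovered journals,
    until at least `threshold` of the target set is covered."""
    target = set(target_journals)
    needed = math.ceil(threshold * len(target))
    covered, chosen = set(), []
    while len(covered) < needed:
        gains = {name: (set(journals) & target) - covered
                 for name, journals in candidate_categories.items()
                 if name not in chosen}
        if not gains:
            break  # all candidates used; target cannot be covered further
        best = max(gains, key=lambda name: len(gains[name]))
        if not gains[best]:
            break  # no candidate adds any new journal
        covered |= gains[best]
        chosen.append(best)
    return chosen
```

Relaxing `threshold` from 1.0 to 0.95 and 0.9 mimics the partial-coverage variants used in the analysis.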
Starting with WoS, the minimal number of cover categories from Scopus required to cover each single WoS category ranged from 1 to 26. Similarly, when covering all categories from Scopus with categories from WoS, the minimal number of cover categories from WoS required to cover a single Scopus category ranged from 1 to 98. We varied the threshold for cover from full coverage (100\%) to partial coverage at 95\% and 90\%. From Figure \ref{fig:min_cover_set_inter} it can be seen that as the threshold varies, the number of categories requiring a high minimal cover quickly decreases, as can be expected. This is more evident in Scopus, where, without a relaxed threshold, one category requires almost 100 (40\%) of WoS categories to cover it, but with the 90\% threshold the number of categories required to cover any category in it is at most 53 (20\%). Covering WoS, the minimal cover set from Scopus ranges up to 26 (8\%) categories without a relaxed threshold and up to 15 (4.6\%) with the 90\% threshold. Figure \ref{fig:min_cover_set_inter_cumulative} displays the cumulative minimal cover needed for each system and, as can be seen from it, WoS and Scopus show a similar trend where the 90\% threshold plot increases quickly at the lower cover set sizes. For WoS, without a threshold $\sim$80\% of the categories can be covered by a maximum of 10 categories from Scopus. With a 90\% threshold $\sim$80\% of the categories can be covered by a maximum of 5 categories and almost all 254 categories can be covered by 10 categories. Scopus shows a similar, although weaker, trend as $\sim$80\% of its categories can be covered by 15 categories from WoS without a threshold. With a 90\% threshold $\sim$80\% of the categories can be covered by a maximum of 8 categories from WoS. For both systems, we also looked at how the size of the minimal cover set is related to the number of journals in each category and to the total number of covering categories from the other system.
Figure \ref{fig:min_cover_set_no_ouliers} displays the results. It is noticeable that categories with a relatively small number of journals (fewer than 200) can be covered by $\sim$10 categories from the second system. Furthermore, a linear relation between the minimal cover size and the total number of covering categories can be seen on the right-hand side of the figure. Due to the dense display, in Figure \ref{fig:min_cover_set_bottom} we focus on categories with a very low number of journals (20 or fewer) and on those with a very low number of covering categories. In this narrow perspective, no clear trend between the size of the minimal cover and the number of journals emerges. However, our analysis shows that, despite the very low number of journals, these categories in either system cannot be covered by one or two categories from the second system. For example, WoS contains 28 small categories, of which 9 require more than 3 categories from Scopus to cover them and 1 requires 7 Scopus categories. Scopus contains 56 small categories, of which 24 require a cover set of more than 3 categories from WoS and 1 requires 11 categories from WoS. The same holds true with respect to the total number of covering categories. This is more evident for Scopus, where, as we have discussed before, the number of categories with a low number of journals classified to them is much larger than in WoS. This could indicate that some of these small categories may contain multidisciplinary journals requiring multiple categories to cover them. Figure \ref{fig:min_cover_set_top} focuses on the categories with a high minimal cover and shows that only a small number of WoS categories can be minimally covered by a relatively high number of categories from Scopus, even when the number of journals or the total number of covering categories is high (blue dots in the figure).
For Scopus this is not the case, as we can see categories with fewer than 200 journals covered by a large set of WoS categories (green dots in the figure). \begin{figure} \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=0.98\textwidth]{wos_min_cover_set_by_threshold.png} \caption{WoS} \label{fig:wos_scopus_min_cover_set} \end{subfigure}% ~~~ \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=0.98\textwidth]{scopus_min_cover_set_by_threshold.png} \caption{Scopus} \label{fig:scopus_wos_min_cover_set} \end{subfigure}% \caption{Minimal cover} \label{fig:min_cover_set_inter} \end{figure} \begin{figure} \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=0.98\textwidth]{wos_min_cover_set_by_threshold-cumulative.png} \caption{WoS} \label{fig:wos_scopus_min_cover_set_cumulative} \end{subfigure}% ~~~ \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=0.98\textwidth]{scopus_min_cover_set_by_threshold-cumulative.png} \caption{Scopus} \label{fig:scopus_wos_min_cover_set_cumulative} \end{subfigure}% \caption{Cumulative minimal cover} \label{fig:min_cover_set_inter_cumulative} \end{figure} \begin{figure} \centering \includegraphics[width=1.1\textwidth]{min_cover_set_inter.png} \caption{Match between WoS categories and Scopus categories. Minimum Scopus categories covering a single WoS category and minimum WoS categories covering a single Scopus category. Outliers removed.} \label{fig:min_cover_set_no_ouliers} \end{figure} \begin{figure} \centering \includegraphics[width=1.1\textwidth]{min_cover_set_bottom.png} \caption{Match between WoS categories and Scopus categories. Minimum Scopus categories set covering a single WoS category and minimum WoS categories covering a single Scopus category.
Categories with 20 journals or fewer and with a cover of 10 categories or fewer are shown.} \label{fig:min_cover_set_bottom} \end{figure} \begin{figure} \centering \includegraphics[width=1.1\textwidth]{min_cover_set_top_min_cover.png} \caption{Match between WoS categories and Scopus categories. Minimum Scopus categories set covering a single WoS category and minimum WoS categories covering a single Scopus category. Outlier focus: only categories with a high minimal cover are shown.} \label{fig:min_cover_set_top} \end{figure} \subsection{Intra-System Analysis} \label{subsec:intra} \input{small_large_cats_table} In this part, we analysed each of the two systems separately along the following criteria. \paragraph{Journals distribution.} \label{para:intra_cats} In the first part of the analysis, we identified the distribution of journals within categories. As can be seen from Figure \ref{fig:num_journals_per_cat}, in both WoS and Scopus, $\sim$40 categories contain a small number of journals, ranging from 6 to 21 journals in WoS and 1 to 16 in Scopus. We can further see that the number of categories containing a large number of journals quickly decreases in both systems. Looking at the smallest and largest categories in both systems, Scopus has a significantly larger number of both small and large categories than WoS. This may be due to its higher number of total categories. Tables \ref{tbl:small_cats} and \ref{tbl:large_cats} display the categories containing fewer than 10 journals and those containing more than 350 journals in each system. It can be observed that while no category in WoS had fewer than 6 journals and only 3 categories had fewer than 10 journals, Scopus had 8 categories containing only one or two journals and 30 categories with fewer than 10 journals. Similarly, the largest category in WoS had 372 journals, while 8 categories in Scopus had more than 350 journals and the largest category had 1139 journals, almost 10\% of all analysed journals.
These results seem to align with the statistics displayed in Table \ref{tbl:general_stats}, specifically the large SD in Scopus. \begin{figure} \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=0.95\textwidth]{journals_dist_wos.png} \caption{WoS} \label{fig:num_journals_per_cat_wos} \end{subfigure}% ~~~ \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=0.95\textwidth]{journals_dist_scopus.png} \caption{Scopus} \label{fig:num_journals_per_cat_scopus} \end{subfigure}% \caption{Number of journals contained in each category. The x-axis is the number of journals (in bins of 15); the y-axis is the number of categories containing the associated number of journals.} \label{fig:num_journals_per_cat} \end{figure} \paragraph{Categories distribution.} \label{para:intra_cats_dist} We examined the number of categories each journal was classified to. Figures \ref{fig:num_cats_per_journal_wos} and \ref{fig:num_cats_per_journal_scopus} display the number of categories any single journal is classified to. It can be seen that, in WoS, the number of categories each journal is classified to decreases quickly, with most journals being classified to a single category and the highest number of classifications for a single journal being 6. In Scopus, however, more than 4,000 journals are classified to two categories (approximately a third of all journals under analysis), with one journal being classified to 11 different categories.
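The categories-per-journal distribution above boils down to a frequency count of set sizes. A minimal sketch, with hypothetical journal names and category assignments:

```python
from collections import Counter

# each journal maps to the set of categories it is classified to (hypothetical data)
journal_categories = {
    "Journal A": {"Oncology"},
    "Journal B": {"Oncology", "Hematology"},
    "Journal C": {"Dance"},
    "Journal D": {"Oncology", "Hematology", "Immunology"},
}

# histogram: number of journals classified to exactly k categories
dist = Counter(len(cats) for cats in journal_categories.values())
print(sorted(dist.items()))  # [(1, 2), (2, 1), (3, 1)]
```

The same counting, run over the full journal lists of each system, yields the histograms shown in the figures.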
\begin{figure} \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=0.85\textwidth]{wos_num_categories_per_journal_histogram.png} \caption{WoS} \label{fig:num_cats_per_journal_wos} \end{subfigure}% ~~~ \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=0.85\textwidth]{scopus_num_categories_per_journal_histogram.png} \caption{Scopus} \label{fig:num_cats_per_journal_scopus} \end{subfigure}% \caption{Number of categories assigned to each journal.} \label{fig:num_cats_per_journal} \end{figure} \paragraph{Categories and Journals' Metrics.} In order to understand how the scientometric scores of journals are related to the journals' subject classifications in each system, we calculated the descriptive statistics of the JIF score for every journal in WoS and of the SJR score for every journal in Scopus. We then analysed these values with respect to the number of categories each journal was classified to. The results are displayed in Figure \ref{fig:scientometrics_by_categories} and show the scientometric statistics by number of categories. Specifically, the top parts of the figure display the highest score with respect to the number of categories. The box plots in the bottom part of the figure show the quartile values of the scientometric measure, extending from Q1 to Q3, with a horizontal line at the median (Q2). The whiskers extend from the edges of the box to show the range of the data up to 1.5 times the interquartile range. The $\triangle$ shows the mean of the scientometric measure. In WoS, we can see that the highest JIF score decreases with the number of categories assigned to a journal, while all other statistics display an opposite trend, increasing with the number of categories a journal is classified to. In Scopus, only a single journal is classified into 11 categories and no journal is classified into 10 categories, thus we omit this journal in the following analysis.
Observing the classifications of up to 9 categories, the highest SJR score shows a similar behaviour to that observed in WoS, namely decreasing with the number of categories. However, all other metrics displayed do not show any clear upward or downward trend. This means that while for WoS a higher number of categories is associated with a higher metric score on average, this is not the case for Scopus. This could possibly be explained by the nature of the categories associated with the journals, specifically that in WoS the categories associated with a journal display similar citation patterns, while those in Scopus do not. Following these observations, we further wanted to understand the relation between the number of categories a journal is classified to and the differences in rankings among these categories. To that end, we evaluated the MM and VAR (see Section \ref{sec:background}) as functions of the number of classified categories. Recall that the MM is the size of the range of the rankings across the categories a journal is classified to, while the VAR is the variance of these rankings. Figures \ref{fig:mm} and \ref{fig:avg-var} display the findings. The box plots in the figures extend from the Q1 to Q3 quartile values for the MM and VAR of the percentile rankings, with a horizontal line at the median (Q2). The whiskers extend from the edges of the box to show the range of the data up to 1.5 times the interquartile range. The $\triangle$ shows the mean of the respective MM and VAR functions. As can be seen, WoS shows an increase in both MM and VAR as the number of categories increases up to five categories, and a decrease at 6 categories. Scopus shows a small yet consistent increase in both the MM and VAR as the number of classified categories increases. This means that, as the number of categories any single journal is classified to increases, its range of rankings, as well as the variance within this range, tends to increase.
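The two measures follow directly from their definitions: MM is the range of a journal's percentile rankings across its assigned categories, and VAR is their variance. A minimal sketch (the ranking values are hypothetical, and since the paper does not specify population versus sample variance, population variance is assumed here):

```python
from statistics import pvariance

def mm(rankings):
    """MM: size of the range of a journal's percentile rankings."""
    return max(rankings) - min(rankings)

def var(rankings):
    """VAR: variance of a journal's percentile rankings (population variance assumed)."""
    return pvariance(rankings)

# hypothetical percentile rank of one journal in each of its assigned categories
rankings = [42.0, 55.0, 71.0]
print(mm(rankings))   # 29.0
print(var(rankings))  # ~140.67
```

Computing these per journal, grouped by the number of assigned categories, produces the box plots in the figures.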
To verify our observations, we calculated Pearson's correlation of both the MM and VAR measures with the number of classified categories, for both WoS and Scopus. In both systems, the results were weakly positive yet significant for the MM (WoS: $r=0.26$, $p<0.001$; Scopus: $r=0.313$, $p<0.001$) and for the VAR (WoS: $r=0.08$, $p<0.001$; Scopus: $r=0.08$, $p<0.001$). Overall, Scopus displays a stronger effect size than WoS, as can be seen from both the plots and the statistical correlation. \begin{figure} \begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[width=\textwidth]{JIF_max_by_num_cats_wos.png} \caption{JIF - WoS} \label{fig:JIF-max-wos} \end{subfigure}% ~~~ \begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[width=\textwidth]{SJR_max_by_num_cats_scopus.png} \caption{SJR - Scopus} \label{fig:SJR-max-scopus} \end{subfigure}% ~~~ \newline \begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[width=\textwidth]{JIF_stats_by_num_cats_wos.png} \caption{JIF - WoS} \label{fig:JIF-wos} \end{subfigure}% ~~~ \begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[width=\textwidth]{sjr_stats_by_num_cats_scopus.png} \caption{SJR - Scopus} \label{fig:SJR-scopus} \end{subfigure}% ~~~ \caption{Highest scientometric score and Q1--Q3 range of scientometric scores} \label{fig:scientometrics_by_categories} \end{figure} \begin{figure} \begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[width=\textwidth]{MM_stats_by_num_cats_wos.png} \caption{WoS} \label{fig:mm-wos} \end{subfigure}% ~~~ \begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[width=\textwidth]{MM_stats_by_num_cats_scopus.png} \caption{Scopus} \label{fig:mm-scopus} \end{subfigure}% ~~~ \caption{MM of percentile rankings per number of categories} \label{fig:mm} \end{figure} \begin{figure} \begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[width=\textwidth]{var_stats_by_num_cats_wos.png} \caption{WoS} \label{fig:avg-vars-wos} \end{subfigure}% ~~~
\begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[width=\textwidth]{var_stats_by_num_cats_scopus.png} \caption{Scopus} \label{fig:avg-var-scopus} \end{subfigure}% ~~~ \caption{VAR of percentile rankings per number of categories} \label{fig:avg-var} \end{figure} \paragraph{Relations between Categories.} \label{para:intra_relationship} In this part of the analysis, we focus on the following possible relationships between subject categories: equivalent categories, subset and superset categories, and intersecting categories, as defined in Section \ref{subsec:math_def}. Starting with WoS, we found that it has no equivalent categories and no subset categories. The number of categories with an intersection with at least one other category in WoS is 252 (99.2\% of all WoS categories). Specifically, only 2 categories have no intersection with any other category. These are: \begin{itemize} \item \say{Dance}, number of journals: 8 \item \say{Literature, African, Australian, Canadian}, number of journals: 6 \end{itemize} These categories are two of the three smallest categories, as can be seen in Table \ref{tbl:small_cats}. Turning to Scopus, again, no equivalent categories were found. However, 7 superset categories and 12 subset categories were found. The number of categories with an intersection with at least one other category is 325 (99.4\% of all Scopus categories). As in WoS, only 2 categories have no intersection with any other category. However, these two categories contain only a single journal: \begin{itemize} \item \say{Dental Hygiene}, number of journals: 1 \item \say{Nurse Assisting}, number of journals: 1 \end{itemize} Four categories are \say{pure} subsets of other categories, meaning that all journals in each of these categories appear only in these categories and in their superset categories.
These are: \begin{itemize} \item \say{Podiatry}, number of journals: 1 \item \say{Immunology and Microbiology (miscellaneous)}, number of journals: 1 \item \say{Emergency Medical Services}, number of journals: 1 \item \say{Drug Guides}, number of journals: 1 \end{itemize} Note that these are all single-journal categories. Figure \ref{fig:scopus_subsets} shows all subset categories in Scopus. As can be observed from the figure, the \say{Podiatry} category is a subset of three categories which, seemingly, are not related to it at all (\say{Language and Linguistics}, \say{Linguistics and Language} and \say{Computer Science Applications}). \begin{figure} \centering \begin{subfigure}[t]{0.95\textwidth} \centering \includegraphics[width=0.95\textwidth]{scopus_venn_subsets.png} \end{subfigure}% ~~~ \newline \begin{subfigure}[t]{0.9\textwidth} \centering \includegraphics[width=0.7\textwidth]{scopus_venn_subsets2.png} \end{subfigure}% \caption{Subsets of Scopus categories.} \label{fig:scopus_subsets} \end{figure} Figures \ref{fig:num_intersect_cats_per_cat-wos} and \ref{fig:num_intersect_cats_per_cat-scopus} show the distribution of intersections of categories. While both WoS and Scopus show a similar distribution pattern, overall, Scopus categories intersect with many more categories. In WoS, no category intersects with more than 60 categories, while in Scopus about 1/6 of the categories intersect with over 60 other categories each; 7 categories intersect with over 100 categories, with the \say{General Medicine} category intersecting with over 200 categories. Looking at the intersecting categories, we are also interested in identifying \say{similar} categories. In order to measure the \say{closeness} between two intersecting categories, we compute the intersection size and divide it by the size of the smaller category in that intersection. In WoS, only a single pair of categories had a \say{closeness} greater than 85\%.
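Treating each category as a set of journal identifiers, the relationships and the closeness measure above reduce to basic set operations. A minimal sketch with hypothetical journal sets:

```python
# Pairwise category relations, with each category a set of journals.
def relation(a, b):
    if a == b:
        return "equivalent"
    if a < b:          # proper subset
        return "subset"
    if a > b:          # proper superset
        return "superset"
    if a & b:
        return "intersecting"
    return "disjoint"

def closeness(a, b):
    """Intersection size divided by the size of the smaller category."""
    return len(a & b) / min(len(a), len(b))

# hypothetical near-identical categories
ling = {"j1", "j2", "j3", "j4"}
lang = {"j1", "j2", "j3", "j5"}
print(relation(ling, lang))   # intersecting
print(closeness(ling, lang))  # 0.75
```

Note that under this measure a pure subset always has closeness 1.0, which is why the subset pairs found above are the extreme cases of \say{similar} categories.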
The single WoS pair with closeness above 85\% is shown in Figure \ref{fig:wos_closest_intersect}. In Scopus, 6 pairs of categories had a \say{closeness} above 80\%, with two pairs displaying a closeness of over 90\%; these are displayed in Figure \ref{fig:closest_intersect_scopus}. \begin{figure} \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=0.95\textwidth]{wos_num_intersecting_categories_per_category_histogram.png} \caption{WoS} \label{fig:num_intersect_cats_per_cat-wos} \end{subfigure}% ~~~ \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=0.95\textwidth]{scopus_num_intersecting_categories_per_category_histogram.png} \caption{Scopus} \label{fig:num_intersect_cats_per_cat-scopus} \end{subfigure}% \caption{Number of intersecting categories per category} \label{fig:num_intersect_cats_per_cat} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{wos_venn_intersect_th_0.8.png} \caption{Intersection of WoS categories: closest pairs (above 85\%)} \label{fig:wos_closest_intersect} \end{figure} \begin{figure} \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=0.99\textwidth]{scopus_venn_intersect_th_0.95.png} \caption{above 95\%} \end{subfigure} ~~~~ \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=0.99\textwidth]{scopus_venn_intersect_th_0.90.png} \caption{above 90\%} \end{subfigure} ~~~~ \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=0.99\textwidth]{scopus_venn_intersect_th_0.85.png} \caption{above 85\%} \end{subfigure}% ~~~ \newline \begin{subfigure}[t]{0.95\textwidth} \centering \includegraphics[width=0.99\textwidth]{scopus_venn_intersect_th_0.80.png} \caption{above 80\%} \end{subfigure}% ~~~ \caption{Intersection of Scopus categories: closest pairs} \label{fig:closest_intersect_scopus} \end{figure} \paragraph{Minimal Cover.} \label{para:intra_minimal_cover} Following our analysis of the intersecting and subset categories, we looked at the minimal cover as
defined in Section \ref{subsec:math_def}. We performed this analysis for each category using \textit{all other categories in its system}. That is, we examine which categories can be covered by other categories and what is the minimal number of such categories needed to cover them. In WoS, only 6 categories could be fully covered ($\sim$2\%). These categories consist of between 13 and 78 journals each. The minimal number of categories needed to cover each of these six categories ranged from 4 to 15. Based on the very low number of categories which could be covered, we can deduce that all the other categories in WoS ($\sim$98\%) had at least one journal which was classified solely to that specific category (and thus could not be covered by others). In Scopus, on the other hand, 56 categories ($\sim$17\%) could be fully covered. These categories consist of between 1 and 183 journals each. The minimal number of categories needed to cover each of these categories ranged from 1 to 24. Recall that Scopus contains \say{pure} subset categories, thus these categories can be minimally covered by a single other category. We further examined situations in which a category can be \say{almost} covered. Namely, we added a flexible threshold allowing for 95\% cover of the journals in a category while still considering it \say{covered}. In WoS, 14 additional categories could now be covered (bringing the total to $\sim$8\%). The number of journals in each of these categories ranged from 13 to 193. The minimal number of categories needed to cover these categories ranged from 5 to 39. In Scopus, 80 additional categories could now be covered (bringing the total to $\sim$41\%), with the number of journals ranging from 1 to 375. The minimal number of categories needed to cover each of these categories ranged from 1 to 56. Figure \ref{fig:minimal_cover_intra} displays the minimal cover with respect to the number of journals in each category and to the total number of categories needed to cover this category.
Top plots display no threshold (i.e., full cover); bottom plots display 95\% cover. \begin{figure} \centering \begin{subfigure}[t]{0.95\textwidth} \centering \includegraphics[width=0.95\textwidth]{min_cover_set_intra.png} \caption{Threshold - 100\%} \end{subfigure}% ~~~ \newline \begin{subfigure}[t]{0.95\textwidth} \centering \includegraphics[width=0.95\textwidth]{min_cover_set_intra_0.95.png} \caption{Threshold - 95\%} \end{subfigure}% \caption{Minimal Cover} \label{fig:minimal_cover_intra} \end{figure} It is evident from our results that lowering the threshold even by a small percentage leads to a fast increase in the number of categories which can be minimally covered, most notably for Scopus. As can also be seen from the plots, when lowering the threshold, the added categories with a minimal cover are mostly those containing fewer than 100 journals. However, Scopus shows an increase also in the categories containing 100-300 journals. Furthermore, the apparent linear relation between the minimal cover size and the number of total covering categories, which can be seen on the right-hand side of Figure \ref{fig:minimal_cover_intra} (a) and (b), displays a steeper slope in the 95\% threshold case. \newpage \section{Conclusion and Discussion} \label{sec:discussion} In this study we have analysed the journal subject classification systems of the WoS and Scopus indexing services. We employed two types of analysis to address our research questions: intra-system analysis, in which each system was examined separately, and inter-system analysis, in which the two systems were compared against each other. From the intra-system analysis perspective, our analysis showed that while the distribution pattern of journals in categories is similar in both systems, in Scopus a significant number of categories contain an extremely small or an extremely large number of journals.
Notably, in WoS, no category contains fewer than 6 journals or more than 400 journals while, in Scopus, 16 categories contain fewer than 6 journals and 7 categories contain over 400 journals. Both very small and very large categories could lead to adverse outcomes when used for field normalization or ranking of journals. For example, ranking a journal in a very small category does not provide any meaningful value, while doing the same in a very large category renders the difference between two closely ranked journals negligible. In order to answer our first research question, we focused on the most frequently used scientometric scores, the JIF and SJR, and analysed how these scores and the rankings they induce within a category relate to the number of categories a journal is classified to. Our findings show that while the maximum score a journal receives decreases with the number of categories that journal is classified to, this is not the case when observing the other statistics of these metrics, especially with regard to the JIF. Our findings suggest that, in WoS, the mean, median and Q1 to Q3 quartile JIF scores increase as the number of categories a journal is classified to increases. Scopus does not display this upward trend. When examining our distance metrics, we see a positive correlation between the ranking range (as captured by the MM) and the number of categories the journal is classified to. This finding suggests that the more categories a journal is classified to, the larger the range of its rankings. In terms of the variance in each journal's rankings (as measured by VAR), a weak yet significant correlation between the VAR and the number of categories was found, meaning that a higher number of categories a journal is classified to is marginally indicative of a higher variance of rankings among these categories.
Combined, these results suggest that as the number of categories a journal is classified to increases, not only is the range of its rankings larger, but within this range a larger variance is expected. Namely, while a journal is given a single value (JIF or SJR), the ranking induced by this single value varies greatly across categories, and the variation increases as the number of assigned categories increases. This variation in rankings across categories could be exploited to \say{pick and choose} the \say{best} ranking among the assigned categories. To answer our research questions related to category similarities, we focused on the set-theory relationships detailed in Section \ref{sec:background}. Our analysis showed that in neither system do equivalent categories exist; however, in each system two of the \textbf{smallest} categories were \say{standalone} categories, meaning that the journals classified to them were not classified to any other category. The extremely small categories and the single classification of journals in these categories render the usage of scientometric measures for the journals in them pointless. Ranking a very small number of journals, or, at worst, a single one, in the only category they are classified to cannot provide an adequate indication of their merit. While other small categories exist in both systems, the journals there are classified to additional categories in the same system and could be scientometrically evaluated in the other, significantly larger, categories. Our subset analysis of each system shows that while WoS had no pure subsets, Scopus had numerous such categories and, notably, one of these categories, Podiatry, had a single journal and was a subset of three different categories, none of them related to Podiatry.
Again, given the low number of journals and the fact that these categories are pure subsets of other, significantly larger, categories, evaluating the journals with respect to the small categories is not meaningful. The larger categories which are supersets of these categories could potentially give a more meaningful ranking of the journals assigned to them. Surprisingly, looking at the largest categories of each system, WoS has no categories which are supersets, and all Scopus categories which are supersets are not the large categories we observed in Table \ref{tbl:large_cats}, except for \say{Education}. Turning to the intersection of categories, our findings show that all of the large categories except the \say{Literature and Literary Theory} category intersect with a high percentage of other categories (20\% of all categories). This seems to indicate that these categories contain journals from a large diversity of research fields and are in fact multidisciplinary or interdisciplinary. Previous research on such categories has already observed the problems associated with using scientometrics to evaluate journals in these categories, e.g., \citep{haustein2012multidimensional}. This is especially true if the metrics are source-normalized, as the normalization is often based on the field of research. Analysing the percentage of journals in the intersecting categories, we can see a few categories which have a very high percentage of intersection, especially in Scopus. The high intersection renders the categories either almost identical, as is the case for \say{Language and Linguistics} and \say{Linguistics and Language}, or makes one category a subset of the other (such as \say{Radiological and Ultrasound Technology} and \say{Radiology, Nuclear Medicine and Imaging}).
The near identity of some categories in Scopus was also observed by \citet{wang2016large}, but their findings were based on name similarity, while ours corroborate these by focusing on the journals associated with those categories. Turning to the two categories which have no intersection at all, it is unclear why the single \say{Dental Hygiene} journal is not categorized under \say{Dentistry (miscellaneous)}. A similar argument holds for \say{Nurse Assisting}, whose single journal could be categorized under \say{Nursing (miscellaneous)} or \say{Critical Care Nursing}. Taken together, all of the above findings point to a subset of categories which provide little to no meaning when ranking journals. These categories are too small or too redundant, and utilizing existing scientometrics to evaluate their journals is not likely to bring about valuable insights. Our minimal cover set analysis showed that in WoS $\sim$98\% of categories \textbf{could not} be covered completely. We can conclude that all of these categories had at least one journal which was classified only to them. Scopus showed different behaviour, with nearly 1/6 of its categories being coverable. While this contradicts our original hypothesis, it could be expected for both systems, as observed before in Figure \ref{fig:num_cats_per_journal}. Recall that WoS has a very large number of journals classified to a single category while, in contrast, the majority of journals in Scopus are multi-classified. Relaxing the threshold required for minimal cover to 95\% shows a near-linear relation between the minimal cover size and the number of journals and, similarly, between the minimal cover size and the total number of covering categories. This suggests that a higher number of covering categories is indicative of a small number of shared journals between each of the covering categories and the category covered.
Turning our focus to the inter-system analysis, the results corroborate previous findings showing that journals are classified to a statistically significantly higher number of categories in Scopus than in WoS. Viewing the shared journals across categories in the two systems shows an almost linear decrease in the number of shared journals as the requirement threshold increases, which is expected as we require a higher percentage of shared journals per category. This is true both when considering WoS as the \say{true} (i.e., originating) classification system and when considering Scopus as the \say{true} system. However, from Figure \ref{fig:threshold_match}, we can see that using WoS as the \say{true} classification system, the decrease in shared journals is slow at first, with almost 100\% of the categories having 40\% of journals shared with a \say{matching} Scopus category. This trend continues to be slower than the pattern shown when using Scopus as the \say{true} classification system, all the way to the 90\% shared-journals threshold. These results indicate that Scopus does not provide a finer-grained classification system compared to WoS, but rather a significantly different one. Our analysis of the categories showed that, in both systems, a very small number of categories have a large percentage of shared journals. Considering a 95\% intersection threshold, 6 WoS categories intersected with Scopus categories and one Scopus category intersected with a WoS category. A deeper look at the categories with the highest number of shared journals in the intersection shows that these are indeed similar categories, as can be observed from their names. In some of these categories the name is identical, while in others it is highly similar. This is the case for \say{Cardiac $\&$ Cardiovascular Systems} in WoS and \say{Cardiology and Cardiovascular Medicine} in Scopus, or \say{Literature, Romance} in WoS and \say{Literature and Literary Theory} in Scopus.
Only the category \say{Ophthalmology} had a large intersection with a category from the second system twice, once when WoS was chosen as the \say{true} classification system, and once when Scopus was chosen as such. Following our intersection analysis, we focused on categories in one system which are subsets of categories in the other system. Our findings showed that these subsets are either very small, containing only 1 or 2 journals, or are indeed subcategories within the larger categories. Such an example is the WoS category \say{Dentistry, Oral Surgery $\&$ Medicine}, which is a superset of \say{Oral Surgery} and \say{Orthodontics}, two Scopus categories. Finally, we looked at the minimal number of categories in one system required to cover any single category in the other system. Due to the nature of our methodology, i.e., only journals which are indexed in both systems were analysed, every category in one system is bound to be covered by categories from the other system. Observing the cumulative cover set with a varying threshold on the percentage of journals required for coverage, we can see that for both systems over 50\% of the categories can be covered by at most 10 categories, and as the threshold varies this number decreases. Observing the small categories, with a low number of journals or a low number of covering categories from the second system, shows no specific trend with respect to the minimal number of categories required to cover them. Observing \say{outlier} categories, those which require a high number of categories to cover them, our findings show that even categories with a fairly low number of journals ($<$100) may require a high number of categories to minimally cover them. For WoS categories, $\sim$25\% of them are covered by at least 8 Scopus categories. More notably, for Scopus categories, $\sim$50\% of them are minimally covered by at least 8 WoS categories.
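The subset analysis reduces to plain set containment, as in the following sketch (journal identifiers are hypothetical placeholders; the Dentistry example mirrors the one above):

```python
# Find categories in one system whose journal sets are fully contained in a
# category of the other system. Journal identifiers are illustrative only.

def find_subsets(system_a, system_b):
    """Return (a_cat, b_cat) pairs where a_cat's journals are a subset of
    b_cat's journals."""
    return [(a, b)
            for a, ja in system_a.items()
            for b, jb in system_b.items()
            if ja <= jb]

scopus = {"Oral Surgery": {"J1", "J2"}, "Orthodontics": {"J3"}}
wos = {"Dentistry, Oral Surgery & Medicine": {"J1", "J2", "J3", "J4"}}
pairs = find_subsets(scopus, wos)
```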
As the minimal cover required is non-negligible, these results further support our previous observation that Scopus does not provide a finer-grained classification system compared to WoS. We recognize that this study is limited in several respects. First, our selection and cleanup process, as detailed in Section \ref{sec:data_collection}, could have missed identifying journals with slightly different names or added incorrect ones. However, we have performed a manual analysis of some of the journals in order to mitigate this issue as much as possible. In addition, while our study comprised a very large set of journals, encompassing an extensive variety of research fields, it contained only the subset of journals indexed by both systems. Thus our findings are on the one hand limited to the journals and categories under analysis, but on the other allow for a more accurate and succinct comparison. The metrics used in our analysis were the JIF and SJR. These two metrics are calculated differently and could be a confounding factor in our ranking analysis and comparison. However, previous studies have shown the correlation of these metrics \citep{subochev2018ranking}. Finally, our analysis did not consider additional metrics which are normalized utilizing different normalization approaches. In future studies we would like to see how such normalization techniques correlate with ranking across categories. Some interesting questions in this area arise: What would the difference in ranking look like when using normalized metrics such as SNIP? Would this difference be consistent? In this respect, the new \textit{Journal Citation Indicator} from WoS could be compared to current metrics. Another aspect which we wish to further explore is to combine our analysis with citation-based analysis in order to gain a more complete perspective. Finally, we wish to further expand our studies to include additional indexing systems such as Dimensions and Microsoft Academic.
\section{Data Analysis} \label{sec:data_analysis} \input{Results_intra} \input{Results_inter} \input{discussion.tex} \section*{} Declarations of interest: none \newline \newline This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. \newpage \section{References} \bibliographystyle{apalike}
\section{Introduction} There has been increasing interest in named entity recognition, or more generally in recognizing entity mentions\footnote{Mentions are defined as references to entities that could be named, nominal or pronominal \cite{florian2004statistical}.} \cite{alex2007recognising,finkel2009nested,lu2015joint,muis2017labeling}, with growing recognition that the nested hierarchical structure of entity mentions should be taken into account to better facilitate downstream tasks like question answering \cite{abney2000answer}, relation extraction \cite{mintz2009distant,liu2017heterogeneous}, event extraction \cite{riedel2011fast,li-ji-huang:2013:ACL2013}, and coreference resolution \cite{soon2001machine,ng2002improving,chang2013constrained}. In practice, mentions with nested structures frequently occur in news \cite{doddington2004automatic} and biomedical documents \cite{kim2003genia}. For example, in Figure \ref{fig:example}, ``UN Secretary General" of type Person also contains ``UN" of type Organization. \begin{figure}[t] \includegraphics[width=8cm]{image/example.png} \centering \caption{An example sentence of nested mentions represented as a forest. PER:Person, ORG:Organization, GPE:Geo-Political Entity. } \label{fig:example} \vspace{0mm} \end{figure} Traditional sequence labeling models such as conditional random fields (CRF) \cite{lafferty2001conditional} do not allow hierarchical structures between segments, making them incapable of handling such problems. \citet{finkel2009nested} presented a chart-based parsing approach where each sentence with nested mentions is mapped to a rooted constituent tree. The issue with using a chart-based parser is its cubic time complexity in the number of words in the sentence.
To achieve a scalable and effective solution for recognizing nested mentions, we design a transition-based system inspired by the recent success of employing transition-based methods for constituent parsing \cite{zhang2009transition} and named entity recognition \cite{lou2017transition}, especially when they are paired with neural networks \cite{watanabe2015transition}. Generally, each sentence with nested mentions is mapped to a forest where each outermost mention forms a tree consisting of its inner mentions. Then our transition-based system learns to construct this forest through a sequence of shift-reduce actions. Figure \ref{fig:example} shows an example of such a forest. In contrast, the tree structure of \citet{finkel2009nested} further uses a root node to connect all tree elements. Our forest representation eliminates the root node, so that the number of actions required to construct it can be reduced significantly. Following \cite{P15-1033}, we employ a Stack-LSTM to represent the system's state, which consists of the states of the input, stack and action history, incrementally in a continuous space. The (partially) processed nested mentions in the stack are encoded with recursive neural networks \cite{socher2013recursive}, where composition functions are used to capture dependencies between nested mentions. Based on the observation that letter-level patterns such as capitalization and prefixes can be beneficial in detecting mentions, we incorporate a character-level LSTM to capture such morphological information. Meanwhile, this character-level component can also help deal with the out-of-vocabulary problem of neural models. We conduct experiments on three standard datasets. Our system achieves state-of-the-art performance on the ACE datasets and comparable performance on the GENIA dataset.
\section{Related Work} Entity mention recognition with nested structures was first explored with rule-based approaches \cite{zhang2004enhancing,zhou2004recognizing,zhou2006recognizing} where the authors first detected the innermost mentions and then relied on rule-based post-processing methods to identify outer mentions. \citet{mcdonald2005flexible} proposed a structured multi-label model to represent overlapping segments in a sentence, but it came with cubic time complexity in the number of words. \citet{alex2007recognising} proposed several ways to combine multiple conditional random fields (CRF) \cite{lafferty2001conditional} for such tasks. Their best results were obtained by cascading several CRF models in a specific order, while each model is responsible for detecting mentions of a particular type. However, such an approach cannot model nested mentions of the same type, which frequently appear. \citet{lu2015joint} and \citet{muis2017labeling} proposed new representations of mention hypergraph and mention separator to model {\em overlapping mentions}. However, the nested structure is not guaranteed in such approaches since overlapping structures additionally include the {\em crossing structures}\footnote{For example, in a four-word sentence ABCD, the phrase ABC and BCD together form a {crossing structure}.}, which rarely exist in practice \cite{lu2015joint}. {\color{black}Also, their representations did not model the dependencies between nested mentions explicitly, which may limit their performance.} In contrast, the chart-based parsing method \cite{finkel2009nested} can capture {\color{black} the dependencies between nested mentions with composition rules which allow an outer entity to be influenced by its contained entities}. However, its cubic time complexity makes it not scalable to large datasets.
As neural network based approaches are proven effective in entity or mention recognition \cite{collobert2011natural,lample2016neural,huang2015bidirectional,chiu2016named,ma-hovy:2016:P16-1}, recent efforts focus on incorporating neural components for recognizing nested mentions. \citet{N18-1131} dynamically stacked multiple LSTM-CRF layers \cite{lample2016neural}, detecting mentions in an inside-out manner until no outer entities are extracted. \citet{N18-1079} used recurrent neural networks to extract features for a hypergraph which encodes all nested mentions based on the BILOU tagging scheme. \section{Model} Specifically, given a sequence of words $\lbrace x_0, x_1, \dots, x_n\rbrace$, the goal of our system is to output a set of mentions where nested structures are allowed. We use the forest structure to model the nested mentions scattered in a sentence, as shown in Figure \ref{fig:example}. The mapping is straightforward: each outermost mention forms a tree where the mention is the root and its contained mentions correspond to constituents of the tree.\footnote{Note that words that are not contained in any mention each forms a single-node tree.} \subsection{Shift-Reduce System} \begin{figure}[t!] \begin{center} \renewcommand{\arraystretch}{0.8} \begin{tabular}{>{\small}c>{\small}c} Initial State & $[\phi, 0, \phi]$\\ Final State & $[ S, n, A]$\\ \\ \textsc{Shift} & {\Large$\frac{[S, \ i, \ A]}{[S|w, \ i+1, \ A|\textsc{Shift}]}$} \\ \\ \textsc{Reduce-X} & {\Large$\frac{[S|t_1t_0, \ i, \ A]}{[S|X, \ i, \ A|\textsc{Reduce-X}]}$} \\ \\ \textsc{Unary-X} & {\Large$\frac{[S|t_0, \ i, \ A]}{[S|X, \ i, \ A|\textsc{Unary-X}]}$} \\ \\ \end{tabular} \end{center} \caption{\label{fig:ds1} Deduction rules. $[ S, i, A]$ denotes stack, buffer front index and action history respectively. 
} \end{figure} Our transition-based model is based on the shift-reduce parser for constituency parsing \cite{watanabe2015transition}, which in turn follows \cite{zhang2009transition,sagae2005classifier}. Generally, our system employs a stack to store (partially) processed nested elements. The system's state is defined as $[S, i, A]$, which denotes the stack, buffer front index and action history respectively. In each step, an action is applied to change the system's state. Our system consists of three types of transition actions, which are also summarized in Figure \ref{fig:ds1}: \begin{itemize} \item \textsc{Shift} pushes the next word from the buffer onto the stack. \item \textsc{Reduce-X} pops the top two items $t_0$ and $t_1$ from the stack and combines them as a new tree element \{$\textsc{X} \rightarrow t_0t_1$\} which is then pushed onto the stack. \item \textsc{Unary-X} pops the top item $t_0$ from the stack and constructs a new tree element \{$\textsc{X} \rightarrow t_0$\} which is pushed back onto the stack. \end{itemize} Since the shift-reduce system assumes unary and binary branching, we binarize the trees in each forest in a left-branching manner. For example, if three consecutive words $A, B, C$ are annotated as Person, we convert them into a binary tree $\{Person \rightarrow \{ Person* \rightarrow A, B \}, C\}$ where $Person*$ is a temporary label for $Person$. Hence, the $\textsc{X}$ in reduce- actions will also include such temporary labels. Note that since most words are not contained in any mention, they are only shifted to the stack and not involved in any reduce- or unary- actions. An example sequence of transitions can be found in Figure \ref{fig:transitions}. Our shift-reduce system is different from previous parsers in terms of the terminal state. 1) It does not require the terminal stack to be a rooted tree. Instead, the final stack should be a forest consisting of multiple nested elements with tree structures.
2) To conveniently determine the ending of our transition process, we add an auxiliary symbol $\$$ to each sentence. Once it is pushed to the stack, it implies that all deductions of actual words are finished. Since we do not allow unary rules between labels like $\textsc{X1} \rightarrow \textsc{X2}$, the length of the maximal action sequence is $3n$.\footnote{In this case, each word is shifted ($n$) and involved in a unary action ($n$). Then all elements are reduced to a single node ($n-1$). The last action is to shift the symbol $\$$. } \begin{figure}[t] \includegraphics[width=\linewidth, height=9.2cm]{image/transitions.pdf} \centering \caption{An example sequence of transition actions for the sentence ``Indonesian leaders visited him''. \$ is the special symbol indicating the termination of transitions. PER:Person, GPE:Geo-Political Entity.} \label{fig:transitions} \end{figure} \subsection{Action Constraints} To make sure that each action sequence is valid, we impose hard constraints on the actions that can be taken. For example, a reduce- action can only be conducted when there are at least two elements in the stack. Please see the Appendix for the full list of restrictions. Formally, we use $\mathcal{V}(S,i,A)$ to denote the valid actions given the parser state. Let us denote the feature vector for the parser state at time step $k$ as $\mathbf{p}_k$. The distribution of actions is computed as follows: \begin{equation} p(z_k \mid \mathbf{p}_k) = \frac{\exp \left( \mathbf{w}_{z_k}^{\top} \mathbf{p}_k + b_{z_k} \right)}{\sum_{z' \in \mathcal{V}(S,i,A)} \exp \left( \mathbf{w}_{z'}^{\top} \mathbf{p}_k + b_{z'} \right)} \label{eq:probs} \end{equation} \noindent where $\mathbf{w}_z$ is a column weight vector for action $z$, and $b_z$ is a bias term. \subsection{Neural Transition-based Model} We use neural networks to learn the representation of the parser state, which is $\mathbf{p}_k$ in (\ref{eq:probs}).
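Before turning to the neural state representation, the transition mechanics themselves can be simulated directly. The following sketch (illustrative only; the real system chooses actions with the classifier above rather than following a fixed gold sequence) builds the forest for the example sentence of Figure \ref{fig:transitions}:

```python
# Minimal simulation of the three transition actions. The stack holds tree
# elements as (label_or_word, children) pairs; leaves have children = None.

def step(stack, buffer, action):
    if action == "SHIFT":
        stack.append((buffer.pop(0), None))           # shift next word as a leaf
    elif action.startswith("UNARY-"):
        label = action[len("UNARY-"):]
        child = stack.pop()
        stack.append((label, [child]))                # {X -> t0}
    elif action.startswith("REDUCE-"):
        label = action[len("REDUCE-"):]
        right, left = stack.pop(), stack.pop()
        stack.append((label, [left, right]))          # {X -> t1 t0}
    else:
        raise ValueError(action)
    return stack, buffer

# "Indonesian" (GPE) is nested inside "Indonesian leaders" (PER); the
# remaining words and the terminal symbol $ are simply shifted.
buffer = ["Indonesian", "leaders", "visited", "him", "$"]
actions = ["SHIFT", "UNARY-GPE", "SHIFT", "REDUCE-PER", "SHIFT", "SHIFT", "SHIFT"]
stack = []
for a in actions:
    stack, buffer = step(stack, buffer, a)
# stack is now a forest: the PER tree, then 'visited', 'him', '$'
```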
\subsubsection*{Representation of Words} \label{sec:wr} Words are represented by concatenating three vectors: \begin{equation} \mathbf e_{x_i} = [ \mathbf e_{w_i}, \mathbf e_{p_i}, \mathbf c_{w_i}] \label{eq:nmr_x} \end{equation} \noindent where $ \mathbf e_{w_i} $ and $\mathbf e_{p_i}$ denote the embeddings for $i$-th word and its POS tag respectively. $\mathbf c_{w_i}$ denotes the representation learned by a character-level model using a bidirectional LSTM. Specifically, for character sequence $s_0, s_1,\dots, s_n$ in the $i$-{th} word, we use the last hidden states of forward and backward LSTM as the character-based representation of this word, as shown below: \begin{equation} \mathbf c_{w_i} = [ \overrightarrow \LSTMc (s_0, \dots, s_n), \overleftarrow \LSTMc (s_0, \dots, s_n) ] \label{eq:char_x} \end{equation} \subsubsection*{Representation of Parser States} Generally, the buffer and action history are encoded using two vanilla LSTMs \cite{graves2005framewise}. For the stack that involves popping out top elements, we use the Stack-LSTM \cite{P15-1033} to efficiently encode it. Formally, if the unprocessed word sequence in the buffer is $x_i, x_{i+1},\dots, x_{n}$ and action history sequence is $a_0, a_{1},\dots,a_{k-1}$, then we can compute buffer representation $\mathbf{b}_k$ and action history representation $\mathbf{a}_k$ at time step $k$ as follows: \begin{align} \mathbf{b}_k = \overleftarrow \LSTMb [\mathbf{e}_{x_i}, \dots, \mathbf{e}_{x_{n}} ] \ \ \ \\ \label{eq:buffer} \mathbf{a}_k = \overrightarrow \LSTMa[\mathbf{e}_{a_0}, \dots, \mathbf{e}_{a_{k-1}} ] \end{align} where each action is also mapped to a distributed representation $\mathbf{e}_{a}$.\footnote{Note that $\LSTMb$ runs in a right-to-left order such that the output can represent the contextual information of $x_i$.} For the state of the stack, we also use an LSTM to encode a sequence of tree elements. However, the top elements of the stack are updated frequently. 
Stack-LSTM provides an efficient implementation that incorporates a stack-pointer.\footnote{Please refer to \citet{P15-1033} for details.} Formally, the state of the stack $\mathbf s_k$ at time step $k$ is computed as: \begin{equation} \mathbf{s}_k = \LSTMs [\mathbf{h}_{t_m}, \dots, \mathbf{h}_{t_0} ] \end{equation} where $\mathbf{h}_{t_i}$ denotes the representation of the $i$-{th} tree element from the top, which can be computed recursively, similarly to a Recursive Neural Network \cite{socher2013recursive}, as follows: \begin{align} \mathbf{h}_{parent} = \mathbf{W}_{u,l}^{\top} \mathbf{h}_{child} + \mathbf{b}_{u,l} \quad \quad \quad \quad \ \\ \mathbf{h}_{parent} = \mathbf{W}_{b,l}^{\top} [\mathbf{h}_{lchild}, \mathbf{h}_{rchild} ] + \mathbf{b}_{b,l} \ \ \end{align} where $\mathbf{W}_{u,l}$ and $\mathbf{W}_{b,l}$ denote the weight matrices for unary($u$) and binary($b$) composition with the parent node having label($l$). Note that the composition function is distinct for each label $l$. Recall that the leaf nodes of each tree element are raw words. Instead of representing them with their original embeddings introduced in Section \ref{sec:wr}, we found during our initial experiments that concatenating the buffer state in (\ref{eq:buffer}) is beneficial. Formally, when a word $x_i$ is shifted to the stack at time step $k$, its representation is computed as: \begin{equation} \mathbf{h}_{leaf} = \mathbf{W}_{leaf}^{\top} [ \mathbf e_{x_i}, \mathbf b_k ] + \mathbf{b}_{leaf} \end{equation} Finally, the state of the system $\mathbf p_k$ is the concatenation of the states of buffer, stack and action history: \begin{equation} \mathbf{p}_{k} = [\mathbf b_k, \mathbf s_k, \mathbf a_k] \end{equation} \subsubsection*{Training} We employ the greedy strategy to maximize the log-likelihood of the local action classifier in (\ref{eq:probs}).
Specifically, let $z_{ik}$ denote the $k$-{th} action for the $i$-{th} sentence, the loss function with $\ell_2$ norm is: \begin{equation} \mathcal L(\theta) = - \sum_i \sum_k \log p(z_{ik}) + \frac{\lambda}{2} \Vert \theta \Vert ^2 \end{equation} where $\lambda$ is the $\ell_2$ coefficient. \begin{table}[t!] \centering \scalebox{0.72} { \begin{tabular}{l|c|c|c|c} Models & ACE04 & ACE05 & GENIA & $w/s$ \\ \hline \citet{finkel2009nested} & - & - & 70.3 & 38$^\dagger$ \\ \citet{lu2015joint} & 62.8 & 62.5 & 70.3 & 454 \\ \citet{muis2017labeling} & 64.5 & 63.1 & 70.8 & 263 \\ \citet{N18-1079} & 72.7 & 70.5 & 73.6 & -\\ \citet{N18-1131} \footnote{Note that in ACE2005, \citet{N18-1131} did their experiments with a different split from \citet{lu2015joint} and \citet{muis2017labeling} which we follow as our split. } & - & 72.2 & \bf{74.7} & -\\ \hline Ours & \bf{73.3} & \bf{73.0} & 73.9 & 1445 \\ - char-level LSTM & 72.3 & 71.9 & 72.1 & 1546 \\ - pre-trained embeddings & 71.3 & 71.5 & 72.0 & 1452 \\ - dropout layer & 71.7 & 72.0 & 72.7 & 1440 \end{tabular} } \caption{Main results in terms of $F_1$ score (\%). $w/s$: \# of words decoded per second, number with $\dagger$ is retrieved from the original paper. } \label{tab:result} \end{table} \section{Experiments} We mainly evaluate our models on the standard ACE-04, ACE-05 \cite{doddington2004automatic}, and GENIA \cite{kim2003genia} datasets with the same splits used by previous research efforts \cite{lu2015joint,muis2017labeling}. In ACE datasets, more than 40\% of the mentions form nested structures with some other mention. In GENIA, this number is 18\%. Please see \citet{lu2015joint} for the full statistics. 
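The locally normalized classifier of (\ref{eq:probs}) and the negative log-likelihood above can be sketched as follows (the scores are made-up numbers standing in for $\mathbf{w}_{z}^{\top} \mathbf{p}_k + b_{z}$, and the $\ell_2$ term is omitted):

```python
# Softmax restricted to the valid actions of the current parser state, and
# the NLL of a gold action sequence. Scores here are illustrative numbers.
import math

def action_probs(scores, valid):
    """scores: dict action -> raw score; valid: set of legal actions."""
    z = sum(math.exp(scores[a]) for a in valid)
    return {a: math.exp(scores[a]) / z for a in valid}

def nll(score_seq, valid_seq, gold_seq):
    return -sum(math.log(action_probs(s, v)[g])
                for s, v, g in zip(score_seq, valid_seq, gold_seq))

scores = {"SHIFT": 1.0, "UNARY-GPE": 0.0, "REDUCE-PER": 2.0}
valid = {"SHIFT", "UNARY-GPE"}  # e.g. reduce is illegal with < 2 stack items
probs = action_probs(scores, valid)
```

Note that the invalid action receives no probability mass at all, rather than a small one: the normalization in (\ref{eq:probs}) runs only over $\mathcal{V}(S,i,A)$.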
\subsection{Setup} Pre-trained GloVe embeddings \cite{pennington2014glove} of dimension 100 are used to initialize the word vectors for all three datasets.\footnote{We also tried using embeddings trained on PubMed for GENIA, but the performance was comparable.} The embeddings of POS tags are initialized randomly with dimension 32. The model is trained using Adam \cite{kingma2014adam} and a gradient clipping of 3.0. Early stopping is used based on the performance on the development sets. Dropout \cite{srivastava2014dropout} is used after the input layer. The $\ell_2$ coefficient $\lambda$ is also tuned during the development process. \subsection{Results} The main results are reported in Table \ref{tab:result}. Our neural transition-based model achieves the best results on the ACE datasets and comparable results on the GENIA dataset in terms of $F_1$ measure. We hypothesize that the performance gain of our model compared with other methods is largely due to improved performance on the portions of nested mentions in our datasets. To verify this, we design an experiment to evaluate how well a system can recognize nested mentions. \subsubsection*{Handling Nested Mentions} The idea is that we split the test data into two portions: sentences with and without nested mentions. The results on GENIA are listed in Table \ref{tab:nested}. We can observe that the margin of improvement is more significant on the portion with nested mentions, revealing our model's effectiveness in handling nested mentions. This observation helps explain why our model achieves greater improvement on ACE than on GENIA in Table \ref{tab:result}, since the former has many more nested structures than the latter. Moreover, \citet{N18-1131} performs better when it comes to non-nested mentions, possibly due to the CRF they used, which globally normalizes each stacked layer.
\subsubsection*{Decoding Speed} Note that \citet{lu2015joint} and \citet{muis2017labeling} also feature linear-time complexity, but with a greater constant factor. To compare the decoding speed, we re-implemented their model with the same platform (PyTorch) and run them on the same machine (CPU: Intel i5 2.7GHz). Our model turns out to be around 3-5 times faster than theirs, showing its scalability. \subsubsection*{Ablation Study} To evaluate the contribution of neural components including pre-trained embeddings, the character-level LSTM and dropout layers, we test the performances of ablated models. The results are listed in Table \ref{tab:result}. From the performance gap, we can conclude that these components contribute significantly to the effectiveness of our model in all three datasets. \begin{table}[t!] \centering \scalebox{0.75} { \begin{tabular}{l|ccc|ccc} & \multicolumn{6}{c}{GENIA} \\ & \multicolumn{3}{c|}{{Nested}} & \multicolumn{3}{c}{{Non-Nested}} \\ &$P$ & $R$ & $F_1$ & $P$ & $R$ & $F_1$ \\ \hline \citet{lu2015joint} & 76.3 & 60.8 & 67.7 & 73.1 & 70.7 & 71.9 \\ \citet{muis2017labeling} & 76.5 & 60.3 & 67.4 & 74.8 & 71.3 & 73.0 \\ \citet{N18-1131} & 79.4 & 63.6 & 70.6 & 78.5 & 77.5 & 78.0\\ \hline Ours & 80.3 & 64.6 & 71.6 & 76.8 & 73.9 & 75.3 \\ \end{tabular} } \caption{Results (\%) on different types of sentences on the GENIA dataset.} \label{tab:nested} \end{table} \section{Conclusion and Future Work} In this paper, we present a transition-based model for nested mention recognition using a forest representation. Coupled with Stack-LSTM for representing the system's state, our neural model can capture dependencies between nested mentions efficiently. Moreover, the character-based component helps capture letter-level patterns in words. The system achieves the state-of-the-art performance in ACE datasets. One potential drawback of the system is the greedy training and decoding. 
We believe that alternatives like beam search and training with exploration \cite{goldberg2012dynamic} could further boost the performance. Another direction that we plan to work on is to apply this model to recognizing overlapping entities and entities that involve discontinuous spans \cite{muis2016learning}, which frequently exist in the biomedical domain. \section*{Acknowledgements} We would like to thank the anonymous reviewers for their valuable comments. We also thank Meizhi Ju for providing raw predictions and helpful discussions. This work was done after the first author visited Singapore University of Technology and Design. This work is supported by Singapore Ministry of Education Academic Research Fund (AcRF) Tier 2 Project MOE2017-T2-1-156.
\section{Introduction} In a recent paper, we studied the spectrum of a modified form of the usual Schr\"odinger-Newton system (SN) of gravity coupled to quantum mechanics (SN was originally developed in~\cite{Bon}). Now we turn to the spherical dynamics of the self-coupled gravity introduced, in this quantum mechanical setting, in~\cite{us}. For the SN system, we have Newtonian gravity determining the potential $\Phi$ using the wave function itself to describe the mass density, so the coupled system is \begin{equation}\label{SN} \begin{aligned} i \, \hbar \, \frac{\partial \Psi}{\partial t} &= -\frac{\hbar^2}{2 \, m} \, \nabla^2 \Psi + m \, \Phi \, \Psi \\ \nabla^2 \Phi &= 4 \, \pi \, G \, m \, \Psi^* \, \Psi. \end{aligned} \end{equation} The spectrum and dynamics of this system of equations have been studied extensively, and their relevance to single-particle collapse similarly explored -- see~\cite{Carlip, Giulini2} and references therein for a review of that discussion. Motivated by the special relativistic notion that energy and mass are equivalent, we modified the gravitational piece to include the self-gravity of $\Phi$ itself -- the resulting static theory of gravity was originally introduced by Einstein in~\cite{Einstein}, and has been re-developed periodically (see~\cite{FandN, DandH, GiuliniSC, FranklinAJP}, for example). When we combine this new gravity model with Schr\"odinger's equation, we get \begin{equation}\label{SCSG} \begin{aligned} i \, \hbar \, \frac{\partial \Psi}{\partial t} &= -\frac{\hbar^2}{2 \, m} \, \nabla^2 \Psi + m \, \Phi \, \Psi \\ \nabla^2 \sqrt{\Phi} &= \frac{2 \, \pi \, G}{c^2} \, m \, \Psi^* \, \Psi \, \sqrt{\Phi}. \end{aligned} \end{equation} Here, we have modified the field equation for gravity to reflect the same sort of self-consistent self-coupling that is found in full general relativity (albeit in a scalar setting).
The form comes from considering the combined gravity/quantum mechanical equation, from~\cite{Moller, Rosenfeld}, \begin{equation}\label{MRG} G_{\mu\nu} = 8 \, \pi \, \langle \hat T_{\mu\nu} \rangle, \end{equation} and making a gravitational field equation in~\refeq{SCSG} that is more like the nonlinear (Einstein tensor) left-hand side of~\refeq{MRG} than the linear Poisson equation for gravity found in~\refeq{SN}. Both SN and our modification take the source to be $m \, \Psi^*\, \Psi$, and the approach can be viewed either as part of a multi-body Hartree approximation, or fundamental (the many-body view would not change the gravitational field equation here -- we would still have to incorporate the energy self-coupling). In this work, we will take a single-particle wave-function which cannot be viewed, by itself, as a Hartree approximation (due to the lack of self-interaction in the Hartree approach~\cite{Adler}). There are other ways of extending the gravitational field equation to capture additional relativistic effects, like introducing the gravito-magnetic contribution as in~\cite{Manfredi}. That allows the ``magnetic" component of weak-field gravity to play a role in the SN setting. But that extension retains the linearity of the gravitational field equations themselves. We are working in a complementary direction, in which we extend to include the self-energy coupling that leads to the nonlinearity of general relativity. The dynamics of the SN system, in particular, the details of spherical collapse, have been studied, and our goal is to compare the SN collapse with the (potential) spherical collapse of an initial Gaussian evolved using~\refeq{SCSG}. En route to that comparison, we will first consider the role of the relativistic Dirac equation with the modified gravity. Then we will estimate the critical mass at which the gravitational interaction balances the spreading of a free Gaussian, for both SN and the modified gravitational form. 
In the SN case, this critical mass defines the boundary between collapse (to a ground state) and dissipation. For the self-coupled case, there is no collapse to the ground state, although at the critical mass, there is a balance between gravity and quantum mechanical dissipation. \section{Dirac Equation} Given that we are using the relativistic notion of energy and mass equivalence to motivate the use of the modified form of gravity appearing in~\refeq{SCSG}, it is reasonable to introduce the competing relativistic effects on the quantum mechanical side. If we start with the Dirac Lagrangian, coupled to the Lagrangian appropriate to the modified form of gravity (that gravitational Lagrangian can be found in~\cite{us,FranklinAJP}), \begin{equation} \mathcal L = i \, \hbar \, \bar \Psi \, \gamma^\mu \, \partial_\mu \, \Psi - m \, c^2 \, \bar \Psi \, \Psi - m \, \Phi \, \bar \Psi \, \gamma^0 \, \Psi - \frac{c^2}{8 \, \pi \, G \, \Phi} \nabla \Phi \cdot \nabla \Phi, \end{equation} then the resulting Dirac equation and modified gravity coupling give an eigenvalue problem for the ground state that looks like (already in spherical coordinates): \begin{equation} \begin{aligned} \left[ \begin{array}{cc} m \, c^2 + m \, \Phi & \hbar \, c \, \of{-\frac{d}{d r} + \frac{\kappa}{r}} \\ \hbar \, c \, \of{\frac{d}{dr} + \frac{\kappa}{r}} & - m\, c^2 + m \, \Phi \end{array} \right] \, \left[\begin{array}{c} u \\ v \end{array} \right] &= E \, \left[\begin{array}{c} u \\ v \end{array} \right] \\ \frac{d^2}{d r^2} \, \of{r \, \sqrt{\Phi}} &= \frac{2 \, G \, m}{c^2 \, r} \, \of{u^* u + v^* v} \, \sqrt{\Phi}, \end{aligned} \end{equation} where we take $\kappa = 1/2$ (no orbital angular momentum).
We can solve this coupled system just as we did in~\cite{us} -- the numerical method doesn't change significantly, although there are relativistic details that need to be addressed (the presence of negative energy states, for example, means we need to be careful how we identify the ground state). We modified our method to accommodate the additional complexity, and proceeded to find the ground state energies for the new system (see~\cite{DanGuo}). The Dirac ground state energy, as a function of mass, is shown in~\reffig{fig:west}. In that figure, we also show the effect of using the Dirac equation together with Newtonian gravity, and the ground state energy of SN itself, all for comparison. \begin{figure}[htbp] \centering \includegraphics[width=4in]{Diracenergies} \caption{The (dimensionless) energy, as a function of mass (in units of Planck mass), for the ground state of the modified-gravity-Dirac system is shown with black dots. The same calculation using a Newtonian gravitational field and the Dirac equation is shown in gray dots, and the solid line is the SN ground state energy, for comparison.} \label{fig:west} \end{figure} By how much does the ground state energy change when we use the Dirac equation instead of Schr\"odinger? We can compare the energy estimates directly, as shown in~\reffig{fig:comparE}. There, the percentage difference between the energies computed using the Schr\"odinger equation vs. the Dirac equation are shown (both cases use the modified form of gravity, of course). \begin{figure}[htbp] \centering \includegraphics[width=4in]{comparE} \caption{The percentage difference between the ground state energies as computed using the Dirac equation and the Schr\"odinger equation. Mass is in units of Planck mass.} \label{fig:comparE} \end{figure} The divergence of the two energies at the masses shown is relatively mild, with a difference of $10\%$ at five Planck masses. 
For the temporal evolution of an initial Gaussian, we'll use the Schr\"odinger equation, where the numerical method is easy to generate and verify. We will work with large masses, between $1$ and $5$ Planck mass, where the ground state energies differ by $\sim 5-10 \%$ between Schr\"odinger and Dirac. While we are well within the relativistic regime at these masses, the difference in energy is small, and we expect the basic qualitative dynamics to hold using the Schr\"odinger equation instead of the Dirac equation. \section{Dimensionless Form, Units} Starting from~\refeq{SN} and~\refeq{SCSG}, let $P \equiv r\, \Psi$, and then set $r = r_0 \, R$, $t = t_0 \, T$, and let $\Phi = c^2\, \bar \phi$, $P = P_0 \, \bar P$, and $m = m_0 \, \bar m$ with $m_0 \equiv \sqrt{\frac{\hbar \, c}{G}}$ the Planck mass. The Schr\"odinger equation becomes \begin{equation} -\frac{\partial^2 \bar P}{\partial R^2} + \bar m^2 \, \bar \phi \, \bar P = i \, \bar m \, \frac{\partial \bar P}{\partial T} \end{equation} and then we use either Newtonian gravity (top) or the self-coupled form (bottom): \begin{equation} \begin{aligned} \frac{\partial^2}{\partial R^2} \, \of{R \, \bar \phi} &= \frac{\bar m}{R} \, \bar P^* \, \bar P \\ \frac{\partial^2}{\partial R^2} \, \of{R \, \sqrt{\bar \phi}} &= \frac{1}{2} \, \bar m \, \frac{\sqrt{\bar \phi}}{R} \, \bar P^* \, \bar P \end{aligned} \end{equation} where we have set \begin{equation}\label{scales} r_0 = \frac{\hbar}{\sqrt{2} \, m_0 \, c} \, \, \, \, \, \, \, \, \, \, \, t_0 = \frac{\hbar}{m_0 \, c^2} \, \, \, \, \, \, \, \, \, \, \, P_0 = \frac{c}{\sqrt{4 \, \pi \, m_0 \, G}}, \end{equation} and $r_0$ is (up to the factor of $1/\sqrt{2}$) the Planck length. 
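As a quick numerical check of these scales (a sketch in Python; the SI constants are standard approximate values, not taken from the text), one can verify that $r_0$ is the Planck length divided by $\sqrt{2}$:

```python
import math

# Physical constants (SI); approximate standard values.
hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m / s
G = 6.67430e-11         # m^3 / (kg s^2)

m0 = math.sqrt(hbar * c / G)          # Planck mass, ~2.18e-8 kg
r0 = hbar / (math.sqrt(2) * m0 * c)   # length scale of eq. (scales)
t0 = hbar / (m0 * c**2)               # time scale
P0 = c / math.sqrt(4 * math.pi * m0 * G)

l_planck = math.sqrt(hbar * G / c**3)  # Planck length, ~1.6e-35 m
print(m0, r0, t0, P0)
print(r0 * math.sqrt(2) / l_planck)    # -> 1, since r0 = l_planck / sqrt(2)
```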
While the SN set has well-known scaling properties (see~\cite{Moroz,Giulinibound}) that allow a single numerical result to be relevant for a wide variety of mass and length scales, the nonlinearity introduced in the self-coupled form of gravity spoils the scaling, so that the numerical results refer only to the mass/length scales used. We know that the self-coupled scalar gravity reduces to Newtonian gravity for small masses, so the results of previous work will hold at those relevant mass scales (around $10^{10}$ u, for example). Our goal is to probe the higher mass regime, in which the relativistic correction provided by the self-coupling of the gravitational field is significant, and these scales are basically forced upon us numerically -- the choices in~\refeq{scales} uniquely render the gravitational field equation with unit coefficients. We'll start with a spherically symmetric Gaussian wave function: \begin{equation}\label{Psinitial} \Psi(r,0) = \of{ \pi \, a^2 }^{-3/4} \, e^{-r^2/(2 \, a^2)} \end{equation} where $a^2$ is the variance (up to constants) of the initial distribution. Then our initial, dimensionless $\bar P$ is \begin{equation}\label{barPinitial} \bar P(R,0) = r_0 \, R \, \Psi(r_0 \, R,0)/ P_0 = 2 \, \of{\frac{2}{\pi}}^{1/4} \, A^{-3/2} \, R \, e^{-R^2/(2\, A^2)} \end{equation} with $a = r_0\, A$. The normalization of the wave function, in the dimensionless setting, is \begin{equation}\label{normit} \int_0^\infty \bar P^* \, \bar P \, d R = \frac{1}{4 \, \pi \, P_0^2 \, r_0} = \sqrt{2}. \end{equation} For our initial Gaussians, we will take $a = r_0$, so that $A = 1$. 
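The normalization in~\refeq{normit} can be checked directly by quadrature; a small sketch, with an illustrative grid:

```python
import math

# Dimensionless initial profile bar P(R, 0) for A = 1 (eq. (barPinitial)).
A = 1.0
def barP(R):
    return 2.0 * (2.0 / math.pi) ** 0.25 * A ** -1.5 * R * math.exp(-R**2 / (2 * A**2))

# Trapezoidal integral of |bar P|^2 from 0 out to a numerical infinity.
N, R_inf = 4000, 20.0
dR = R_inf / N
total = sum(barP(j * dR) ** 2 for j in range(1, N)) * dR \
        + 0.5 * dR * (barP(0.0) ** 2 + barP(R_inf) ** 2)
print(total, math.sqrt(2))  # the integral equals sqrt(2), as in eq. (normit)
```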
While we can make $A$ larger to spread out the initial distribution of mass as a source for gravity, there is no natural multiple of $r_0$ to use -- one might try to extend the distribution beyond, for example, its Schwarzschild radius (at $2 \, \sqrt{2} \, \bar m$ in these dimensionless units) -- but then the mass required to achieve collapse also increases, and the initial distribution ends up inside the Schwarzschild radius again~\cite{NOTE1}. In order to compare with potential experiments, the relevant scale is $a = 0.5 \times 10^{-6}$ m (as in~\cite{Salzman}), but in our units, this leads to $A \sim 4 \times 10^{28}$, inappropriately large for numerical work. At the low densities implied by taking $a = 0.5\, \mu$m, we know that the predictions of the self-coupled form of gravity match the Newtonian case. Choosing $A = 1$ allows us to probe the regime in which Newtonian gravity must be augmented by the self-gravity of the field (and additional, as yet unknown, physics). \section{Numerical Method} The collapse dynamics of SN have been studied in~\cite{Giulinibound, HarrisonN, Meter, Salzman}, and all use similar methods to time-evolve initial Gaussians: some variant of Crank-Nicolson and a solver for the gravitational Poisson problem in iterative combination. Our method is similar, when applied to SN, although we use Verlet to find the gravitational field (as opposed to quadrature or a pseudo-spectral method). Verlet is easy to apply to the nonlinearity present in the self-coupled gravitational field equation, with its more complicated boundary conditions. The pieces (Crank-Nicolson and Verlet) can be described separately, but then an iterative step must be involved to achieve a self-consistent solution. We start by discretizing in space and time via $R_j = j \, \Delta R$ and $T_n = n \, \Delta T$ for constant spacings $\Delta R$, $\Delta T$. 
We'll call the value of $\bar P$ (at location $R_j$ and time $T_n$) $\bar P(R_j, T_n) \equiv \bar P^n_j$, and similarly $\bar \phi(R_j, T_n) \equiv \bar \phi^n_j$. The forward-Euler discretization in time, for the Schr\"odinger piece, reads \begin{equation}\label{fed} \bar P^{n+1}_j =\bar P^n_j -\frac{i}{\bar m} \, \Delta T \, \left[ -\frac{\bar P^{n}_{j+1} - 2 \, \bar P^n_j + \bar P^n_{j-1}}{\Delta R^2} + \bar m^2 \, \bar \phi^n_j \, \bar P^n_j \right]. \end{equation} This equation holds for all grid points, and we understand that at $j = 0$, we have $\bar P^n_0 = 0$ for all $n$; this is the boundary condition at the origin (for $\Psi$ finite at the origin, as it should be, $P = r\, \Psi$ will be zero at the origin). The spatial grid will extend to $R_\infty = N \, \Delta R$ for integer $N$, our choice of numerical infinity, and out there we'll again set $\bar P^n_{N+1} = 0$; the wave function should vanish. Let the vector $\bar{\bf P}^n$ contain the (unknown) spatial values at time level $n$: \begin{equation} \bar{\bf P}^n \dot = \left( \begin{array}{c} \bar P^n_1 \\ \bar P^n_2 \\ \vdots \\ \bar P^n_{N} \end{array} \right), \end{equation} and similarly for the vector $\bar{\bm \phi}^n$. Then we can write the forward Euler discretization (together with the boundary conditions) in terms of a matrix-vector multiplication: \begin{equation} \bar{\bm P}^{n+1} = \of{\mat I - i\, \Delta T \, \mat H(\bar{\bm \phi}^n)} \, \bar{\bm P}^n \end{equation} where $\mat I$ is the identity matrix, $\mat H(\bar{\bm \phi}^n)$ is defined by~\refeq{fed}, and we highlight its dependence on the gravitational potential. 
The backwards Euler version of the problem is \begin{equation} \of{\mat I + i\, \Delta T \, \mat H(\bar{\bm \phi}^{n+1}) } \, \bar{\bf P}^{n+1} = \bar{\bf P}^n, \end{equation} and then the Crank-Nicolson method is defined by \begin{equation}\label{CN} \of{\mat I + i\, \frac{\Delta T}{2} \, \mat H(\bar{\bm \phi}^{n+1}) } \, \bar{\bf P}^{n+1} = \of{\mat I - i\, \frac{\Delta T}{2} \, \mat H(\bar{\bm \phi}^n)} \, \bar{\bf P}^n. \end{equation} For the gravitational field portion, we'll use Verlet, although the details will change slightly between the two forms of gravity for reasons that will become clear as we go. For Newtonian gravity, we start at ``spatial infinity" (out at $R_N$) with the Newtonian limiting form: $\bar \phi^n_{N} = 1 - \sqrt{2}\, \bar m/R_N$ and $\bar \phi^n_{N+1} = 1 - \sqrt{2}\, \bar m/R_{N+1}$ -- the constant term provides a constant offset ($c^2$ when units are introduced) that doesn't affect the probability density here, but we introduce it for comparison with the modified gravity. Starting at $N$, we move inwards according to the Verlet update: \begin{equation} \bar{\phi}^{n+1}_{j-1} =\frac{1}{R_{j-1}} \, \of{ 2 \, \bar \phi^{n+1}_j \, R_j - \bar \phi^{n+1}_{j+1} \, R_{j+1}+ \Delta R^2 \, \of{ \frac{\bar m}{R_j} \, \left\vert \bar P^{n+1}_j \right\vert^2 }}. \end{equation} The procedure for modified gravity is a little different -- at spatial infinity, we know that Newtonian gravity, for a spherically symmetric source of mass $m$, must limit to $-\frac{G \, m}{r}$ (or $c^2 - \frac{G \, m}{r}$ if a constant offset is desired). 
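The inward Verlet sweep for the Newtonian field can be illustrated with a frozen source. The sketch below (grid parameters are illustrative assumptions) integrates the field of the initial Gaussian at $\bar m = 1$ and compares against the closed-form potential of a Gaussian source, which in these units is $\bar\phi(R) = 1 - \sqrt{2}\,\mathrm{erf}(R)/R$:

```python
import math

mbar, A = 1.0, 1.0
N, R_inf = 2000, 50.0
dR = R_inf / N
R = [j * dR for j in range(N + 2)]

def barP(r):
    return 2.0 * (2.0 / math.pi) ** 0.25 * A ** -1.5 * r * math.exp(-r**2 / (2 * A**2))

# Seed the two outermost points with the far-field (Newtonian) form, then
# Verlet-step u = R * phi inward: u_{j-1} = 2 u_j - u_{j+1} + dR^2 mbar |P_j|^2 / R_j.
phi = [0.0] * (N + 2)
phi[N + 1] = 1.0 - math.sqrt(2) * mbar / R[N + 1]
phi[N] = 1.0 - math.sqrt(2) * mbar / R[N]
for j in range(N, 0, -1):
    u = 2 * R[j] * phi[j] - R[j + 1] * phi[j + 1] + dR**2 * mbar * barP(R[j])**2 / R[j]
    phi[j - 1] = u / R[j - 1] if j > 1 else 0.0  # phi[0] is never used (P vanishes at the origin)

# Compare with the analytic potential of the Gaussian source at R = 1.
j1 = round(1.0 / dR)
exact = 1.0 - math.sqrt(2) * mbar * math.erf(R[j1]) / R[j1]
print(phi[j1], exact)
```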
But for the modified gravitational field, we have $c^2 - \frac{G \, \tilde m}{r}$ as the leading contribution at spatial infinity -- the $c^2$ is required so that the modified solutions become Newtonian in the non-relativistic limit (see~\cite{FranklinAJP}), and the ``mass" $\tilde m$ depends on the details of the central distribution (for example, a point mass $m$ at the origin and a sphere of homogeneous mass density and total mass $m$, lead to different values for $\tilde m$). Since the central distribution of mass will change here, the value for $\tilde m$ is a function of time, a complication we'd like to avoid. Instead, we'll focus on the value of the field as $r \rightarrow 0$. For a sphere with homogeneous mass density, the internal field $\Phi(r)$ looks like (see~\cite{GiuliniSC,FranklinAJP}) \begin{equation} \Phi = \left[ \frac{c}{\cosh(R/r_0)} \, \frac{\sinh(r/r_0)}{r/r_0}\right]^2, \end{equation} where $R$ is the radius of the sphere and $r_0$ is a constant related to the mass. As $r \rightarrow 0$, $\Phi$ goes to a constant bounded by $c^2$, and the derivative of $\Phi$ goes to zero. Since we expect there to be some non-zero density near the origin, these are reasonable boundary conditions for our numerical solution, i.e.\ $\bar \phi^n_0 = C$ and $\bar \phi^n_1 = C$ for a constant $C \in [0,1]$, so that the numerical derivative is approximately zero. We will pick $C$ so that $\bar\phi^n_N = 1$, its limiting value, at spatial infinity (the best we can do here) by shooting -- i.e.\ we will run forward Verlet: \begin{equation} \sqrt{\bar{\phi}^{n+1}_{j+1}} =\frac{1}{R_{j+1}} \, \left[ 2 \, \sqrt{\bar \phi^{n+1}_j } \, R_j - \sqrt{\bar \phi^{n+1}_{j-1}} \, R_{j-1}+ \Delta R^2 \, \of{ \frac{\bar m}{2 \, R_j} \, \left\vert \bar P^{n+1}_j \right\vert^2 \, \sqrt{\bar \phi^{n+1}_j} } \right] \end{equation} for different values of $\sqrt{C} = \sqrt{\bar\phi^{n+1}_0} = \sqrt{\bar\phi^{n+1}_1}$ until $\bar \phi^n_N \approx 1$, using bisection to determine $C$ accurately. 
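The shooting procedure can be sketched as follows; this is a simplified stand-alone version with the source frozen to the initial Gaussian, and the grid, mass, and iteration count are illustrative assumptions:

```python
import math

mbar, A = 3.0, 1.0
N, R_inf = 1000, 50.0
dR = R_inf / N

def barP(r):
    return 2.0 * (2.0 / math.pi) ** 0.25 * A ** -1.5 * r * math.exp(-r**2 / (2 * A**2))

def shoot(C):
    """Forward Verlet for w = R sqrt(phi), with w'' = (mbar / (2 R)) |P|^2 sqrt(phi);
    returns sqrt(phi) at numerical infinity, given phi = C near the origin."""
    s = math.sqrt(C)
    w_prev, w = 0.0, dR * s        # w_0 = 0, w_1 = R_1 sqrt(C)
    for j in range(1, N):
        Rj = j * dR
        w_next = 2 * w - w_prev + dR**2 * (mbar / (2 * Rj)) * barP(Rj)**2 * (w / Rj)
        w_prev, w = w, w_next
    return w / R_inf               # sqrt(phi) at R_N

# Bisect on C in (0, 1] until sqrt(phi) -> 1 at spatial infinity.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(mid) < 1.0:
        lo = mid
    else:
        hi = mid
C = 0.5 * (lo + hi)
print(C, shoot(C))
```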
In both of these cases, Newtonian and modified, we must iterate at each time level to achieve a self-consistent solution -- notice that the left-hand side of~\refeq{CN} depends on $\bar {\bm \phi}^{n+1}$, which we can only get once $\bar{\bf P}^{n+1}$ is known -- but we can't {\it find} $\bar{\bf P}^{n+1}$ without $\bar{\bm \phi}^{n+1}$. To break out of the recursion, we will define an iterative index $k$ -- let $\ ^k\bar{\bf P}^{n+1}$ and $\ ^k\bar{\bm \phi}^{n+1}$ be the $k$-th iterates at time-level $n+1$. For $k = 0$, we define $\ ^0\bar {\bf P}^{n+1} = \bar{\bf P}^n$ and $\ ^0\bar{\bm \phi}^{n+1} = \bar{\bm \phi}^n$. Now, at level $k$, we update (using the Newtonian update for simplicity) according to: \begin{equation} \begin{aligned} \of{\mat I + i\, \frac{\Delta T}{2} \, \mat H(\ ^{k}\bar{\bm \phi}^{n+1}) } \, \ ^{k+1} \bar{\bf P}^{n+1} &= \of{\mat I - i\, \frac{\Delta T}{2} \, \mat H(\bar{\bm \phi}^n)} \, \bar{\bf P}^n \\ \ ^{k+1}\bar{\phi}^{n+1}_{j-1} &=\frac{1}{R_{j-1}} \, \of{ 2 \, \ ^{k+1} \bar \phi^{n+1}_j \, R_j - \ ^{k+1} \bar \phi^{n+1}_{j+1} \, R_{j+1}+ \Delta R^2 \, \of{ \frac{\bar m}{R_j} \, \left\vert \ ^{k+1}\bar P^{n+1}_j \right\vert^2 }} \end{aligned} \end{equation} where the top line defines the new value for the wave function, and the second line updates the gravitational field. We proceed with this iteration until \begin{equation} \| \ ^{k+1}\bar{\bm P}^{n+1} - \ ^k \bar{\bm P}^{n+1} \| < \epsilon \end{equation} where $\epsilon$ is given -- i.e.\ we continue to iterate until the wave function has stopped changing significantly. Once we have achieved (numerical) convergence, we set $\bar{\bm P}^{n+1} = \ ^{k+1} \bar{\bm P}^{n+1}$ and $\bar{\bm \phi}^{n+1} = \ ^{k+1} \bar{\bm \phi}^{n+1}$, and we're ready to move on to the next time step. 
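A minimal self-contained sketch of one such self-consistent step (Newtonian case, dense linear algebra, illustrative grid parameters; a production code would use a banded solver for the tridiagonal system):

```python
import numpy as np

mbar, A = 1.0, 1.0
N, R_inf, dT = 400, 20.0, 0.1
dR = R_inf / N
R = np.arange(1, N + 1) * dR                      # interior grid points R_1..R_N

P = 2 * (2 / np.pi) ** 0.25 * A ** -1.5 * R * np.exp(-R**2 / (2 * A**2)) + 0j

def newton_phi(P):
    """Inward Verlet for the Newtonian field sourced by |P|^2."""
    phi = np.empty(N + 2)
    phi[N + 1] = 1 - np.sqrt(2) * mbar / ((N + 1) * dR)
    phi[N] = 1 - np.sqrt(2) * mbar / (N * dR)
    for j in range(N, 1, -1):
        u = 2 * R[j - 1] * phi[j] - (j + 1) * dR * phi[j + 1] \
            + dR**2 * mbar * abs(P[j - 1])**2 / R[j - 1]
        phi[j - 1] = u / ((j - 1) * dR)
    phi[0] = phi[1]
    return phi[1:N + 1]                           # values at R_1..R_N

def H(phi):
    """Tridiagonal (here dense) operator: (-d^2/dR^2 + mbar^2 phi) / mbar."""
    D2 = (np.diag(-2 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
          + np.diag(np.ones(N - 1), -1)) / dR**2
    return (-D2 + np.diag(mbar**2 * phi)) / mbar

# One Crank-Nicolson step, iterated to self-consistency in phi^{n+1}.
phi0 = newton_phi(P)
rhs = (np.eye(N) - 0.5j * dT * H(phi0)) @ P
Pk, phik = P.copy(), phi0.copy()
for k in range(50):
    P_new = np.linalg.solve(np.eye(N) + 0.5j * dT * H(phik), rhs)
    if np.linalg.norm(P_new - Pk) < 1e-10:
        break
    Pk, phik = P_new, newton_phi(P_new)

norm0 = np.sum(np.abs(P)**2) * dR
norm1 = np.sum(np.abs(P_new)**2) * dR
print(k, norm0, norm1)   # norms should agree closely (both ~sqrt(2))
```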
\section{Critical Mass Estimate} The goal of this section is to establish mass values for which the behavior of the initial Gaussian shifts from ``mainly quantum", with the initial Gaussian spreading out over time, to ``mainly gravitational", with the initial Gaussian becoming more localized. One simple way to estimate this mass, from~\cite{Giulinibound}, is to take the free particle solution for the initial Gaussian, which is: \begin{equation} \Psi(r,t) = \of{\pi \, a^2}^{-3/4} \, \of{1 + \frac{i \, \hbar \, t}{m\, a^2}}^{-3/2} \, e^{-\frac{r^2}{2 \, a^2 \, \of{1 + \frac{i \, \hbar \, t}{m \, a^2}}}} \end{equation} and note that the peak of $r^2 \, \Psi^*(r,t) \, \Psi(r,t)$ is located at \begin{equation} r_p(t) = \sqrt{a^2 + \of{\frac{\hbar \, t}{a \, m}}^2}. \end{equation} With no gravitational component, $\ddot r_p(0) = \frac{\hbar^2}{a^3 \, m^2}$, so the initial acceleration of the most-likely position depends only on $m$ (and the initial variance). With a gravitational force in place, we have: \begin{equation}\label{masstimate} \ddot r_p(0) + \of{-\frac{d}{dr} \, \Phi(r_p(0))} = a_{\hbox{\tiny{net}}}(0), \end{equation} where $ a_{\hbox{\tiny{net}}}(0)$ is the net acceleration (treating the most-likely position as the particle position), and we could arrange to have $ a_{\hbox{\tiny{net}}}(0) = 0$ by taking: \begin{equation}\label{cusp} \ddot r_p(0) = \frac{d}{dr} \, \Phi(r_p(0)). \end{equation} The $\Phi(r)$ that we use depends on both our choice to consider Newtonian or self-coupled gravity, and the $\rho$ that we decide to use to approximate the initial distribution of ``mass" (in~\cite{Giulinibound}, for example, a point particle at the origin is used to perform this estimate~\cite{NOTE2}). Since we have a Gaussian profile, we can take $\rho = m \, \Psi^* \, \Psi$ for the initial $\Psi$ given in~\refeq{Psinitial} and use that to solve for $\Phi(r)$. 
For Newtonian gravity, the field associated with this source is \begin{equation} \Phi(r) = -\frac{G \, m}{r} \, \hbox{erf}\of{\frac{r}{a}}, \end{equation} and using this in~\refeq{cusp} with $r = a$ (the initial value) gives \begin{equation} \frac{\hbar^2}{a^3 \, m^2} + \frac{2 \, G \, m}{a^2 \, e \, \sqrt{\pi}} = \frac{G \, m}{a^2} \, \hbox{erf}(1). \end{equation} Since we've taken $a = r_0$, we have $a = \frac{\hbar^2}{\sqrt{2} \, G \, m_0^3}$ (in terms of the Planck mass $m_0$), and we can get rid of $\hbar$ using the Planck mass definition, $\hbar = \frac{G\, m_0^2}{c}$; then the solution to this equation is \begin{equation} m = \frac{2^{1/6}}{\left[ \hbox{erf}(1) - \frac{2}{e \, \sqrt{\pi}} \right]^{1/3}} \, m_0 \approx 1.5 \, m_0. \end{equation} For the modified form of gravity, we cannot find $\Phi(r)$ explicitly, so we turn to a numerical approach. Given the numerical parameters we will use below, we compute $\bar{\bm \phi}$ from the initial source (the dimensionless $\bar m \, \bar P^* \, \bar P$ with $\bar P$ and $A = 1$ from~\refeq{barPinitial}, projected onto our numerical grid) using the Verlet method described in Section IV. We then approximate the derivative using finite difference (suitably dimensionless, which introduces a factor of $2$), evaluate it at $R = 1$ ($a$ in our dimensionless units), subtract $\frac{4}{\bar m^2}$ (the dimensionless form of $\ddot r_p(0)$ here), and then find $\bar m$ such that the difference is close to zero (to within $\epsilon = 10^{-5}$). A plot of the difference: \begin{equation}\label{zdef} z \equiv \frac{4}{\bar m^2} -2 \, \frac{\bar \phi_{p+1} - \bar \phi_{p-1}}{2 \, \Delta R} \end{equation} with $p \, \Delta R \approx 1$ is shown in~\reffig{fig:zplot}, where we can see that the root lies in between $\bar m =3$ and $4$. A bisection of $z$ gives $\bar m \approx 3.3$ as the mass associated with the onset of contracting behavior. 
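The Newtonian closed-form estimate can be evaluated with standard-library special functions (a quick sketch):

```python
import math

# d(Phi)/dr at r = a for the Gaussian source Phi(r) = -(G m / r) erf(r / a)
# is (G m / a^2) [erf(1) - 2 / (e sqrt(pi))]; balancing against the quantum
# acceleration hbar^2/(a^3 m^2) with a = hbar/(sqrt(2) m0 c) gives
# m^3 = sqrt(2) m0^3 / K, i.e. m = 2^(1/6) K^(-1/3) m0.
K = math.erf(1.0) - 2.0 / (math.e * math.sqrt(math.pi))
m_over_m0 = 2.0 ** (1.0 / 6.0) / K ** (1.0 / 3.0)
print(m_over_m0)   # ~1.5, the Newtonian critical-mass estimate quoted in the text
```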
\begin{figure}[htbp] \centering \includegraphics[width=3in]{zplot} \caption{The dimensionless numerical acceleration, $z$, from~\refeq{zdef}, as function of $\bar m$.} \label{fig:zplot} \end{figure} \section{Numerical Dynamics} The numerical results agree well with the predictions from above. In all cases, we take $N = 1000$ spatial steps, with $R_\infty = 50$, and set $\Delta T = 0.1$. We can plot the probability densities as functions of time, at time steps $n=1$, $50$ and $100$, to see what sort of evolution is happening. Following~\cite{Giulinibound}, we also plot the radius inside of which $90\%$ of the probability lies; this ``$R_{90}(T)$" value allows us to track the general evolution in time. We will plot that together with the value associated with a free Gaussian, so we can see what effect gravity (in its various forms) has. We can further characterize the dynamics by calculating the overlap of the wave function with the ground state (calculated using the methods of~\cite{us}) as a function of time. The Crank-Nicolson method we use here is not obviously norm-preserving, unlike the standard scheme with a time-independent Hamiltonian. That lack of manifest norm preservation comes from the time-dependence of the matrix operator $\mat H$ appearing on the left and right sides of~\refeq{CN}. Yet in practice, the norm is preserved well in all the runs, with the maximum difference between the numerical norm and $\sqrt{2}$ (the appropriate normalization from~\refeq{normit}) on the order of $10^{-13}$. For SN, the probabilities are shown in~\reffig{fig:SNmasses} for $\bar m = 1$, $1.5$, $2$ and $3$, and a plot of $R_{90}(T)$ for each case is shown in~\reffig{fig:SN90}. 
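Extracting $R_{90}$ from a discretized density amounts to accumulating probability outward until $90\%$ of the total is enclosed; a sketch applied to the initial ($T=0$) Gaussian, with an illustrative grid:

```python
import math

# Radial probability density |bar P|^2 for the initial Gaussian (A = 1).
def dens(R):
    return (2.0 * (2.0 / math.pi) ** 0.25 * R * math.exp(-R**2 / 2.0)) ** 2

N, R_inf = 4000, 20.0
dR = R_inf / N
total = sum(dens(j * dR) for j in range(N + 1)) * dR
cum, R90 = 0.0, None
for j in range(N + 1):
    cum += dens(j * dR) * dR
    if cum >= 0.9 * total:
        R90 = j * dR       # radius enclosing 90% of the probability
        break
print(R90)
```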
There are four different behaviors shown in the plots of $R_{90}(T)$: 1.\ for $\bar m = 1$, the Gaussian spreads out, 2.\ for $\bar m = 1.5$, the Gaussian is oscillating, but with peak position that is further from the origin than at time $T = 0$, 3.\ $\bar m = 2$ has an oscillating solution, where the peak gets closer to the origin and then comes back out and 4.\ a collapse (with minimal oscillation) for $\bar m = 3$ (and greater). From these plots, the critical mass is somewhere between $1.5$ and $2$, since at $1.5$ we have oscillation above the initial value of $R_{90}(0)$, and at $2$ the oscillation occurs with values less than the initial $R_{90}(0)$. This estimate of the critical mass basically agrees with our prediction from the previous section, where we found the critical mass to be $\sim 1.5$. \begin{figure}[htbp] \centering \includegraphics[width=4in]{SNmasses} \caption{Probability density as a function of position for SN masses $\bar m = 1$, $1.5$, $2$, and $3$. Snapshots are shown at $T = 1\, \Delta T$, $50 \, \Delta T$ and $100\, \Delta T$ (left to right) in each case.} \label{fig:SNmasses} \end{figure} In~\reffig{fig:SN90}, the solid line shows the value of $R_{90}(T)$ for a free Gaussian (of appropriate mass) for comparison. As expected, the gravitational coupling makes the spreading behavior slow down compared to the free particle case. \begin{figure}[htbp] \centering \includegraphics[width=4in]{SNR90} \caption{The values of $R_{90}(T)$ for SN at $\bar m = 1$, $1.5$, $2$ and $3$ are shown as points. The line is the $R_{90}(T)$ for a free Gaussian.} \label{fig:SN90} \end{figure} In~\cite{Meter}, the dynamics of SN is described as a ``partial collapse" to the ground state -- we can calculate the overlap of the wave function at time level $T$ with the ground state, $O(T) = |\langle \Psi(T)| \Psi_0 \rangle|$, and the plot of that overlap is shown in~\reffig{fig:olapplot}. 
Notice that as the mass increases, the amount of overlap with the ground state increases. For the lower masses, it is not clear what a longer temporal run would do (oscillate about some fixed, non-unity value, or increase towards full overlap), but for $\bar m = 2$ and $3$, a clear trend towards collapse to the ground state is shown. \begin{figure}[htbp] \centering \includegraphics[width=4in]{SNgstateolap} \caption{The overlap of the wave function at time $T$ with the ground state (at appropriate mass) for SN.} \label{fig:olapplot} \end{figure} Making the same plots for the self-coupled gravity case (with densities in~\reffig{fig:SCmasses} and $R_{90}(T)$ shown in~\reffig{fig:SC90}), at masses $\bar m = 2$, $\bar m = 3$, $\bar m = 4$ and $\bar m = 10$, we again see the spreading behavior at $\bar m =2$, and at $\bar m = 3$, oscillation has begun. This oscillation does not represent collapse, though: as can be seen in~\reffig{fig:SC90}, the oscillation occurs at values {\it above} the initial $R_{90}(0)$ -- there is no contraction here. It isn't until $\bar m =4$ that oscillation with values {\it below} the initial $R_{90}(0)$ occurs. So we would put the critical mass somewhere between $\bar m = 3$ and $4$, again agreeing with our estimate $\sim 3.3$. What is surprising in this case is the absence of the decay we saw in, for example, the $\bar m = 3$ case of SN (both in the plot of $R_{90}(T)$ and in $O(T)$). Instead, in the self-coupled case, all masses display oscillatory behavior without ``settling down" (we have run up to masses of $\bar m = 20$, but still see no sign of a collapse to the ground state). \begin{figure}[htbp] \centering \includegraphics[width=4in]{SCmasses} \caption{Probability density as a function of position for self-coupled gravity masses $\bar m = 2$, $3$, $4$ and $10$. Snapshots are shown at $T = 1 \,\Delta T$, $50 \, \Delta T$ and $100 \, \Delta T$ (left to right) in each case. 
(Note the change in vertical scale).} \label{fig:SCmasses} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=4in]{SCR90} \caption{The values of $R_{90}(T)$ for the self-coupled form of gravity at $\bar m = 2$, $3$, $4$ and $10$ are shown as points. The line is the $R_{90}(T)$ for a free Gaussian.} \label{fig:SC90} \end{figure} This lack of convergence can also be seen in the plots of the overlap with the ground state (calculated, appropriately, for the self-coupled case), shown in~\reffig{fig:scolap}. Instead of oscillating towards an overlap of $1$ with the ground state, as in SN, the overlap in the self-coupled case does not increase (on average) over time (for the time scales considered here). As another contrasting feature -- in~\reffig{fig:olapplot}, the amount of (time-averaged) overlap increases with mass, while in~\reffig{fig:scolap}, the magnitude of the overlap increases, but then decreases as mass gets larger. \begin{figure}[htbp] \centering \includegraphics[width=4in]{SCgstateolap} \caption{The overlap of the wave function at time $T$ with the ground state (at appropriate mass) for the self-coupled case.} \label{fig:scolap} \end{figure} \section{Conclusion} The inclusion of the self-coupling for gravity changes the spherical dynamics at large masses; while the expected qualitative behaviors, free spreading and oscillation, occur in the expanded gravitational setting, the mass scales at which they occur are roughly twice those of Newtonian gravity. We estimated the mass scales using a simple equivalence of quantum mechanical ``acceleration" and the gravitational field associated with our initial Gaussian wave function, and that estimate agreed fairly well with the numerical solutions. The collapse to the ground state, apparent for SN at masses above $\bar m = 2$ here, is absent from the self-coupled case (at the time scales considered here -- time scales which are relevant for the SN case, at least). 
Because we are using a form of gravity inspired by special relativistic mass-energy equivalence, we first calculated the energy spectrum of the quantum-mechanical/self-coupled gravitational system using the Dirac equation, to compare with the previously published Schr\"odinger spectrum, and found that, for the masses of interest to us at collapse, the error in the ground state energy is $\sim 10\%$; this suggests we can use the Schr\"odinger equation to evolve the initial Gaussian forward in time without incurring too much error. For comparison, the difference between the ground state energy for SN and Dirac with self-coupled gravity is $\sim 600\%$. Self-coupled gravity does not appear to collapse to its ground state (or any other); the wave function does not achieve a relatively static steady state, as it does in SN, nor does it ``converge" (in overlap) to its ground state. It would be interesting to establish, analytically, that the ground state in the self-coupled form of gravity is dynamically unstable, leading to the observed oscillation without the decay to the ground state present in SN. Another potential issue is our use of the Schr\"odinger equation -- perhaps at higher mass values, where the Dirac equation is relevant, we would find a damped-oscillatory collapse for the self-coupled gravity.
\section{Introduction} \label{intro} Unitarity of the time evolution of an isolated quantum system and in particular of the associated $S$-matrix is one of the cornerstones of quantum field theory. In practical perturbative calculations however, $S$-matrix unitarity is always approximate and asymptotic. Nonetheless, significant violations of unitarity at low orders in perturbation theory are heralds of a strongly-coupled system and can be used to constrain the range of validity of a given (effective) quantum field theory description. Perhaps most famously, constraints imposed by perturbative unitarity in $WW$ scattering have been used in the past to infer an upper bound on the Higgs boson mass or, alternatively, on the scale where the standard model (SM) description of weak interactions would need to be completed in the ultraviolet (UV) in terms of some new strongly coupled dynamics \cite{Lee:1977yc,Lee:1977eg}. Correspondingly, this made it possible to narrow down the relevant mass search window and motivated the construction of the LHC with capabilities that ensured the eventual Higgs boson discovery (cf.~\cite{Djouadi:2005gi} for a review). More generally, perturbative unitarity constraints on the validity of a certain theoretical description are applicable both in non-renormalizable as well as renormalizable models. In both cases they allow one to assess the limitations of a perturbative expansion. In the non-renormalizable effective field theory (EFT) approach this amounts to a truncated power expansion in $(E/\Lambda)$, where $E$ is a typical energy in a process and $\Lambda$ is the EFT cut-off scale. Violations of perturbative unitarity signal the breakdown of such an expansion, when the leading powers do not represent a good approximation to the physical result. 
A notable standard example is pion scattering in chiral perturbation theory, where the loop and power expansions are adequate at low enough scattering energies but violate perturbative unitarity at higher energies and eventually need to be UV completed with the inclusion of dynamical vector resonances. On the other hand, within renormalizable models the expansion proceeds in terms of positive powers of the renormalizable couplings. Sizable violations of unitarity at leading (tree) order signal the breakdown of such an expansion and the onset of strongly coupled dynamics. Here the most renowned case is that of the aforementioned $WW$ boson scattering in presence of a heavy SM Higgs boson. The recently rekindled interest in new physics (NP) in the form of (possibly broad) di-photon resonances~\cite{ATLAS-CONF-2015-081, CMS:2015dxe, CMS:2016owr, ATLAS:Moriond, CMS:Moriond} at the LHC prompts us to reconsider the implications of perturbative unitarity for EFT interpretations of resonances decaying to di-boson final states. In particular, focusing on promptly produced scalar SM singlets decaying to two SM gauge bosons, we aim to address the following questions: at which maximal energies do we expect the effective description in terms of the SM supplemented by a single scalar to break down? What can we learn about the possible UV completions of such an effective theory from unitarity arguments? In particular, can a potential di-boson signal be accommodated within weakly-coupled models, and if so, under which conditions? We further motivate the endeavor with the observation that in perturbative weakly-coupled models, decays of a scalar singlet into two transverse SM gauge bosons can only arise at loop level involving massive charged and/or colored particles, leading to a suppression factor of $\Gamma_{V_T V_T}/M \propto \alpha_{V}^2/16\pi^3$. 
Even in the case of QCD, $\Gamma_{V_T V_T}/M \gtrsim 10^{-4}$ would require large couplings and/or large multiplicities of new states contributing in the loop. Both possibilities are potentially subject to constraints coming from perturbative unitarity. In particular, we will show how they enter the amplitudes of $2 \to 2$ scatterings of the new degrees of freedom. Similar considerations have already triggered several studies addressing the issue of the predictivity and calculability within weakly-coupled perturbative models of di-photon resonances.\footnote{For a broad survey of such models cf.~\cite{Staub:2016dxq}.} These include studying the renormalization group equations (RGE) of the models~\cite{Goertz:2015nkp} or the actual appearance of Landau poles~\cite{Son:2015vfl,Franceschini:2015kwy,Cao:2015twy}. For marginal operators such as those corresponding to the gauge couplings, Yukawas or the scalar quartic, both effects are however only logarithmically sensitive to the UV cut-off scale of the theory. The resulting constraints can also be circumvented if the models can be UV completed into theories exhibiting an infrared (IR) fixed point behavior. In case of scalar extensions, the stability of the scalar potential has also been used~\cite{Salvio:2016hnf,Ge:2016xcq}. In this case the possibility of a metastable vacuum with its intricate relations to the cosmological history of the Universe requires additional assumptions going beyond quantum field theory arguments. Some aspects of partial wave unitarity for di-photon resonances which partially overlap with our work have already been discussed in~\cite{Murphy:2015kag,Fabbrichesi:2016alj,Cynolter:2016jxv}, however with a different focus with respect to our analysis. The rest of the paper is structured as follows: \sect{reviewPWU} contains a brief recap of partial wave unitarity arguments, which we first apply in \sect{EFT} to the EFT case where a di-boson resonance is the only new degree of freedom beyond the SM. 
In \sect{wcmodels} we then consider weakly-coupled benchmark models with either new fermionic or scalar degrees of freedom coupling to a di-boson resonance and inducing the EFT operators in the low-energy limit. Our main results are summarized in~\sect{concl}. Finally, some relevant technical details of our computations can be found in \app{Amplitudes}. \section{Brief review on partial wave unitarity} \label{reviewPWU} Let us denote by $\mathcal{T}_{fi} (\sqrt{s},\cos\theta)$ the matrix element of a $2\to 2$ scattering amplitude in momentum space, defined via \begin{equation} \label{defT} (2\pi)^4 \delta^{(4)} (P_i - P_f) \mathcal{T}_{fi} (\sqrt{s},\cos\theta) = \langle f | T | i \rangle \, , \end{equation} where $T$ is the interacting part of the $S$-matrix, $S = 1+ i T$. The dependence of the scattering amplitude on $\cos\theta$ is eliminated by projecting it onto partial waves of total angular momentum $J$ (see e.g.~\cite{Itzykson:1980rh,Chanowitz:1978mv,Schuessler:2007av}) \begin{equation} \label{PWprojprev} a^J_{fi} = \frac{\beta_f^{1/4}(s,m^2_{f1},m^2_{f2}) \beta_i^{1/4}(s,m^2_{i1},m^2_{i2})}{32 \pi s} \int_{-1}^{1} d(\cos\theta) \, d^J_{\mu_i\mu_f}(\theta) \, \mathcal{T}_{fi} (\sqrt{s},\cos\theta) \, , \end{equation} where $d^J_{\mu_i\mu_f}$ is the $J$-th Wigner $d$-function appearing in the Jacob-Wick expansion \cite{Jacob:1959at}, while $\mu_i = \lambda_{i1} - \lambda_{i2}$ and $\mu_f = \lambda_{f1} - \lambda_{f2}$ are defined in terms of the helicities of the initial ($\lambda_{i1}, \lambda_{i2}$) and final ($\lambda_{f1}, \lambda_{f2}$) states. The function $\beta(x,y,z) = x^2 + y^2 + z^2 - 2xy - 2yz - 2zx$ is a kinematical factor related to the momentum (to the fourth power) of a given particle in the center of mass frame. The right hand side of \eq{PWprojprev} must be further multiplied by a $\frac{1}{\sqrt{2}}$ factor for any identical pair of particles either in the initial or final state. 
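As a concrete check of the projection, in the high-energy limit a constant amplitude projects entirely onto $J=0$, while a pure $\cos\theta$ dependence (a $J=1$ structure) projects to zero; a small numerical sketch (the constant $T_0$ is an arbitrary illustrative value):

```python
import math

# J = 0 projection in the high-energy limit, where beta_f^{1/4} beta_i^{1/4}/s -> 1:
#   a0 = (1 / (32 pi)) * Integral_{-1}^{1} T(cos theta) d(cos theta).
# For a constant amplitude T0 this gives a0 = T0 / (16 pi).
def project_J0(T, n=20000):
    dc = 2.0 / n
    # midpoint rule over cos(theta) in [-1, 1]
    total = sum(T(-1.0 + (j + 0.5) * dc) for j in range(n)) * dc
    return total / (32.0 * math.pi)

T0 = 8.0
a0_const = project_J0(lambda c: T0)
a0_cos = project_J0(lambda c: c)   # pure J = 1 piece: vanishing J = 0 projection
print(a0_const, T0 / (16.0 * math.pi), a0_cos)
```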
When restricted to a same-helicity state (zero total spin), the Wigner $d$-functions reduce to the Legendre polynomials, i.e.~$d^J_{00} = P_J$. In practice, we will only focus on $J=0$ ($d^0_{00} = P_0 = 1$), since higher partial waves typically give smaller amplitudes, unless $J=0$ amplitudes are suppressed or vanish for symmetry reasons. Hence, the quantity we are interested in is \begin{equation} \label{PWproj} a^0_{fi} = \frac{\beta_f^{1/4}(s,m^2_{f1},m^2_{f2}) \beta_i^{1/4}(s,m^2_{i1},m^2_{i2})}{32 \pi s} \int_{-1}^{1} d(\cos\theta) \, \mathcal{T}_{fi} (\sqrt{s},\cos\theta) \, . \end{equation} In the high-energy limit, $\sqrt{s} \rightarrow \infty$, one has $\beta_f^{1/4}\beta_i^{1/4} / s \to 1$. The unitarity condition on the $S$-matrix, $SS^\dag=1$, gives \begin{equation} \label{unitS} \frac{1}{2i} \left( a^J_{fi} - a^{J*}_{if} \right) \geq \sum_h a^{J*}_{hf} a^{J}_{hi} \, , \end{equation} where the sum over $h$ is restricted to 2-particle states, which underestimates the right-hand side (hence the inequality). For $i=f$ \eq{unitS} reduces to \begin{equation} \label{UnitBoundprev} \mbox{Im}\, a^J_{ii} \geq |a^J_{ii}|^2 \, . \end{equation} Hence, $a^J_{ii}$ must lie inside the circle in the Argand plane defined by (cf.~also \fig{Argand}) \begin{equation} \label{UnitBoundprev2} \left(\mbox{Re}\, a^J_{ii}\right)^2 + \left(\mbox{Im}\, a^J_{ii} - \frac{1}{2} \right)^2 \leq \frac{1}{4} \, , \end{equation} which implies \begin{equation} \label{UnitBoundprev3} |\mbox{Im}\, a^J_{ii} | \leq 1 \qquad \text{and} \qquad |\mbox{Re}\, a^J_{ii} | \leq \frac{1}{2} \, . \end{equation} Under the assumption that the tree-level amplitude is real, \eq{UnitBoundprev3} suggests the following perturbativity criterion \begin{equation} \label{UnitBound} |\mbox{Re}\, (a^J_{ii})^{\text{Born}} | \leq \frac{1}{2} \, .
\end{equation} In fact, a Born value of $\mbox{Re}\, a^J_{ii} = \frac{1}{2}$ and $\mbox{Im}\, a^J_{ii} = 0$ needs at least a correction of $40\%$ in order to restore unitarity (cf.~\fig{Argand}). \begin{figure}[!ht] \begin{center} \includegraphics[width=.40\textwidth]{Argand} \end{center} \caption{\label{Argand} Unitarity constraint in the Argand plane. A Born value of $\mbox{Re}\, a^J_{ii} = \frac{1}{2}$ and $\mbox{Im}\, a^J_{ii} = 0$ (red line) requires a correction (blue line) which amounts to at least $\sqrt{2}-1\simeq 40 \%$ of the tree-level value in order to come back inside the unitarity circle.} \end{figure} In reality, one expects to have issues with perturbativity even before saturating the bound in \eq{UnitBound}, which is hence understood to be a conservative one. Stronger constraints can be obtained by considering the full transition matrix connecting all the possible 2-particle states, which amounts to applying \eq{UnitBound} to the highest eigenvalue of $|\mbox{Re}\, (a^J_{if})^{\text{Born}} |$. \section{Effective field theory of a scalar resonance} \label{EFT} We consider the EFT of a gauge singlet spin-0 resonance, $S$ with mass $M_S$, coupled to the SM fields. Assuming CP invariance, we choose $S$ to transform as a scalar.\footnote{The pseudo-scalar case leads to analogous conclusions as far as unitarity bounds are concerned, hence in the following we will not consider it separately.} The only renormalizable terms couple $S$ to the Higgs in the scalar potential \begin{equation} \mathcal{L}^{(4)}_{\text{int.}} = - \mu_S S H^\dag H - \frac{\lambda_S}{2} S^2 H^\dag H \, , \label{eq:L4} \end{equation} where $\mu_S \lesssim s_\alpha M_S^2/v \lesssim M_S^2/(600\,\mbox{GeV})$. In the inequality we have introduced $v\simeq 246$~GeV and $s_\alpha\lesssim 0.4$~\cite{Falkowski:2015swt, ATLASCONF2015044} as the sine of the mixing angle between $S$ and the Higgs, $h$ (in the unitary gauge $H = (0, v+h)/\sqrt{2}$).
While for a CP-even $S$ the $ \mu_S$ term can contribute to the $S \to hh,W_LW_L,~Z_LZ_L$ widths, it is not relevant for unitarity bounds in the high-energy limit. The $d=5$ Lagrangian instead reads \begin{align} \label{effLSM} \mathcal{L}^{(5)}_{\text{int.}} = &- \frac{g_3^2}{2 \Lambda_g} S G^2_{\mu\nu} - \frac{g_2^2}{2 \Lambda_W} S W^2_{\mu\nu} - \frac{g_1^2}{2 \Lambda_B} S B^2_{\mu\nu} - \frac{1}{\Lambda_H} S \left( D_\mu H \right)^\dag D^\mu H - \frac{1}{\Lambda'_H} S \left( H^\dag H \right)^2 \nonumber \\ &- \frac{1}{\Lambda_d} S \overline{Q}_L d_R H - \frac{1}{\Lambda_u} S \overline{Q}_L u_R H^c - \frac{1}{\Lambda_e} S \overline{L}_L e_R H + \text{h.c.} \, , \end{align} where we have suppressed flavor indices. This parametrization makes it clear that apart from the $\mu_S$ term in Eq.~\eqref{eq:L4}, the interactions of a scalar singlet with the SM fields, directly relevant for di-boson resonances at the LHC, are all due to non-renormalizable $d=5$ operators. Their effects are thus expected to be enhanced at high energies eventually leading to the breakdown of perturbative unitarity. In order to quantify this simple observation in the following subsections we evaluate the relevant scattering amplitudes involving SM gauge bosons, Higgs and quarks at the respective leading orders in perturbation theory. Moreover, since we are interested in studying $2 \to 2$ scattering processes at energies $\sqrt{s} \gg M_S \gg v$, we can safely set all the massive parameters (including $M_S$) to zero and work within the unbroken SM theory. This also implies that we can neglect any $h$-$S$ mixing effects and set the masses of the final state SM particles to zero. We distinguish between two classes of tree-level processes characterized by a different energy scaling of the amplitude: scalar mediated scatterings and $d=5$ contact interactions. 
\subsection{Scalar mediated boson scattering} \label{gggamgaminit} Let us start, as an example, by considering the $\gamma\gamma \to \gamma\gamma$ scattering amplitude due to the effective operator \begin{equation} - \frac{e^2}{2 \Lambda_\gamma} S F^2_{\mu\nu} \, , \end{equation} whose matching with the operators in \eq{effLSM} is given by \begin{equation} \frac{1}{\Lambda_\gamma} = \frac{1}{\Lambda_B} + \frac{1}{\Lambda_W} \, . \end{equation} The calculation is detailed in \app{EFTscattering}. In the ($++,--$) helicity basis we find \begin{equation} \label{TEFT} \mathcal{T} = -\frac{e^4}{\Lambda^2_{\gamma}} \left( \begin{array}{cc} \frac{s^2}{s-M_S^2} & \frac{s^2}{s-M_S^2} + \frac{t^2}{t-M_S^2} + \frac{u^2}{u-M_S^2} \\ \frac{s^2}{s-M_S^2} + \frac{t^2}{t-M_S^2} + \frac{u^2}{u-M_S^2} & \frac{s^2}{s-M_S^2} \end{array} \right) \overset{\sqrt{s} \ \gg \ M_S}{\simeq} -\frac{e^4s}{\Lambda^2_{\gamma}} \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) \, , \end{equation} where in the last step we took the high-energy limit. Note that only the $s$-channel survives at high energies. The projection on the $J=0$ partial waves is obtained by applying \eq{PWproj} and by multiplying by a $1/2$ factor which takes into account the presence of identical particles both in the initial and final states. In the high-energy limit we get \begin{equation} a^0 \simeq - \frac{e^4 s}{32 \pi \Lambda^2_{\gamma}} \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) \, , \end{equation} which, confronted with \eq{UnitBound}, leads to the tree-level unitarity bound \begin{equation} \sqrt{s} \lesssim \sqrt{16 \pi} \frac{\Lambda_{\gamma}}{e^2} \, . \end{equation} As a matter of fact, the bound above can be made stronger if one considers the full $VV \to V'V'$ scattering matrix, where $V$ and $V'$ are any of the $8+3+1$ (transversely polarized) SM gauge bosons of the effective Lagrangian in \eq{effLSM}. 
In such a case, the previous calculation is readily generalized in the high-energy limit where only the $s$-channel survives. To this end, we note that a scattering amplitude in the $s$-channel can be written as \begin{equation} \label{factmij} m_{ij}= \frac{a_i a_j}{s-M^2_S} \, , \end{equation} where $a_i$ and $a_j$ are obtained by cutting any $i \to j$ diagram in two parts along the $s$-channel propagator. The matrix in \eq{factmij} has rank 1 and its non-zero eigenvalue is given by the trace. Hence, denoting by $\tilde{a}^0$ the eigenvalue of the $VV \to V'V'$ scattering matrix, in the high-energy limit we get \begin{equation} \tilde{a}^0 \simeq - \frac{s}{32 \pi} \left( \frac{8 g_3^4}{\Lambda^2_{g}} + \frac{3 g_2^4}{\Lambda^2_{W}} + \frac{g_1^4}{\Lambda^2_{B}} \right) \, . \end{equation} Correspondingly, the tree-level unitarity bound is given by \begin{equation} \label{boundVV} \frac{s}{32 \pi} \left(8 \frac{g_3^4}{\Lambda^2_g} + 3 \frac{g_2^4}{\Lambda^2_W} + \frac{g_1^4}{\Lambda^2_B} \right) \lesssim \frac{1}{2} \, . \end{equation} We remark that in deriving these bounds we consider only the transverse polarizations of the $W$ and $Z$ gauge bosons. Generally, scattering amplitudes involving longitudinally polarized massive vector bosons can grow as positive powers of $E/m_{W,Z}$, implying an apparently stronger dependence on $s$. However, as can be easily verified (through an explicit calculation of the processes at hand or more generally via a clever gauge choice~\cite{Wulzer:2013mza}), the scattering amplitudes involving longitudinally polarized states sourced by the gauge field strengths in Eq.~\eqref{effLSM} are \textit{suppressed} by powers of $m_{W,Z}/E$ and thus do not lead to relevant unitarity constraints at high $s$.
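As a numerical illustration of the single-channel $\gamma\gamma$ bound derived above, the following Python sketch (with a purely hypothetical choice of $\Lambda_\gamma$) confirms that the $J=0$ partial wave saturates $|a^0|=1/2$ exactly at $\sqrt{s} = \sqrt{16\pi}\,\Lambda_\gamma/e^2$:

```python
import math

alpha_em = 1.0 / 137.0
e2 = 4.0 * math.pi * alpha_em   # e^2 = 4 pi alpha_EM
lam_gamma = 10.0                # TeV; hypothetical effective scale

def a0(sqrt_s):
    # J=0 partial wave of gamma gamma -> gamma gamma in the high-energy limit
    return e2**2 * sqrt_s**2 / (32.0 * math.pi * lam_gamma**2)

# |a0| <= 1/2  =>  sqrt(s) <= sqrt(16 pi) Lam_gamma / e^2
sqrt_s_max = math.sqrt(16.0 * math.pi) * lam_gamma / e2
print(f"sqrt(s)_max = {sqrt_s_max:.0f} TeV, a0 at the bound = {a0(sqrt_s_max):.2f}")
# -> sqrt(s)_max = 773 TeV, a0 at the bound = 0.50
```

The very large $\sqrt{s}_{\rm max}$ illustrates how weak the EFT unitarity bound is when only the photon coupling is switched on.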
In the $v \to 0$ limit there is just one additional tree-level $s$-channel contribution leading to $2 \to 2$ scatterings of SM particles from \eq{effLSM}, that is due to the operator \begin{equation} \label{Higgsscattop} \frac{1}{\Lambda_H} S (D_{\mu} H)^{\dagger} D^{\mu} H \supset \frac{S}{\Lambda_H} \partial_{\mu}H_i^{\dagger} \partial^{\mu} H_i \, , \end{equation} where we have neglected vertices with 4 or 5 particles and $H^T = (H_1, H_2)$. In the ($|H^\dag_1 H_1\rangle$, $|H^\dag_2 H_2 \rangle$) basis, the $J=0$ partial wave matrix at $\sqrt{s} \gg M_S$ is found to be \begin{equation} a^0 \simeq -\frac{s}{64 \pi \Lambda_H^2} \left( \begin{array}{cc} 1 & 1\\ 1 & 1 \end{array} \right) \, . \end{equation} Imposing the unitarity bound on the highest eigenvalue we get \begin{equation} \sqrt{s} \lesssim \sqrt{32 \pi} \, \Lambda_H \, . \end{equation} Note that in the EW broken vacuum the constraint corresponds to the scattering of both the physical Higgs boson and the longitudinally polarized massive EW gauge bosons. Considering thus also $|H^\dag_i H_i\rangle$ as possible initial and final states, \eq{boundVV} is generalized into \begin{equation} \label{boundVVgeneral} \frac{s}{32 \pi} \left(8 \frac{g_3^4}{\Lambda^2_g} + 3 \frac{g_2^4}{\Lambda^2_W} + \frac{g_1^4}{\Lambda^2_B} + \frac{1}{2\Lambda^2_H} \right) \lesssim \frac{1}{2} \, . \end{equation} \subsection{Fermion-scalar contact interactions} Next we consider the contact interaction \begin{equation} \label{effopbbbar} - \frac{1}{\Lambda_d} S \, \overline{Q}_L d_R H = \left[ - \frac{1}{\Lambda_d} \delta^b_a \delta^j_i \right] S \, (\overline{Q}_L)^{ai} (d_R)_{b} H_j \, , \end{equation} where we have explicitly factored out the color and $SU(2)_L$ group structure. In this case the leading scattering process is $\overline{Q} d \to S H$.
By explicitly writing the polarization and gauge indices in the amplitude, one finds \begin{equation} \mathcal{T} = - \frac{\delta^b_a \delta^j_i}{2 \Lambda_d} \overline{v}^s(k) \left( 1 + \gamma_5 \right) u^r(p) \, . \end{equation} Only the $++$ and $--$ polarizations survive. By explicit evaluation (cf.~\app{psipsibarpsipsibar} for the expression of the spinor polarizations) we get \begin{align} \mathcal{T}_{++} &= \frac{\delta^b_a \delta^j_i}{\Lambda_d} (E + p^3) \overset{\sqrt{s} \ \gg \ M_S}{\simeq} \delta^b_a \delta^j_i \frac{\sqrt{s}}{\Lambda_d} \, , \\ \mathcal{T}_{--} &= \frac{\delta^b_a \delta^j_i}{\Lambda_d} (E - p^3) \overset{\sqrt{s} \ \gg \ M_S}{\simeq} 0 \, . \end{align} At high energies the $J=0$ partial wave is obtained by considering the color singlet channel for a state in the linear combination $\frac{1}{\sqrt{2}} \left( | \overline{Q} d \rangle + | S H \rangle \right)$, which gives \begin{equation} a^0 \simeq \frac{1}{16 \pi} \frac{\sqrt{s}}{\Lambda_d} \, . \end{equation} Correspondingly, the tree-level unitarity bound reads \begin{equation} \label{boundbbbarEFT} \sqrt{s} \lesssim 8 \pi \Lambda_d \, . \end{equation} Similarly, from the other two contact interactions in the last row of \eq{effLSM} we get $\sqrt{s} \lesssim 8 \pi \Lambda_u$ and $\sqrt{s} \lesssim 8 \pi \Lambda_e$. 
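The same arithmetic can be checked for the contact-interaction bound: imposing $|a^0| \leq 1/2$ on $a^0 = \sqrt{s}/(16\pi\Lambda_d)$ indeed returns \eq{boundbbbarEFT}. A minimal sketch, with a hypothetical value of $\Lambda_d$:

```python
import math

lam_d = 1.0  # TeV; hypothetical contact-interaction scale

# a0 = sqrt(s) / (16 pi Lam_d) <= 1/2  =>  sqrt(s) <= 8 pi Lam_d
sqrt_s_max = 8.0 * math.pi * lam_d
a0 = sqrt_s_max / (16.0 * math.pi * lam_d)
print(f"sqrt(s)_max = {sqrt_s_max:.1f} TeV, a0 there = {a0:.2f}")
# -> sqrt(s)_max = 25.1 TeV, a0 there = 0.50
```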
\subsection{Unitarity bounds} As an exemplification we consider a scalar resonance $S$ with mass $M_S$ and total width $\Gamma_S$ appearing in a di-photon final state at the LHC.\footnote{An analogous analysis can also be performed for other EW gauge boson final states, with the slight complication of disentangling the transverse and longitudinal gauge boson polarizations, as they are sourced by different terms in the EFT Lagrangian ($\Lambda_{B,W}$ and $\Lambda_H$, respectively).} Expanding the effective Lagrangian in \eq{effLSM} around the broken electroweak (EW) vacuum, the part relevant for $S$ production at the LHC is \begin{equation} \label{effL} \mathcal{L}^{(5)}_{\text{int.}} \supset - \frac{g_3^2}{2 \Lambda_g} S G^2_{\mu\nu} - \frac{e^2}{2 \Lambda_\gamma} S F^2_{\mu\nu} - \sum_q y_{qS} S \overline{q} q \, , \end{equation} whose operators give rise to the decay widths \begin{align} \label{Ggammagamma} \Gamma_{\gamma\gamma} &\equiv \Gamma(S \to \gamma\gamma) = \pi \alpha_{\rm EM}^2 \frac{M_S^3}{\Lambda_\gamma^2} \, , \\ \label{Ggg} \Gamma_{gg} &\equiv \Gamma(S \to gg) = 8 \pi \alpha_s^2 \frac{M_S^3}{\Lambda_g^2} \, , \\ \label{Gbbbar} \Gamma_{q\overline{q}} &\equiv \Gamma(S \to q \overline{q}) = \frac{3}{8\pi} y_{qS}^2 M_S \left({1-\frac{4m_q^2}{M_S^2}}\right)^{3/2} \, . \end{align} The matching between the operators in \eq{effL} and \eq{effLSM} then yields \begin{equation} \label{matchEFT} \frac{1}{\Lambda_\gamma} = \frac{1}{\Lambda_B} + \frac{1}{\Lambda_W} \, , \qquad y_{qS} = \frac{v}{\sqrt 2 \Lambda_q} \, . \end{equation} In the narrow width approximation the prompt $S$ production at the LHC can also be fully parametrized in terms of the relevant decay widths \begin{equation} \sigma(pp \to S) = \frac{1}{M_S s} \left[ \sum_{\mathcal P} C_{\mathcal P \bar{\mathcal P}} \Gamma_{\mathcal P \bar{\mathcal P}} \right]\,, \end{equation} where $\sqrt s$ is the LHC $pp$ collision energy and $C_{\mathcal P \bar{\mathcal P}}$ parametrize the relevant parton luminosities.
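For orientation, the partial widths in \eqs{Ggammagamma}{Ggg} can be evaluated numerically; the effective scales in the sketch below are hypothetical inputs, chosen only to exercise the formulas:

```python
import math

alpha_em, alpha_s = 1.0 / 137.0, 0.1
MS = 750.0                     # GeV, benchmark resonance mass
lam_gamma = lam_g = 1.0e4      # GeV; hypothetical effective scales (10 TeV)

# Eqs. (Ggammagamma) and (Ggg)
gamma_aa = math.pi * alpha_em**2 * MS**3 / lam_gamma**2
gamma_gg = 8.0 * math.pi * alpha_s**2 * MS**3 / lam_g**2

print(f"Gamma_aa/M_S = {gamma_aa/MS:.1e}, Gamma_gg/M_S = {gamma_gg/MS:.1e}")
```

The $8\pi\alpha_s^2/(\pi\alpha_{\rm EM}^2)$ ratio makes explicit why, at equal effective scales, the $gg$ width dominates over the $\gamma\gamma$ one by more than three orders of magnitude.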
For illustration purposes, in the following we consider in turn either $gg$- and $\gamma\gamma$-induced processes or alternatively $b\overline{b}$- and $\gamma\gamma$-induced rates, at a benchmark mass of $M_S=750$~GeV. The remaining possibilities lie in between these two limiting cases considering the values of the relevant parton luminosities (their values at $\sqrt s=8$ TeV and 13 TeV LHC can be found e.g.~in~\cite{Franceschini:2015kwy}). In the former case, given a 13 TeV cross-section $\sigma_{\gamma\gamma} \equiv \sigma (pp \to S ) \mathcal B_{\gamma\gamma}$, one obtains the relation \begin{equation} \label{xsecgg} \frac{\Gamma_{\gamma \gamma} }{M_S} \frac{\Gamma_{gg} }{M_S} \simeq 1.4 \times 10^{-7} \frac{\sigma_{\gamma\gamma}}{\rm fb}\frac{\Gamma_S}{M_S} \, , \end{equation} while for the latter we obtain \begin{equation} \label{xsecbbbar} \frac{\Gamma_{\gamma \gamma} }{M_S} \frac{\Gamma_{b\overline{b}} }{M_S} \simeq 2.3 \times 10^{-5} \frac{\sigma_{\gamma\gamma}}{\rm fb} \frac{\Gamma_S}{M_S} \, . \end{equation} These relations define the phenomenological benchmarks for the resonance partial widths into gauge boson and quark final states, to be subjected to constraints from perturbative unitarity. To make contact with the EFT unitarity discussion of the preceding subsections we use \eqs{Ggammagamma}{Ggg} and trade $\Lambda_ g$, $\Lambda_W$ and $\Lambda_B$ for $\Gamma_{gg}$, $\Gamma_{\gamma \gamma}$ and the ratio $r \equiv \Lambda_B/ \Lambda_W$.
In particular, we get \begin{align} \frac{1}{\Lambda^2_g} &= \frac{\Gamma_{gg}}{8 \pi \alpha^2_s M_S^3} \, , \\ \frac{1}{\Lambda^2_W} &= \frac{\Gamma_{\gamma \gamma}}{\pi \alpha_{\rm EM}^2 M_S^3} \left( \frac{r}{1+r}\right)^2 \, , \\ \frac{1}{\Lambda^2_B} &= \frac{\Gamma_{\gamma \gamma}}{\pi \alpha_{\rm EM}^2 M_S^3} \left( \frac{1}{1+r}\right)^2 \, , \end{align} which inserted back into \eq{boundVV} yield \begin{equation} \label{boundVV2} \sqrt{s} \lesssim M_S \left( \frac{\Gamma_{gg}}{M_S} + f(r) \frac{\Gamma_{\gamma \gamma}}{M_S} \right)^{-1/2} \, , \end{equation} with \begin{equation} f(r) = \frac{ 3 r^2 s^{-4}_W + c_W^{-4}}{ (1+r)^2} \, . \end{equation} Barring the fine-tuned region around $r=-1$ (corresponding to $1/\Lambda_\gamma = 0$), the function $f(r)$ attains its global minimum of 1.6 at $r=0.030$ and asymptotically approaches its maximum of 57 as $r \to \pm \infty$. Hence, we can set the following unitarity bounds \begin{align} \label{boundEFTgg} \sqrt{s} &\lesssim 32 \, M_S \left(\frac{\Gamma_{gg}/M_S}{10^{-3}}\right)^{-1/2} \, , \\ \label{boundEFTgammagamma} \sqrt{s} &\lesssim (13 \div 79) \, M_S \left(\frac{\Gamma_{\gamma \gamma}/M_S}{10^{-4}}\right)^{-1/2} \, , \end{align} where the values 13 and 79 in the last equation correspond to the boundary values $r \to \pm \infty$ and $r = 0.030$, respectively. Generally, these bounds can be interpreted as an indication of the mass scale of new degrees of freedom UV completing the effective low-energy description and regularizing (unitarizing) the amplitude growth.
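The quoted extrema of $f(r)$ and the numerical coefficients of \eqs{boundEFTgg}{boundEFTgammagamma} follow from elementary arithmetic; the sketch below reproduces them, using $s_W^2 \simeq 0.231$ as an input (small shifts of this value move the asymptotic maximum between 56 and 57, and the upper coefficient between 78 and 79):

```python
import math

sw2 = 0.231                     # sin^2(theta_W); approximate input value
cw2 = 1.0 - sw2

def f(r):
    # f(r) = (3 r^2 s_W^-4 + c_W^-4) / (1 + r)^2
    return (3.0 * r**2 / sw2**2 + 1.0 / cw2**2) / (1.0 + r)**2

# df/dr = 0  =>  r_min = c_W^{-4} / (3 s_W^{-4})
r_min = (1.0 / cw2**2) / (3.0 / sw2**2)
f_inf = 3.0 / sw2**2            # limit r -> +/- infinity

print(f"r_min = {r_min:.3f}, f(r_min) = {f(r_min):.2f}, f(inf) = {f_inf:.1f}")

# Coefficients of the bounds for Gamma/M = 1e-3 (gg) and 1e-4 (gamma gamma)
print(f"gg: {1/math.sqrt(1e-3):.0f},  aa: {1/math.sqrt(f_inf*1e-4):.0f}"
      f" to {1/math.sqrt(f(r_min)*1e-4):.0f}")
```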
If $S$ is a member of a new strongly coupled sector (i.e.~a composite state)~\cite{Franceschini:2015kwy, Harigaya:2015ezk,Nakai:2015ptz,Pilaftsis:2015ycr,Belyaev:2015hgo,Bian:2015kjt,Molinaro:2015cwg,Barrie:2016ntq,Craig:2015lra,Draper:2016fsr,Redi:2016kip,Kamenik:2016izk}, the above results imply upper bounds on its compositeness scale.\footnote{It is an interesting question whether there could be a UV model where new dynamics shows up only at the scale of the ultimate unitarity violation, as e.g.~in \eq{boundEFTgammagamma}. A possibility would be for instance an SU$(N_{\rm TC})$ model of vector-like confinement (along the lines of Ref.~\cite{Redi:2016kip}) with a large $N_{\rm TC}$. Since the anomaly coefficients are enhanced by $N_{\rm TC}$, this would allow one to obtain a parametrically large di-boson signal while keeping a relatively high confinement scale $\Lambda_{\rm TC}$. A detailed study of the feasibility of such a scenario goes beyond the scope of the present paper.} Unfortunately, in this context, unless a prospective $\mathcal O(\rm TeV)$ mass di-boson resonance has a very large di-boson decay width, the bounds do not appear strong enough to guarantee observable effects at LHC energies, and a prospective future 50-100~TeV hadron-hadron collider~\cite{Assadi:2014nea, Tang:2015qga} would be called for. On the other hand, in perturbative weakly-coupled realizations discussed in the next section, where $S$ remains an elementary particle in the UV, its couplings to SM gauge field strengths cannot be generated at the tree level. Thus one expects new dynamics to appear much below the above conservative unitarity estimates. In the case of quark scattering, we use \eq{Gbbbar} and \eq{matchEFT}.
Thus the bound in \eq{boundbbbarEFT} translates into \begin{equation} \label{boundEFTbbbar} \sqrt{s} \lesssim 2 \sqrt{3 \pi} v \left( \frac{\Gamma_{q\bar{q}}}{M_S} \right)^{-{1}/{2}} \simeq 4.8 \ \text{TeV} \left( \frac{\Gamma_{q\bar{q}}/M_S}{0.1} \right)^{-{1}/{2}} \, , \end{equation} where on the r.h.s. we have normalized the partial width in $q\bar q$ to a broad resonance scenario. Contrary to $S$ couplings to SM gauge field strengths, its couplings to SM fermions can be easily realized in weakly-coupled renormalizable models already at the tree level. In particular, this requires (a) $S$ mixing with the SM Higgs doublet, (b) embedding $S$ into an EW doublet with the quantum numbers of the SM Higgs, or (c) the introduction of new massive fermions mixing with the SM quarks and/or leptons. Case (a) is constrained by Higgs coupling measurements~\cite{Falkowski:2015swt, ATLASCONF2015044}. In both remaining cases, the above result can be interpreted as an upper bound on the mass scale of the extra EW (and color) charged states present in the UV completions. Unfortunately, unless $S$ decay channels to SM quarks induce a sizable width, LHC energies will not necessarily be sufficient to exhaust these possibilities directly within the EFT. One should thus consider explicit UV realizations. In case (b), which goes beyond the scope of this paper, precision Higgs boson and EW measurements can be used to provide additional handles~\cite{Angelescu:2015uiz, Becirevic:2015fmu, Han:2015qqj,Moretti:2015pbj,Han:2016bvl,Kamenik:2016tuv}. Case (c), on the other hand, is covered in the next section. In \fig{UBplot} we display the scale of unitarity violation $\Lambda_U$ [TeV] in the $\mathcal B_{\gamma\gamma}$ vs.~$\sigma_{\gamma\gamma}$ plane, for either $gg$ or $b\overline{b}$ production and assuming either a broad or narrow resonance.
In particular, for $gg$ production we have \begin{equation} \label{LambdaUgg} \Lambda_U = M_S \left[ \frac{\Gamma_{gg}}{M_S} + f(r) \frac{\Gamma_{\gamma \gamma}}{M_S} \right]^{-1/2} \, , \end{equation} while for $b\overline{b}$ production \begin{equation} \label{LambdaUbb} \Lambda_U = \text{min} \left \{ 2 \sqrt{3 \pi} v \left( \frac{\Gamma_{b\overline{b}}}{M_S} \right)^{-1/2} , M_S \left[ f(r) \frac{\Gamma_{\gamma \gamma}}{M_S} \right]^{-1/2} \right \} \, . \end{equation} As reference values we take $M_S = 750$ GeV and $f(r) = 30$. The horizontal lines from top to bottom indicate a cross-section signal of $6$, $0.6$ and $0.2$ fb, assuming the same significance of the signal over the three integrated luminosities $\int \mathcal{L} = 3.2$, $300$ and $3000$ fb$^{-1}$. The red curve denotes instead the reference value $\Lambda = 20$ TeV, corresponding to the typical squark-gluino reach of a futuristic 100 TeV collider \cite{Golling:2016gvc}, which applies in the case of coloured new physics generating the effective operators. Hence, if a signal is observed above the red curve, it means that a 100 TeV collider could potentially probe the physics responsible for the restoration of unitarity. We observe that such low-scale violations of unitarity are more readily obtained in the large width scenario, and that for any given $\sigma_{\gamma\gamma}$ and $\mathcal B_{\gamma\gamma}$, unitarity violation sets in earlier for $b\bar b$ induced production, compared to gluon fusion processes, due to much smaller PDFs.
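A quick evaluation of \eqs{LambdaUgg}{LambdaUbb} at illustrative (hypothetical) benchmark widths shows the typical scales involved, and reproduces the $\simeq 4.8$~TeV figure of \eq{boundEFTbbbar} for a broad resonance:

```python
import math

MS, v = 0.75, 0.246   # TeV
f_r = 30.0            # reference value of f(r) used in the plots

def lam_U_gg(gam_gg, gam_aa):
    # Eq. (LambdaUgg); widths normalized to M_S
    return MS / math.sqrt(gam_gg + f_r * gam_aa)

def lam_U_bb(gam_bb, gam_aa):
    # Eq. (LambdaUbb)
    return min(2.0 * math.sqrt(3.0 * math.pi) * v / math.sqrt(gam_bb),
               MS / math.sqrt(f_r * gam_aa))

# Hypothetical benchmark widths, for orientation only
print(f"gg production: Lambda_U = {lam_U_gg(1e-3, 1e-4):.1f} TeV")
print(f"bb production: Lambda_U = {lam_U_bb(0.1, 1e-4):.1f} TeV")
# -> 11.9 TeV and 4.8 TeV, respectively
```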
\begin{figure}[!ht] \begin{center} \includegraphics[width=.49\textwidth]{ggproductionGlarge}~~~~ \includegraphics[width=.49\textwidth]{ggproductionGsmall} \\ \vspace*{0.5cm} \includegraphics[width=.49\textwidth]{bbproductionGlarge}~~~~ \includegraphics[width=.49\textwidth]{bbproductionGsmall} \end{center} \caption{\label{UBplot} Scale of unitarity violation $\Lambda_U$ in TeV in the $(\mathcal B_{\gamma\gamma}, \sigma_{\gamma\gamma})$ plane (cf.~\eqs{LambdaUgg}{LambdaUbb}). Upper/lower plots correspond to $gg$/$b\overline{b}$ production, while left/right plots to the large/small width scenario. As reference values we assume $M_S = 750$ GeV and $f(r)=30$. The red curve denotes the new physics scale accessible at a futuristic 100 TeV collider, $\Lambda = 20$ TeV, while the three horizontal lines from top to bottom are three reference cross-sections, namely $6$, $0.6$ and $0.2$ fb. The yellow triangle on the top-left of each figure is the region in parameter space where $\Gamma_S / M_S > 10 \ \%$.} \end{figure} \section{Weakly-coupled models} \label{wcmodels} In this section we consider explicit UV completions of the effective operators of \sect{EFT}, capturing the main features of several proposed NP models which have recently appeared in the literature. In particular, we will assume either fermion or scalar mediators\footnote{The case of vector mediators has been suggested and analyzed in Ref.~\cite{deBlas:2015hlv} within a simplified model. A complete renormalizable UV realization of this idea requires a non-trivial extension of the SM gauge sector, subject to many additional theoretical and experimental constraints. For this reason we do not consider such a possibility in our analysis.} and CP-even couplings (the CP-odd case leads to similar conclusions as far as unitarity bounds are concerned).
Moreover, we restrict ourselves to the cases of $b\bar b$, $gg$ and/or $\gamma\gamma$ decays and postulate different sets of fields which separately contribute to the relevant partial widths. Note that as far as perturbativity is concerned, this latter hypothesis leads to conservative bounds. Colored mediators are experimentally much more constrained, and their masses generally need to lie close to or above the TeV scale. On the other hand, much lighter uncolored mediators are still allowed, potentially leading to resonantly enhanced one-loop contributions to radiative $S$ decays~\cite{Bharucha:2016jyr,DiChiara:2016dez}. The first model comprises new fermionic mediators (see e.g.~\cite{Goertz:2015nkp}), all singlets under $SU(2)_L$. To this end, we introduce $N_Q$ copies of electromagnetic (EM) neutral vector-like QCD triplets $Q_A\sim(3,1,0)$ (with $A=1,\ldots,N_Q$) as well as $N_E$ copies of colorless vector-like fermions $E_B$ (with $B=1,\ldots,N_E$), with (hyper)charge $Y$ ($E_B\sim (1,1,Y)$). We assume the theory to be invariant under a $U(N_Q) \otimes U(N_E)$ global symmetry and the di-boson resonance is represented by a real scalar field $S$. The Lagrangian featuring the new fermions reads \begin{align} \label{LNF} \mathcal{L}^{\rm{NF}} &= \overline{Q}_A i \slashed{D} Q_A + \overline{E}_B i \slashed{D} E_B \nonumber \\ &- \left( m_Q \overline{Q}_A Q_A + m_E \overline{E}_B E_B + y_Q S \overline{Q}_A Q_A + y_E S \overline{E}_B E_B \right) - V(S) \, , \end{align} where the details of the scalar potential are not needed for our discussion. The second model we are going to consider involves instead new scalar mediators. In analogy to the previous case, we introduce $N_{\tilde{Q}}$ copies of EM neutral QCD scalar triplets $\tilde{Q}_A\sim(3,1,0)$ and $N_{\tilde{E}}$ copies of colorless charged scalars $\tilde{E}_B\sim (1,1,Y)$, again all singlets under $SU(2)_L$. 
We also assume the theory to be invariant under a $U(N_{\tilde{Q}}) \otimes U(N_{\tilde{E}})$ global symmetry and the di-boson resonance is represented by a real scalar field $S$. The Lagrangian featuring the new scalars reads \begin{align} \label{LNS} \mathcal{L}^{\rm{NS}} &= |D_\mu \tilde{Q}_A|^2 + |D_\mu \tilde{E}_B|^2 \nonumber \\ &- \left( m_{\tilde{Q}} \tilde{Q}^*_A \tilde{Q}_A + m_{\tilde{E}} \tilde{E}^*_B \tilde{E}_B + A_Q S \tilde{Q}^*_A \tilde{Q}_A + A_E S \tilde{E}^*_B \tilde{E}_B \right) + \ldots \, , \end{align} where the ellipses stand for additional terms in the scalar potential which are irrelevant for our discussion. Focusing on the CP-even couplings, the contributions to $\Gamma_{\gamma\gamma}$ and $\Gamma_{gg}$ can now be written as~\cite{Franceschini:2015kwy} \begin{align} \label{PRgammagamma} \frac{\Gamma_{\gamma\gamma}}{M_S} &= \frac{\alpha_{\rm EM}^2}{16\pi^3} \left| N_E Q^2_E y_E \sqrt{\tau_E} \mathcal{S}(\tau_E) + N_{\tilde{E}} Q^2_{\tilde{E}} \frac{A_E}{2 M_S} \mathcal{F}(\tau_{\tilde{E}}) \right|^2 \, , \\ \label{PRgg} \frac{\Gamma_{gg}}{M_S} &= \frac{\alpha_s^2}{2\pi^3} \left| N_Q I_Q y_Q \sqrt{\tau_Q} \mathcal{S}(\tau_Q) + N_{\tilde{Q}} I_{\tilde{Q}} \frac{A_Q}{2 M_S} \mathcal{F}(\tau_{\tilde{Q}}) \right|^2 \, , \end{align} where $\tau_i = 4 m^2_i/M^2_S$ (for $i = E,\tilde{E},Q,\tilde{Q}$), $I_{Q}=I_{\tilde{Q}}=1/2$ is the index of the QCD representation, while $Q_{E(\tilde{E})}$ is the EM charge of $E(\tilde{E})$. The loop functions read \begin{align} \mathcal{S}(\tau) &= 1 + (1-\tau) \arctan^2(1/\sqrt{\tau-1}) \, , \\ \mathcal{F}(\tau) &= \tau \arctan^2(1/\sqrt{\tau-1}) -1 \, . \end{align} In particular, in the limit of heavy particles $(\tau \to \infty)$, they decouple as $\mathcal{S}(\tau) \simeq 2/(3 \tau)$ and $\mathcal{F}(\tau) \simeq 1/(3 \tau)$. 
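The quoted decoupling limits of the loop functions can be verified directly; the sketch below evaluates $\mathcal S$ and $\mathcal F$ at large $\tau$ and compares with the $2/(3\tau)$ and $1/(3\tau)$ asymptotics:

```python
import math

def S(tau):
    # S(tau) = 1 + (1 - tau) arctan^2( 1 / sqrt(tau - 1) ), valid for tau > 1
    return 1.0 + (1.0 - tau) * math.atan(1.0 / math.sqrt(tau - 1.0))**2

def F(tau):
    # F(tau) = tau arctan^2( 1 / sqrt(tau - 1) ) - 1, valid for tau > 1
    return tau * math.atan(1.0 / math.sqrt(tau - 1.0))**2 - 1.0

tau = 1.0e4  # heavy-mediator regime, tau = 4 m^2 / M_S^2 >> 1
print(f"S(tau) = {S(tau):.4e}  vs  2/(3 tau) = {2.0/(3.0*tau):.4e}")
print(f"F(tau) = {F(tau):.4e}  vs  1/(3 tau) = {1.0/(3.0*tau):.4e}")
```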
As a reference value we fix $M_S = 750$ GeV, $\alpha_s (M_S/2) = 0.1$, $\alpha_{\rm EM} = 1/137$ and set the masses of the mediators close to the current experimental bounds from direct searches,\footnote{Stable charged leptons must be heavier than about 400 GeV in order to avoid excessive Drell-Yan production \cite{Chatrchyan:2013oca,DiLuzio:2015oha}, while the bounds on long-lived colored particles are more model dependent due to non-perturbative QCD uncertainties and typically range from few hundreds of GeV to 1 TeV \cite{Aad:2013gva,Khachatryan:2015jha}.} $m_{E,\tilde{E}} = 400$ GeV and $m_{Q,\tilde{Q}} = 1$ TeV, thus getting \begin{align} \label{GammaNF} \frac{\Gamma^{\rm{NF}}_{\gamma\gamma}}{M_S} &= 7.8 \times 10^{-8} \ N_E^2 Q^4_E y_E^2 \, , \qquad\qquad\qquad\quad \frac{\Gamma^{\rm{NF}}_{gg}}{M_S} = 2.7 \times 10^{-6} \ N_Q^2 y_Q^2 \\ \label{GammaNS} \frac{\Gamma^{\rm{NS}}_{\gamma\gamma}}{M_S} &= 1.2 \times 10^{-8} \ N_{\tilde{E}}^2 Q^4_{\tilde{E}} \left(\frac{A_E}{750 \ \rm{GeV}}\right)^2 \, , \qquad \frac{\Gamma^{\rm{NS}}_{gg}}{M_S} = 2.6 \times 10^{-8} \ N_{\tilde{Q}}^2 \left(\frac{A_Q}{750 \ \rm{GeV}}\right)^2 \, , \end{align} where we have separately considered the cases of new fermions and scalars. For heavier mediator masses the rates decouple as powers of $1/\tau_i = M^2_S/(4 m^2_i)$ and thus even larger couplings are required. For this reason, perturbativity bounds extracted using \eqs{GammaNF}{GammaNS} are understood to be conservative. 
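The numerical coefficients in \eqs{GammaNF}{GammaNS} follow from \eqs{PRgammagamma}{PRgg} with the stated benchmark masses; the check below sets $N = Q = y = 1$, $I = 1/2$ and $A = M_S = 750$~GeV, and reproduces them to the quoted precision:

```python
import math

alpha_em, alpha_s = 1.0 / 137.0, 0.1
MS, mE, mQ = 750.0, 400.0, 1000.0   # GeV, benchmark masses from the text

def S(tau):
    return 1.0 + (1.0 - tau) * math.atan(1.0 / math.sqrt(tau - 1.0))**2

def F(tau):
    return tau * math.atan(1.0 / math.sqrt(tau - 1.0))**2 - 1.0

tauE, tauQ = 4.0 * mE**2 / MS**2, 4.0 * mQ**2 / MS**2

# Eqs. (PRgammagamma)/(PRgg) with unit couplings, charges and multiplicities
aa_NF = alpha_em**2 / (16.0 * math.pi**3) * (math.sqrt(tauE) * S(tauE))**2
gg_NF = alpha_s**2 / (2.0 * math.pi**3) * (0.5 * math.sqrt(tauQ) * S(tauQ))**2
aa_NS = alpha_em**2 / (16.0 * math.pi**3) * (0.5 * F(tauE))**2        # A_E = M_S
gg_NS = alpha_s**2 / (2.0 * math.pi**3) * (0.5 * 0.5 * F(tauQ))**2    # A_Q = M_S

print(f"{aa_NF:.1e} {gg_NF:.1e} {aa_NS:.1e} {gg_NS:.1e}")
# -> 7.8e-08 2.7e-06 1.2e-08 2.6e-08
```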
Finally, we also consider a special case of the fermionic model, where at least one colored fermionic mediator has the SM gauge quantum numbers of the down-type right-handed SM quarks, $\mathcal B \sim (3,1,-1/3)$, and mixes with the $b$-quark, in turn inducing $S\bar b b$ interactions.\footnote{Analogous cases for vector-like fermions mixing with other quark flavors can easily be derived using the results of~\cite{Fajfer:2013wca}.} The relevant $b-\mathcal B$ mixing Lagrangian is \begin{align} \label{VLmixing} \mathcal L^{\mathcal B-b} &= \bar Q_3 i \slashed D Q_3 + \bar b_R i \slashed D b_R + \bar{\mathcal B} i \slashed D \mathcal B - (M_{\mathcal B} + \tilde y_{\mathcal B} S) \bar{\mathcal B} \mathcal B \nonumber\\ &- y_b \bar Q_3 H b_R - y_{\mathcal B} \bar Q_3 H \mathcal B_R - \tilde y_{b} \bar{ \mathcal B}_L S b_R + \rm h.c.\,, \end{align} where $Q_3= (t_L, b_L)$, we have used reparametrization invariance to rotate away a possible $\bar {\mathcal B} b_R$ mass-mixing term, and have also neglected small CKM-induced mixing terms with the first two SM generations. In the following we assume all couplings to be real, in accordance with the CP-even nature of $S$.
After EW symmetry breaking, the physical eigenstates $\mathcal B'$ and $b'$ are then given in terms of the above weak eigenstates as \begin{equation} \left( \begin{array}{c} b_{L,R}' \\ \mathcal B_{L,R}' \end{array}\right) = \left( \begin{array}{cc} \cos\theta^{L,R}_{\mathcal B b} & \sin\theta^{L,R}_{\mathcal B b} \\ -\sin\theta^{L,R}_{\mathcal B b} & \cos\theta^{L,R}_{\mathcal B b} \end{array} \right) \left( \begin{array}{c} b_{L,R} \\ \mathcal B_{L,R} \end{array}\right)\,, \end{equation} where \begin{align} \tan 2 \theta^L_{\mathcal B b} &= \frac{\sqrt 2 v y_{\mathcal B } M_{\mathcal B}}{M_{\mathcal B}^2 - \left[ y_b^2 + y_{\mathcal B }^2 \right] v^2/2}\,, \\ \tan 2 \theta^R_{\mathcal B b} &= \frac{v^2 y_{b} y_{\mathcal B } }{M_{\mathcal B}^2 - \left[ y_b^2 - y_{\mathcal B}^2 \right] v^2/2}\,, \end{align} and the masses are related via \begin{equation} m_b m_{\mathcal B} = M_{\mathcal B} y_b \frac{v}{\sqrt 2} \,, \qquad m_b^2 + m_{\mathcal B}^2 = M_{\mathcal B}^2 + \frac{v^2}{2} \left[ y_b^2 + y_{\mathcal B}^2 \right]\,. \end{equation} In this basis, the $S$ interactions with $b'$ and $\mathcal B'$ are \begin{align} - \mathcal L^{\mathcal B - b} &\ni S \left[ \bar {\mathcal B'} \mathcal B' \cos\theta^L_{\mathcal B b} ( \cos\theta^R_{\mathcal B b} \tilde y_{\mathcal B} - \sin \theta^R_{\mathcal B b} \tilde y_b ) + \bar b' b' \sin\theta^L_{\mathcal B b}( \sin\theta^R_{\mathcal B b} \tilde y_{\mathcal B} + \cos \theta^R_{\mathcal B b} \tilde y_b)\right. \nonumber\\ &\left. + \bar {\mathcal B'_R} b'_L \sin \theta^L_{\mathcal B b} (\cos \theta^R_{\mathcal B b} \tilde y_{\mathcal B} - \sin\theta^R_{\mathcal B b} \tilde y_b )+ \bar {\mathcal B'_L} b'_R \cos \theta^L_{\mathcal B b} (\sin \theta^R_{\mathcal B b} \tilde y_{\mathcal B} + \cos\theta^R_{\mathcal B b} \tilde y_b ) + {\rm h.c.} \right] \,.
\end{align} The $\theta^L_{\mathcal B b}$ mixing angle is constrained by EW precision measurements to $\sin\theta^L_{\mathcal B b} = 0.05(4) $~\cite{Fajfer:2013wca}, while $\theta^R_{\mathcal B b}$ is parametrically further suppressed as $\theta^R_{\mathcal B b} \sim (m_b / m_{\mathcal B}) \theta^L_{\mathcal B b}$. The $S\to b \bar b$ decay width can thus be written compactly as \begin{equation} \label{Sbb} \frac{\Gamma_{b\bar b}}{M_S} = \frac{3}{8\pi} \sin^2 \theta^L_{\mathcal B b} \tilde y_b^2 = 3\times 10^{-4} \left(\frac{\sin \theta^L_{\mathcal B b}}{0.05}\right)^2 \tilde y_b^2 \,, \end{equation} up to terms suppressed as $m_b^2/\left\{M_S^2, m^2_{\mathcal B}\right\}$\,. Note that, contrary to the loop-induced decay modes, $\Gamma_{b\bar b}$ does not explicitly depend on the mediator mass. On the other hand, its implicit dependence through $\theta_{\mathcal B b}^{L} \sim v/m_{\mathcal B}$ is well constrained experimentally. The resulting unitarity constraints based on \eq{Sbb} and saturating the upper bound on $\theta_{\mathcal B b}^{L} $ can thus again be considered conservative. \subsection{Single fermion case} \label{fermmed} Let us first consider a simplified model featuring a real scalar singlet $S$ and a non-colored Dirac fermion $\psi$, with the interaction Lagrangian \begin{equation} \label{intSpsibarpsi} \mathcal{L}_I \supset - y S \overline{\psi} \psi \, . \end{equation} We denote the masses of $S$ and $\psi$ by $M_S$ and $m_\psi$, respectively. Focusing on the $J=0$ sector, the most relevant scattering amplitude is given by $\psi \overline{\psi} \to \psi \overline{\psi}$ (cf.~\app{psipsibarpsipsibar}).
In particular, the matrix of scattering amplitudes in the ($++,--$) helicity basis\footnote{$+-$ and $-+$ have zero projection on the $J=0$ sector.} is found to be \begin{equation} \label{TFSfull} \mathcal{T} = -y^2 \left( \begin{array}{cc} \frac{4 (p^3)^2}{s-M^2_S} + \frac{-4 m^2 \cos^2\frac{\theta}{2}}{t - M^2_S} & \frac{4 (p^3)^2}{s-M^2_S} + \frac{4 E^2 \cos^2\frac{\theta}{2}}{t - M^2_S} \\ \frac{4 (p^3)^2}{s-M^2_S} + \frac{4 E^2 \cos^2\frac{\theta}{2}}{t - M^2_S} & \frac{4 (p^3)^2}{s-M^2_S} + \frac{-4 m^2 \cos^2\frac{\theta}{2}}{t - M^2_S} \end{array} \right) \overset{\sqrt{s} \ \gg \ M_S, \, m_\psi}{\simeq} - y^2 \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) \, , \end{equation} where in the last step we took the high-energy limit. The projection on the $J=0$ partial waves is readily obtained by applying \eq{PWproj}. We report here the expression in the high-energy limit (for the full expression see \eqs{a0++++full}{a0++--full} in \app{psipsibarpsipsibar}) \begin{equation} a^0 \simeq - \frac{y^2}{16 \pi} \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) \, , \end{equation} which, confronted with \eq{UnitBound}, yields the tree-level unitarity bound \begin{equation} y \lesssim \sqrt{8 \pi} \, . \end{equation} The behaviour of $|\mbox{Re}\, a^0_{++++}|$ and $|\mbox{Re}\, a^0_{++--}|$ with the full kinematical dependence is displayed in \fig{FermScat}, for the reference values $M_S = 750$ GeV, $m_\psi = 400$ GeV and $y = \sqrt{8 \pi}$. \begin{figure}[!ht] \begin{center} \includegraphics[width=.42\textwidth]{FS_svstch_pppp}~~~~ \includegraphics[width=.42\textwidth]{FS_svstch_ppmm} \end{center} \caption{\label{FermScat} Full kinematical dependence of $|\mbox{Re}\, a^0_{++++}|$ (left panel) and $|\mbox{Re}\, a^0_{++--}|$ (right panel), for the reference values $M_S = 750$ GeV, $m_\psi = 400$ GeV and $y = \sqrt{8 \pi}$. Dashed, dotted and full (red) lines represent respectively $s$-, $t$-channel and full contribution to the partial wave. 
Asymptotically, for $\sqrt{s} \gg M_S, m_\psi$, the values $|\mbox{Re}\, a^0_{++++}| \simeq \frac{1}{2}$ and $|\mbox{Re}\, a^0_{++--}| \simeq 0$ are reached.} \end{figure} A remarkable feature emerging from \fig{FermScat} is that, for e.g.~the asymptotic value $y = \sqrt{8 \pi}$, tree-level unitarity is violated already at scales not far from the resonance at $M_S = 750$ GeV. In particular (cf.~left panel in \fig{FermScat}), the $t$-channel contribution in $|\mbox{Re}\, a^0_{++++}|$ has a non-negligible effect at low energy, so that the maximal violation of unitarity turns out to be at scales not far from threshold. Conversely (cf.~right panel in \fig{FermScat}), the $s$- and $t$-channel contributions tend to cancel each other in $|\mbox{Re}\, a^0_{++--}|$. Hence, due to the subleading contribution of the $|\mbox{Re}\, a^0_{++--}|$ partial wave in the whole relevant kinematical region, the highest eigenvalue of $|\mbox{Re}\, a^0|$ is always dominated by $|\mbox{Re}\, a^0_{++++}|$. In \fig{FS_yvssqrts_pppp} we show the tree-level unitarity bound in the approximation where it is dominated by $|\mbox{Re}\, a^0_{++++}|$, for the three reference values $m_\psi = 250$, 400 and 1000 GeV. \begin{figure}[!ht] \begin{center} \includegraphics[width=.32\textwidth]{FS_Bound_250}~~ \includegraphics[width=.32\textwidth]{FS_Bound_400}~~ \includegraphics[width=.32\textwidth]{FS_Bound_1000} \end{center} \caption{\label{FS_yvssqrts_pppp} Saturation of the tree-level unitarity bound, $|\mbox{Re}\, a^0_{++++}|=1/2$, in the $(\sqrt{s},y)$ plane for $M_S = 750$ GeV and the three reference values $m_\psi = 250$, 400 and 1000 GeV. Dashed, dotted and full (red) lines denote respectively the $s$-, $t$-channel and full contribution to the partial wave. The light-green shaded area in the first plot corresponds to the region where $\Gamma_S/M_S > 10 \%$, while the grey-level vertical bands are contours of possible finite width effects defined in \eq{cutsa0} with $\alpha=3$, 4, 5.
The dashed (black) horizontal line indicates the asymptotic value $y = \sqrt{8 \pi} \simeq 5$, while the full (black) line is the perturbativity bound obtained from the RGE criterion $\beta_y / y < 1$ (cf.~\eq{RGEbound}). } \end{figure} The above discussion prompts us to investigate resonance width effects, which can also become important very close to the scattering poles and effectively regulate the formally diverging tree-level amplitudes. Since such effects necessarily go beyond the tree-level approximation (they can be viewed as the absorptive part of the resummed self-energy contributions of $S$), we do not attempt to include them explicitly.\footnote{For a different approach see Refs.~\cite{Dawson:1988va,Dawson:1989up}.} Instead, we superimpose contours of constant $s$ (in shades of grey) where the (on-shell) width effects parametrized as\footnote{For a similar approach see Refs.~\cite{SchuesslerDiplom,Schuessler:2007av,Betre:2014fva}.} \begin{equation} \label{cutsa0} \alpha = \frac{ |s-M_S^2|}{\Gamma_S M_S}\,, \end{equation} are expected to become important. Unitarity constraints derived in such regions cannot be considered meaningful. The parameter $\alpha$ in \eq{cutsa0} can be viewed as a measure of the relative error $\Delta$ introduced by using the tree-level propagator in the squared amplitude instead of one corrected in a Breit-Wigner approximation. In particular, we have $\alpha = \sqrt{1/\Delta-1}$. So, for example, $\alpha = 3$ corresponds to $\Delta = 10\%$. For concreteness we fix $\Gamma_S/M_S=0.10$. Note that due to the scaling of \eq{cutsa0}, smaller $S$ decay widths can only lead to more stringent constraints (derived closer to the resonance poles). The bounds derived in this way can thus be considered conservative.
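The relation $\alpha = \sqrt{1/\Delta-1}$ can be verified with a few lines of Python (a minimal sketch; the $10\%$ relative width is the benchmark value used in the text), comparing the tree-level propagator squared with a fixed-width Breit--Wigner one.

```python
import numpy as np

MS, Gamma = 750.0, 75.0  # GeV; Gamma_S / M_S = 10% benchmark

def delta(s):
    """Relative error of the tree-level propagator squared,
    1/(s-M^2)^2, w.r.t. the Breit-Wigner form 1/((s-M^2)^2 + M^2 Gamma^2)."""
    tree = 1.0 / (s - MS**2) ** 2
    bw = 1.0 / ((s - MS**2) ** 2 + (MS * Gamma) ** 2)
    return (tree - bw) / tree

alpha = 3.0
s = MS**2 + alpha * Gamma * MS        # point where |s - M^2| = alpha Gamma M
assert np.isclose(delta(s), 0.10)     # alpha = 3 corresponds to Delta = 10%
assert np.isclose(alpha, np.sqrt(1 / delta(s) - 1))
```

The identity $\Delta = 1/(1+\alpha^2)$ is exact in this parametrization, so the check is independent of the chosen width.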
For $m_\psi = 250$ GeV, $S$ can directly decay into $\psi \overline{\psi}$, thus giving the following contribution to the total decay rate \begin{equation} \label{GammaSfermion} \Gamma_S = \frac{y^2}{8\pi} M_S \left( 1 - \frac{4 m^2_\psi}{M_S^2} \right)^{3/2} \, . \end{equation} In fact, the requirement $\Gamma_S/M_S < 10 \%$ is always more constraining than the tree-level unitarity bound whenever the $s$-pole resonance is above threshold, $M_S > 2 m_\psi$ (cf.~shaded light-green region in the first plot of \fig{FS_yvssqrts_pppp}). On the other hand, for cases where the $s$-pole resonance is below threshold, tree-level unitarity is violated (for e.g.~the asymptotic value $y = \sqrt{8 \pi} \simeq 5$) above $1.2$ TeV (for $m_\psi = 400$ GeV) and $2.2$ TeV (for $m_\psi = 1000$ GeV). Importantly, in these cases both energies lie safely away from the region where resonance width effects can become relevant. It is interesting to compare the tree-level unitarity bounds in \fig{FS_yvssqrts_pppp} with those obtained via the RGE criterion \cite{Goertz:2015nkp} \begin{equation} \label{RGEbound} \frac{\beta_y}{y} = \frac{5 y^2}{16 \pi^2} < 1 \, . \end{equation} The latter agrees up to an $\mathcal{O}(1)$ factor with the bound based on tree-level unitarity in the asymptotic high-energy regime, $y < \sqrt{8 \pi}$. Finally, we note that in addition to $\psi\bar \psi$ scattering, in bounding tree-level unitarity within the fermionic mediator model one can also consider other elastic channels, such as $\psi S$ or $\psi \psi$. It turns out, however, that the corresponding $J=0$ partial wave amplitudes vanish in the $\sqrt{s} \to \infty$ limit and also do not receive possible enhancements due to nearby $s$-channel resonance poles, thus leading to no additional constraints.
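As a numerical illustration of the interplay between the width requirement and the unitarity bound in the above-threshold case, the following sketch (plain Python; it assumes the standard two-body phase-space factor $(1-4m_\psi^2/M_S^2)^{3/2}$ in $\Gamma_S$) checks that the coupling saturating $\Gamma_S/M_S = 10\%$ for $m_\psi = 250$~GeV indeed lies below the asymptotic unitarity bound $\sqrt{8\pi}$.

```python
import numpy as np

MS, mpsi = 750.0, 250.0  # GeV (above-threshold case, M_S > 2 m_psi)

# Asymptotic unitarity bound: |Re a0| = y^2/(16 pi) <= 1/2  ->  y <= sqrt(8 pi)
y_unit = np.sqrt(8 * np.pi)

# Coupling saturating Gamma_S / M_S = 10%, from
# Gamma_S = y^2/(8 pi) M_S (1 - 4 m_psi^2 / M_S^2)^(3/2)
beta3 = (1 - 4 * mpsi**2 / MS**2) ** 1.5
y_width = np.sqrt(0.10 * 8 * np.pi / beta3)

# Above threshold, the width requirement is the stronger constraint
assert y_width < y_unit
```

This reproduces the statement made above: the light-green region $\Gamma_S/M_S > 10\%$ is reached well before tree-level unitarity is violated.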
\subsection{Single scalar case} \label{scalarmed} Let us next consider the scalar resonance $S$ interacting with a complex scalar field $\phi$ via \begin{equation} \label{ASpps} \mathcal{L}_I \supset - A S \phi^* \phi \, , \end{equation} where $A$ is a dimensionful coupling and the masses of $S$ and $\phi$ are denoted as $M_S$ and $m_\phi$, respectively. The amplitude for the $\phi \phi^* \to \phi \phi^*$ scattering reads \begin{equation} \mathcal{T}_{\phi \phi^* \to \phi \phi^*} = - A^2 \left( \frac{1}{s - M_S^2} + \frac{1}{t - M_S^2} \right) \, . \end{equation} Correspondingly, the $J=0$ partial wave is found to be \begin{equation} \label{a0scalar} a^0_{\phi \phi^* \to \phi \phi^*} = - A^2 \frac{\sqrt{s (s - 4 m_\phi^2)}}{16 \pi s} \left( \frac{1}{s - M_S^2} - \frac{\log\frac{s - 4 m_\phi^2 + M_S^2}{M_S^2}}{s - 4 m_\phi^2} \right) \, , \end{equation} whose behaviour is shown in the left (right) panel of \fig{SS_avssqrts} for the reference values $M_S = 750$ GeV, $m_\phi = 400$ GeV ($1000$ GeV) and $A/M_S = 5$ ($10$). \begin{figure}[!ht] \begin{center} \includegraphics[width=.42\textwidth]{SS_avssqrts_400}~~~~ \includegraphics[width=.42\textwidth]{SS_avssqrts_1000} \end{center} \caption{\label{SS_avssqrts} Full kinematical dependence of $|\mbox{Re}\, a^0_{\phi \phi^* \to \phi \phi^*}|$, for the reference values $M_S = 750$ GeV, $m_\phi = 400$ GeV and $A/M_S = 5$ (left panel). Same for $m_\phi = 1000$ GeV and $A/M_S = 10$ (right panel). Dashed, dotted and full (red) lines represent respectively $s$-, $t$-channel and the full contribution to the partial wave. Asymptotically, for $\sqrt{s} \gg M_S, m_\phi$, $|\mbox{Re}\, a^0_{\phi \phi^* \to \phi \phi^*}|$ approaches zero for any value of the coupling $A$.} \end{figure} Note that, in contrast to the fermionic mediator case, the unitarity bound is never relevant in the high-energy regime $\sqrt{s} \gg M_S, m_\phi$.
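The asymptotic decoupling of the bound can be verified directly from \eq{a0scalar}; the sketch below (plain Python; reference values as in the left panel of \fig{SS_avssqrts}) evaluates the partial wave and checks its falloff far above all poles and thresholds.

```python
import numpy as np

MS, mphi, AoM = 750.0, 400.0, 5.0   # GeV, and A/M_S as in the left panel
A = AoM * MS

def a0(s):
    """J=0 partial wave of phi phi* -> phi phi* (tree level, s- and t-channel)."""
    kin = np.sqrt(s * (s - 4 * mphi**2)) / (16 * np.pi * s)
    s_ch = 1.0 / (s - MS**2)
    t_ch = -np.log((s - 4 * mphi**2 + MS**2) / MS**2) / (s - 4 * mphi**2)
    return -A**2 * kin * (s_ch + t_ch)

# The partial wave vanishes asymptotically (relevant operator, ~1/s falloff) ...
assert abs(a0(1e10**2)) < 1e-6
# ... and keeps decreasing once far above all thresholds and poles
assert abs(a0(5000.0**2)) > abs(a0(50000.0**2))
```

Consistently with the figure, $|a^0|$ is only sizeable in the vicinity of the $s$- and $t$-channel poles.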
This situation is expected, since the scalar interaction in Eq.~\eqref{ASpps} is a relevant operator, whose tree-level contribution to $a^0$ vanishes as $1/s$ in the $s\to\infty$ limit. Thus tree-level unitarity in this case cannot be used to bound the validity of the leading order perturbative description at high energies. It can nonetheless identify problematic kinematical regions in the vicinity of the scattering poles. \fig{SS_AoMvssqrts} shows the unitarity bound for the three reference values $m_\phi = 250$, 400 and 1000 GeV. \begin{figure}[!ht] \begin{center} \includegraphics[width=.32\textwidth]{SS_Bound_250}~~ \includegraphics[width=.32\textwidth]{SS_Bound_400}~~ \includegraphics[width=.32\textwidth]{SS_Bound_1000} \end{center} \caption{\label{SS_AoMvssqrts} Saturation of the tree-level unitarity bound, $|\mbox{Re}\, a^0_{\phi \phi^* \to \phi \phi^*}|=1/2$, in the $(\sqrt{s},A/M_S)$ plane for $M_S = 750$ GeV and the three reference values $m_\phi = 250$, 400 and 1000 GeV. Dashed, dotted and full (red) lines represent respectively $s$-, $t$-channel and the full contribution to the partial wave. The light-green shaded area in the first plot corresponds to the region where $\Gamma_S/M_S > 10 \%$, while the grey-level vertical bands are the cuts due to finite width effects defined in \eq{cutsa0} with $\alpha=3$, 4, 5. The full (black) line is the perturbativity bound obtained from the finite trilinear vertex correction $\Delta A / A < 1$ (cf.~\eq{pertboundfinite}). } \end{figure} For $m_\phi = 250$ GeV, the $S \to \phi \phi^*$ decay channel contributes to the total width of $S$ via \begin{equation} \label{GammaSscalar} \Gamma_S = \frac{1}{16 \pi} \frac{A^2}{M_S} \sqrt{1-\frac{4 m_\phi^2}{M_S^2}} \, .
\end{equation} Analogously to the fermionic case, whenever the $s$-pole resonance is above threshold, $M_S > 2 m_\phi$, the requirement $\Gamma_S/M_S < 10 \%$ is always more constraining than the tree-level unitarity bound (cf.~light-green shaded area in the first plot of \fig{SS_AoMvssqrts}). Below threshold, the issue of the $s$-pole resonance width is treated in a similar way as in the fermionic case, by identifying and avoiding kinematical regions in $\sqrt{s}$ via \eq{cutsa0} where finite width effects can become important. For $m_{\phi}=400$ $(1000)$ GeV, tree-level unitarity is then violated for values of $A/M_S \gtrsim 6.6$ ($11$), at scales of $\sqrt{s} \simeq 920$ GeV ($2.2$ TeV). Comparing the above tree-level unitarity bound with a complementary perturbativity criterion, we notice that in this case the RGEs cannot be used: since $A$ is associated with a relevant operator, for dimensional reasons it cannot enter its own beta function alone. However, $A$ does give a finite perturbative correction to the trilinear scalar vertex $S\phi\phi^*$. By evaluating the one-loop correction at zero external momentum we find \begin{equation} \label{pertboundfinite} \Delta A = \frac{1}{16 \pi^2} \frac{A^3}{m_\phi^2 - M^2_S} \left( 1+ \frac{M^2_S \log{\frac{M^2_S}{m_\phi^2}} }{m_\phi^2 - M^2_S} \right) \, . \end{equation} In the $m_\phi \gg M_S$ limit we have \begin{equation} \Delta A = \frac{1}{16 \pi^2} \frac{A^3}{m^2_\phi} + \mathcal{O}\left(\frac{M_S}{m_\phi}\right)^2 \, , \end{equation} while for $M_S \gg m_\phi$ \begin{equation} \Delta A = \frac{1}{16 \pi^2} \frac{A^3}{M_S^2} \left ( 1 + \log{\frac{m_\phi^2}{M^2_S}} \right) + \mathcal{O}\left(\frac{m_\phi}{M_S}\right)^2 \, . \end{equation} We can hence define a perturbativity criterion via the relation $\Delta A / A < 1$.
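The finite correction \eq{pertboundfinite} can be checked numerically; the sketch below (plain Python; the parameter values are illustrative) verifies the heavy-$\phi$ limit and that $\Delta A / A < 1$ indeed translates into $A \lesssim 4\pi\, m_\phi$ in that regime.

```python
import numpy as np

def dA(A, mphi, MS):
    """One-loop correction to the S phi phi* vertex at zero external momentum."""
    r = mphi**2 - MS**2
    return A**3 / (16 * np.pi**2 * r) * (1 + MS**2 * np.log(MS**2 / mphi**2) / r)

A, MS = 1.0, 750.0
mphi = 100 * MS                       # heavy-mediator limit m_phi >> M_S

# Limit: Delta A -> A^3 / (16 pi^2 m_phi^2)
approx = A**3 / (16 * np.pi**2 * mphi**2)
assert np.isclose(dA(A, mphi, MS), approx, rtol=2e-3)

# Delta A / A < 1 then amounts to A / m_phi < 4 pi (saturated here):
Amax = 4 * np.pi * mphi
assert np.isclose(abs(dA(Amax, mphi, MS)) / Amax, 1.0, rtol=1e-2)
```

The same exercise in the opposite hierarchy, $M_S \gg m_\phi$, reproduces the logarithmically enhanced limit quoted in the text up to the overall sign convention of $\Delta A$.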
In either of the two limits above, the bound $\Delta A / A < 1$ is approximately given by\footnote{A similar estimate of the onset of the non-perturbative regime, based on naive dimensional analysis, has been suggested in \cite{Baratella:2016daa}.} \begin{equation} \label{approxDeltaoA} \frac{A}{\text{max} \, \{ m_\phi, M_S \}} < 4 \pi \, , \end{equation} which agrees within an $\mathcal{O}(1)$ factor with the bound based on tree-level unitarity (cf.~also \fig{SS_AoMvssqrts}). We also note that a conceptually different bound could be inferred by requiring that $A$ does not destabilize the $d=2$ operators too much.\footnote{This is essentially a hierarchy problem, not related to perturbativity.} For instance, by inspecting the beta function of $M^2_S$ (see e.g.~\cite{Martin:1993zk}) \begin{equation} \beta_{M^2_S} = \frac{A^2}{8 \pi^2} \, , \end{equation} we might require $\beta_{M^2_S} / M^2_S = A^2 / (8 \pi^2 M^2_S) < 1$, which yields a bound very similar to that in \eq{approxDeltaoA}. On the other hand, an interesting feature of the mass-hierarchy bound is that, unlike the one obtained via the finite vertex correction, it gets enhanced by the multiplicity $N$ of fields $\phi$ coupling to $S$, via the replacement $A^2 \to N A^2$. Finally, in addition to the $\phi \phi^*$ channel, one could also consider the $\phi S$ or $\phi \phi$ scatterings. However, for reasons similar to the fermionic case, these processes do not lead to additional constraints and we do not discuss them further. \subsection{Generalization in flavor space} The results of the previous two subsections can be readily generalized to the case of $N$ copies of the mediators. The same conclusions apply for fermion and scalar mediators, but for definiteness we are going to explicitly discuss them for fermions only.
To this end, let us consider $N$ copies of fermion fields, $\psi_i$ ($i=1,\ldots,N$), interacting via the Lagrangian term \begin{equation} \mathcal{L}_I \supset - y_{ij} S \overline{\psi}_i \psi_j \, , \end{equation} where $y_{ij}$ is understood in the mass basis. Let us then assume some flavor structures for $y_{ij}$ and study the corresponding form of the unitarity bound: \begin{enumerate} \item $y_{ij} = y$ ($\forall$ $i$ and $j$). In such a case the amplitude matrix in \eq{TFSfull} gets generalized into \begin{equation} \mathcal{T} \otimes J_N \, , \end{equation} where $\otimes$ denotes the Kronecker product and $J_N$ is the $N$-dimensional matrix whose entries are all equal to 1. Since the only non-zero eigenvalue of $J_N$ is equal to $N$ (recall that $J_N$ is a rank-1 matrix), all the results of the previous section are readily generalized by the replacement $y \to \sqrt{N} y$. \item $y_{ij} = y \delta_{ij}$. This case corresponds to the weakly-coupled models discussed at the beginning of \sect{wcmodels}. The Lagrangian features an extra $U(N)$ global symmetry which can be conveniently used to label the irreducible sectors of the $\psi \overline{\psi} \to \psi \overline{\psi}$ scattering amplitudes. Since $N \otimes \bar{N} = \mathbf{1} \oplus \text{Adj}_N$, a general two-particle state $|\psi_i \overline{\psi}_j \rangle$ can be decomposed into a singlet channel \begin{equation} |\psi \overline{\psi}\rangle_\mathbf{1} = \frac{1}{\sqrt{N}} \sum_i |\psi_i \overline{\psi}_i\rangle \, , \end{equation} and an adjoint one \begin{equation} |\psi \overline{\psi}\rangle_\mathbf{\text{Adj}}^A = T^A_{ij} |\psi_i \overline{\psi}_j\rangle \, , \end{equation} where $T^A$, with $A = 1, \ldots, N^2-1$, are $SU(N)$ generators in the fundamental representation (in the normalization $\mbox{Tr}\, T^A T^B = \delta^{AB}$) and we properly took into account the normalization of the states.
Due to the specific flavor structure, $y_{ij} = y \delta_{ij}$, one has \begin{equation} \langle \psi_k \overline{\psi}_l |S|\psi_i \overline{\psi}_j\rangle = i \mathcal{T}_s \, \delta_{ij} \delta_{kl} + i \mathcal{T}_t \, \delta_{ik} \delta_{jl} \, , \end{equation} where $\mathcal{T}_s$ and $\mathcal{T}_t$ denote respectively the $s$- and $t$-channel contribution to the scattering amplitudes in \eq{TFSfull}. Let us hence discuss in turn the non-zero scattering amplitudes. For the singlet-singlet channel one finds \begin{equation} \label{singletchamp} _\mathbf{1} \langle \psi \overline{\psi} |S|\psi \overline{\psi}\rangle_\mathbf{1} = \frac{1}{N} \sum_{ik} \langle \psi_k \overline{\psi}_k |S|\psi_i \overline{\psi}_i\rangle = \frac{1}{N} \sum_{ik} (i \mathcal{T}_s \, \delta_{ii} \delta_{kk} + i \mathcal{T}_t \, \delta_{ik} \delta_{ik}) = i \mathcal{T}_s \, N + i \mathcal{T}_t \, . \end{equation} In the asymptotic limit, $\sqrt{s} \gg M_S, m_\psi$, the $t$-channel decouples and one recovers the same multiplicity suppression in the unitarity bound, $\sqrt{N} y \leq \sqrt{8 \pi}$, as in case 1. The results in the low-energy region are instead displayed in \fig{FSflav_yvssqrts_pppp}, which shows the tree-level unitarity bound in the $(\sqrt{s},\sqrt{N}y)$ plane, for different values of $N$. Notice that, in this normalization, the $s$-channel contribution is not affected by $N$, while the $t$-channel contribution is suppressed like $1/N$ (cf.~\eq{singletchamp}). Hence, for large enough $N$ the unitarity bound coincides with the $s$-channel one and becomes relevant only in the asymptotic region $\sqrt{s} \gg M_S, m_\psi$. \begin{figure}[!ht] \begin{center} \includegraphics[width=.40\textwidth]{FS_Bound_400_Flavor} \end{center} \caption{\label{FSflav_yvssqrts_pppp} Tree-level unitarity bound in the $(\sqrt{s},\sqrt{N}y)$ plane for the reference values $M_S = 750$ GeV and $m_\psi = 400$ GeV. 
The dashed (red) line denotes the $s$-channel contribution (independent of $N$ in this normalization). The full (red) lines, labelled by the value of $N=1,2,3,4$, denote instead the full contribution. The value $y = \sqrt{8 \pi} \simeq 5$, indicated by the dashed (black) horizontal line, is reached asymptotically.} \end{figure} The other non-zero scattering amplitude is the adjoint-adjoint one, which is found to be \begin{align} \label{adjointchamp} ^{\ \ B}_\mathbf{\text{Adj}} \langle \psi \overline{\psi} |S|\psi \overline{\psi}\rangle_\mathbf{\text{Adj}}^A &= T^{B\dag}_{kl} T^A_{ij} \langle \psi_k \overline{\psi}_l |S|\psi_i \overline{\psi}_j\rangle = T^{B}_{lk} T^A_{ij} (i \mathcal{T}_s \, \delta_{ij} \delta_{kl} + i \mathcal{T}_t \, \delta_{ik} \delta_{jl}) \nonumber \\ &= \mbox{Tr}\, (T^{B}) \mbox{Tr}\, (T^A) (i \mathcal{T}_s) + \mbox{Tr}\, (T^{B} T^A) (i \mathcal{T}_t) = i \mathcal{T}_t \, \delta^{AB} \, . \end{align} Hence, we conclude that the adjoint-adjoint scattering is phenomenologically less relevant: only the subleading $t$-channel contributes, without the high-multiplicity enhancement. \item $y_{ij} = y_i \delta_{ij}$. This is the most general case relevant for a di-boson resonance, for which the mediators' couplings enter the partial width $\Gamma_{\gamma\gamma}$ as $\abs{\sum_i y_i}^2$. On the other hand, the unitarity bound on the $2\to 2$ scatterings applies to the combination $\sum_i \abs{y_i}^2$. Hence, at fixed value of $\abs{\sum_i y_i}^2$, the sum that enters the amplitude for the $2 \to 2$ scattering is minimized when $y_i = y$ ($\forall$ $i$). In this way the bound from unitarity is minimized too. \end{enumerate} Finally, we briefly discuss the case where the mediators carry extra gauge quantum numbers, e.g.~color. This exactly matches the identity-$y$ scenario and thus all the previous results carry over.
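The flavor-space enhancements used in cases 1 and 2 amount to simple linear-algebra statements, which the following sketch (plain NumPy; $N$ and the amplitude values are illustrative) verifies: $J_N$ has a single non-zero eigenvalue equal to $N$, and the flavor-singlet projection of the diagonal-coupling amplitude reproduces the combination $N\mathcal{T}_s + \mathcal{T}_t$ of \eq{singletchamp}.

```python
import numpy as np

N = 4
J = np.ones((N, N))                    # J_N: rank-1 matrix of all 1's
evals = np.sort(np.linalg.eigvalsh(J))
assert np.isclose(evals[-1], N)        # single non-zero eigenvalue equals N
assert np.allclose(evals[:-1], 0.0)    # the remaining N-1 eigenvalues vanish

# Case 2 (y_ij = y delta_ij): <psi_k psibar_l|S|psi_i psibar_j>
# = Ts d_ij d_kl + Tt d_ik d_jl, projected onto the flavor-singlet state
Ts, Tt = 1.3, -0.2                     # illustrative s-/t-channel amplitudes
amp = np.zeros((N, N, N, N))
for i in range(N):
    for j in range(N):
        for k in range(N):
            for l in range(N):
                amp[k, l, i, j] = Ts * (i == j) * (k == l) + Tt * (i == k) * (j == l)
singlet = sum(amp[k, k, i, i] for i in range(N) for k in range(N)) / N
assert np.isclose(singlet, N * Ts + Tt)
```

For large $N$ the singlet channel is thus $s$-channel dominated, as stated in the text.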
In particular, given an $N_R$-dimensional irreducible representation of the gauge group, the state corresponding to the gauge singlet combination always features an $N_R$ enhancement in the $s$-channel. \subsection{Application to mediator models} We are now ready to discuss the implications of the unitarity bounds on the required partial widths needed to reproduce any given $\gamma\gamma$ signal at the LHC. In particular, in the case of $gg$-initiated production processes (at $M_S=750$~GeV) the constraints to be fulfilled are the following: \begin{itemize} \item Fermion mediators (model in \eq{LNF}): \begin{align} \label{NEFcond} N_E y_E^2 &< 8 \pi \, , \\ \label{NQFcond} 3 N_Q y_Q^2 &< 8 \pi \, , \\ \label{NFcond} N^2_E N^2_Q y_E^2 y_Q^2 Q^4_E &= 6.6 \times 10^4 \left(\frac{\sigma_{\gamma\gamma}}{\rm fb}\right) \left( \frac{\Gamma_S/M_S}{0.1} \right) \, . \end{align} The flavor and color enhancements of the bounds in \eqs{NEFcond}{NQFcond} hold in the asymptotic region $\sqrt{s} \gg M_S, m_{E,Q}$, where the partial wave is $s$-channel dominated, while in deriving \eq{NFcond} we used \eq{xsecgg} and \eq{GammaNF}. \item Scalar mediators (model in \eq{LNS}): \begin{align} \label{NEScond} N_{\tilde{E}} \left( \frac{A_E}{M_S} \right)^2 & < 25 \, , \\ \label{NQScond} 3 N_{\tilde{Q}} \left( \frac{A_Q}{M_S} \right)^2 & < 400 \, , \\ \label{NScond} N^2_{\tilde{E}} N^2_{\tilde{Q}} \left( \frac{A_E}{M_S} \right)^2 \left( \frac{A_Q}{M_S} \right)^2 Q^4_{\tilde{E}} &= 4.5 \times 10^7 \left(\frac{\sigma_{\gamma\gamma}}{\rm fb}\right) \left( \frac{\Gamma_S/M_S}{0.1} \right) \, . \end{align} The values in \eqs{NEScond}{NQScond} refer to the $s$-channel bounds of \fig{SS_AoMvssqrts}, for which the flavor and color enhancements apply, while in deriving \eq{NScond} we have used \eq{xsecgg} and \eq{GammaNS}.
On the other hand, the following constraints (obtained by looking at the full partial wave amplitude in \fig{SS_AoMvssqrts}) \begin{equation} \left( \frac{A_E}{M_S} \right)^2 < 44 \, , \qquad \left( \frac{A_Q}{M_S} \right)^2 < 120 \, , \end{equation} hold irrespective of the number of flavor and color copies. Note that the bounds on $A_Q$ are weaker than those on $A_E$ because the partial wave amplitudes decrease rapidly for heavy mediators (away from the poles). Thus, contrary to the fermionic case, unitarity bounds on these scalar couplings crucially depend on the assumed mediator masses. Nevertheless, the bounds cannot be circumvented by decoupling the mediator masses (for fixed $M_S$) since the decoupling of the partial rates in \eqs{PRgammagamma}{PRgg} is faster than that of the partial wave amplitude (cf.~\eq{a0scalar}). \end{itemize} In the case of fermion mediators we have 5 parameters ($y_E$, $y_Q$, $N_E$, $N_Q$ and $Q_E$) entering the expression in \eq{NFcond} corresponding to a particular di-photon signal strength. Hence, a possible way to display the tree-level unitarity bounds in \eqs{NEFcond}{NQFcond} is to choose a value of $Q_E$ and fix $y_Q=y_E$. \fig{Excl_FSQ} (upper side plots) displays iso-curves reproducing the benchmark signal of $\sigma_{\gamma\gamma}=1$~fb and $\Gamma_S / M_S = 0.1$ in the $N_Q$ vs.~$N_E$ plane and the associated perturbativity bounds for different values of $Q_E$. A very similar discussion applies to the case of scalar mediators (cf.~lower side plots).
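The qualitative conclusion of this exercise can be reproduced with a short estimate (plain Python; it simply combines \eqs{NEFcond}{NQFcond} with \eq{NFcond} at saturated unitarity, for the benchmark $\sigma_{\gamma\gamma}=1$~fb and $\Gamma_S/M_S = 0.1$):

```python
import numpy as np

# RHS of NFcond for sigma_gammagamma = 1 fb and Gamma_S/M_S = 0.1
target = 6.6e4            # = N_E^2 N_Q^2 y_E^2 y_Q^2 Q_E^4

def min_NE_NQ(QE):
    """Minimal product N_E * N_Q needed when the unitarity bounds
    N_E y_E^2 < 8 pi and 3 N_Q y_Q^2 < 8 pi are both saturated."""
    max_per_copy = (8 * np.pi) * (8 * np.pi / 3)  # (N_E y_E^2)(N_Q y_Q^2) at saturation
    return target / (max_per_copy * QE**4)

assert min_NE_NQ(1) > 300   # O(100) mediator copies needed for unit EM charge
assert min_NE_NQ(3) < 5     # exotically large charges bring it down to a few
```

This matches the message of \fig{Excl_FSQ}: only large EM charges and/or a large multiplicity of mediator copies can accommodate the benchmark signal.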
\begin{figure}[ht] \begin{center} \includegraphics[width=.32\textwidth]{Excl_F_Q1} \includegraphics[width=.32\textwidth]{Excl_F_Q2} \includegraphics[width=.32\textwidth]{Excl_F_Q3} \includegraphics[width=.32\textwidth]{Excl_S_Q1} \includegraphics[width=.32\textwidth]{Excl_S_Q2} \includegraphics[width=.32\textwidth]{Excl_S_Q3} \end{center} \caption{\label{Excl_FSQ} Contours of constant Yukawa couplings $y_Q=y_E$ in the $N_Q$ vs.~$N_E$ plane (upper side plots) and constant scalar trilinears $A_Q / M_S = A_E / M_S$ in the $N_{\tilde{Q}}$ vs.~$N_{\tilde{E}}$ plane (lower side plots) for parameter points predicting a $\sigma_{\gamma\gamma}=1$~fb di-photon resonance with $M_S = 750$ GeV and $\Gamma_S / M_S = 0.1$ (cf.~\eq{NFcond} and \eq{NScond}). The different cases are associated with values of the EM charge of $Q_E$ and $Q_{\tilde{E}}$ from 1 to 3, while the exclusion regions correspond to the tree-level unitarity bounds in \eqs{NEFcond}{NQFcond} (upper side plots) and \eqs{NEScond}{NQScond} (lower side plots).} \end{figure} As is evident from \fig{Excl_FSQ}, the only possibilities to accommodate the benchmark di-photon signal within weakly-coupled models are either via exotically-large EM charges\footnote{To this end, it would be relevant to consider scattering amplitudes providing unitarity constraints on the EM charge of the colorless mediators, e.g.~via hypercharge-mediated scatterings. However, unitarity arguments cannot be straightforwardly applied in the presence of long-range forces, since the amplitudes are plagued by IR singularities (cf.~the case of Bhabha scattering in the forward region \cite{Peskin:1995ev}).} and/or a very large number of mediator copies. Both of these options are also bounded by the usual RGE arguments, which are however not sufficient to exclude them (see e.g.~\cite{Goertz:2015nkp}). We finally discuss the case of the model in \eq{VLmixing} where the production of $S$ is due to $b\overline{b}$-initiated processes.
Using \eq{xsecbbbar} and \eq{Sbb} we obtain \begin{equation} \label{VLmixcond} \left(\frac{\sin \theta^L_{\mathcal B b}}{0.05}\right)^2 \tilde y_b^2 = 77 \left(\frac{\sigma_{\gamma\gamma}}{\rm fb}\right) \left( \frac{\Gamma_S / M_S}{0.1} \right) \left( \frac{\Gamma_{\gamma \gamma} / M_S}{10^{-4}} \right)^{-1} \, , \end{equation} to be confronted with the tree-level unitarity bound \begin{equation} \tilde{y}_b^2 < \frac{8 \pi}{3} \, , \end{equation} where we also took into account the color enhancement of the $s$-channel. In this case, the perturbative unitarity constraint is very severe (see \fig{bbar_bounds}). In particular, for our benchmark it excludes the possibility that $S\to b\bar b$ decays saturate a large total decay width. \begin{figure}[!ht] \begin{center} \includegraphics[width=.40\textwidth]{bbar_bounds} \end{center} \caption{\label{bbar_bounds} Contours of constant $\Gamma_{b\overline{b}}/M_S$ in the $({\sin\theta^L_{\mathcal{B}b}}, \tilde{y}_b)$ plane. The values of $\Gamma_{b\overline{b}}/M_S$ are varied between $0.1$ and $0.001$. The vertical (grey) band denotes the 1-$\sigma$ upper bound on $\sin\theta^L_{\mathcal{B}b}$, while the full (red) line is the tree-level unitarity bound. } \end{figure} \section{Conclusions} \label{concl} Perturbative unitarity is a powerful theoretical tool for inferring the range of validity of a given EFT, with notable examples of applications both in the physics of strong and electroweak interactions. The continued interest in di-boson resonances at the LHC motivated us to investigate the implications of partial wave unitarity for the theoretical description of such possible signals both in the minimal EFT extension of the SM as well as in its renormalizable UV completions.
For the case of a TeV-scale scalar di-boson resonance observable at the LHC we have demonstrated, under some very basic and natural assumptions on the structure of the EFT (mainly that $S$ is a spin-0 SM gauge singlet and that the $\text{dim}=5$ operators in \eq{effLSM} are the most relevant ones for the decay of $S$), a potential violation of tree-level unitarity in the scattering of SM fields at energy scales of a few tens of TeV. One should stress, however, that in many models (both weakly and strongly coupled) predicting observable di-boson resonances, new states are typically predicted to lie much below our energy estimates. In a similar way, one can use perturbative unitarity to estimate the range of validity of perturbation theory in explicit renormalizable UV completions of the low-energy EFT and accordingly set perturbativity bounds on the relevant model couplings. Especially in the case of a large total $S$ width, the inferred bounds are typically very constraining, and in particular endanger the calculability of many weakly-coupled models in the literature. Interestingly, tree-level unitarity bounds are important not only at high energies but also close to thresholds of new physics. This is especially crucial for scalars interacting via relevant operators, since the corresponding unitarity bounds are always saturated at finite scattering energies relatively close to threshold. Other perturbativity criteria, such as those based on Landau poles, are only logarithmically sensitive to the energy scale and typically need a few decades of running before hitting the singularity of the Landau pole. Finally, we find that our perturbative bounds are sensitive not only to the strengths of the couplings ($y$) of the mediators to a di-boson resonance but also to the multiplicity $N$ of the mediator states.
For example, for fermions the bounds scale as $N y^2$, exhibiting the same 't Hooft scaling as the perturbative bounds obtained by analyzing the RGE flow of the couplings~\cite{Goertz:2015nkp}. We conclude that, in the event of an experimental observation of a scalar di-boson resonance at the LHC, our estimates, while unable to guarantee on-shell effects of additional new degrees of freedom at the LHC itself, would immediately imply the existence of additional phenomena within the energy reach of the next-generation 50--100~TeV hadron colliders, thus making a strong physics case for their construction. { \section*{Note added} While completing this paper we came across Ref.~\cite{Cynolter:2016jxv}. Though part of our work overlaps with it, we reach different conclusions. } \section*{Acknowledgments} We thank Ramona Gr\"{o}ber, Jacobo L\'opez-Pav\'on, David Marzocca, Christopher W.~Murphy, Enrico Nardi, and Filippo Sala for helpful discussions. The work of L.D.L.~is supported by the Marie Curie CIG program, project number PCIG13-GA-2013-618439. J.F.K. acknowledges the financial support from the Slovenian Research Agency (research core funding No.\ P1-0035).
\section{Introduction} \label{sec:intro} The anomalous magnetic moment of the muon, $a_\mu=(g_\mu-2)/2$, is a remarkable example of a quantity that can be studied with very high accuracy on both the experimental and the theoretical sides. The 0.5 ppm uncertainty of the current experimental value makes it possible to probe contributions from electromagnetic, strong and weak interactions. The Standard Model (SM) prediction has reached a comparable precision~\cite{Agashe:2014kda}, \begin{align} \label{eq:gm2res} \hspace*{-1.4cm} a_\mu^{\rm th} &= 116\,591\,803\,{(42)}\,(26)\,(01)~\cdot 10^{-11}~\,{[0.4\,{\rm ppm}]}\,,\nonumber\\ a_\mu^{\rm exp} &= 116\,592\,091\,(54)\,(33)~\cdot 10^{-11} ~~~\,{[0.5\,{\rm ppm}]}\,. \end{align} The deviation between theory and experiment currently amounts to a 3.6 $\sigma$ effect. The next generation of experiments at Fermilab~\cite{Carey:2009zzb} and J-PARC~\cite{Benayoun:2014tra} aims at a reduction of the uncertainty in $a_\mu$ by a factor of four. Such a precision will substantially enhance the sensitivity to physics beyond the SM. It is, however, equally important to examine the reliability of the current SM prediction and to attempt to reduce its uncertainty to the level of the forthcoming experimental results. The SM result has recently profited from an outstanding achievement in determining the QED contribution up to 5-loop order~\cite{Aoyama:2012wk}. Theory errors in eq.~(\ref{eq:gm2res}) arise from lowest-order hadronic (HLO), higher-order hadronic and electroweak contributions, respectively. The SM error is thus markedly dominated by QCD dynamics and, in particular, by HLO vacuum polarisation effects. The HLO vacuum polarisation contribution, $a_\mu^{\rm HLO}$, can be obtained by a dispersive approach that combines basic properties of the theory -- such as analyticity and unitarity -- with experimental input.
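As a quick numerical illustration (not part of the original analysis), combining the uncertainties quoted in eq.~(\ref{eq:gm2res}) in quadrature reproduces the quoted 3.6\,$\sigma$ tension:

```python
import math

# Central values and uncertainties quoted in eq. (gm2res), in units of 10^{-11}.
a_th, errs_th = 116591803.0, (42.0, 26.0, 1.0)
a_exp, errs_exp = 116592091.0, (54.0, 33.0)

sigma_th = math.sqrt(sum(e * e for e in errs_th))    # combined theory error
sigma_exp = math.sqrt(sum(e * e for e in errs_exp))  # combined experimental error
sigma_tot = math.hypot(sigma_th, sigma_exp)          # total error in quadrature

significance = (a_exp - a_th) / sigma_tot
print(f"deviation: {significance:.1f} sigma")        # ~3.6 sigma
```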
A collection of recent measurements~\cite{Davier:2010nc,Hagiwara:2011af,Benayoun:2012wc} of inclusive hadronic cross-sections, $\sigma(e^+e^- \to {\rm hadrons})$, has made it possible to reach a $0.6\%$ precision on the LO hadronic contribution, $a_\mu^{\rm HLO} = 6923(42) \cdot 10^{-11}$~\cite{Agashe:2014kda}. A persistent $\sim 3\sigma$ deviation between the analyses of the $\pi^+\pi^-$ channel by BaBar and KLOE has an impact on the SM prediction. This is being investigated by several experiments~\cite{Benayoun:2014tra}. Conversely, the tension in the results for $a_\mu^{\rm HLO}$ based on $e^+e^-$ and $\tau$ data has recently been reduced below the 2\,$\sigma$ level~\cite{Jegerlehner:2011ti,Benayoun:2012wc}. Since the dispersion relation results largely depend on experimental data, it is desirable to consider also an independent approach based on first principles. A determination of $a_\mu^{\rm HLO}$ along these lines can be achieved through lattice QCD. A number of studies~\cite{Blum:2002ii,Gockeler:2003cw,Aubin:2006xv,Feng:2011zk,DellaMorte:2011aa,Boyle:2011hu,Burger:2013jya} have demonstrated the potential of this approach. It is nonetheless still a considerable challenge for the lattice studies to reach the sub-percent accuracy of the dispersion relation result. There has recently been an intense activity in order to devise new ways of improving the accuracy of the lattice determinations of $a_\mu^{\rm HLO}$. \section{Lattice QCD Determinations of $a_\mu^{\rm HLO}$} \label{sec:latamu} The hadronic vacuum polarisation tensor, depending on the Euclidean momentum $Q$, reads \begin{equation}\label{eq:pol_tensor} \Pi_{\mu\nu}(Q)= \int d^4x \,e^{iQx} \,\langle J_\mu(x)J_\nu(0) \rangle \,, \end{equation} where the flavour singlet vector current is given by, \begin{equation} J_\mu(x)=\sum_{{\rm f}=u,\,d,\,s,\,c,\dots} \, Q_{\rm f} \, \overline \psi_{\rm f}(x)\gamma_\mu \psi_{\rm f}(x) \,.
\label{eq:current} \end{equation} $Q_{\rm f}$ is the electric charge of the quark flavour ${\rm f}$. Euclidean invariance and current conservation imply, \begin{equation} \Pi_{\mu\nu}(Q)=(Q_\mu Q_\nu - \delta_{\mu\nu} Q^2) \,\Pi(Q^2)\,. \label{eq:vpcont} \end{equation} The vacuum polarisation function (VPF) $\Pi(Q^2)$ can be decomposed into non-singlet and singlet contributions. The subtracted VPF, $\widehat{\Pi}(Q^2) = \Pi(Q^2)-\Pi(0)$, is free of ultraviolet divergences and can be convoluted with a known analytic kernel function $K(Q^2,m_\mu)$ to derive the {\it standard representation} for $a_\mu^{\rm HLO}$~\cite{deRafael:1993za,Blum:2002ii} currently being used on the lattice, \begin{equation} a_\mu^{\rm HLO} = 4\alpha^2 \, \int_{0}^{\infty} dQ^2 \, K(Q^2,m_\mu) \,\widehat{\Pi}(Q^2)\,, \label{eq:amulat} \end{equation} where $m_\mu$ is the muon mass. A comparison of lattice QCD determinations of $a_\mu^{\rm HLO}$~\cite{Feng:2011zk,DellaMorte:2011aa,Aubin:2006xv,Boyle:2011hu,Burger:2013jya} is shown in Fig.~\ref{fig:gm2comp}. \begin{figure}[t!] \centering \includegraphics[scale=0.69]{g-2_compare.pdf} \caption{Comparison of lattice determinations of $a_\mu^{\rm HLO}$~\cite{Feng:2011zk,DellaMorte:2011aa,Aubin:2006xv,Boyle:2011hu,Burger:2013jya}. The number of flavours in the sea is labelled by $N_{\rm f}$ while the flavour content in the valence sector, appearing in eq.~(\ref{eq:current}), is denoted by $u$,~$d$,~$s$ and $c$. The dispersion relation approach -- with a $0.6\%$ relative precision~\cite{Agashe:2014kda} -- is denoted by the yellow vertical band.} \label{fig:gm2comp} \end{figure} The present uncertainty from the lattice computations is larger than the $0.6\%$ precision of the dispersion relation approach. With the current accuracy, it is still challenging to isolate the relative contributions from dynamical strange and charm quarks.
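To make the role of eq.~(\ref{eq:amulat}) concrete, the following sketch evaluates the integral for a toy one-pole model of $\widehat{\Pi}(Q^2)$, using the analytic form of the kernel from ref.~\cite{Blum:2002ii}; the pole parameters ($\kappa$, $m_V$) are illustrative choices, not lattice input. The integrand is sharply peaked at very low $Q^2$, of order $m_\mu^2$:

```python
import math

ALPHA = 1.0 / 137.035999   # fine-structure constant
M_MU = 0.105658            # muon mass [GeV]

def kernel(s):
    """Standard-representation kernel K(Q^2, m_mu), s = Q^2 in GeV^2
    (analytic form of ref. Blum 2002)."""
    z = (math.sqrt(s * s + 4.0 * M_MU**2 * s) - s) / (2.0 * M_MU**2 * s)
    return M_MU**2 * s * z**3 * (1.0 - s * z) / (1.0 + M_MU**2 * s * z**2)

def pihat_toy(s, kappa=0.07, mv2=0.775**2):
    """Toy subtracted VPF: a single vector-meson pole (kappa, mv2 illustrative)."""
    return kappa * s / (mv2 + s)

# Map Q^2 = m_mu^2 t/(1-t) onto t in (0,1) and use the midpoint rule.
N = 4000
amu, s_peak, g_peak = 0.0, 0.0, 0.0
for i in range(N):
    t = (i + 0.5) / N
    s = M_MU**2 * t / (1.0 - t)
    jac = M_MU**2 / (1.0 - t)**2
    g = kernel(s) * pihat_toy(s)     # Q^2-space integrand K(Q^2) * Pihat(Q^2)
    amu += 4.0 * ALPHA**2 * g * jac / N
    if g > g_peak:
        g_peak, s_peak = g, s        # locate the peak of the Q^2-space integrand

print(f"a_mu(toy) = {amu:.2e}, integrand peak near Q^2 = {s_peak:.4f} GeV^2")
```

With these toy parameters the peak sits near $Q^2 \approx 0.003\,{\rm GeV}^2$, in line with the low-$Q^2$ dominance discussed in the text.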
However, the $s$ and $c$ valence contributions -- while being significantly smaller than those from $u,d$ quark flavours -- can be determined with relatively good precision~\cite{Burger:2013jya,Chakraborty:2014mwa}. The uncertainties of the lattice QCD results of $a_\mu^{\rm HLO}$ have multiple origins. In the next section we outline the main sources of errors affecting these computations and report about the recent proposals to address them. \section{Behaviour of the VPF at Low $Q^2$} \label{sec:lowq2} A crucial aspect of the computation of $a_\mu^{\rm HLO}$ is to constrain with accurate lattice data the $Q^2$ region where the integrand in eq.~(\ref{eq:amulat}) is large. In practice, this region is in the neighbourhood of $Q^2 \approx m_\mu^2/4 \approx 0.003\,{\rm GeV}^2$. However, this low-$Q^2$ regime poses serious problems for lattice studies based on an evaluation of $\Pi(Q^2)$ from eq.~(\ref{eq:vpcont}), since the transverse projector on the r.h.s.\ vanishes at $Q^2=0$. In finite volume with periodic boundary conditions, momenta are quantised in units of $2\pi/L$, where $L$ is the lattice size, so that the minimal non-zero momentum is $Q^2_{\rm min} = (2\pi/L)^2$. Directly probing the dominant region, $Q^2 \approx m_\mu^2/4$, would require values of $L \approx 20$\,fm that are far beyond what is achievable with present-day resources. Furthermore, in this small $Q^2$ regime, long-distance QCD effects induce large fluctuations on the VPF. To illustrate these effects an auxiliary observable, $\bar{a}_\mu^{\rm HLO}(Q^2_{\rm ref})$, is defined as follows, \begin{equation} \hspace*{-0.45cm} \bar{a}_\mu^{\rm HLO}(Q^2_{\rm ref}) = 4\alpha^2 \int_{Q^2_{\rm ref}}^{\infty} dQ^2 K(Q^2) \left[\Pi(Q^2) - {\Pi(Q^2_{\rm ref})}\right]. \label{eq:amubar} \end{equation} This quantity coincides with $a_\mu^{\rm HLO}$ in the limit $Q^2_{\rm ref} \to 0$. Fig.~\ref{fig:amubar} shows the dependence of $\bar{a}_\mu^{\rm HLO}$ on $Q^2_{\rm ref}$.
The $u,d$ valence quark contribution is dominated by the region $Q^2_{\rm ref} \lesssim 0.4\,{\rm GeV}^2$. Moreover, the relative error on $\bar{a}_\mu^{\rm HLO}$ increases when reducing $Q^2_{\rm ref}$. A clear hierarchy is observed in the size of the $(u, d)$, $s$ and $c$ valence contributions. In spite of that, the current accuracy is at a level that renders the inclusion of these effects appropriate. \begin{figure}[t!] \centering \includegraphics[scale=0.69]{amubar_vs_q2_cont_chi_udsc.pdf} \caption{Momentum dependence of $\bar{a}_\mu^{\rm HLO}(Q^2_{\rm ref})$, defined in eq.~(\ref{eq:amubar}), and coinciding with $a_\mu^{\rm HLO}$ at $Q^2_{\rm ref}=0$. The y-axis is normalised by $a_\mu^{\rm HLO}[u,d]$. The region, $Q^2_{\rm ref} \gtrsim 0.4\,{\rm GeV}^2$, is observed to contribute very little to $a_\mu^{\rm HLO}$. When increasing the quark mass from the mass-degenerate $u,d$ quark sector to the strange and charm regions, a strong suppression of the contribution to $a_\mu^{\rm HLO}$ is observed. The current accuracy however requires these various contributions to be included.} \label{fig:amubar} \end{figure} A number of ideas have been recently put forward to tackle the issue of reaching the small $Q^2$ regime. \paragraph{Partially Twisted Boundary Conditions} To circumvent the limitation of having access only to a restricted set of low momentum values, periodic boundary conditions for the valence quark fields can be replaced by twisted boundary conditions~\cite{Bedaque:2004kc,deDivitiis:2004kq,Sachrajda:2004mi}. A denser set of momenta can thus be reached~\cite{DellaMorte:2011aa} in a region closer to $Q^2 \approx m_\mu^2$, at the price of additional numerical effort and of small systematic effects from the breaking of isospin symmetry~\cite{Aubin:2013daa,Horch:2013lla,Gregory:2013taa}. The increasing fluctuations in $\Pi(Q^2)$ at small $Q^2$ values are however still present when adopting this procedure. 
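The momentum-quantisation argument above can be made explicit with a short sketch (the lattice extent, twist angle and number of modes are illustrative): periodic boundary conditions only allow $Q = 2\pi n/L$, while a valence twist $\theta$ shifts these to $Q = (2\pi n + \theta)/L$ and fills in the low-$Q^2$ region.

```python
import math

HBARC = 0.1973269   # GeV fm, conversion factor
M_MU = 0.105658     # muon mass [GeV]

def q2_values(l_fm, theta=0.0, n_max=3):
    """Q^2 of a single momentum component on a lattice of extent l_fm [fm]:
    periodic BCs give Q = 2*pi*n/L; a valence twist theta shifts this to
    Q = (2*pi*n + theta)/L (partially twisted BCs)."""
    n_min = 1 if theta == 0.0 else 0
    return sorted(((2.0 * math.pi * n + theta) * HBARC / l_fm)**2
                  for n in range(n_min, n_max + 1))

L = 6.0                               # typical lattice extent [fm] (illustrative)
q2_periodic = q2_values(L)            # smallest value: (2*pi/L)^2, ~0.043 GeV^2
q2_twisted = q2_values(L, theta=0.3)  # the twist reaches much smaller Q^2

# Extent needed for (2*pi/L)^2 to reach the dominant region m_mu^2/4:
L_needed = 2.0 * math.pi * HBARC / (M_MU / 2.0)   # ~23 fm, cf. "L ~ 20 fm" above
```

Even a single small twist angle brings $Q^2$ values two orders of magnitude below $Q^2_{\rm min}$ of the purely periodic setup, without enlarging the lattice.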
\paragraph{Extrapolation to $Q^2=0$} The integrand in eq.~(\ref{eq:amulat}) is peaked at low $Q^2$ where lattice data are not available. Moreover, an extrapolation of $\Pi(Q^2)$ to $Q^2 \to 0$ is needed when relying on the {\it standard representation} for $a_\mu^{\rm HLO}$. The estimate of the systematic effects associated with this extrapolation is one of the crucial aspects of present lattice calculations. Parametrisations of the $Q^2$ dependence of the VPF based on vector meson dominance can introduce a model dependence that is difficult to quantify. Alternatively, Pad\'e approximants supply a model-independent and systematically improvable description of the $Q^2$ behaviour of $\Pi(Q^2)$~\cite{Aubin:2012me,Golterman:2013vca,DellaMorte:2011aa}. Correlations among $Q^2$ data points and the increasing number of fit parameters limit the order of the Pad\'e approximants that can be reached for the purpose of testing the convergence properties of the series. This problem can be alleviated by restricting the use of Pad\'e fits to the low momentum region, $Q^2 \lesssim 0.4\,{\rm GeV}^2$, which is known to provide the bulk of the contribution to $a_\mu^{\rm HLO}$, see Fig.~\ref{fig:amubar}. By splitting the bounds of the integral in eq.~(\ref{eq:amulat}) into low and high $Q^2$ intervals, a dedicated analysis of each of these regions can lead to an additional handle on the assessment of systematic effects~\cite{Golterman:2014ksa,Golterman:2014wfa}. \paragraph{Momentum Derivatives of the Vacuum Polarisation} A complementary way to scrutinise the difficulties encountered in the low-$Q^2$ region is to consider derivatives with respect to momentum of the vacuum polarisation. By applying derivatives of the vacuum polarisation tensor in eq.~(\ref{eq:vpcont}) with respect to $Q_\mu$ and $Q_\nu$, it is possible to extract $\Pi(Q^2)$ and, in particular, to isolate $\Pi(0)$.
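A minimal sketch of such a low-order Pad\'e extrapolation to $Q^2=0$ (with mock data from a one-pole model standing in for lattice points; all parameter values are illustrative): a $[1,1]$-type ansatz $\Pi(Q^2) = c_0 + c_1 Q^2/(c_2 + Q^2)$ can be linearised and solved in closed form, and its value at $Q^2=0$ yields the subtraction constant $\Pi(0)$.

```python
# Multiplying out the ansatz y = c0 + c1*s/(c2+s) gives the LINEAR system
#   s*y = A + B*s - c2*y,   with A = c0*c2 and B = c0 + c1,
# so three data points determine (A, B, c2) exactly; then Pi(0) = c0 = A/c2.

PI0, F2, MV2 = 0.05, 0.05, 0.775**2           # illustrative model parameters
def pi_exact(s):                               # one-pole stand-in for lattice data
    return PI0 - F2 * s / (MV2 * (MV2 + s))

def det3(a):
    """Determinant of a 3x3 matrix."""
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
          - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
          + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

pts = [0.05, 0.15, 0.35]                       # three low-Q^2 points [GeV^2]
M = [[1.0, s, -pi_exact(s)] for s in pts]      # unknowns ordered as (A, B, c2)
rhs = [s * pi_exact(s) for s in pts]

d = det3(M)
sol = []
for k in range(3):                             # Cramer's rule
    Mk = [row[:] for row in M]
    for i in range(3):
        Mk[i][k] = rhs[i]
    sol.append(det3(Mk) / d)
A, B, c2 = sol
pi_at_zero = A / c2                            # Pade estimate of Pi(0)
```

For data exactly of one-pole form the $[1,1]$ ansatz is exact, so the fit recovers $\Pi(0)$ and the pole position to machine precision; for real lattice data one would increase the Pad\'e order and monitor convergence, as described above.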
These derivatives have formally been applied in order to rewrite $\Pi(0)$ in terms of suitable correlation functions involving the integrated insertion of currents~\cite{deDivitiis:2012vs}. The availability of $\Pi(0)$ then makes it possible to reach the dominant momentum region, $Q^2 \approx m_\mu^2$, through an interpolation. The derivative of the VPF with respect to $Q^2$ is free of ultraviolet divergences. The Adler function~\cite{Adler:1974gd,De Rujula:1976au} is a related physical quantity, defined as follows, \begin{equation} D(Q^2)=12\,\pi^2\,Q^2\, \frac{d\Pi(Q^2)}{dQ^2}\,. \label{eq:adler} \end{equation} The Adler function can be combined with an appropriate kernel function to derive an alternative representation for $a_\mu^{\rm HLO}$~\cite{Jegerlehner:2008zza,Lautrup:1971jf}, \begin{equation} a_\mu^{\rm HLO} = \frac{\alpha^2}{6\pi^2} \int_{0}^{1} dx~ \frac{(1-x)(2-x)}{x} \,D\left(\frac{x^2m_\mu^2}{1-x}\right)\,, \label{eq:amuadler} \end{equation} where the substitution $Q^2 \to x^2m_\mu^2/(1-x)$ was applied. In this way, lattice determinations of $D(Q^2)$~\cite{Renner:2012fa,Francis:2013fzp,Horch:2013lla} can be used to directly compute $a_\mu^{\rm HLO}$~\cite{DellaMorte:2014rta}. The idea of taking the derivative of $\Pi(Q^2)$ with respect to $Q^2$ can be extended to include higher order derivatives at $Q^2=0$, computed via Euclidean-time moments of the vector correlation function, eq.~(\ref{eq:veccor}), at vanishing spatial momentum~\cite{Chakraborty:2014mwa}. The subtracted VPF can then be constructed from its Taylor expansion. Long-distance effects are enhanced when increasing the order of the moments. For the $u,d$ contribution, these effects are expected to be sizeable since they are related to the two-pion decay channel of the $\rho$-meson.
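The equivalence of eq.~(\ref{eq:amulat}) and eq.~(\ref{eq:amuadler}) can be checked numerically for a toy one-pole $\widehat{\Pi}(Q^2)$, for which $D(Q^2)$ is known analytically (the kernel form follows ref.~\cite{Blum:2002ii}; the toy parameters are illustrative):

```python
import math

ALPHA, M_MU = 1.0 / 137.035999, 0.105658
KAPPA, MV2 = 0.07, 0.775**2              # illustrative toy-model parameters

def pihat(s):
    """Toy subtracted VPF (single pole)."""
    return KAPPA * s / (MV2 + s)

def adler(s):
    """D(Q^2) = 12 pi^2 Q^2 dPihat/dQ^2, analytic for the toy model."""
    return 12.0 * math.pi**2 * s * KAPPA * MV2 / (MV2 + s)**2

def kernel(s):
    """Standard-representation kernel (analytic form of ref. Blum 2002)."""
    z = (math.sqrt(s * s + 4.0 * M_MU**2 * s) - s) / (2.0 * M_MU**2 * s)
    return M_MU**2 * s * z**3 * (1.0 - s * z) / (1.0 + M_MU**2 * s * z**2)

N = 4000
# Standard representation, midpoint rule with Q^2 = m_mu^2 t/(1-t):
amu_std = 0.0
for i in range(N):
    t = (i + 0.5) / N
    s = M_MU**2 * t / (1.0 - t)
    amu_std += 4.0 * ALPHA**2 * kernel(s) * pihat(s) * M_MU**2 / (1.0 - t)**2 / N

# Adler-function representation: x-integral with Q^2 = x^2 m_mu^2/(1-x):
amu_adl = 0.0
for i in range(N):
    x = (i + 0.5) / N
    s = x * x * M_MU**2 / (1.0 - x)
    amu_adl += ALPHA**2 / (6.0 * math.pi**2) * (1.0 - x) * (2.0 - x) / x * adler(s) / N
```

The two representations agree to within the quadrature error, as they must, since eq.~(\ref{eq:amuadler}) follows from the standard representation by a change of variables and an integration by parts.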
A new integral representation for $a_\mu^{\rm HLO}$ based on the Mellin transform of the hadronic spectral function~\cite{deRafael:2014gxa} relies on the calculation of the moments ${\cal M}(-n)$, \begin{equation} \label{eq:momeucl} {\cal M}(-n)= \frac{(-1)^{n+1} }{(n+1)!}( m_{\mu}^2 )^{n+1} \left.\frac{d^{n+1}}{(dQ^2 )^{n+1}}\widehat{\Pi}(Q^2)\right|_{Q^2 =0}\,, \end{equation} with $n=\{0,1,2, \dots\}.$ In this approach, the subtracted VPF also appears in the evaluation of integrals over $Q^2$, which are, however, better suited than e.g. eq.~(\ref{eq:amulat}) for the regime of momenta accessible on the lattice. An evaluation based on a phenomenological model~\cite{Bernecker:2011gh} indicates that already at order $n=3$ the truncated series agrees to within $1\%$ with a determination of $a_\mu^{\rm HLO}$ based on the dispersion relation approach~\cite{deRafael:2014gxa}. \paragraph{Mixed (Time-Momentum) Representation} Different representations for $a_\mu^{\rm HLO}$ can provide alternative means to monitor the leading systematic effects present in lattice computations -- a few examples have been mentioned above. These integral representations can differ by the weight given to the integrand by a particular $Q^2$ region or by the relative size of the long-distance contributions. A representation could thus be better suited for lattice QCD studies provided that it is more constrained by the region where data is available and sufficiently accurate. A {\it mixed-representation} of the subtracted VPF involving the time-momentum dependence of the vector correlation function $G(x_0,\,\vec k)$, \begin{equation} {G(x_0,\,\vec k)} = {\int d^3 x\;} e^{i\vec k \vec x} \, \langle\, J_\mu(x_0,\vec x)\, J_\mu(0)\, \rangle \,, \label{eq:veccor} \end{equation} can be written as follows~\cite{Bernecker:2011gh}, \begin{equation} \widehat{\Pi}(Q^2) = \int_{0}^\infty dx_0\, {G(x_0,\vec k=0)} \left[x_0^2 - \frac{4}{Q^2}\sin^2\left(\frac{1}{2} Q x_0\right) \right]\,.
\label{eq:mixrep} \end{equation} The subtracted VPF determined in this way preserves a continuous dependence on $Q^2$, in particular in the neighbourhood of $Q^2=0$~\cite{Feng:2013xsa,Francis:2013fzp,Francis:2014qta}. The integration bounds in eq.~(\ref{eq:mixrep}) imply that long-distance effects in $G(x_0,\, \vec k=0)$ will contribute. For $u,d$ quarks, they are governed by the resonance nature of the $\rho$-meson. This necessitates the incorporation into the vector correlation function of interpolating operators that couple efficiently to two-pion states. An appealing feature of the mixed-representation is that quark-disconnected diagrams, which arise from the singlet contribution to the vector correlation function, can be evaluated straightforwardly using efficient noise reduction techniques~\cite{Francis:2014hoa}. Since different representations can lead to an improved control of the uncertainties in distinct $Q^2$ intervals, it is beneficial to combine the use of these representations to reduce the overall error on $a_\mu^{\rm HLO}$. In general, a mixture of methods based on previously discussed ideas -- used in combination with variance reduction techniques~\cite{Blum:2012uh,Shintani:lat14} -- is expected to lead to a more accurate lattice result for $a_\mu^{\rm HLO}$. \begin{table}[t!] \begin{center} \begin{tabular}{cccccr} \hline Ens.
& $a\,[\mathrm{fm}]$ & $V/a^4$ & $M_\pi$ & $M_\pi L$ & $N_{\rm meas}$\\ \hline $\sf A3$ & $0.079$ & $64 \times 32^3$ & $473$ & $6.0$ & $1004$\\ $\sf A4$ & & $64 \times 32^3$ & $363$ & $4.7$ & $1600$\\ $\sf A5$ & & $64 \times 32^3$ & $312$ & $4.0$ & $1004$\\ $\sf B6$ & & $96 \times 48^3$ & $267$ & $5.1$ & $1224$\\ \hline $\sf E5$ & $0.063$ & $64 \times 32^3$ & $456$ & $4.7$ & $4000$\\ $\sf F6$ & & $96 \times 48^3$ & $325$ & $5.0$ & $1200$\\ $\sf F7$ & & $96 \times 48^3$ & $277$ & $4.2$ & $1000$\\ $\sf G8$ & & $128 \times 64^3$ & $193$ & $4.0$ & $820$\\ \hline $\sf N5$ & $0.050$ & $96 \times 48^3$ & $430$ & $5.2$ & $1392$\\ $\sf N6$ & & $96 \times 48^3$ & $340$ & $4.1$ & $2236$\\ $\sf O7$ & & $128 \times 64^3$ & $261$ & $4.4$ & $552$\\ \hline \end{tabular} \end{center} \caption{Ensembles of O$(a)$ improved Wilson fermions used in the determination of $a_\mu^{\rm HLO}$ by the Mainz group. Approximate values of the lattice spacing $a$ and of the pion mass $M_\pi$ (in $\mathrm{MeV}$) together with information about the lattice volume and the number of measurements $N_{\rm meas}$ are given.} \label{tab:ens} \end{table} \section{Reaching the Physical Point} \label{sec:physpoint} We already mentioned that various sea and valence quark flavours contributing to $a_\mu^{\rm HLO}$ are now being incorporated in the lattice simulations (see Fig.~\ref{fig:gm2comp}). In addition, simulations with non-degenerate $u$ and $d$ quark masses~\cite{Gregory:2013taa} or studies of the valence $b$-quark contribution with NRQCD~\cite{Colquhoun:2014ica} are also being considered. The approach to the physical point in the $u$,\,$d$ sector can be a source of sizeable systematic effects. The light-quark mass dependence of $a_\mu^{\rm HLO}$ is linked to the resonance nature of the $\rho$-meson and is thus expected to become more important when approaching the chiral limit. 
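Returning to the time-momentum representation, eq.~(\ref{eq:mixrep}) can be validated in a simple setting: for a single-exponential toy correlator $G(x_0)=C\,e^{-m x_0}$ (parameters illustrative), the $x_0$ integral can be done in closed form, $\widehat{\Pi}(Q^2) = (2C/m^3)\,Q^2/(m^2+Q^2)$, and a direct numerical integration reproduces it:

```python
import math

M_RHO, C = 0.775, 0.01     # toy ground-state mass [GeV] and amplitude (illustrative)

def corr(x0):
    """Toy zero-momentum vector correlator: a single stable state."""
    return C * math.exp(-M_RHO * x0)

def pihat_mixed(q2, x0_max=60.0, n=20000):
    """Subtracted VPF from the time-momentum representation,
    Pihat(Q^2) = int_0^inf dx0 G(x0) [x0^2 - (4/Q^2) sin^2(Q x0 / 2)],
    evaluated with the midpoint rule (the tail beyond x0_max is negligible)."""
    q = math.sqrt(q2)
    h = x0_max / n
    total = 0.0
    for i in range(n):
        x0 = (i + 0.5) * h
        total += corr(x0) * (x0 * x0 - 4.0 / q2 * math.sin(0.5 * q * x0)**2) * h
    return total

def pihat_exact(q2):
    """Closed form of the x0 integral for the single-exponential correlator."""
    return 2.0 * C / M_RHO**3 * q2 / (M_RHO**2 + q2)

rel_errs = [abs(pihat_mixed(q2) / pihat_exact(q2) - 1.0) for q2 in (0.05, 0.3)]
```

Note that the bracket in the integrand grows like $x_0^2$ at large times, which is why the long-distance (two-pion) regime of the $u,d$ correlator dominates the systematic error in practice.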
Different fit forms, often inspired by chiral effective theories, have been used to estimate the uncertainty from the chiral extrapolation. The explicit measurement of the vector meson mass has also been used in the calculation of $a_\mu^{\rm HLO}$ to modify its chiral behaviour~\cite{Feng:2011zk}. Studies including simulations in the neighbourhood of the physical point have recently been reported~\cite{Burger:2013jva,Gregory:2013taa,Chakraborty:2014mwa}. For sufficiently large volumes, the physical effect of the $\rho$-meson decay will contribute and a dedicated effort will be needed to address the associated fluctuations. \begin{figure}[t!] \centering \includegraphics[scale=0.69]{D_vs_Mps_q2_1.33_dat_fit.pdf} \caption{Pion-mass dependence of the Adler function $D(Q^2)$ at fixed $Q^2=1.3\,{\rm GeV^2}$. The upper band, denoted by `C.L.' in the legend, is the continuum limit estimate. The leftmost (filled) symbols refer to the extrapolated values at the physical pion mass.} \label{fig:adler_mps} \end{figure} \section{Studies of $a_\mu^{\rm HLO}$ with improved Wilson fermions} \label{sec:amuwilson} The lattice group in Mainz has developed a dedicated research program aiming at a precise determination of physical observables related to the VPF~\cite{DellaMorte:2011aa,Horch:2013lla,Francis:2013fzp,DellaMorte:2014rta,Francis:2014qta,Francis:2014hoa,Shintani:lat14,Herdoiza:2014jta,Francis:2014yga,DellaMorte:2012cf}. We report some recent developments in the study of $a_\mu^{\rm HLO}$ where several of the previously discussed advances have been implemented. The lattice QCD ensembles (cf.\ table~\ref{tab:ens}) with two dynamical flavours of non-perturbatively O$(a)$ improved Wilson fermions were produced as part of the CLS initiative. They include three values of the lattice spacing $a$, large volumes and pion masses down to $M_\pi\approx 190\,{\rm MeV}$. A substantial increase in the number of measurements $N_{\rm meas}$ has also been achieved recently. \begin{figure}[t!]
\centering \includegraphics[scale=0.69]{D_vs_q2_cont_chi_lim_long_q2_fit_udsc_sum.pdf} \caption{Contributions from $(u,d)$ and from partially quenched strange $s_Q$ and charm $c_Q$ quark flavours to the Adler function after having performed the continuum and chiral extrapolations. The $(u,d)$ contribution shows a good agreement with the phenomenological model of ref.~\cite{Bernecker:2011gh} denoted by the blue `+' symbols. For the cases where $s_Q$ and $c_Q$ are included, a comparison to perturbative QCD results from the \texttt{pQCDAdler} package~\cite{pqcdadler} is shown.} \label{fig:adler_fla} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=0.40\textwidth]{amu_adler_compa.pdf} \caption{Preliminary results for the $u,d$ contribution to $a_\mu^{\rm HLO}$ based on the determination of the Adler function and on the use of the representation in eq.~(\ref{eq:amuadler}). The $Q^2$ dependence is examined by the use of Pad\'e approximants of order $[1,2]$ and $[2,2]$. Fits where pion masses, $M_\pi \geq 400\,{\rm MeV}$, have been included/excluded are used to study systematic effects in the light-quark mass dependence of $a_\mu^{\rm HLO}$. We verify that lattice artefacts are under control by performing separate analyses with fit ans\"atze including either O$(a)$ or O$(a^2)$ terms~\cite{DellaMorte:2014rta}.} \label{fig:amu_adler} \end{figure} The VPF is extracted through eq.~(\ref{eq:vpcont}) from a lattice determination of the vacuum polarisation tensor, using local-conserved vector currents on the r.h.s.\ of eq.~(\ref{eq:pol_tensor}). A high density of $Q^2$ points for the VPF is attained by the use of partially twisted boundary conditions. We take advantage of this in order to derive the Adler function $D(Q^2)$ in eq.~(\ref{eq:adler}) from numerical derivatives of the VPF~\cite{Horch:2013lla,DellaMorte:2014rta}. The $Q^2$ dependence of the Adler function is then analysed in terms of Pad\'e approximants of various orders.
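The numerical-derivative step can be sketched as follows: with densely spaced momenta, a central finite difference of the VPF yields $D(Q^2)$ of eq.~(\ref{eq:adler}) directly (the toy VPF and the momentum spacing below are illustrative, not the actual lattice data):

```python
import math

KAPPA, MV2 = 0.07, 0.775**2      # illustrative toy-model parameters

def pi_toy(s):
    """Toy VPF; the additive constant drops out of the derivative."""
    return 0.05 + KAPPA * s / (MV2 + s)

def d_numeric(s, h=0.005):
    """Adler function D(Q^2) = 12 pi^2 Q^2 dPi/dQ^2 from a central finite
    difference over densely spaced momenta (spacing h illustrative)."""
    return 12.0 * math.pi**2 * s * (pi_toy(s + h) - pi_toy(s - h)) / (2.0 * h)

def d_exact(s):
    """Analytic Adler function of the toy model, for comparison."""
    return 12.0 * math.pi**2 * s * KAPPA * MV2 / (MV2 + s)**2

rel_err = max(abs(d_numeric(s) / d_exact(s) - 1.0) for s in (0.1, 0.2, 0.3))
```

The central difference carries an $O(h^2)$ truncation error, so the dense momentum coverage provided by twisted boundary conditions is what makes this differentiation numerically viable.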
This study is integrated into a global analysis of $D(Q^2)$ combining all the ensembles listed in table~\ref{tab:ens}. An estimate of $D(Q^2)$ in the continuum limit and at the physical point can be obtained in this way. An illustration of the pion-mass dependence of the non-singlet $(u,d)$ contribution to the Adler function at fixed $Q^2$ is shown in Fig.~\ref{fig:adler_mps}. Systematic effects due to lattice artefacts and from the extrapolation of the light-quark mass to the physical point are explored by considering various fit forms and by repeating the analysis on subsets of the available ensembles~\cite{DellaMorte:2014rta}. The light $(u,d)$ as well as the partially quenched strange $s_Q$ and charm $c_Q$ contributions to $D(Q^2)$ are displayed in Fig.~\ref{fig:adler_fla}. Some interesting applications of the Adler function include the matching to perturbation theory to determine the QCD coupling constant $\alpha_s$ or the study of the hadronic contribution to the running of the QED coupling~\cite{Herdoiza:2014jta,Francis:2014yga}. Preliminary results for $a_\mu^{\rm HLO}$ from the use of the Adler function representation in eq.~(\ref{eq:amuadler}) are shown in Fig.~\ref{fig:amu_adler}. The determination of the subtracted VPF from the {\it mixed-representation} in eq.~(\ref{eq:mixrep}) can be compared to the more standard procedure where $\Pi(Q^2)$ is extracted from eq.~(\ref{eq:vpcont}) and then extrapolated to $Q^2=0$ to determine $\widehat{\Pi}(Q^2)$~\cite{Francis:2014qta}. The top panel of Fig.~\ref{fig:mixrep} shows an example of this comparison for the subtracted VPF over a large $Q^2$ interval. The agreement is corroborated by the lower panel of Fig.~\ref{fig:mixrep} where the extrapolated estimate for $\Pi(0)$ is checked against the mixed-representation method. \begin{figure}[t!]
\centering \includegraphics[width=0.40\textwidth]{combine_G8_tw.png} \caption{Comparison of the determinations of the subtracted VPF from the mixed-representation method (MRM) in eq.~(\ref{eq:mixrep}) and from the more standard approach (STD) based on eq.~(\ref{eq:vpcont}) and an extrapolation of $\Pi(Q^2)$ to $Q^2=0$. The upper panel shows the consistency among these methods for $\widehat{\Pi}(Q^2)$ over a large $Q^2$ interval. The lower panel shows the corresponding difference, $\Pi(Q^2)_{\rm STD} -\widehat{\Pi}(Q^2)_{\rm MRM}$ and demonstrates the stability of the derived values of $\Pi(0)$. Data from an ensemble with $M_\pi \approx 190\,{\rm MeV}$ are shown but similar results are observed for heavier pion masses up to $\sim 450\,{\rm MeV}$~\cite{Francis:2014qta}.} \label{fig:mixrep} \end{figure} The flavour singlet currents in eq.~(\ref{eq:current}) require the presence of Wick contractions involving both quark-connected and quark-disconnected contributions to the vector correlation functions. The latter suffer from large statistical fluctuations and, given their high computational cost, are often neglected in present lattice computations. It is however crucial to put a bound on their expected size. The {\it mixed-representation} correlator $G(x_0,\,\vec k=0)$ in eq.~(\ref{eq:veccor}) is dominated in the large Euclidean time limit by the lowest energy state corresponding to the isovector channel, i.e. $G^{\rho\rho}(x_0)$. This leads to the following asymptotic behaviour~\cite{Francis:2013fzp,Francis:2014hoa} of the quark-disconnected vector correlation $G_{\rm disc}^{\ell s}(x_0)$ involving light $\ell=u,d$ and strange $s$ quarks, \begin{equation} \frac{1}{9}\,\frac{G_{\rm disc}^{\ell s}(x_0)}{G^{\rho\rho}(x_0)} \stackrel{x_0\to\infty}{\longrightarrow} -\frac{1}{9}\,, \label{eq:discon} \end{equation} in agreement with the expectation based on ChPT~\cite{DellaMorte:2010aq}. A lattice evaluation of the l.h.s.
of eq.~(\ref{eq:discon}) as a function of $x_0$ is shown in Fig.~\ref{fig:discon}. A significant reduction of the statistical fluctuations in $G_{\rm disc}^{\ell s}(x_0)$ was obtained by using the same stochastic sources for the light and strange quark contributions. The signal is compatible with zero with an error approaching $1/9$ at $x_0 \approx 15a \approx 1\,{\rm fm}$. By assuming that the asymptotic value in eq.~(\ref{eq:discon}) is reached at this distance, a conservative upper bound on the disconnected contribution of $\sim 4\%$ can be inferred. \begin{figure}[t!] \centering \includegraphics[width=0.45\textwidth]{ratioestimate.pdf} \caption{Lattice evaluation of the Euclidean time dependence of the ratio of the quark-disconnected vector correlation $G_{\rm disc}^{\ell s}(x_0)$, involving light $\ell=u,d$ and strange $s$ quarks, to the isovector $\rho$-meson correlation function $G^{\rho\rho}(x_0)$~\cite{Francis:2014hoa}. The asymptotic value $-1/9$ in eq.~(\ref{eq:discon}) is denoted by the blue horizontal line for $x_0/a \geq 15$. Approximately $4 \cdot 10^{5}$ inversions of the Dirac operator are needed to achieve the accuracy shown in this figure.} \label{fig:discon} \end{figure} \section*{Conclusions} In the next few years, a new generation of experiments is expected to improve the determination of the anomalous magnetic moment of the muon $a_\mu$ by a factor of four. A similar improvement in the SM prediction would greatly enhance the sensitivity to physics beyond the SM. Leading order hadronic effects are responsible for the largest theoretical uncertainty in $a_\mu$; the current estimate of these effects comes from a phenomenological approach based on a combination of dispersive techniques and experimental data. Lattice QCD provides a first-principles determination that can lead to an independent and valuable check. We have presented some recent ideas and applications that are expected to lead to an improved determination of $a_\mu^{\rm HLO}$.
Higher-order hadronic effects from light-by-light scattering are the second largest source of error in the SM prediction of $a_\mu$. We refer to ref.~\cite{Shintani:ichep14} for a review, presented at this conference, of the recent progress in using lattice QCD to determine these contributions. \section*{Acknowledgements} We thank Michele Della Morte, Andreas J\"{u}ttner and Andreas Nyffeler for useful discussions. Our calculations were performed on the ``Wilson'' and ``Clover'' HPC Clusters at the Institute of Nuclear Physics, University of Mainz. We thank Dalibor Djukanovic and Christian Seiwerth for technical support. This work was granted access to the HPC resources of the Gauss Center for Supercomputing at Forschungszentrum J\"ulich, Germany, made available within the Distributed European Computing Initiative by the PRACE-2IP, receiving funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement RI-283493 (project PRA039). We are grateful for computer time allocated to project HMZ21 on the BG/Q JUQUEEN computer at NIC, J\"ulich. This research has been supported in part by the DFG in the SFB~1044. We thank our colleagues from the CLS initiative for sharing the ensembles used in this work. G.H. acknowledges support by the Spanish MINECO through the Ram\'on y Cajal Programme and through the project FPA2012-31686 and by the Centro de Excelencia Severo Ochoa Program SEV-2012-0249.